
Labs How To

Basic info

If you are new to CTU, see the checklist for visiting students.

Important Links: Schedule | Forum | BRUTE


There is a discussion forum for this course that can be used to ask for help with the assignments. It is monitored by the lab assistants and is the preferred channel for assignment-related questions, since all students can see the question and answer threads. If you find an error in an assignment, or if it is unclear or ambiguous, please write on the forum.

We also ask you to post remarks about the lectures (typos, questions) in a separate thread.

Python, IDEs

You will implement the labs in Python. The assignment templates are prepared for Python 3.7. The first lab needs only Python with standard packages (numpy, matplotlib).
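A quick sanity check that your interpreter and packages roughly match what the templates expect (a sketch; matplotlib can be checked the same way as numpy):

```python
import sys
import numpy as np

# The templates are prepared for Python 3.7; newer 3.x versions usually work too
print(sys.version_info >= (3, 7))   # → True
print(int(np.arange(3).sum()))      # numpy works  → 3
```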

If you are not too sure about your Python/NumPy skills, have a look at http://cs231n.github.io/python-numpy-tutorial/.

We recommend using an IDE for Python development. A professional PyCharm license is available to the university at https://download.cvut.cz. However, it can be somewhat slow (it runs on Java and has many features).

We also recommend Visual Studio Code. It is very easy to set up and convenient to work with. Follow the basic setup instructions and the wizards within VS Code.

Jupyter Notebooks

Here are some lines of code useful in Jupyter notebooks.

   # Resize the notebook to full width, to fit more code and images
   from IPython.core.display import display, HTML
   display(HTML("<style>.container { width:100% !important; }</style>"))

   # Basic packages and settings to show images inline
   import numpy as np
   import importlib  # importlib.reload() can be used to reload a module manually
   import matplotlib.pyplot as plt
   %matplotlib inline

   # Automatically reload imported modules when their source changes
   # (re-running the import command triggers the reloading)
   %load_ext autoreload
   %autoreload 2

   # Figure size controls
   plt.rcParams['figure.dpi'] = 200
   plt.rcParams['figure.figsize'] = [16, 8]
   plt.rcParams.update({'errorbar.capsize': 1})

If you need to debug an exception in a Jupyter notebook, the %debug magic opens a console where you can inspect local variables and navigate the call stack.
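Under the hood, %debug performs post-mortem debugging of the last exception; in plain Python the equivalent looks roughly like this (the interactive call is commented out so the snippet runs non-interactively):

```python
import pdb
import sys

def buggy(x):
    return 1 / x          # raises ZeroDivisionError for x == 0

try:
    buggy(0)
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    # pdb.post_mortem(tb)  # opens the same kind of console as %debug
    print(tb is not None)  # → True
```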

Google Colab

An easy way to try something with deep learning is Google Colab. Take a look at the intro colab space. It has TensorFlow and PyTorch installed and you also get GPU acceleration. However, it is harder to work on a bigger project with classes, debugging, etc.

Remote Servers

See student GPU servers at the department.

  1. Follow the rules stated there.
  2. All servers use the configurable environment module system Lmod.
  3. ml spider torch shows all available versions of pytorch; check also ml spider torchvision.
  4. Load the module e.g. with
     ml torchvision/0.11.1-foss-2021a-CUDA-11.3.1 
    (this “loads” pytorch, python of the right versions and all other dependencies).
  5. Check which GPU is available with nvidia-smi or the gpu-status script.
  6. Run using this GPU:
     export CUDA_VISIBLE_DEVICES=3; python train.py --lr 0.001
  7. You can run a Jupyter notebook on the server and access it at e.g. http://cantor.felk.cvut.cz:8888. If this does not work, port forwarding needs to be set up (see Running a Jupyter notebook from a remote server).
  8. For more convenient work with the servers you can set up passwordless access, mount the file system and save your module configuration (see below). Both scripts and notebooks can be run from VS Code, but you need to set the CUDA_VISIBLE_DEVICES environment variable and program arguments somewhere in the configuration files (instructions needed). We need volunteers to test these instructions and help fill in the gaps.
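Regarding step 6: CUDA_VISIBLE_DEVICES can also be set from inside the script, as long as this happens before the framework initializes CUDA. A sketch (the device index 3 is an example):

```python
import os

# Must run before torch/tensorflow create a CUDA context;
# physical GPU 3 then appears to the framework as device 0 (cuda:0)
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
print(os.environ["CUDA_VISIBLE_DEVICES"])  # → 3
```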

Selecting a free GPU on servers is possible from within python / jupyter notebook:

# Script provided by lecturers of the VIR course (see
# https://cw.fel.cvut.cz/b191/courses/b3b33vir/start)
import os
import numpy as np
import torch

def get_free_gpu():
    """Return the index of the GPU with the most free memory."""
    os.system('nvidia-smi -q -d Memory |grep -A4 GPU|grep Free >tmp')
    memory_available = [int(x.split()[2]) for x in open('tmp', 'r').readlines()]
    # Skip the last card --- it is reserved for evaluation of CTU FEE subjects!
    return int(np.argmax(memory_available[:-1]))

if torch.cuda.is_available():
    dev = torch.device('cuda:{}'.format(get_free_gpu()))
else:
    dev = torch.device('cpu')


To avoid typing your password each time you log in, you can set up public-key authentication on the server using pre-shared keys. You can do so with ssh-keygen and ssh-add.
These instructions for Linux seem fine.
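The setup can be sketched as follows (the key is written to the current directory for illustration only; real keys live in ~/.ssh, and the username and host in the commented commands are examples):

```shell
# Generate a key pair without a passphrase (for illustration; consider a passphrase)
ssh-keygen -t ed25519 -N "" -f ./ctu_key -q
ls ctu_key ctu_key.pub        # private and public key were created
# Install the public key on the server (asks for your password one last time):
#   ssh-copy-id -i ./ctu_key.pub username@cantor.felk.cvut.cz
# Optionally cache the key in the ssh agent:
#   ssh-add ./ctu_key
```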


If you want to copy data to and from the server, it is convenient to mount your working directory on the server into your filesystem using sshfs. This tool works at first, but may fail in certain cases, e.g. when you change network access points or sleep/wake the computer. I use the following settings:

sshfs -o defer_permissions,reconnect,ServerAliveInterval=120,ServerAliveCountMax=3,follow_symlinks,compression=yes -o kernel_cache,entry_timeout=5,sync_read shekhovt@cantor.felk.cvut.cz:/~ ~/cantor 

Bash Profile

You can configure the system environment you get when logging in to the server (see Controlling Modules During Login). To do so, create ~/.bash_profile on the server, typically containing

# .bash_profile

if [ -f ~/.bashrc ]; then
    source ~/.bashrc
fi

and '~/.bashrc' where you put your configurations, e.g.

module restore

The module restore command there loads the module configuration previously saved with module save. This configuration should (theoretically) work even for a non-interactive shell, such as when you configure PyCharm to run your code there.

Remote Development with Pycharm

PyCharm Professional has well-integrated support for remote development. You edit code that is stored locally. When you need to run it, it is synchronized to a directory on the server (called deployment). You can run and even debug your code on the server from PyCharm.

See the PyCharm documentation: Configure an interpreter using SSH, remote deployment, and remote debugging. Other variants of configuring a remote interpreter are also supported (e.g. a remote Conda environment).

Remote development, virtual environment and modules

Everything may just work, in which case you do not need to do anything. However, path problems sometimes occur when you try to use a virtual environment, load modules and do remote development at the same time. Maybe this “trick” will help you…

  • find the modules you want to use (e.g. you would like to module load torchvision/0.10.0-fosscuda-2020b-PyTorch-1.9.0)
  • find the path to python using the “env” command (e.g. /mnt/appl/software/Python/3.8.6-GCCcore-10.2.0/bin/python3.8)
  • create your virtual environment (e.g. virtualenv -p /mnt/appl/software/Python/3.8.6-GCCcore-10.2.0/bin/python3.8 my_virt_env_name)
  • activate your virtual environment
  • create a new script for calling python in the virtual environment directory (e.g. by emacs my_virt_env_name/bin/python_ml), make it executable (chmod +x) and edit it to contain:
    #!/bin/bash -l
    module load torchvision/0.10.0-fosscuda-2020b-PyTorch-1.9.0
    /absolute_path/my_virt_env_name/bin/python "$@"
  • whenever you are asked for an interpreter, use the created python_ml instead of just python.
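A quick way to check that such a wrapper forwards arguments correctly (a sketch: plain python3 stands in for the virtual environment's python, and the module load line is guarded so it is skipped on machines without Lmod):

```shell
cat > python_ml <<'EOF'
#!/bin/bash -l
# Load the module only where the module system exists (the guard is for illustration)
command -v module >/dev/null && module load torchvision/0.10.0-fosscuda-2020b-PyTorch-1.9.0
exec python3 "$@"
EOF
chmod +x python_ml
./python_ml -c 'print(40 + 2)'   # prints 42
```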
courses/bev033dle/labs/0_howto/start.txt · Last modified: 2022/03/31 15:15 by shekhole