GPU
Some workloads need NVIDIA GPU access from inside a container. Install the current NVIDIA Container Toolkit and verify that Docker can pass the GPU through to a CUDA image.
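On a Debian or Ubuntu host, the setup looks roughly like this. This is a minimal sketch that assumes NVIDIA's apt repository has already been added as described in the toolkit documentation.

# Install the toolkit, register the NVIDIA runtime with Docker, and restart the daemon
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker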
Use a current CUDA image tag from NVIDIA's container registry; tags change as CUDA releases advance. This book uses a CUDA 13 image.
docker run --rm --gpus all nvidia/cuda:13.1.1-base-ubuntu24.04 nvidia-smi
You should see nvidia-smi output reporting the host's GPU model, driver version, and the highest CUDA version the driver supports.
The Dockerfile below uses a current CUDA/cuDNN development image as its base and installs Python, Jupyter Lab, PyTorch, and supervisor. Before production use, confirm that the CUDA image, the host driver, and the framework wheels all support the same CUDA generation.
FROM nvidia/cuda:13.1.1-cudnn-devel-ubuntu24.04

ENV DEBIAN_FRONTEND=noninteractive
ENV JUPYTER_TYPE=lab

RUN apt-get update -y && \
    apt-get upgrade -y && \
    apt-get install --no-install-recommends python3 python3-pip python3-venv supervisor -y && \
    rm -rf /var/lib/apt/lists/*

COPY jupyter.conf /etc/supervisor/conf.d/

# --break-system-packages is needed on Ubuntu 24.04, whose system Python
# is marked externally managed (PEP 668)
RUN python3 -m pip install --no-cache-dir --break-system-packages \
    jupyterlab \
    torch \
    torchvision \
    torchaudio

VOLUME ["/ipynb"]

EXPOSE 8888

# -n keeps supervisord in the foreground so the container stays alive
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf", "-n"]
Build.
docker build --no-cache -t gpu-jupyter:local .
Run. Note that the --gpus all flag is required here as well; without it the container has no GPU access.
docker run \
  -it \
  --rm \
  -p 8888:8888 \
  --gpus all \
  -v "$(pwd)/ipynb:/ipynb" \
  gpu-jupyter:local
You may now access Jupyter Lab at http://localhost:8888. On first start, Jupyter prints a tokenized login URL to the container log; use that URL, or paste the token into the login page.
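As a final check that PyTorch inside the container can reach the GPU, you can override the image's CMD with a one-liner (or run the same line in a notebook cell); expect True and a nonzero device count.

docker run --rm --gpus all gpu-jupyter:local \
  python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"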