Enabling NVIDIA GPU support on Talos is bound by the NVIDIA EULA. The Talos-published NVIDIA drivers are bound to a specific Talos release, so the extension versions also need to be updated when upgrading Talos.
We will be using the following NVIDIA system extensions:
  • nonfree-kmod-nvidia
  • nvidia-container-toolkit
To build an NVIDIA driver version not published by Sidero Labs, follow the instructions here
Create the boot assets that include the system extensions mentioned above (or create a custom installer and perform a machine upgrade if Talos is already installed).
Make sure the driver version matches for both the nonfree-kmod-nvidia and nvidia-container-toolkit extensions. The nonfree-kmod-nvidia extension is versioned as <nvidia-driver-version>-<talos-release-version>, and the nvidia-container-toolkit extension is versioned as <nvidia-driver-version>-<nvidia-container-toolkit-version>. For example, 510.60.02-v1.9.0 pairs driver version 510.60.02 with container toolkit version v1.9.0.
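As a sketch of the boot-asset step, an Image Factory schematic that bakes both extensions into the image might look like the following. The siderolabs/ prefix and the schematic layout are assumptions here; verify the exact extension names against the extension catalog for your Talos release.

```yaml
# Hypothetical Image Factory schematic (e.g. schematic.yaml);
# the extension names must exist in the catalog for your Talos release.
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/nonfree-kmod-nvidia
      - siderolabs/nvidia-container-toolkit
```

Submitting such a schematic to the Image Factory yields an installer image that can be used for the initial install or for a machine upgrade.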

Proprietary vs OSS Nvidia driver support

The NVIDIA Linux GPU Driver contains several kernel modules: nvidia.ko, nvidia-modeset.ko, nvidia-uvm.ko, nvidia-drm.ko, and nvidia-peermem.ko. Two “flavors” of these kernel modules are provided, and both are available for use within Talos: the proprietary modules and the open-source (OSS) modules. The choice between the proprietary and OSS flavors can be made after consulting the official NVIDIA announcement.

Enabling the NVIDIA modules and the system extension

Patch the Talos machine configuration using the patch file gpu-worker-patch.yaml:
machine:
  kernel:
    modules:
      - name: nvidia
      - name: nvidia_uvm
      - name: nvidia_drm
      - name: nvidia_modeset
Now apply the patch to all Talos nodes in the cluster that have NVIDIA GPUs installed:
talosctl patch mc --patch @gpu-worker-patch.yaml
The NVIDIA modules should be loaded and the system extension should be installed. This can be confirmed by running:
talosctl read /proc/modules
which should produce an output similar to below:
nvidia_uvm 1146880 - - Live 0xffffffffc2733000 (PO)
nvidia_drm 69632 - - Live 0xffffffffc2721000 (PO)
nvidia_modeset 1142784 - - Live 0xffffffffc25ea000 (PO)
nvidia 39047168 - - Live 0xffffffffc00ac000 (PO)
talosctl get extensions
which should produce an output similar to below:
NODE           NAMESPACE   TYPE              ID                                                                 VERSION   NAME                       VERSION
172.31.41.27   runtime     ExtensionStatus   000.ghcr.io-frezbo-nvidia-container-toolkit-510.60.02-v1.9.0       1         nvidia-container-toolkit   510.60.02-v1.9.0
talosctl read /proc/driver/nvidia/version
which should produce an output similar to below:
NVRM version: NVIDIA UNIX x86_64 Kernel Module  510.60.02  Wed Mar 16 11:24:05 UTC 2022
GCC version:  gcc version 11.2.0 (GCC)

Deploying NVIDIA GPU Operator

Follow the upstream instructions, passing only the Helm chart values specific to Talos. Disable the driver and toolkit components of the GPU operator, since we have already enabled them as system extensions on Talos. Further custom values may need to be passed to the Helm chart depending on the cluster configuration.
kubectl create ns gpu-operator
kubectl label --overwrite ns gpu-operator pod-security.kubernetes.io/enforce=privileged
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update
helm upgrade --wait --install -n gpu-operator gpu-operator nvidia/gpu-operator \
  --set driver.enabled=false \
  --set toolkit.enabled=false \
  --set hostPaths.driverInstallDir=/usr/local/glibc/usr/lib
The Helm chart creates a RuntimeClass named nvidia which can be used to schedule GPU workloads.

Testing the runtime class

Note that spec.runtimeClassName is explicitly set to nvidia in the pod spec.
Run the following command to test the runtime class:
kubectl run \
  nvidia-test \
  --restart=Never \
  -ti --rm \
  --image nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda12.5.0 \
  --overrides '{"spec": {"runtimeClassName": "nvidia"}}'
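The same test can also be expressed as a plain Pod manifest. The runtimeClassName field and the GPU resource request are the parts specific to GPU scheduling; the nvidia.com/gpu resource name is an assumption based on the NVIDIA device plugin's default, so adjust it if your cluster exposes GPUs under a different name.

```yaml
# Sketch of a Pod manifest equivalent to the kubectl run test above.
# nvidia.com/gpu assumes the NVIDIA device plugin's default resource name.
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-test
spec:
  restartPolicy: Never
  runtimeClassName: nvidia
  containers:
    - name: vectoradd
      image: nvcr.io/nvidia/k8s/cuda-sample:vectoradd-cuda12.5.0
      resources:
        limits:
          nvidia.com/gpu: 1
```

Requesting the GPU resource explicitly lets the scheduler place the pod only on nodes that actually advertise a GPU, rather than relying on the runtime class alone.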

Collecting NVIDIA GPU debug data

When debugging NVIDIA GPU issues (for example, NVRM: GPU has fallen off the bus messages in the kernel log), NVIDIA support will often ask for the output of nvidia-bug-report.sh. Talos does not allow direct shell access on the nodes, but you can still generate this report by following the steps below:
  1. Start a debug pod on the affected node:
 kubectl -n kube-system \
    run debug-gpu \
    --rm -it \
    --image=ubuntu \
    --overrides='{
      "spec": {
        "runtimeClassName": "nvidia",
        "hostPID": true,
        "hostNetwork": true,
        "containers": [{
          "name": "debug-gpu",
          "image": "ubuntu",
          "stdin": true,
          "tty": true,
          "securityContext": {
            "privileged": true
          },
          "volumeMounts": [{
            "name": "host-root",
            "mountPath": "/host"
          }]
        }],
        "volumes": [{
          "name": "host-root",
          "hostPath": {
            "path": "/"
          }
        }]
      }
    }' \
    --restart=Never \
    -- /bin/bash
  2. This will drop you into a shell inside a container running on the node. From here, you can install the necessary tools to run nvidia-bug-report.sh and generate the report.
apt update && apt install --no-install-recommends -y \
    dmidecode \
    pciutils \
    usbutils \
    mesa-utils \
    kmod \
    vulkan-tools \
    infiniband-diags \
    acpidump \
    mstflint
  3. Inside the debug container, run nvidia-bug-report.sh:
/host/usr/local/bin/nvidia-bug-report.sh
This will generate nvidia-bug-report.log.gz in the current directory.
  4. To copy the report off the cluster, run the following command from your local machine to copy the report from the debug pod:
kubectl cp \
  "kube-system/debug-gpu:/nvidia-bug-report.log.gz" \
  ./nvidia-bug-report.log.gz
You can now upload nvidia-bug-report.log.gz to NVIDIA support.