Enabling NVIDIA GPU support on Talos is bound by the NVIDIA EULA. The Talos-published NVIDIA OSS drivers are bound to a specific Talos release, so the extension versions also need to be updated when upgrading Talos. We will be using the following NVIDIA OSS system extensions:
- `nvidia-open-gpu-kernel-modules`
- `nvidia-container-toolkit`
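If you build your installer image through the Talos Image Factory, both extensions can be requested in a schematic. A minimal sketch, assuming the `siderolabs/`-prefixed extension names used by the factory:

```yaml
# schematic.yaml -- Image Factory schematic sketch (extension names assumed).
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/nvidia-open-gpu-kernel-modules
      - siderolabs/nvidia-container-toolkit
```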
Make sure the driver version matches for both the `nvidia-open-gpu-kernel-modules` and `nvidia-container-toolkit` extensions. The `nvidia-open-gpu-kernel-modules` extension is versioned as `<nvidia-driver-version>-<talos-release-version>`, and the `nvidia-container-toolkit` extension is versioned as `<nvidia-driver-version>-<nvidia-container-toolkit-version>`.
Proprietary vs OSS Nvidia Driver Support
The NVIDIA Linux GPU Driver contains several kernel modules: `nvidia.ko`, `nvidia-modeset.ko`, `nvidia-uvm.ko`, `nvidia-drm.ko`, and `nvidia-peermem.ko`.
Two “flavors” of these kernel modules are provided, and both are available for use within Talos:
- Proprietary. This is the flavor that NVIDIA has historically shipped.
- Open, i.e. source-published/OSS, kernel modules that are dual licensed MIT/GPLv2. With every driver release, the source code to the open kernel modules is published on https://github.com/NVIDIA/open-gpu-kernel-modules and a tarball is provided on https://download.nvidia.com/XFree86/.
Enabling the NVIDIA OSS modules
Patch the Talos machine configuration using the patch `gpu-worker-patch.yaml`:
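The patch itself is not reproduced on this page; a minimal sketch of what `gpu-worker-patch.yaml` might contain, assuming it simply loads the open kernel modules shipped by the extension (the module names are assumptions):

```yaml
# gpu-worker-patch.yaml -- sketch; module names assumed to match
# those shipped by nvidia-open-gpu-kernel-modules.
machine:
  kernel:
    modules:
      - name: nvidia
      - name: nvidia_uvm
      - name: nvidia_drm
      - name: nvidia_modeset
  sysctls:
    net.core.bpf_jit_harden: 1 # hardening sysctl commonly paired with this patch (assumption)
```

It can then be applied with, for example, `talosctl patch mc --patch @gpu-worker-patch.yaml --nodes <node-ip>`.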
Deploying NVIDIA device plugin
First we need to create the `RuntimeClass`.
Apply the following manifest to create a runtime class that uses the extension:
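A sketch of such a manifest; the `nvidia` handler name is assumed to match the containerd runtime registered by the `nvidia-container-toolkit` extension:

```yaml
# runtimeclass.yaml -- RuntimeClass that selects the nvidia containerd runtime.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia # containerd runtime handler (assumed to be registered by the extension)
```

Apply it with `kubectl apply -f runtimeclass.yaml`, then deploy the NVIDIA device plugin against it, e.g. via the upstream k8s-device-plugin Helm chart with `--set runtimeClassName=nvidia` (chart option assumed from upstream documentation).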
(Optional) Setting the default runtime class as nvidia
Do note that this will set the default runtime class to `nvidia` for all pods scheduled on the node.
Create a patch YAML `nvidia-default-runtimeclass.yaml` to update the machine config, similar to the example below:
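A sketch of such a patch, assuming the default runtime is switched via a containerd CRI configuration part file (the TOML plugin section names and the file path are assumptions and vary with the containerd version):

```yaml
# nvidia-default-runtimeclass.yaml -- sketch: make "nvidia" the default
# containerd runtime through a CRI config drop-in (paths/sections assumed).
machine:
  files:
    - content: |
        [plugins]
          [plugins."io.containerd.grpc.v1.cri"]
            [plugins."io.containerd.grpc.v1.cri".containerd]
              default_runtime_name = "nvidia"
      path: /etc/cri/conf.d/20-customization.part
      op: create
```

Apply it with `talosctl patch mc --patch @nvidia-default-runtimeclass.yaml`.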
Testing the runtime class
Run the following command to test the runtime class:
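A sketch of such a test, using `kubectl run` with a pod-spec override; the CUDA image tag is illustrative, so substitute one that exists:

```bash
# Launch a throwaway pod under the nvidia runtime class and run nvidia-smi.
kubectl run nvidia-test \
  --restart=Never \
  -ti --rm \
  --image=nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04 \
  --overrides='{"spec": {"runtimeClassName": "nvidia"}}' \
  -- nvidia-smi
```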
Note the `spec.runtimeClassName` being explicitly set to `nvidia` in the pod spec.