How to Get the CUDA Version? – The Definitive 2025 Guide


Table of Contents

  1. Introduction
  2. Quick Answer
  3. Understanding CUDA Versions
  4. Method 1: Using nvidia‑smi
  5. Method 2: Using nvcc
  6. Method 3: Checking CUDA Library Files
  7. Method 4: Python – PyTorch
  8. Method 5: Python – TensorFlow
  9. Method 6: Windows Registry
  10. Method 7: Linux Package Managers
  11. Method 8: Checking cuDNN Version
  12. 🔧 Troubleshooting
  13. CUDA Compatibility Guide
  14. Docker & Container Environments
  15. FAQ
  16. Best Practices
  17. Summary & Quick‑Reference Table

Introduction

When you write a GPU‑accelerated application, knowing the CUDA version you are running is not optional—it is mandatory. The CUDA version dictates which GPU architectures you can target, which compiler flags are available, and whether your deep‑learning framework (PyTorch, TensorFlow, etc.) will load correctly. A mismatch between the driver, the CUDA runtime, and the toolkit can cause cryptic errors such as “CUDA driver version is insufficient for CUDA runtime version” or silent performance regressions.

This article walks you through eight reliable ways to check your CUDA version, from the one‑liner nvidia-smi command to Python introspection, and it also covers the related cuDNN version. You’ll find step‑by‑step instructions for Windows, Linux, and macOS, real‑world command output, troubleshooting for the most common pitfalls, and a compatibility matrix for 2024‑2025 hardware and software stacks.

Why does this matter in 2025? NVIDIA’s release cadence has accelerated, with CUDA 12.4 and CUDA 12.5 already shipping. Simultaneously, deep‑learning frameworks are deprecating older runtimes. Keeping your environment in sync ensures you can leverage the latest Tensor Cores, NVENC, and RAPIDS libraries without wasted debugging time.


Quick Answer

If you just need the version now, run one of the following commands in a terminal:

# Shows the CUDA version the driver supports (read the top banner line)
nvidia-smi

# Shows the toolkit version installed with the compiler
nvcc --version

nvidia-smi prints the driver’s CUDA version in its top banner; nvcc --version prints the installed toolkit release. If the two numbers differ, see the CUDA Compatibility Guide below.


Understanding CUDA Versions

| Component | What it is | Where you see it | Typical file/command |
| --- | --- | --- | --- |
| CUDA Driver | Low‑level kernel module that talks to the GPU hardware. | nvidia-smi → “Driver Version” | /proc/driver/nvidia/version (Linux) |
| CUDA Runtime | API library (libcudart.so, cudart.dll) used by compiled binaries. | Linked at runtime by your application. | libcudart.so |
| CUDA Toolkit | Full development suite (compiler nvcc, headers, libraries, samples). | nvcc --version prints the toolkit version. | nvcc executable |
| Compute Capability | GPU‑specific architecture identifier (e.g., 8.9 for RTX 4090). | nvidia-smi --query-gpu=compute_cap --format=csv (recent drivers) | deviceQuery sample |

Why are there different version numbers? The driver is backward compatible: a newer driver can run binaries built with older toolkits. The reverse is not true: a toolkit newer than what the driver supports will fail at runtime. Which version matters therefore depends on the context (a minimal sketch after this list shows how to read both sides and flag a mismatch):

  • Running pre‑compiled binaries → driver version is the decisive factor.
  • Compiling your own kernels → toolkit version (nvcc) matters.
  • Using a deep‑learning framework → both runtime and driver must be compatible with the framework’s compiled CUDA binaries.
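
The following Python sketch makes the distinction concrete: it reads the driver‑side CUDA version from the nvidia-smi banner and the toolkit version from nvcc, then warns when the toolkit is newer than the driver supports. The regular expressions assume the standard output formats shown later in this article.

import re
import subprocess

def run(cmd):
    # Return a command's stdout, or None if the tool is missing or fails.
    try:
        return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    except (OSError, subprocess.CalledProcessError):
        return None

def first_match(pattern, text):
    match = re.search(pattern, text or "")
    return match.group(1) if match else None

# Driver side: the CUDA version advertised in the nvidia-smi banner.
driver_cuda = first_match(r"CUDA Version:\s*([\d.]+)", run(["nvidia-smi"]))
# Toolkit side: the release printed by nvcc.
toolkit_cuda = first_match(r"release ([\d.]+)", run(["nvcc", "--version"]))

print("Driver supports CUDA:", driver_cuda)
print("Installed toolkit:   ", toolkit_cuda)

if driver_cuda and toolkit_cuda:
    as_tuple = lambda v: tuple(int(x) for x in v.split("."))
    if as_tuple(toolkit_cuda) > as_tuple(driver_cuda):
        print("WARNING: toolkit is newer than the driver supports - upgrade the driver.")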

Method 1: Using nvidia‑smi

nvidia-smi (NVIDIA System Management Interface) is shipped with every driver package. It queries the GPU driver directly, making it the most reliable way to discover the CUDA version that the driver advertises.

1.1. Windows

  1. Open Command Prompt (or PowerShell) – administrator rights are not required.
  2. Run:
nvidia-smi

Sample output (banner line)

| NVIDIA-SMI 550.54.15    Driver Version: 550.54.15    CUDA Version: 12.4     |

  • Driver Version is the installed driver; CUDA Version is the highest CUDA runtime that driver supports.
  • Note: cuda_version is not a valid --query-gpu field. For scripting, query the driver version with nvidia-smi --query-gpu=driver_version --format=csv,noheader and parse the banner for the CUDA version.

1.2. Linux

$ nvidia-smi --query-gpu=driver_version --format=csv,noheader
550.54.15

If you need a more verbose view, just run nvidia-smi without arguments:

$ nvidia-smi
Wed Dec 10 08:12:45 2025
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15    Driver Version: 550.54.15    CUDA Version: 12.4     |
|-------------------------------+----------------------+----------------------+
...

1.3. macOS (Legacy)

Apple discontinued official NVIDIA driver support after macOS 10.13, and the final CUDA release for macOS was 10.2. The macOS driver package never included nvidia-smi; on a legacy system, check the CUDA preference pane (System Preferences → CUDA) for the driver version, or read the toolkit version from disk as shown in Method 3.

1.4. What the output means

  • Driver Version – the version of the kernel module.
  • CUDA Version – the highest CUDA runtime version the driver can support.
  • The driver may support multiple runtimes (e.g., a driver for CUDA 12.4 can also run CUDA 11.x binaries).

💡 Pro Tip: Use --query-gpu=driver_version --format=csv,noheader to get a machine‑readable driver version in CI pipelines; the CUDA version itself has to be parsed from the nvidia-smi banner.

⚠️ Warning: nvidia-smi reports the driver’s capability, not the toolkit you have installed. If you installed a newer toolkit without updating the driver, nvcc --version may show a higher number than nvidia-smi.


Method 2: Using nvcc

nvcc is the CUDA C++ compiler bundled with the CUDA Toolkit. It prints the toolkit version it belongs to, which is useful when you compile custom kernels.

2.1. Windows

Open Developer Command Prompt for VS (or PowerShell) and type:

nvcc --version

Sample output

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Mon_Jun_24_21:12:55_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.4, V12.4.99

The line release 12.4 is the CUDA Toolkit version.

2.2. Linux & macOS

$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Mon Jun 24 21:12:55 PDT 2024
Cuda compilation tools, release 12.4, V12.4.99

2.3. When nvcc is not found

  • Windows – Add C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\bin to %PATH%.
  • Linux – Ensure /usr/local/cuda/bin is in $PATH or create a symlink:
sudo ln -s /usr/local/cuda-12.4 /usr/local/cuda
export PATH=/usr/local/cuda/bin:$PATH

💡 Pro Tip: Run which nvcc (Linux/macOS) or where nvcc (Windows) to verify the path.

⚠️ Warning: If you have multiple toolkits installed, the first nvcc found in $PATH will be used, which may not be the version you intended.
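
When several toolkits are installed, this short Python check (a sketch, not part of the CUDA tooling) shows which nvcc your PATH actually resolves to and which release it reports:

import re
import shutil
import subprocess

nvcc_path = shutil.which("nvcc")  # the first nvcc on PATH wins
if nvcc_path is None:
    print("nvcc not found on PATH")
else:
    print("Resolved nvcc:", nvcc_path)
    out = subprocess.run([nvcc_path, "--version"], capture_output=True, text=True).stdout
    match = re.search(r"release ([\d.]+)", out)
    print("Toolkit release:", match.group(1) if match else "could not parse output")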


Method 3: Checking CUDA Library Files

Sometimes the command‑line tools are unavailable (e.g., in a minimal container). You can read the version from the installation directories.

3.1. Linux – version.txt / version.json

Toolkits up to CUDA 11.0 ship a plain version.txt; CUDA 11.1 and later replace it with version.json:

$ cat /usr/local/cuda/version.json
{
   "cuda" : {
      "name" : "CUDA SDK",
      "version" : "12.4.0"
   },
   ...
}

If you have several installations, the /usr/local/cuda symlink often points to the default version. Inspect the symlink:

$ ls -l /usr/local/cuda
lrwxrwxrwx 1 root root 12 Jan 10 2025 /usr/local/cuda -> cuda-12.4

3.2. Windows – Registry & Folder

The default install folder is:

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4

The version is encoded in the folder name itself. Inside the folder, depending on the toolkit age, you will find version.txt (older releases) or version.json (CUDA 11.1+):

Get-Content "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\version.json"

The JSON output matches the Linux example above.

3.3. macOS (Legacy) – version.txt

$ cat /Developer/NVIDIA/CUDA-10.2/version.txt
CUDA Version 10.2.89

💡 Pro Tip: Use find /usr/local -maxdepth 2 -name "version.*" to locate all CUDA installations on Linux.
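
If you script container builds, a few lines of Python can read version.json directly. This is a sketch that assumes the CUDA 11.1+ layout, where the file carries a top‑level "cuda" entry with a "version" field:

import json
from pathlib import Path

version_file = Path("/usr/local/cuda/version.json")
if version_file.exists():
    info = json.loads(version_file.read_text())
    print("CUDA toolkit:", info["cuda"]["version"])
else:
    # Toolkits older than 11.1 ship version.txt instead.
    print("version.json not found; try version.txt")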


Method 4: Python – PyTorch

PyTorch wheels bundle their own CUDA runtime; the system toolkit is only used when you compile extensions from source. The snippet below therefore reports the version baked into the binary.

import torch

# Is a CUDA‑capable device present?
print("CUDA available:", torch.cuda.is_available())

# CUDA version used by the PyTorch binary
print("PyTorch CUDA version:", torch.version.cuda)

# Detailed GPU info
for i in range(torch.cuda.device_count()):
    print(f"Device {i}: {torch.cuda.get_device_name(i)}")
    print("  Compute Capability:", torch.cuda.get_device_capability(i))

Sample output

CUDA available: True
PyTorch CUDA version: 12.4
Device 0: NVIDIA GeForce RTX 4090
  Compute Capability: (8, 9)

If torch.version.cuda returns None, PyTorch was installed without CUDA support.

⚠️ Warning: The version shown by PyTorch may differ from the system nvcc version because PyTorch ships a pre‑compiled runtime (e.g., CUDA 12.1).
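
Beyond the runtime version, PyTorch can also report the cuDNN build it bundles and per‑device details, which is handy when debugging framework mismatches:

import torch

if torch.cuda.is_available():
    # cuDNN version as an integer, e.g. 8902 means 8.9.2
    print("Bundled cuDNN:", torch.backends.cudnn.version())
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1e9:.1f} GB, "
          f"compute capability {props.major}.{props.minor}")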


Method 5: Python – TensorFlow

TensorFlow also reports its compiled CUDA version.

import tensorflow as tf

# Build metadata baked into this TensorFlow binary
info = tf.sysconfig.get_build_info()
print("TensorFlow built with CUDA:", info["cuda_version"])
print("TensorFlow built with cuDNN:", info["cudnn_version"])

# List physical GPUs
gpus = tf.config.list_physical_devices('GPU')
print("Detected GPUs:", gpus)

Sample output

TensorFlow built with CUDA: 12.3
TensorFlow built with cuDNN: 8
Detected GPUs: [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

If TensorFlow cannot find a GPU, ensure the driver (nvidia-smi) reports a compatible version.
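
Two more introspection calls are useful here: tf.test.is_built_with_cuda() tells you whether the installed binary was compiled with CUDA at all (the CPU‑only wheel returns False), and the logical‑device list confirms that the runtime actually initialized the GPU:

import tensorflow as tf

print("Built with CUDA:", tf.test.is_built_with_cuda())
for gpu in tf.config.list_logical_devices("GPU"):
    print("Initialized:", gpu.name)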


Method 6: Windows Registry

The installer writes the CUDA path and version to the Windows Registry.

6.1. Using Registry Editor

  1. Press Win + R, type regedit, and press Enter.
  2. Navigate to:
HKEY_LOCAL_MACHINE\SOFTWARE\NVIDIA Corporation\GPU Computing Toolkit\CUDA
  3. Each installed toolkit appears as a subkey named after its version – e.g., v12.4.

6.2. PowerShell One‑Liner

Get-ChildItem "HKLM:\SOFTWARE\NVIDIA Corporation\GPU Computing Toolkit\CUDA" |
Select-Object -ExpandProperty PSChildName

Output:

v12.4

💡 Pro Tip: Combine this with Get-ItemProperty for InstallDir to locate the toolkit folder automatically.
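
For automation, Python’s standard winreg module can enumerate the installed toolkit versions without launching PowerShell. This is a sketch that assumes the typical key layout described above, where each version appears as a subkey:

import winreg

key_path = r"SOFTWARE\NVIDIA Corporation\GPU Computing Toolkit\CUDA"
with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
    index = 0
    while True:
        try:
            # Each subkey is an installed toolkit, e.g. "v12.4"
            print(winreg.EnumKey(key, index))
            index += 1
        except OSError:  # raised when there are no more subkeys
            break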


Method 7: Linux Package Managers

If you installed CUDA via a package manager, you can query the package database.

7.1. Debian/Ubuntu (dpkg)

dpkg -l | grep cuda-toolkit

Sample output

ii  cuda-toolkit-12-4   12.4.99-1   NVIDIA CUDA Toolkit 12.4

7.2. RHEL/CentOS (rpm)

rpm -qa | grep cuda

Sample output

cuda-12-4-12.4.99-1.x86_64
cuda-driver-550.54.15-1.x86_64

7.3. Using apt or yum for a concise view

apt list --installed | grep cuda

or

yum list installed | grep cuda

Method 8: Checking cuDNN Version

cuDNN (CUDA Deep Neural Network library) is versioned independently from CUDA. Knowing its version is crucial for deep‑learning framework compatibility.

8.1. Linux – Header File

$ cat /usr/include/cudnn_version.h | grep CUDNN_MAJOR -A2
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 9
#define CUDNN_PATCHLEVEL 2

The version is 8.9.2. (On Debian/Ubuntu package installs, the header may live under /usr/include/x86_64-linux-gnu/ instead.)

8.2. Windows – Header File

Get-Content "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\include\cudnn_version.h" |
Select-String "CUDNN_MAJOR|CUDNN_MINOR|CUDNN_PATCHLEVEL"

Output:

#define CUDNN_MAJOR 8
#define CUDNN_MINOR 9
#define CUDNN_PATCHLEVEL 2

8.3. Using Python (TensorFlow)

TensorFlow’s tf.sysconfig.get_build_info() also reveals the cuDNN version (shown in Method 5).

💡 Pro Tip: Install the cuDNN build that matches your CUDA major version – cuDNN 8.9, for example, ships separate builds for CUDA 11.x and CUDA 12.x.
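
If you check cuDNN often, the header parse is easy to script. This sketch assumes the Debian/Ubuntu package location; adjust the path for tarball installs (e.g., /usr/local/cuda/include/cudnn_version.h):

import re
from pathlib import Path

header = Path("/usr/include/cudnn_version.h").read_text()
parts = {
    name: re.search(rf"#define CUDNN_{name} (\d+)", header).group(1)
    for name in ("MAJOR", "MINOR", "PATCHLEVEL")
}
print("cuDNN {MAJOR}.{MINOR}.{PATCHLEVEL}".format(**parts))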


🔧 Troubleshooting

Below are the most frequent issues encountered when trying to check CUDA. Each problem includes three concrete solutions.

1️⃣ nvcc: command not found

| Solution | Steps |
| --- | --- |
| Add Toolkit to PATH (Windows) | 1. Open System Properties → Advanced → Environment Variables. 2. Edit Path → New → C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4\bin. 3. Open a new CMD and run nvcc --version. |
| Create /usr/local/cuda symlink (Linux) | sudo rm -f /usr/local/cuda && sudo ln -s /usr/local/cuda-12.4 /usr/local/cuda, then export PATH=/usr/local/cuda/bin:$PATH. |
| Install the CUDA Toolkit | If you only have the driver, download the CUDA Toolkit from NVIDIA’s download page and run the installer. Verify with nvcc --version after a reboot. |

⚠️ Warning: Adding a wrong version to PATH may cause compilation against an older runtime, leading to runtime errors.

2️⃣ nvidia-smi: command not found

| Solution | Steps |
| --- | --- |
| Install NVIDIA driver (Linux) | sudo apt-get update && sudo apt-get install nvidia-driver-550, then reboot and run nvidia-smi. |
| Add driver bin to PATH (Windows) | Older drivers install nvidia-smi.exe to C:\Program Files\NVIDIA Corporation\NVSMI; newer drivers place it in C:\Windows\System32. Add the folder to Path via System Properties if it is not picked up. |
| Verify GPU detection | Run lspci and look for an NVIDIA entry (Linux), or open dxdiag → Display tab (Windows). If no device appears, the GPU may be disabled in BIOS. |

💡 Pro Tip: On headless servers, nvidia-smi -L works even without an X server.

3️⃣ Version mismatch between nvidia-smi and nvcc

  • Explanation – nvidia-smi reports the driver’s highest supported runtime (e.g., 12.4), while nvcc shows the toolkit version you installed (e.g., 12.2). This is normal as long as the driver’s CUDA version is equal to or newer than the toolkit’s.
  • When to worry – If the driver’s supported runtime is older than the toolkit (e.g., driver 535 supports up to CUDA 12.2 but you installed toolkit 12.4), you will see errors like CUDA error: unknown error or “CUDA driver version is insufficient”.

| Fix | Steps |
| --- | --- |
| Upgrade the driver | Download the latest driver from NVIDIA and install it. Verify with nvidia-smi. |
| Downgrade the toolkit | Remove the newer toolkit (sudo apt purge cuda-12-4) and install a version matching the driver (sudo apt install cuda-12-2). |
| Use a container with matching versions | Pull an NVIDIA CUDA image that matches the driver (e.g., nvcr.io/nvidia/cuda:12.2.0-runtime-ubuntu22.04). |

4️⃣ CUDA shows a lower version than expected

| Cause | Remedy |
| --- | --- |
| Multiple installations | Locate all version files (see Method 3). Remove or rename older directories (/usr/local/cuda-11.8) and update the /usr/local/cuda symlink to point to the newest version. |
| PATH priority | Check echo $PATH (Linux) or echo %PATH% (Windows). Ensure the newest toolkit’s bin directory appears first. |
| Environment modules (common on HPC) | Load the correct module: module load cuda/12.4. Use module list to confirm. |
| Conda environments with cudatoolkit | Conda can shadow the system nvcc. Deactivate the env or install a matching cudatoolkit version (conda install cudatoolkit=12.4). |

💡 Pro Tip: After cleaning up, run hash -r (Linux) to flush the command cache.


CUDA Compatibility Guide

GPU Compute Capability Matrix

| GPU Family | Example Model | Compute Capability | Max Supported CUDA Toolkit |
| --- | --- | --- | --- |
| RTX 40xx | RTX 4090 | 8.9 | 12.x (2025) |
| RTX 30xx | RTX 3080 | 8.6 | 12.x (backward compatible) |
| GTX 16xx | GTX 1660 | 7.5 | 12.x (performance limited) |
| Tesla V100 | V100 | 7.0 | 12.x (via driver ≥ 525) |
| Jetson AGX Orin | AGX Orin | 8.7 | 12.x (embedded) |

Driver ↔ Runtime Compatibility

| Driver Version | Highest CUDA Runtime Supported |
| --- | --- |
| 550.xx | CUDA 12.4 |
| 535.xx | CUDA 12.2 |
| 525.xx | CUDA 12.0 |
| 520.xx | CUDA 11.8 |
| 470.xx | CUDA 11.4 |

Rule of thumb: Never run a toolkit newer than the driver can support.

Framework‑Specific Requirements (2025)

| Framework | Minimum CUDA Runtime | Recommended Toolkit |
| --- | --- | --- |
| PyTorch 2.3 | 11.8 | 12.1 |
| TensorFlow 2.16 | 12.3 | 12.3 |
| RAPIDS 23.12 | 12.0 | 12.0 |
| JAX 0.4.30 | 12.1 | 12.3 |

💡 Pro Tip: Check the official compatibility table on each framework’s site before upgrading.


Docker & Container Environments

Containers isolate the toolkit from the host while sharing the host driver. The NVIDIA Container Toolkit (formerly nvidia‑docker2) mounts the driver into the container, so nvidia-smi inside the container reflects the host driver.

docker run --gpus all nvcr.io/nvidia/cuda:12.4.0-runtime-ubuntu22.04 nvidia-smi

Typical output

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.15    Driver Version: 550.54.15    CUDA Version: 12.4     |
+-----------------------------------------------------------------------------+

To verify the toolkit inside the container:

docker run --gpus all nvcr.io/nvidia/cuda:12.4.0-devel-ubuntu22.04 nvcc --version

If you need a different CUDA version than the host driver supports, choose a base image that matches the driver (e.g., 12.2.0-runtime-ubuntu22.04).

⚠️ Warning: Mixing a container built for CUDA 12.5 with a driver that only supports 12.4 will cause “CUDA driver version is insufficient” errors.


FAQ

1️⃣ Do I need to install CUDA if I’m using PyTorch/TensorFlow?
No. The binary wheels (torch, tensorflow) include a pre‑compiled CUDA runtime; a compatible NVIDIA driver is still required. Install the full CUDA Toolkit only if you want to compile custom kernels or use a newer toolkit than the wheel ships.

2️⃣ Can I have multiple CUDA versions installed?
Yes. NVIDIA allows side‑by‑side installations (e.g., /usr/local/cuda-11.8 and /usr/local/cuda-12.4). Use environment variables (CUDA_HOME, PATH) or module systems to select the active version, and remember to update the cuda symlink if you rely on it.

3️⃣ What’s the difference between CUDA and cuDNN?
CUDA is the low‑level platform for GPU computing (kernels, memory management). cuDNN is a higher‑level library that implements deep‑learning primitives (convolutions, pooling). cuDNN is optional for generic CUDA code but required by most deep‑learning frameworks.

4️⃣ Why does nvidia-smi show a different version than nvcc?
nvidia-smi reports the driver’s supported CUDA runtime, while nvcc reports the toolkit you installed. The driver is usually newer and can run older runtimes, so the numbers can differ without being an error.

5️⃣ How do I update my CUDA version?
1. Verify driver compatibility (see the matrix above). 2. Download the latest installer from NVIDIA. 3. Run the installer, choosing Custom if you want to keep older versions. 4. Update PATH and LD_LIBRARY_PATH (Linux) or the system environment (Windows). 5. Reboot.

6️⃣ What CUDA version does my GPU support?
Check your GPU’s Compute Capability on NVIDIA’s CUDA GPUs page. The driver exposes the highest runtime it can support. For example, an RTX 4090 (CC 8.9) supports CUDA 12.x and beyond.

7️⃣ Can I use CUDA on AMD GPUs?
No. CUDA is a proprietary NVIDIA platform. AMD offers ROCm (Radeon Open Compute) as an alternative, with a different API and tooling.

8️⃣ What happens if CUDA versions don’t match?
If the driver is older than the runtime an application requires, you will see errors like “CUDA driver version is insufficient”. If the toolkit is older than the driver, applications still run, but you won’t have access to the newest compiler features.

9️⃣ How do I switch between multiple CUDA versions?
Adjust PATH, LD_LIBRARY_PATH, and CUDA_HOME to point to the desired version. On Linux, you can use module load cuda/12.4. On Windows, change the order of entries in the system Path variable.

🔟 Do I need CUDA for CPU‑only machine learning?
No. CPU‑only libraries (e.g., scikit‑learn, XGBoost without GPU support) run without any CUDA installation. Installing CUDA is unnecessary unless you plan to use GPU acceleration.

Best Practices

  1. Keep the driver up to date – The driver is the single component that guarantees backward compatibility. Use the NVIDIA CUDA‑compatible driver package for your OS.
  2. Document the environment – Store nvidia-smi, nvcc --version, and pip freeze outputs in a requirements.txt or env_report.txt (a helper sketch follows this list).
  3. Leverage virtual environments – Conda or venv can isolate the cudatoolkit package from the system install, preventing version clashes.
  4. Check compatibility matrices before upgrading any component (GPU driver, CUDA Toolkit, cuDNN, framework).
  5. Prefer Docker for reproducibility. A single Dockerfile can lock the driver version (via host) and the toolkit version (via base image).
  6. Version management tips – Use the update-alternatives system on Linux to switch the default nvcc. On Windows, maintain separate shortcuts like CUDA 12.4 Command Prompt.
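
To make practice #2 painless, a small Python script can capture all three outputs in one file. This is a minimal sketch; the env_report.txt filename is just the convention suggested above:

import subprocess
from datetime import datetime, timezone

def capture(cmd):
    # Return a command's output, or a note if the tool is unavailable.
    try:
        return subprocess.run(cmd, capture_output=True, text=True).stdout
    except OSError:
        return cmd[0] + ": not found\n"

report = "\n".join([
    "# Environment report, " + datetime.now(timezone.utc).isoformat(),
    capture(["nvidia-smi"]),
    capture(["nvcc", "--version"]),
    capture(["pip", "freeze"]),
])

with open("env_report.txt", "w") as fh:
    fh.write(report)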

Summary & Quick‑Reference Table

| Method | Command | Returns | When to use |
| --- | --- | --- | --- |
| nvidia-smi | nvidia-smi (read the banner) | Driver‑exposed CUDA runtime version | Quick check, no toolkit needed |
| nvcc | nvcc --version | Installed CUDA Toolkit version | Need the compile‑time version |
| Version file | cat /usr/local/cuda/version.json (version.txt on older toolkits) | Toolkit version from file | Minimal environments, containers |
| PyTorch | torch.version.cuda | CUDA version baked into PyTorch | Inside a Python/ML project |
| TensorFlow | tf.sysconfig.get_build_info()['cuda_version'] | TensorFlow’s CUDA runtime | TensorFlow projects |
| Registry | Get-ChildItem on the GPU Computing Toolkit\CUDA key | Installed version subkeys | Windows admin scripts |
| Package manager | dpkg -l with a cuda filter | Installed package version | Debian/Ubuntu systems |
| cuDNN | grep CUDNN_MAJOR in cudnn_version.h | cuDNN major/minor/patch | When deep‑learning libraries need it |

Final recommendation: Run nvidia-smi first to confirm driver health, then nvcc --version (or the language‑specific check) to verify the toolkit your code will compile against. Keep both in sync with the matrix above, and you’ll avoid the majority of CUDA‑related headaches.


META DESCRIPTION: Learn how to check your CUDA version using nvidia-smi, nvcc, and six other methods. Complete guide for Windows, Linux, and macOS with troubleshooting tips.


Suggested Alt‑Text Descriptions for Illustrative Images

  1. Screenshot of nvidia-smi output showing driver and CUDA version on a Linux workstation.
  2. Terminal window displaying nvcc --version with CUDA 12.4 highlighted.
  3. Windows PowerShell window querying the CUDA version from the registry.
  4. Python REPL output where PyTorch reports torch.version.cuda = 12.4.
  5. Docker container log showing nvidia-smi inside an NVIDIA‑enabled container.
