Most frameworks have been built against a specific CUDA version. Please refer to PyTorch or TensorFlow to find the correct CUDA version.
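For example, the CUDA version a framework was built against can usually be queried from Python (illustrative; whether these entries are available depends on your installation):

```sh
# Illustrative: print the CUDA version the framework was built against
python -c "import torch; print(torch.version.cuda)"                                        # PyTorch
python -c "import tensorflow as tf; print(tf.sysconfig.get_build_info()['cuda_version'])"  # TensorFlow (GPU build)
```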
Requirement | Version | Comment |
---|---|---|
CUDA Toolkit | ≥ 11.0 | |
CUDNN | ≥ 8.0 | see FAQ how to install |
whereis command | any | |
file command | any | |
EnvVar | Default | Description |
---|---|---|
CUDA_HOME | "/usr/local/cuda" | Path to CUDA home dir |
NVCPATH | | Used as include paths |
NVC_INCLUDE_PATH | | Used as include paths |
NVCPLUS_INCLUDE_PATH | | Used as include paths |
NVLIBRARY_PATH | | Used as library paths |
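If the CUDA toolkit is not installed in the default location, CUDA_HOME can be pointed at it before running SOL. A minimal sketch (the path below is only an example):

```sh
# Example path only; adjust to where your CUDA toolkit is actually installed
export CUDA_HOME=/usr/local/cuda-12.3
```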
SOL tells me that support for NVIDIA is not available? | |
---|---|
This is usually caused by a version of SOL without NVIDIA
support. Please check whether your SOL installation includes NVIDIA support (a sketch for checking the installed packages follows).
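A minimal sketch for this check, assuming SOL was installed via pip (the exact package names depend on your SOL distribution):

```sh
# List installed SOL-related packages and verify an NVIDIA-enabled variant is present
pip list | grep -i sol
```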
Which GPUs are supported? | |
---|---|
In general, all CUDA-capable GPUs starting from the Kepler architecture (e.g., Tesla K40) are supported.
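To see which NVIDIA GPUs are present in your system, you can list them with nvidia-smi (assumes the NVIDIA driver is installed):

```sh
# List the GPUs visible to the driver; Kepler cards (e.g. Tesla K40) or newer are supported
nvidia-smi -L
```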
SOL is unable to load CUDA/CUBLAS/CUDNN | |
---|---|
The most likely reason is a version mismatch. Please run the following commands:
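The exact commands depend on your setup; an illustrative sketch:

```sh
# Version of the locally installed CUDA toolkit
nvcc --version
# Versions of the CUDA-related Python packages (e.g. "cu11" vs "cu12")
pip list | grep -iE "cuda|nvidia"
```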
The CUDA toolkit and the Python packages need to have the same major version (e.g., "cu12" or "release 12.*"). Furthermore, both also need to match the version your AI framework was compiled for. Please check the homepage of the AI framework for further details.
SOL does not load CUDNN | |
---|---|
We do not bundle CUDNN with SOL. PyTorch installs it automatically from PyPI;
please also run the tests described in the previous FAQ entry. For TensorFlow or other
frameworks you need to install CUDNN manually. Either download it from https://developer.nvidia.com/cudnn (requires a free CUDA
developer account) or install it from PyPI, as sketched below.
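As a hedged example, recent CUDNN releases can also be installed from PyPI; the package name below assumes a CUDA 12.x environment and is only illustrative:

```sh
# Assumes CUDA 12.x; for CUDA 11.x the package is typically named nvidia-cudnn-cu11
pip install nvidia-cudnn-cu12
```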