The AssertionError: Torch not compiled with CUDA enabled error occurs when CUDA-specific syntax is used with a CPU-only PyTorch build. There are multiple scenarios in which you can hit this error. Sometimes the CUDA usage is clear and visible, and the fix is as easy as disabling or removing it. In other scenarios, CUDA is invoked indirectly and is not explicitly visible, so we need to understand the internal working of the parameter or function that is causing the issue. In this article, we will go through the most common causes.
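Before applying any of the fixes below, it helps to confirm what your build actually supports. A minimal sketch (not from the original article) using `torch.cuda.is_available()` and a device fallback pattern that works on both CPU-only and CUDA builds:

```python
import torch

# On a CPU-only PyTorch build this prints False; calling .cuda() anyway
# is what raises "Torch not compiled with CUDA enabled".
print(torch.cuda.is_available())

# Portable pattern: fall back to the CPU instead of hard-coding .cuda()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
x = torch.randn(2, 3).to(device)  # works on both kinds of builds
print(x.device)
```

With this pattern the same script runs unchanged whether or not a CUDA build is installed.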
assertionerror: torch not compiled with cuda enabled ( Solution ) –
Solution 1: Switching from CUDA to normal version –
Usually, while building a neural network in PyTorch, we can append a .cuda() call to move it to the GPU. Simply removing that call removes the error. Refer to the example below; if you are using a similar syntax pattern, remove the .cuda() call when building the network.
from torch import nn

net = nn.Sequential(
    nn.Linear(18 * 18, 80),
    nn.ReLU(),
    nn.Linear(80, 80),
    nn.ReLU(),
    nn.Linear(80, 10),
    nn.LogSoftmax()
).cuda()
The correct way is –
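A minimal sketch of the CPU-only version: the only substantive change is dropping the `.cuda()` call (`dim=1` is also passed to `LogSoftmax` to silence a deprecation warning; the original omitted it).

```python
from torch import nn

# Same network as above, without the .cuda() call, so it runs on a
# CPU-only PyTorch build.
net = nn.Sequential(
    nn.Linear(18 * 18, 80),
    nn.ReLU(),
    nn.Linear(80, 80),
    nn.ReLU(),
    nn.Linear(80, 10),
    nn.LogSoftmax(dim=1),  # dim made explicit; not in the original snippet
)
```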
Solution 2: Installing CUDA-supported PyTorch –
The bottom line is that when you face such an incompatibility issue, you either adjust your code to the libraries available on the system, or you install compatible libraries to get rid of the error.
You may use any package manager to install CUDA-supported PyTorch. Use either of the commands below –
conda install pytorch==1.11.0 torchvision==0.12.0 torchaudio==0.11.0 cudatoolkit=11.3 -c pytorch
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0 --extra-index-url https://download.pytorch.org/whl/cu113
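After installing, you can verify which build you got. A small check (illustrative, not from the original article):

```python
import torch

# For the pip wheel above this typically prints something like "1.11.0+cu113";
# a CPU-only build has no "+cuXXX" suffix.
print(torch.__version__)

# The CUDA toolkit version the wheel was built against, or None on a
# CPU-only build.
print(torch.version.cuda)
```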
Solution 3: set pin_memory=False –
This is another case where CUDA is not visible directly. Internally, when pin_memory is True, the loader copies batches into pinned (page-locked) host memory staged for transfer to CUDA devices, which fails on a CPU-only build. To avoid the error, we have to make it False. One more thing: in torch.utils.data.DataLoader itself the default is False, but some wrapper utilities (such as a get_iterator helper) default it to True, so there we have to pass pin_memory=False explicitly.
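A minimal sketch using `torch.utils.data.DataLoader` directly (the `get_iterator` wrapper mentioned above is project-specific; the `pin_memory` argument is the same either way):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset of 8 samples with 4 features each.
dataset = TensorDataset(torch.randn(8, 4), torch.arange(8))

# pin_memory=False keeps batches in ordinary pageable host memory instead
# of pinned memory staged for GPU transfer, so no CUDA support is needed.
loader = DataLoader(dataset, batch_size=4, pin_memory=False)

for features, labels in loader:
    print(features.shape, labels.shape)
```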
Benefits of CUDA with Torch –
CUDA is a parallel computing framework that provides an application interface to the system's NVIDIA graphics card. Complex operations such as deep learning model training, where we have to run operations like backpropagation, demand massive parallelism. GPUs provide great support for such parallel processing, and to use an NVIDIA GPU we need CUDA. PyTorch, TensorFlow, and other deep learning frameworks rely on GPU handling for high performance. They work fine on a CPU for small datasets, few epochs, and so on, but the dataset for any state-of-the-art algorithm is usually large in volume. Hence we need CUDA with PyTorch (the Python binding of Torch).
Data Science Learner Team