AssertionError: no inf checks were recorded for this optimizer (Fix)


The assertionerror: no inf checks were recorded for this optimizer error occurs mainly because of NaN or infinity values that appear while converging the weights. Typically, the aim of a neural network is to minimize the loss function by adjusting the weights. But if, during this process, the framework (PyTorch) does not handle infinity values properly, the interpreter throws this error. That is the general cause. In this article, I will walk through a few quick checks that can fix this error. So let's start –


assertionerror: no inf checks were recorded for this optimizer (Solution) –

Here are a few things which you can try out –

Solution 1: Using backward() on the scaled loss in PyTorch –

It is a common mistake to miss the backward() call on the scaled loss. Without it, the gradients are never computed and the scaler never records its inf/NaN checks, which ends up in this error. Hence we must call backward() on the scaled loss.

Syntax –

 scaler.scale(loss).backward()
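To see where that call fits, here is a minimal sketch of one GradScaler training step. The model, optimizer, and data below are toy placeholders, not from the original article; the scaler is disabled on CPU-only machines so the sketch still runs there.

```python
import torch
from torch import nn

# Toy placeholders for illustration only.
model = nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
use_cuda = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # no-op scaler on CPU

inputs = torch.randn(8, 4)
targets = torch.randn(8, 1)

optimizer.zero_grad()
loss = nn.functional.mse_loss(model(inputs), targets)

# The crucial line: backward() on the *scaled* loss records the inf checks
# that scaler.step() later asserts on.
scaler.scale(loss).backward()
scaler.step(optimizer)  # raises the AssertionError if backward() was skipped
scaler.update()
```

If you call `scaler.step(optimizer)` without the `scaler.scale(loss).backward()` line, the scaler has nothing to check and fails with exactly this assertion.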


Solution 2: Skip the batch with NaN values –

This very straightforward approach is the odd one out. At the implementation level, we simply check each batch for NaN values and skip the affected batch during training. This is not the best way to fix the issue, but if the dataset is huge and one or two batches are not that significant for accuracy and performance, it is the easiest fix.
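A minimal sketch of this check, with toy batches invented for illustration (the second one deliberately contains a NaN):

```python
import torch
from torch import nn

# Toy placeholders for illustration only.
model = nn.Linear(3, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler(enabled=torch.cuda.is_available())

batches = [
    (torch.randn(4, 3), torch.randn(4, 1)),
    (torch.tensor([[float('nan'), 0.0, 1.0]]), torch.zeros(1, 1)),  # bad batch
]

skipped = 0
for inputs, targets in batches:
    # Skip any batch whose inputs or targets contain NaN/inf.
    if not (torch.isfinite(inputs).all() and torch.isfinite(targets).all()):
        skipped += 1
        continue
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

The bad batch never reaches `backward()`, so NaNs never pollute the gradients or the scaler's checks.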

Solution 3: Using TORCH.NAN_TO_NUM –

Continuing from solution 2, if you do not want to skip any batch because of data limits etc., then torch.nan_to_num is a good option.


By default, it replaces NaN with zero, positive infinity with the greatest finite value representable by the tensor's dtype, and negative infinity with the least finite value.

var = torch.tensor([float('inf'), -float('inf'), 1.15])
torch.nan_to_num(var)

Here the positive and negative inf get finite numeric values, which solves our problem. If you are still stuck, please let us know.
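You can also override the default replacements with the `nan`, `posinf`, and `neginf` arguments of `torch.nan_to_num`, e.g. to clamp infinities to a modest range instead of the dtype's extremes (the 1e4 bound below is just an illustrative choice):

```python
import torch

var = torch.tensor([float('nan'), float('inf'), -float('inf'), 1.15])

# Defaults: nan -> 0.0, inf -> largest finite float, -inf -> most negative finite float.
cleaned = torch.nan_to_num(var)

# Custom replacements: clamp infinities to +/-1e4 and keep NaN -> 0.0.
clamped = torch.nan_to_num(var, nan=0.0, posinf=1e4, neginf=-1e4)
```

Clamping to a small range can be safer than the dtype extremes, since values near float32's maximum can overflow back to inf in later arithmetic.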

Thanks
Data Science Learner Team


Meet Abhishek (Chief Editor), a data scientist with expertise in NLP and Text Analytics. He has worked on various projects involving text data and has been able to achieve great results. He currently manages Datasciencelearner.com, where he and his team share knowledge and help others learn more about data science.