Softsign is a mathematical activation function for deep neural networks. The Softsign activation function is also quite similar to the hyperbolic tangent (tanh) activation function. In this tutorial, you will learn how to implement the Softsign activation function step by step.
Softsign activation function's mathematical expression –
Here is the mathematical formula for the Softsign activation function.
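softsign(x) = x / (1 + |x|)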
Here is the derivative of the softsign activation function.
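softsign'(x) = 1 / (1 + |x|)^2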
Graphical Interpretation of the Softsign function –
Firstly, let's see the graph of the softsign activation function.
Similarly, here is the graph of the softsign function's derivative.
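If you want to generate these graphs yourself, here is a minimal plotting sketch (assuming NumPy and Matplotlib are installed) that draws the softsign curve and its derivative over the range [-10, 10].
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-10, 10, 500)
softsign_y = x / (1 + np.abs(x))           # softsign(x) = x / (1 + |x|)
derivative_y = 1 / (1 + np.abs(x)) ** 2    # softsign'(x) = 1 / (1 + |x|)^2

plt.plot(x, softsign_y, label="softsign(x)")
plt.plot(x, derivative_y, label="softsign'(x)")
plt.legend()
plt.show()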
Softsign Implementation with TensorFlow Keras –
In addition to the fundamentals, let's learn how to apply the softsign function to an ordinary tensor object.
Step 1:
Firstly, we have to import the TensorFlow module. After that, let's create the tensor object for which we will compute the softsign function.
import tensorflow as tf
input_tensor = tf.constant([-1.5, 9.0, 11.0], dtype=tf.float32)
Step 2:
Secondly, compute the softsign of the generated tensor object.
gen_output = tf.keras.activations.softsign(input_tensor)
Step 3:
Finally, let's convert the tensor object to a NumPy array.
gen_output.numpy()
Here is the final output for the implementation of the softsign function in the TensorFlow Keras module.
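Since softsign(x) = x / (1 + |x|), the three values work out to -1.5 / 2.5 = -0.6, 9 / 10 = 0.9, and 11 / 12 ≈ 0.9167, so the printed NumPy array should look roughly like this:
array([-0.6, 0.9, 0.9166667], dtype=float32)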
Significance of the Softsign function –
Most importantly, we use activation functions to introduce non-linearity into neural networks. They transform the input into a non-linear representation. This non-linearity is what differentiates a neural network from a linear regression algorithm.
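As a quick sketch of why: if you stack two layers with weights W1, W2 and biases b1, b2 but put no activation function between them, the composition is still a single linear map,
W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2),
so without a non-linear activation the whole network collapses to one linear model.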
Difference between the Softsign function and the hyperbolic tangent (tanh) –
As mentioned above, the Softsign function is similar to the tanh function, yet there are differences between them. The basic difference is that the hyperbolic tangent function converges (saturates) exponentially, while the Softsign function converges polynomially. This seemingly thin line creates a significant difference between the two and their use cases.
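As a rough sketch of what that means for large positive x:
1 - tanh(x) ≈ 2 e^(-2x)        (exponential approach to the asymptote)
1 - softsign(x) = 1 / (1 + x)        (polynomial approach to the asymptote)
Because the softsign derivative 1 / (1 + |x|)^2 also decays polynomially rather than exponentially, softsign saturates more gently than tanh.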
Advantage of the ReLU function over Softsign?
There is a clear reason behind ReLU's popularity: its low complexity and computation cost. ReLU is less computationally costly than the tanh, softmax, and softsign functions. Moreover, as you already know, deep learning neural networks are highly computationally intensive. Therefore, to achieve good performance, we often prefer the ReLU function over Softsign and other activation functions.
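For comparison, here is a minimal sketch that applies ReLU to the same input_tensor created in Step 1; instead of squashing values into (-1, 1), ReLU simply clips negative inputs to zero, which is a very cheap operation.
relu_output = tf.keras.activations.relu(input_tensor)
relu_output.numpy()  # expected: array([ 0.,  9., 11.], dtype=float32)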
Thanks
Data Science Learner Team