Feature Scaling is something that affects a Machine Learning model in many ways. I agree there are many situations where Feature Scaling is optional or not required. Still, there are many Machine Learning algorithms where Feature Scaling is a must-have step. For instance, Regression, K-Means Clustering and PCA are algorithms where Feature Scaling is a must-have technique. On the other hand, tree-based algorithms such as Decision Trees usually do not need Feature Scaling. Today in this tutorial we will explore the top 4 ways for Feature Scaling in Machine Learning.
Feature Scaling in Machine Learning –
There are many ways to scale a feature or column value. Which scaler performs best is completely scenario dependent. Let's start exploring them one by one –

1. Standardization –
This is one of the most used types of scaler in data preprocessing. It is also known as the z-score. It redistributes the data in such a way that mean = 0 and standard deviation = 1. Here is the formula for the calculation –
zscore = [current_value – mean(feature)] / standard_deviation(feature)
For the implementation, you may use sklearn.preprocessing.StandardScaler.
Please refer to the complete documentation on StandardScaler here.
Another use case of standardization is removing outliers from the dataset. Once you transform your dataset using the standard scaler, all values that fall outside [-3, 3] can be considered outliers in the dataset / feature.
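As a quick sketch of both points, here is StandardScaler standardizing a feature and then flagging z-score outliers (the toy values below are purely illustrative):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# illustrative toy feature: 49 typical values plus one extreme value
values = np.array([30.0] * 49 + [120.0]).reshape(-1, 1)

# StandardScaler redistributes the data so that mean = 0 and std = 1
z = StandardScaler().fit_transform(values)

# values whose z-score falls outside [-3, 3] can be flagged as outliers
outliers = values[np.abs(z.ravel()) > 3]
print(outliers)  # [[120.]]
```

Note that StandardScaler expects a 2-D array (samples, features), which is why the feature is reshaped to a single column.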
2. Mean Normalization –
Let's understand the formula first –
normalization_score = [current_value – mean(feature)] / [max(feature) – min(feature)]
The range of the normalized values is [-1, 1] with mean = 0. We need this feature scaling technique when we want zero-centric data.
If you are interested in reading more on this topic, especially the implementation, here is the scikit-learn implementation of Normalization.
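Scikit-learn does not ship a dedicated mean-normalization scaler, so the formula above can be hand-rolled in a few lines. The mean_normalize helper and the salary values below are illustrative assumptions, not part of any library:

```python
import numpy as np

def mean_normalize(feature):
    """Mean normalization (illustrative helper): mean 0,
    values in [-1, 1], per the formula above."""
    feature = np.asarray(feature, dtype=float)
    return (feature - feature.mean()) / (feature.max() - feature.min())

# illustrative toy salaries
salaries = [20000.0, 35000.0, 50000.0, 65000.0, 80000.0]
scaled = mean_normalize(salaries)
print(scaled)  # -0.5, -0.25, 0.0, 0.25, 0.5 – centred on zero
```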
3. MinMax Scaler Technique –
When you need to transform the feature magnitude into the [0, 1] range, this MinMax feature scaling technique is one of the best options. Here is the formula –
minmax_score = [current_value – min(feature)] / [max(feature) – min(feature)]
The official documentation of the MinMax Scaler implementation in scikit-learn is here.
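A minimal sketch of MinMaxScaler on a toy feature (the marks values are illustrative):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# illustrative toy feature (as a single column)
marks = np.array([[20.0], [40.0], [60.0], [80.0], [100.0]])

# default feature_range is (0, 1)
scaled = MinMaxScaler().fit_transform(marks)
print(scaled.ravel())  # 0.0, 0.25, 0.5, 0.75, 1.0
```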
4. Unit Vector –
This feature scaling technique is very useful when we need to transform each feature vector into unit form (a vector of length 1).
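In scikit-learn this corresponds to the Normalizer transformer, which rescales each sample (row) to unit L2 norm. A small sketch with toy data:

```python
import numpy as np
from sklearn.preprocessing import Normalizer

# each ROW (sample) is rescaled to unit length (L2 norm = 1)
X = np.array([[3.0, 4.0],
              [1.0, 2.0]])

unit = Normalizer(norm="l2").fit_transform(X)
print(unit[0])  # [0.6 0.8] – the 3-4-5 triangle scaled to length 1
print(np.linalg.norm(unit, axis=1))  # every row now has norm 1.0
```

Note the difference from the other scalers above: Normalizer works per sample (row-wise), not per feature (column-wise).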
For more information on Feature Scaling techniques, especially the implementation side, please have a look at the scikit-learn official documentation on preprocessing.
Conclusion –
Feature Scaling and related facts usually create confusion for data scientists during model development. This article was an effort to resolve that confusion. As I have already mentioned, Feature Scaling is completely use-case oriented. At the very beginning we explained where feature scaling is optional and where it is required. We are also planning to create a detailed article on this point – When to apply Feature Scaling.
Anyway, how did you find this article – Top 4 ways for Feature Scaling in Machine Learning? If you find any difficulty while understanding it, please let us know. If you think some more information on feature scaling should be added that is currently not here, you may contribute it in the form of a guest post.
Thanks
Data Science Learner Team