One of the major problems encountered in the field of Machine Learning is overfitting: the inability of a model to make accurate predictions on data it has not seen before. An overfitted model usually performs very well on the dataset it was trained on, but performs poorly when used to predict on another dataset. Such a model is also said to have high variance. Regularization is a set of techniques in Deep Learning used to reduce the variance of a network and thereby prevent overfitting.
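As a minimal sketch of the idea, the snippet below adds an L2 (weight-decay) penalty to a mean-squared-error loss. The function name, the penalty strength `lam`, and the sample data are illustrative assumptions, not a prescribed implementation; the key point is that the penalty grows with the magnitude of the weights, discouraging the overly large parameters that often accompany overfitting.

```python
import numpy as np

def regularized_mse(y_true, y_pred, weights, lam=0.01):
    """Mean squared error plus an L2 penalty lam * sum(w^2) on the weights.

    The L2 term penalizes large weights, which tends to reduce variance
    (overfitting) at the cost of a small increase in bias.
    """
    mse = np.mean((y_true - y_pred) ** 2)
    l2_penalty = lam * np.sum(weights ** 2)
    return mse + l2_penalty

# Illustrative data: three predictions against three true values,
# and two hypothetical model weights.
y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.1, 1.9, 3.2])
w = np.array([0.5, -0.3])
print(regularized_mse(y_true, y_pred, w))
```

Increasing `lam` strengthens the penalty and pushes the optimizer toward smaller weights; setting it to zero recovers the plain, unregularized loss.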
In machine learning models, especially neural networks, the loss function informs us about the accuracy of our model's predictions. It does so by measuring the difference between the values predicted by the model and the true values in the dataset. Predictive models can be used for different tasks, including binary classification, multi-class classification, and regression, and different loss functions are used to calculate the loss for each of these tasks. When such loss functions are plotted over the parameter space, many local minima and saddle points are observed. Neural network models strive to find the set of parameters that minimizes the loss function.
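To make the task-to-loss mapping concrete, here is a small sketch of two common choices: mean squared error for regression and binary cross-entropy for binary classification. The function names and the epsilon-clipping detail are illustrative assumptions rather than a fixed API.

```python
import numpy as np

def mse(y_true, y_pred):
    # Regression loss: average squared difference between
    # predictions and true values.
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p):
    # Binary classification loss: penalizes confident wrong
    # probabilities much more heavily than uncertain ones.
    eps = 1e-12  # clip to avoid log(0)
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# Illustrative values: a regression and a classification example.
print(mse(np.array([2.0, 4.0]), np.array([2.5, 3.5])))
print(binary_cross_entropy(np.array([1, 0, 1]), np.array([0.9, 0.1, 0.8])))
```

Both losses approach zero as predictions approach the true values, which is why gradient-based training drives the parameters toward their minima.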
Bias and Variance are frequently used terms in Machine Learning that are essential for understanding the behavior and performance of ML models on datasets. However, many aspiring Machine Learning Engineers and Data Scientists struggle to grasp their precise definitions, so there is often confusion about how Bias and Variance are used in Machine Learning.