Loss Functions Flashcards

1
Q

Loss functions

A

Loss functions are a crucial part of machine learning algorithms. They measure the discrepancy between predicted and actual outcomes and guide the learning process that adjusts the model parameters.
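
As a minimal sketch of that loop (a toy linear model fit by one gradient step on a squared-error loss; every name and number here is illustrative, not from the card):

import numpy as np

# Toy linear model y = w * x, fit with one gradient step on squared error.
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])
w = 0.0     # initial parameter
lr = 0.05   # learning rate

pred = w * x
loss = np.mean((pred - y) ** 2)      # the loss scores the current fit
grad = np.mean(2 * (pred - y) * x)   # its gradient tells us how to change w
w -= lr * grad                       # parameter update guided by the loss
print(loss, w)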

2
Q
Mean Squared Error (MSE)
A

Used for regression tasks, MSE calculates the average squared difference between the actual and predicted values, which puts a high penalty on large errors. As a result, it is sensitive to outliers, since errors are squared before they are averaged.
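
A rough NumPy sketch of the calculation (the function name mse and the sample values are illustrative):

import numpy as np

def mse(y_true, y_pred):
    # Average of squared differences; squaring magnifies large errors.
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)

print(mse([3.0, 5.0, 2.0], [2.5, 5.0, 4.0]))  # about 1.417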

3
Q
Mean Absolute Error (MAE)
A

Also used for regression, MAE calculates the average absolute difference between actual and predicted values. It is less sensitive to outliers than MSE and penalizes errors linearly.
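
A matching NumPy sketch (again with illustrative names and sample values):

import numpy as np

def mae(y_true, y_pred):
    # Average of absolute differences; each error counts linearly.
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))

print(mae([3.0, 5.0, 2.0], [2.5, 5.0, 4.0]))  # about 0.833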

4
Q
Binary Cross-Entropy (Log Loss)
A

Used in binary classification problems, it calculates the negative log-likelihood of the true label given the predicted probability. It has the benefit of punishing the model heavily when it is confident and wrong.
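
One way to compute it in NumPy (the eps clipping is an added safeguard against log(0), not part of the definition above):

import numpy as np

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    # Negative log-likelihood of the true labels under the predicted probabilities.
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

# The confident wrong prediction (label 1, probability 0.01) dominates the loss.
print(binary_cross_entropy([1, 0, 1], [0.9, 0.1, 0.01]))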

5
Q
Categorical Cross-Entropy
A

Used in multi-class classification problems, it is a generalization of Binary Cross-Entropy. It calculates the negative log likelihood of the true label given the predicted probability distribution over all classes.
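
A NumPy sketch assuming one-hot labels and one row of predicted class probabilities per sample (names and values are illustrative):

import numpy as np

def categorical_cross_entropy(y_true_onehot, p_pred, eps=1e-12):
    y = np.asarray(y_true_onehot, dtype=float)
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1.0)
    # Negative log probability assigned to the true class, averaged over samples.
    return -np.mean(np.sum(y * np.log(p), axis=1))

y_true = [[1, 0, 0], [0, 0, 1]]              # one-hot labels
p_pred = [[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]]  # predicted probability distributions
print(categorical_cross_entropy(y_true, p_pred))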

6
Q
Hinge Loss
A

Used in Support Vector Machines and some neural network classifiers. It penalizes misclassified examples and correctly classified examples that fall inside the margin, while ignoring examples classified correctly with enough margin, which pushes the model toward a maximum-margin decision boundary.
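
A NumPy sketch assuming labels encoded as +1/-1 and raw decision scores (both conventions are assumptions, not stated in the card):

import numpy as np

def hinge_loss(y_true, scores):
    y = np.asarray(y_true, dtype=float)
    s = np.asarray(scores, dtype=float)
    # Zero loss only when the example is on the correct side with margin >= 1.
    return np.mean(np.maximum(0.0, 1.0 - y * s))

print(hinge_loss([1, -1, 1], [0.8, -2.0, -0.3]))  # the misclassified last example costs the most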

7
Q
Kullback-Leibler (KL) Divergence
A

It measures how one probability distribution diverges from a second, reference distribution. It is non-negative and asymmetric, and it is often used in unsupervised learning algorithms, for example as the regularization term in variational autoencoders.
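
A NumPy sketch of D_KL(P || Q) for discrete distributions (the clipping is an added guard against log(0) and division by zero):

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # D_KL(P || Q) = sum_i p_i * log(p_i / q_i); note it is not symmetric in p and q.
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    q = np.clip(np.asarray(q, dtype=float), eps, 1.0)
    return np.sum(p * np.log(p / q))

p = [0.9, 0.1]
q = [0.5, 0.5]
print(kl_divergence(p, q), kl_divergence(q, p))  # the two directions give different values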

8
Q
Huber Loss
A

A combination of MSE and MAE: it behaves like MSE for errors smaller than a threshold (delta) and like MAE for larger errors. It is less sensitive to outliers than MSE and is often used in robust regression.
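
A NumPy sketch with the usual threshold parameter delta (the default of 1.0 is an illustrative choice):

import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    small = np.abs(err) <= delta
    quadratic = 0.5 * err ** 2                    # MSE-like inside the delta band
    linear = delta * (np.abs(err) - 0.5 * delta)  # MAE-like outside it
    return np.mean(np.where(small, quadratic, linear))

print(huber_loss([3.0, 5.0, 2.0], [2.5, 5.0, 10.0]))  # the large error grows only linearly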

9
Q
Quantile Loss
A

Used in quantile regression to predict a particular quantile (or, with two quantiles, an interval) instead of a single point. It penalizes underestimates more heavily for higher quantiles and overestimates more heavily for lower quantiles.
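
A NumPy sketch of the pinball form for a single quantile q (q = 0.9 here is an illustrative choice):

import numpy as np

def quantile_loss(y_true, y_pred, q=0.9):
    # Underestimates cost q per unit of error, overestimates cost (1 - q).
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return np.mean(np.maximum(q * err, (q - 1) * err))

# With q = 0.9, predicting too low is penalized far more than predicting too high.
print(quantile_loss([10.0], [8.0]), quantile_loss([10.0], [12.0]))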

10
Q
Focal Loss
A

An adaptation of the binary cross-entropy loss, designed to address class imbalance by down-weighting easy, well-classified examples so that training focuses on hard-to-classify instances.
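
A NumPy sketch of the binary form; gamma = 2 and alpha = 0.25 are commonly cited defaults, but both are tunable assumptions here:

import numpy as np

def focal_loss(y_true, p_pred, gamma=2.0, alpha=0.25, eps=1e-12):
    y = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(p_pred, dtype=float), eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    # (1 - p_t)^gamma down-weights easy examples (p_t near 1), keeping focus on hard ones.
    return np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t))

# The confidently correct example contributes almost nothing; the hard one dominates.
print(focal_loss([1, 1], [0.95, 0.2]))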
