Machine Learning - Model Evaluation Flashcards

Flashcards in Machine Learning - Model Evaluation Deck (20)
1

Accuracy

Percent of all predictions that were correct.
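
In binary classification this can be written in terms of confusion-matrix counts (true/false positives TP, FP and true/false negatives TN, FN):

$$\text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$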

2

Confusion Matrix

A matrix showing the predicted and actual classifications. A confusion matrix is of size LxL, where L is the number of different label values: rows represent the actual values, cross-tabulated against columns representing the predicted values.
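
A minimal sketch of building such a matrix in plain Python; the label lists here are hypothetical stand-ins:

```python
from collections import Counter

actual    = ["cat", "cat", "dog", "dog", "bird", "cat"]
predicted = ["cat", "dog", "dog", "dog", "bird", "cat"]

labels = sorted(set(actual) | set(predicted))  # the L distinct label values
counts = Counter(zip(actual, predicted))       # tally (actual, predicted) pairs

# Rows are actual values, columns are predicted values.
print("actual \\ predicted:", labels)
for a in labels:
    print(a, [counts[(a, p)] for p in labels])
```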

3

Cross-validation: Overview

A method for estimating the accuracy of an inducer by dividing the data into k mutually exclusive subsets ("folds") of approximately equal size. The inducer is trained and tested k times: each time it is trained on all of the data except one fold and tested on that held-out fold. The accuracy estimate is the average accuracy over the k folds.
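
A sketch of this procedure with k = 5, assuming scikit-learn is available; the dataset and model here are stand-ins:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Train and test k times; each fold serves once as the held-out test set.
scores = cross_val_score(model, X, y, cv=5)

# The accuracy estimate is the average accuracy over the k folds.
print(scores.mean())
```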

4

Cross-validation: How

Leave-one-out cross-validation

k-fold cross-validation

Training and validation data sets have to be drawn from the same population.

The step of choosing the kernel parameters of an SVM should be cross-validated as well (see the nested sketch below).
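
A sketch of that last point as nested cross-validation, assuming scikit-learn; the parameter grid is illustrative only:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Inner loop: choose the SVM kernel parameters C and gamma by cross-validation.
inner = GridSearchCV(SVC(kernel="rbf"),
                     {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]}, cv=3)

# Outer loop: estimate the accuracy of the whole procedure, including the
# parameter-selection step, so the estimate is not optimistically biased.
print(cross_val_score(inner, X, y, cv=5).mean())
```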

5

Model Comparison

...

6

Model Evaluation: Adjusted R^2 (R-Square)

The method preferred by statisticians for determining which variables to include in a model. It is a modified version of R^2 that penalizes each new variable on the basis of how many have already been admitted. By construction, R^2 always increases as you add new variables, which results in models that over-fit the data and have poor predictive ability. Adjusted R^2 yields more parsimonious models that admit new variables only if the improvement in fit is larger than the penalty, which serves the ultimate goal of out-of-sample prediction. (Submitted by Santiago Perez)
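
With n observations and p predictors, the usual formula is:

$$\bar{R}^2 = 1 - \left(1 - R^2\right)\frac{n - 1}{n - p - 1}$$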

7

Model Evaluation: Decision tables

The simplest way of expressing output from machine learning: each cell in the table holds the resulting decision for the conditions represented by that cell's row and column.
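
A tiny hypothetical example in Python, with the row and column conditions forming the lookup key:

```python
# Rows = outlook, columns = windy; each cell is the resulting decision.
decision_table = {
    ("sunny", False): "play",
    ("sunny", True):  "play",
    ("rainy", False): "play",
    ("rainy", True):  "don't play",
}

print(decision_table[("rainy", True)])  # -> don't play
```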

8

Model Evaluation: Mis-classification error

Test error for classification, defined as the summed 0/1 error over the test set: each false prediction counts as one error.
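
Over a test set of $m_{\text{test}}$ examples, with hypothesis $h$ and the indicator function $\mathbf{1}\{\cdot\}$ (often divided by $m_{\text{test}}$ to report a rate rather than a count):

$$\text{Test error} = \sum_{i=1}^{m_{\text{test}}} \mathbf{1}\left\{ h\left(x^{(i)}\right) \neq y^{(i)} \right\}$$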

9

Model Evaluation: Negative class

Negative means the absence of what we are looking for - 0 (e.g., not having the symptoms).

10

Model Evaluation: Positive class

Presence of something we are looking for - 1.

11

Model Evaluation: Precision

Of all predicted positives, how many are actually positive?
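
In confusion-matrix terms:

$$\text{precision} = \frac{TP}{TP + FP}$$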

12

Model Evaluation: Recall

Of all actual positives, how many were predicted as positive?
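
In confusion-matrix terms:

$$\text{recall} = \frac{TP}{TP + FN}$$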

13

Model Evaluation: True negative

The hypothesis correctly predicts a negative output.

14

Model Evaluation: True positive

The hypothesis correctly predicts a positive output.

15

Model Selection Algorithm

An algorithm that automatically selects a good model function for a dataset.
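
One simple form such an algorithm can take, assuming scikit-learn: score each candidate model by cross-validation and keep the best.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Candidate model functions; keep whichever cross-validates best.
candidates = [LogisticRegression(max_iter=1000), SVC(), DecisionTreeClassifier()]
best = max(candidates, key=lambda m: cross_val_score(m, X, y, cv=5).mean())
print(best)
```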

16

Precision

Percent of predicted Positives that were correct.

17

P-values

...

18

Receiver-Operator Curves

...

19

Sensitivity

aka Recall or True positive rate. Percent of actual Positives that were correctly predicted.

20

Specificity

aka True negative rate. Percent of actual Negatives that were correctly predicted.
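
Side by side in confusion-matrix terms (sensitivity is the same quantity as recall):

$$\text{sensitivity} = \frac{TP}{TP + FN} \qquad \text{specificity} = \frac{TN}{TN + FP}$$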