AI-LEARNING Flashcards
(42 cards)
provides a computer with data, rather than explicit instructions. Using these data, the computer learns to recognize patterns and becomes able to execute tasks on its own.
Machine Learning
given a data set of input-output pairs, learn a function to map inputs to outputs
Supervised Learning
a task where the function maps an input to a discrete output.
Classification
algorithm that, given an input, chooses the class of the nearest data point to that input
Nearest-Neighbor Classification
algorithm that, given an input, chooses the most common class out of the k nearest data points to that input
K-Nearest-Neighbor Classification
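The k-nearest-neighbor idea above can be sketched in a few lines. This is a minimal illustration using the naive distance-to-every-point approach the next card criticizes; the function and variable names are made up for the example.

```python
from collections import Counter
import math

def knn_classify(points, labels, query, k=3):
    """Return the most common label among the k points nearest to query."""
    # Naive approach: measure the distance from every point to the query.
    by_distance = sorted(
        range(len(points)),
        key=lambda i: math.dist(points[i], query),
    )
    nearest_labels = [labels[i] for i in by_distance[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]

# Example: two small clusters of 2-D points.
points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["A", "A", "A", "B", "B", "B"]
print(knn_classify(points, labels, (0.5, 0.5)))  # → A
```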
Drawback of the k-nearest-neighbors algorithm
Using a naive approach, the algorithm will have to measure the distance of every single point to the point in question, which is computationally expensive.
Solution to the drawback of k-nearest-neighbors algorithm
Use data structures that enable finding neighbors more quickly, or prune irrelevant observations.
for each data point, we adjust the weights to make our function more accurate.
Perceptron Learning Rule
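The weight-adjustment idea can be sketched as follows. This is an illustrative implementation, not a reference one: the learning rate `alpha`, the epoch count, and the function names are all choices made for the example.

```python
def perceptron_train(data, epochs=10, alpha=0.1):
    """data: list of (input_vector, label) pairs with labels 0 or 1."""
    w = [0.0] * len(data[0][0])  # weight vector, one weight per input
    b = 0.0                      # bias term
    for _ in range(epochs):
        for x, y in data:
            pred = perceptron_predict(w, b, x)
            error = y - pred  # 0 if correct; +1 or -1 if wrong
            # For each data point, adjust the weights toward the correct output.
            w = [wi + alpha * error * xi for wi, xi in zip(w, x)]
            b += alpha * error
    return w, b

def perceptron_predict(w, b, x):
    # Hard threshold on the dot product of the weight and input vectors.
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Example: learning the (linearly separable) logical OR function.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = perceptron_train(data)
print([perceptron_predict(w, b, x) for x, _ in data])  # → [0, 1, 1, 1]
```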
sequences of numbers
Vector
The weights and values in Perceptron Learning are represented using?
Vectors
Drawback of Perceptron Learning
data are messy, and it is rare that one can draw a line that neatly divides the observations into two classes without any mistakes
unable to express uncertainty, since the output can only be equal to 0 or to 1.
Hard Threshold
uses a logistic function which is able to yield a real number between 0 and 1, expressing confidence in the estimate
Soft Threshold
they are designed to find the maximum margin separator
Support Vector Machines
A boundary that maximizes the distance between itself and any of the data points
Maximum Margin Separator
Benefit of Support Vector Machines
they can represent decision boundaries with more than two dimensions, as well as non-linear decision boundaries
supervised learning task of learning a function mapping an input point to a continuous value
Regression
this function incurs a loss of 1 when the prediction is incorrect and a loss of 0 when it is correct
0-1 Loss Function
functions that can be used when predicting a continuous value
L1 and L2 loss functions
L1 Loss Function Formula
|actual - predicted|
L2 Loss Function Formula
(actual - predicted)^2
L1 vs L2
L₂ penalizes outliers more harshly than L₁ because it squares the difference
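The two loss formulas above, summed over a data set, make the outlier effect concrete. The data values here are invented purely to illustrate the point.

```python
def l1_loss(actual, predicted):
    # Sum of |actual - predicted| over all data points.
    return sum(abs(a - p) for a, p in zip(actual, predicted))

def l2_loss(actual, predicted):
    # Sum of (actual - predicted)^2 over all data points.
    return sum((a - p) ** 2 for a, p in zip(actual, predicted))

actual    = [1.0, 2.0, 3.0, 4.0]
predicted = [1.5, 2.5, 3.5, 14.0]  # the last prediction is an outlier

print(l1_loss(actual, predicted))  # → 11.5   (outlier contributes 10)
print(l2_loss(actual, predicted))  # → 100.75 (outlier contributes 100)
```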
when a model fits the training data so well that it fails to generalize to other data sets.
Overfitting
process of penalizing hypotheses that are more complex to favor simpler, more general hypotheses.
Regularization
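The regularization idea can be sketched as a cost function that adds a complexity penalty to the loss. The numbers and the complexity measure here are hypothetical, chosen only to show how the penalty can make a simpler hypothesis win.

```python
def cost(loss, complexity, lam=1.0):
    # Total cost = loss + lambda * complexity: a more complex hypothesis
    # must reduce the loss enough to justify its penalty.
    return loss + lam * complexity

# Hypothetical numbers: the complex model fits the training data slightly
# better, but the penalty makes the simpler model cheaper overall.
simple_cost  = cost(loss=10.0, complexity=2.0)  # → 12.0
complex_cost = cost(loss=9.5, complexity=6.0)   # → 15.5
print(simple_cost < complex_cost)  # → True
```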