Machine Learning - Supervised Flashcards

Flashcards in Machine Learning - Supervised Deck (84)

1

Classification

We are trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.

2

Classification: Linear Binary

Binary classification problems arise when we seek to separate two sets of data points in R^n, each corresponding to a given class. We seek to separate the two data sets using simple "boundaries", typically hyperplanes. Once the boundary is found, we can use it to predict the class a new point belongs to, by simply checking on which side of the boundary it falls.
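
A minimal sketch of that prediction step in Python, assuming a separating hyperplane w·x + b = 0 has already been found (the weights below are illustrative, not learned):

    import numpy as np

    def classify(x, w, b):
        """Predict the class of point x by checking which side of the
        hyperplane w.x + b = 0 it falls on (+1 or -1)."""
        return 1 if np.dot(w, x) + b >= 0 else -1

    # Example: a hand-picked boundary in R^2
    w = np.array([1.0, -1.0])                     # normal vector of the hyperplane
    b = 0.0
    print(classify(np.array([2.0, 1.0]), w, b))   # 1
    print(classify(np.array([0.0, 3.0]), w, b))   # -1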

3

Decision Trees

A tree of questions to guide an end user to a conclusion based on values from a single vector of data. The classic example is a medical diagnosis based on a set of symptoms for a particular patient. A common problem in data science is to automatically or semi-automatically generate decision trees based on large sets of data coupled to known conclusions. Example algorithms are CART and ID3. (Submitted by Michael Malak)

4

Decision Trees

Each node in the tree tests an attribute; decisions are represented by the edges leading away from that node, with leaf nodes representing the final decision for all instances that reach that leaf node.

5

Decision Trees: Best Uses

...

6

Decision Trees: Cons

1: Complex trees are hard to interpret; 2: Duplication within the same sub-tree is possible

7

Decision Trees: Dealing with missing values

1) Have a specific edge for no value; 2) track the number of instances that follow each path and assign the instance to the most popular edge; 3) give instances weights and split the instance evenly down each possible edge, then recombine the results at the leaf nodes using the weights (sketched below).
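
A sketch of the third strategy (weighted splitting) on a toy tree of nested dicts; the tree format here is an illustrative assumption, not a standard API:

    def predict_weighted(node, instance, weight=1.0):
        """Return {class: weight}, splitting an instance with a missing
        attribute evenly down each edge and recombining at the leaves."""
        if "leaf" in node:                        # all weight goes to the leaf's class
            return {node["leaf"]: weight}
        value = instance.get(node["attr"])        # None when the attribute is missing
        children = (node["edges"].values() if value is None
                    else [node["edges"][value]])
        share = weight / len(children)
        votes = {}
        for child in children:
            for cls, w in predict_weighted(child, instance, share).items():
                votes[cls] = votes.get(cls, 0.0) + w
        return votes

    # Toy tree testing "outlook"; an instance missing it splits 50/50
    tree = {"attr": "outlook",
            "edges": {"sunny": {"leaf": "play"}, "rainy": {"leaf": "stay"}}}
    print(predict_weighted(tree, {}))             # {'play': 0.5, 'stay': 0.5}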

8

Decision Trees: Definition

Each node in the tree tests a single attribute; decisions are represented by the edges leading away from that node, with leaf nodes representing the final decision.

9

Decision Trees: Example Applications

1: Star classification; 2: Medical diagnosis; 3: Credit risk analysis

10

Decision Trees: Flavors

CART, ID3
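
For reference, scikit-learn's DecisionTreeClassifier implements an optimized version of CART; a minimal usage sketch:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    # Fit a CART-style tree and classify two instances
    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(clf.predict(X[:2]))                     # [0 0]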

11

Decision Trees: Pros

1: Fast; 2: Robust to noise and missing values; 3: Accurate

12

Decision Trees: Pruning

Since decision trees are often created by continuously splitting the instances until there is no more information gain, it is possible that some splits were made with too little information gain and result in overfitting of the model to the training data. The basic idea is to check child nodes to see whether combining them with their parent would increase entropy by less than some threshold.
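
A bottom-up sketch of that check, assuming each node stores the class counts of the training instances that reach it (the tree format and threshold are illustrative assumptions):

    import math

    def entropy(counts):
        """Shannon entropy of a {class: count} histogram."""
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total)
                    for c in counts.values() if c > 0)

    def prune(node, threshold=0.1):
        """Collapse a node whose children are all leaves when the split
        reduces entropy (the information gain) by less than the threshold."""
        if "leaf" in node:
            return node
        node["edges"] = {v: prune(c, threshold) for v, c in node["edges"].items()}
        kids = list(node["edges"].values())
        if all("leaf" in k for k in kids):
            total = sum(sum(k["counts"].values()) for k in kids)
            kid_entropy = sum(sum(k["counts"].values()) * entropy(k["counts"])
                              for k in kids) / total
            if entropy(node["counts"]) - kid_entropy < threshold:  # gain too small
                majority = max(node["counts"], key=node["counts"].get)
                return {"leaf": majority, "counts": node["counts"]}
        return node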

13

Decision Trees: Replicated subtree problem

Because only a single attribute is tested at a node, it is possible that two copies of the same subtree will need to be placed in the tree if the attributes from that subtree were not tested in the root node.

14

Decision Trees: Restriction Bias

...

15

Decision Trees: Testing nominal values

If a node has an edge for each possible value of a nominal attribute, then that attribute will not be tested again further down the tree. If a node instead groups the values into subsets, with an edge per subset, then the attribute may be tested again.

16

Decision Trees: Testing numeric values

Numeric values can be tested for less than, greater than, equal to, or within some range. This can result in just two edges or in multiple edges, and there can also be an edge that represents no value. A numeric attribute may be tested multiple times in a single path.
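
A sketch of choosing a binary numeric test (value <= t vs. value > t) by information gain; the function names are illustrative:

    import math

    def entropy(labels):
        """Shannon entropy of a list of class labels."""
        n = len(labels)
        return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                    for c in set(labels))

    def best_threshold(values, labels):
        """Find the cut point t for a numeric attribute that maximizes
        information gain for the binary test value <= t."""
        best_t, best_gain = None, -1.0
        for t in sorted(set(values))[:-1]:        # candidate cut points
            left = [l for v, l in zip(values, labels) if v <= t]
            right = [l for v, l in zip(values, labels) if v > t]
            gain = entropy(labels) - (len(left) * entropy(left) +
                                      len(right) * entropy(right)) / len(labels)
            if gain > best_gain:
                best_t, best_gain = t, gain
        return best_t, best_gain

    print(best_threshold([1, 2, 8, 9], ["a", "a", "b", "b"]))   # (2, 1.0)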

17

Deep Learning

Refers to a class of methods that includes neural networks and deep belief nets. Useful for finding a hierarchy of the most significant features, characteristics, and explanatory variables in complex data sets. Particularly useful in unsupervised machine learning of large unlabeled datasets. The goal is to learn multiple layers of abstraction for some data. For example, to recognize images, we might want to first examine the pixels and recognize edges; then examine the edges and recognize contours; examine the contours to find shapes; the shapes to find objects; and so on.

18

Ensemble Learning

...

19

Hidden Layer

The second layer of a three-layer network; it receives the signals sent by the input layer and performs intermediary processing.
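
A minimal forward-pass sketch showing the hidden layer's role; the layer sizes and random weights are illustrative:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(4, 3))       # input layer (3 units) -> hidden layer (4 units)
    W2 = rng.normal(size=(2, 4))       # hidden layer (4 units) -> output layer (2 units)

    x = np.array([0.5, -1.0, 2.0])     # signals from the input layer
    h = sigmoid(W1 @ x)                # hidden layer: intermediary processing
    y = sigmoid(W2 @ h)                # output layer
    print(h.shape, y.shape)            # (4,) (2,)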

20

K-Nearest Neighbors: Cons

1: Performs poorly on high-dimensional datasets; 2: Expensive and slow to predict new instances; 3: Must define a meaningful distance function

21

K-Nearest Neighbors: Definition

K-NN is an algorithm that can be used when you have objects that have been classified or labeled, along with other similar objects that haven't been classified or labeled yet, and you want a way to label them automatically.
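
A from-scratch sketch of that labeling step, using Euclidean distance and majority vote (k and the data are illustrative):

    import math
    from collections import Counter

    def knn_predict(labeled, query, k=3):
        """Label `query` by majority vote among its k nearest labeled points."""
        nearest = sorted(labeled, key=lambda p: math.dist(p[0], query))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    # (point, label) pairs; the unlabeled query point gets a label automatically
    data = [((0, 0), "red"), ((0, 1), "red"), ((5, 5), "blue"), ((6, 5), "blue")]
    print(knn_predict(data, (1, 1)))   # 'red'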

22

K-Nearest Neighbors: Example Applications

1: Computer security: intrusion detection; 2: Fault detection in semiconductor manufacturing; 3: Video content retrieval; 4: Gene expression

23

K-Nearest Neighbors: Preference Bias

Good for distance-based approximations; good for outlier detection

24

K-Nearest Neighbors: Pros

1: Simple; 2: Powerful; 3: Lazy, no training involved; 4: Naturally handles multiclass classification and regression

25

K-Nearest Neighbors: Restriction Bias

Low-dimensional datasets

26

K-Nearest Neighbors: Type

Supervised learning, instance-based

27

KNN

K-NN is an algorithm that can be used when you have a bunch of objects that have been classified or labeled in some way, and other similar objects that haven't been classified or labeled yet, and you want a way to label them automatically.

28

Linear Regression

Trying to fit a linear continuous function to the data. Can be univariate or multivariate.

29

Linear Regression: Cons

1: Unable to model complex relationships; 2: Unable to capture nonlinear relationships without first transforming the inputs

30

Linear Regression: Definition

Trying to fit a linear continuous function to the data to predict results. Can be univariate or multivariate.
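
A univariate sketch using ordinary least squares (the data below are made up for illustration):

    import numpy as np

    # Fit y ~ w0 + w1 * x to toy data
    x = np.array([0.0, 1.0, 2.0, 3.0])
    y = np.array([1.1, 2.9, 5.2, 6.8])

    A = np.column_stack([np.ones_like(x), x])     # design matrix [1, x]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)     # w = [intercept, slope]
    print(w)                                      # approx. [1.09, 1.94]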