Week 2 Flashcards

1
Q

Define a discriminant function

A

A function g(x) of the feature vector x used to assign x to a class: for c classes, compute g_1(x), ..., g_c(x) and assign x to class i when g_i(x) > g_j(x) for all j != i.

2
Q

Mathematical definition of a dichotomizer's discriminant function

A

A dichotomizer uses a single discriminant function g(x) = g_1(x) - g_2(x), the difference of the two class discriminants; the sign of g(x) determines the class.

3
Q

Define a linear discriminant function

A

g(x) = w^T x + w0, where w is the weight vector and w0 is the bias (threshold) weight. The decision boundary g(x) = 0 is a hyperplane with normal vector w.

4
Q

What is a linear machine

A

A classifier composed of c linear discriminant functions g_i(x) = w_i^T x + w_i0, one per class, that assigns x to the class with the largest g_i(x). Its decision regions are convex and its decision boundaries are sections of hyperplanes.

5
Q

Categorisation rule for a dichotomiser

A

Decide class 1 if g(x) > 0 and class 2 if g(x) < 0; if g(x) = 0, x lies on the decision boundary and may be assigned to either class.

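A minimal NumPy sketch of the dichotomiser rule and a linear machine; the weights and sample point are illustrative assumptions, not from the cards:

```python
import numpy as np

def dichotomizer(w, w0, x):
    """Two-class rule: class 1 if g(x) > 0, class 2 if g(x) < 0."""
    g = w @ x + w0
    return 1 if g > 0 else 2

def linear_machine(W, w0, x):
    """Assign x to the class whose linear discriminant g_i(x) is largest."""
    g = W @ x + w0               # one discriminant value per class
    return int(np.argmax(g))     # index of the winning class

x = np.array([1.0, 2.0])
print(dichotomizer(np.array([1.0, -1.0]), 0.5, x))   # g = -0.5, so class 2
print(linear_machine(np.array([[1.0, 0.0],
                               [0.0, 1.0],
                               [-1.0, -1.0]]), np.zeros(3), x))  # class index 1
```
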
6
Q

What is ‘augmenting’ a vector

A

Extending a vector/matrix by adding elements to it; e.g. the feature vector x is augmented to y = (1, x) and the weight vector to a = (w0, w), so that g(x) = w^T x + w0 = a^T y.

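A one-line NumPy illustration of augmentation, using the notation above (values are illustrative):

```python
import numpy as np

x = np.array([2.0, 3.0])              # original feature vector
w, w0 = np.array([0.5, -1.0]), 0.25

y = np.hstack(([1.0], x))             # augmented feature vector y = (1, x)
a = np.hstack(([w0], w))              # augmented weight vector a = (w0, w)

assert np.isclose(a @ y, w @ x + w0)  # g(x) = a^T y
```
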
7
Q

Generalised linear discriminant function

A

g(x) = a^T y = sum_{i=1}^{d'} a_i y_i(x), where the y_i(x) are arbitrary (possibly non-linear) functions of x. g is linear in the weights a, so linear training methods still apply in the transformed y-space.

8
Q

Quadratic discriminant function

A

g(x) = w0 + sum_i w_i x_i + sum_i sum_j w_ij x_i x_j, i.e. a linear discriminant plus second-order product terms. The decision boundary g(x) = 0 is a hyperquadric surface.

9
Q

Generalised n-dimensional discriminant function

A

Continue adding higher-order terms to obtain polynomial discriminant functions of any degree. Each is a special case of g(x) = a^T y, where the mapping y(x) takes x into a higher-dimensional space in which the discriminant is linear.

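A small sketch of a quadratic discriminant written as a generalised linear one, g(x) = a^T y(x), for the 2-D case; the feature map and weights are illustrative assumptions:

```python
import numpy as np

def quad_features(x):
    """Map 2-D x to y(x) = (1, x1, x2, x1^2, x1*x2, x2^2)."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2])

a = np.array([0.5, 1.0, -2.0, 0.3, 0.0, -0.1])  # illustrative weights
x = np.array([1.0, 2.0])
g = a @ quad_features(x)   # quadratic in x, linear in the weights a
print(g)
```
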
10
Q

Formulate the problem of learning linear discriminant functions

A

Given a set of labelled training samples, find discriminant functions g_i(x) = a_i^T y that classify every sample correctly, where each a is a vector of weights w (in augmented form). Learning is then a search for a solution vector a, usually by minimising a criterion function J(a).

11
Q

2-category linearly separable case: when is a sample correctly classified

A

A sample y_i from class 1 is correctly classified if a^T y_i > 0, and one from class 2 if a^T y_i < 0. The training set is linearly separable if some weight vector a classifies every sample correctly.

12
Q

2-category linearly separable case: sample normalisation

A

Replace every class-2 sample by its negation (y_i -> -y_i). After this normalisation a solution vector need only satisfy a^T y_i > 0 for all samples, which simplifies the analysis.

13
Q

2-category linearly separable case: margin

A

Demand a^T y_i >= b for some margin b > 0 instead of merely a^T y_i > 0. The margin keeps the solution away from the boundary of the solution region, ruling out solutions arbitrarily close to it.

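The three conditions from the last cards, written out in LaTeX (the last two lines assume normalised samples):

```latex
% Two-class linearly separable conditions (amsmath).
\begin{aligned}
  \text{correctly classified:} \quad & a^{T}y_i > 0 \;\; (y_i \in \text{class 1}),
      \qquad a^{T}y_i < 0 \;\; (y_i \in \text{class 2}) \\
  \text{after normalisation:}  \quad & a^{T}y_i > 0 \quad \text{for all } i \\
  \text{with margin } b > 0:   \quad & a^{T}y_i \ge b \quad \text{for all } i
\end{aligned}
```
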
14
Q

Gradient descent procedure

A

To minimise a criterion function J(a): start from an arbitrary a(1), then repeatedly move against the gradient, a(k+1) = a(k) - eta(k) grad J(a(k)), where eta(k) > 0 is the learning rate, stopping when the change becomes negligible.

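A minimal sketch of the procedure; grad_J, the learning rate, and the example criterion are illustrative assumptions:

```python
import numpy as np

def gradient_descent(grad_J, a, eta=0.1, tol=1e-6, max_iter=1000):
    """Iterate a(k+1) = a(k) - eta * grad J(a(k)) until the step is tiny."""
    for _ in range(max_iter):
        step = eta * grad_J(a)
        a = a - step
        if np.linalg.norm(step) < tol:
            break
    return a

# Example: J(a) = ||a - c||^2 has gradient 2(a - c); the minimum is at c.
c = np.array([1.0, -2.0])
print(gradient_descent(lambda a: 2.0 * (a - c), np.zeros(2)))  # approx. c
```
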
15
Q

Perceptron criterion function

A

J_p(a) = sum_{y in Y} (-a^T y), where Y is the set of samples misclassified by a (using normalised samples). J_p is never negative and is zero only when no sample is misclassified; its gradient, sum_{y in Y} (-y), drives the perceptron update.

16
Q

Batch perceptron learning algorithm

A

The new weight vector is obtained by summing the feature vectors associated with all misclassified samples and adding some multiple of this sum to the current weight vector: a(k+1) = a(k) + eta(k) sum_{y in Y_k} y.
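
A sketch of the batch rule, assuming normalised augmented samples (class-2 samples already negated) and a fixed learning rate:

```python
import numpy as np

def batch_perceptron(Y, eta=1.0, max_epochs=100):
    """Y: n x d matrix of normalised samples; seek a with a^T y > 0 for every row."""
    a = np.zeros(Y.shape[1])
    for _ in range(max_epochs):
        misclassified = Y[Y @ a <= 0]            # rows with a^T y <= 0
        if len(misclassified) == 0:              # all samples correct: done
            break
        a = a + eta * misclassified.sum(axis=0)  # add the sum of error vectors
    return a
```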

17
Q

Sequential perceptron learning algorithm

A

Visit the (normalised) samples one at a time: whenever the current sample y_k is misclassified (a^T y_k <= 0), update a <- a + eta y_k; correctly classified samples leave a unchanged. Stop when a complete pass over the data produces no updates.
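
A matching sketch of the sequential rule, under the same normalised-sample assumption as the batch version above:

```python
import numpy as np

def sequential_perceptron(Y, eta=1.0, max_epochs=100):
    """Update on each misclassified sample as it is visited."""
    a = np.zeros(Y.shape[1])
    for _ in range(max_epochs):
        updated = False
        for y in Y:                 # one sample at a time
            if a @ y <= 0:          # misclassified
                a = a + eta * y
                updated = True
        if not updated:             # an error-free pass: converged
            break
    return a
```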

18
Q

Multi-class perceptron learning algorithm

A

For a training sample y of true class i that the current machine assigns to class j (i.e. a_j^T y >= a_i^T y with j != i), update a_i <- a_i + eta y and a_j <- a_j - eta y, leaving all other weight vectors unchanged; correctly classified samples cause no update.

19
Q

Compare Minimum Squared Error to the perceptron learning procedure

A

The perceptron uses only the misclassified samples, solves the inequalities a^T y_i > 0, and converges only when the data are linearly separable. MSE uses all the samples, solves the equations a^T y_i = b_i for chosen targets b_i, and always yields a solution, even for non-separable data; however, that solution is not guaranteed to separate classes that are in fact separable.

20
Q

MSE via pseudoinverse

A

Minimising J_s(a) = ||Ya - b||^2 gives the normal equations Y^T Y a = Y^T b, whose solution is a = (Y^T Y)^{-1} Y^T b = Y^+ b, where Y^+ is the pseudoinverse of the sample matrix Y.
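
A direct sketch using NumPy's pinv; the sample matrix and unit targets are illustrative:

```python
import numpy as np

def mse_pseudoinverse(Y, b):
    """Solve min ||Ya - b||^2 via a = pinv(Y) @ b."""
    return np.linalg.pinv(Y) @ b

# Normalised samples as rows; targets b set to 1 (unit margins).
Y = np.array([[ 1.0,  2.0,  1.0],
              [ 1.0,  1.0,  2.0],
              [-1.0, -3.0, -1.0]])
a = mse_pseudoinverse(Y, np.ones(3))
print(Y @ a)   # close to the targets b
```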

21
Q

MSE via gradient descent

A

Apply gradient descent to J_s(a) = ||Ya - b||^2: since grad J_s = 2 Y^T (Ya - b), the update is a(k+1) = a(k) + eta(k) Y^T (b - Y a(k)). This avoids computing the pseudoinverse and works even when Y^T Y is singular.

22
Q

Widrow-Hoff (LMS)

A

The Widrow-Hoff or least-mean-squares (LMS) procedure minimises J_s(a) = ||Ya - b||^2 by gradient descent with a decreasing learning rate, typically eta(k) = eta(1)/k, which yields a limiting weight vector whether or not Y^T Y is singular.

23
Q

Sequential Widrow-Hoff

A

Consider the samples one at a time and correct the weights in proportion to the error on the current sample: a(k+1) = a(k) + eta(k) (b_k - a(k)^T y_k) y_k.
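
A sketch of the sequential rule; a fixed small eta is assumed here, though a decreasing eta(k) = eta(1)/k is the usual choice:

```python
import numpy as np

def sequential_widrow_hoff(Y, b, eta=0.01, epochs=100):
    """LMS: correct a in proportion to the error on each sample."""
    a = np.zeros(Y.shape[1])
    for _ in range(epochs):
        for y_k, b_k in zip(Y, b):
            a = a + eta * (b_k - a @ y_k) * y_k
    return a
```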

24
Q

What is an epoch

A

One complete pass through all the training data

25
Q

What is the advantage of KNN

A

It provides a way to partition the feature space without learning
26
Q

Advantages of a larger k value in KNN

A

Smoother decision boundaries
More probabilistically accurate
27
Q

Disadvantages of a larger k for KNN

A

May include samples far from x
Increases computational cost
28
Q

Advantages of KNN

A

Simple to implement
No training phase is required
Naturally yields non-linear decision boundaries
29
Q

Disadvantages of KNN

A

All training samples must be stored
Classifying a new point requires computing distances to every stored sample
Sensitive to the choice of k and to feature scaling
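
A minimal k-NN sketch using Euclidean distance and a majority vote; the data are illustrative:

```python
import numpy as np
from collections import Counter

def knn_classify(X_train, labels, x, k=3):
    """Vote among the k training samples nearest to x."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every sample
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Two classes in 2-D.
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y = ["class 1", "class 1", "class 2", "class 2"]
print(knn_classify(X, y, np.array([0.2, 0.1])))   # -> class 1
```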