Week 7 Flashcards

1
Q

Feature engineering - modify

A

Modify measurements to make them more useful for classification, e.g. mapping the raw date 09/08/2019 to the day of the week, Friday
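A minimal sketch of such a transformation in Python; the parsing format is an assumption (DD/MM/YYYY matches the card's example, since 9 August 2019 was a Friday):

from datetime import datetime

# Replace a raw date string with a categorical day-of-week feature.
# The format "%d/%m/%Y" is assumed from the 09/08/2019 -> Friday example.
def day_of_week(date_str):
    return datetime.strptime(date_str, "%d/%m/%Y").strftime("%A")

print(day_of_week("09/08/2019"))  # -> Friday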

2
Q

Benefits of feature selection

A

Reduces the risk of overfitting

Reduces training time

3
Q

Feature extraction

A

Find new features that are functions of the raw data (e.g. combinations of measurements), rather than a subset of them as in feature selection

4
Q

Feature extraction example

A

E.g. PCA: build new features as linear combinations (projections) of all the raw measurements, rather than keeping a subset of them

5
Q

KLT

A

Karhunen-Loève transform: essentially another name for PCA; it projects the data onto the eigenvectors of its covariance matrix, decorrelating the components
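A minimal NumPy sketch of the KLT/PCA computation via the covariance eigendecomposition (the toy data and names are my own):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # 200 samples, 5 features
Xc = X - X.mean(axis=0)                  # centre the data

C = np.cov(Xc, rowvar=False)             # sample covariance (5 x 5)
vals, vecs = np.linalg.eigh(C)           # eigh returns ascending eigenvalues
E = vecs[:, np.argsort(vals)[::-1][:2]]  # top-2 principal directions

Z = Xc @ E                               # projected (KLT/PCA) features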

6
Q

NNs for PCA

A

Linear neurons trained with Hebbian-style rules can compute principal components: Oja's rule extracts the first principal component, Sanger's rule the first m components in order, and Oja's subspace rule the principal subspace. A linear autoencoder with a bottleneck also learns the principal subspace

7
Q

Oja’s rule

A

Hebbian learning for a single linear unit y = w^T x with a built-in normalising decay term:

Δw = η y (x - y w)

The -η y² w decay keeps ||w|| bounded, and w converges to the unit eigenvector of the input covariance with the largest eigenvalue, i.e. the first principal component

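A toy NumPy sketch of the rule converging to the first principal component (the data and step size are my own choices):

import numpy as np

rng = np.random.default_rng(0)
# Toy data whose first principal component lies along the x-axis.
X = rng.normal(size=(2000, 2)) * np.array([2.0, 0.5])

w = rng.normal(size=2)
eta = 0.005
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)    # Hebbian term minus normalising decay

print(w / np.linalg.norm(w))      # ~ +/-[1, 0], the first PC
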
8
Q

Sanger’s rule

A

Generalised Hebbian Algorithm: m Oja-style units, where unit i also subtracts the reconstructions made by all earlier units:

Δw_i = η y_i (x - Σ_{k<=i} y_k w_k)

The weight vectors converge to the first m principal components, in order of decreasing eigenvalue

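A vectorised sketch of one update (my own formulation of the standard rule); note for the next card that replacing np.tril(...) with the full np.outer(y, y) turns this into Oja's subspace rule:

import numpy as np

def sanger_step(W, x, eta):
    """One Generalised Hebbian Algorithm update.
    W: (m, d) array whose rows are the current component estimates."""
    y = W @ x                              # m unit outputs
    # tril keeps only the k <= i terms in the reconstruction sum
    recon = np.tril(np.outer(y, y)) @ W
    return W + eta * (np.outer(y, x) - recon)
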
9
Q

Oja’s subspace rule

A

As Sanger's rule, but the decay sum runs symmetrically over all m outputs:

Δw_i = η y_i (x - Σ_{k=1..m} y_k w_k), i.e. ΔW = η (y x^T - y y^T W)

The weights converge to an orthonormal basis of the principal subspace, but not to the individual ordered eigenvectors

10
Q

How can an autoencoder do PCA

A

Train a linear autoencoder (d inputs -> m-unit bottleneck -> d outputs) to minimise the squared reconstruction error. At the optimum the bottleneck weights span the same subspace as the first m principal components, although the individual weight vectors need not equal the ordered eigenvectors

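A bare-bones sketch, assuming plain batch gradient descent on the squared reconstruction error (the sizes and learning rate are arbitrary choices of mine):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5)) @ rng.normal(size=(5, 5))  # correlated toy data
X = X - X.mean(axis=0)

W_enc = rng.normal(scale=0.1, size=(5, 2))   # d -> m bottleneck
W_dec = rng.normal(scale=0.1, size=(2, 5))   # m -> d reconstruction
eta = 0.01
for _ in range(2000):
    Z = X @ W_enc                    # encode
    G = Z @ W_dec - X                # reconstruction error
    W_dec -= eta * Z.T @ G / len(X)
    W_enc -= eta * X.T @ (G @ W_dec.T) / len(X)
# columns of W_enc now span (approximately) the top-2 principal subspace
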
11
Q

Whitening transform

A

A linear transform after which the data has the identity covariance matrix: centre the data, rotate onto the eigenvectors of the covariance, and rescale each axis by 1/sqrt(eigenvalue):

z = Λ^(-1/2) E^T (x - μ)

All directions then have unit variance and are uncorrelated (useful e.g. before ICA)

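A NumPy sketch on toy correlated data (the mixing matrix is my own choice):

import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.0, 0.0], [1.0, 1.0, 0.0], [0.0, 0.0, 0.5]])
X = rng.normal(size=(500, 3)) @ A            # correlated data
Xc = X - X.mean(axis=0)

vals, E = np.linalg.eigh(np.cov(Xc, rowvar=False))
Z = (Xc @ E) / np.sqrt(vals)                 # whitened data

print(np.round(np.cov(Z, rowvar=False), 3))  # ~ identity matrix
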
12
Q

Drawback of PCA being unsupervised?

A

It chooses directions by variance alone, ignoring class labels, so it may eliminate low-variance dimensions that are actually discriminative

13
Q

Linear Discriminant Analysis

A

A supervised linear projection: choose w to maximise the Fisher criterion

J(w) = (w^T S_B w) / (w^T S_W w)

the ratio of between-class to within-class scatter. For two classes the solution is w ∝ S_W^{-1} (μ2 - μ1). Because it uses the labels, it keeps discriminative directions that PCA might discard

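A two-class sketch of Fisher's LDA in NumPy (toy Gaussian classes of my own):

import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(size=(100, 2))                # class 1
X2 = rng.normal(size=(100, 2)) + [2.0, 1.0]   # class 2, shifted mean

Sw = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)  # within-class scatter
w = np.linalg.solve(Sw, X2.mean(axis=0) - X1.mean(axis=0))
w /= np.linalg.norm(w)                        # Fisher direction

z1, z2 = X1 @ w, X2 @ w                       # 1-D discriminative feature
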
14
Q

Independent component analysis

A

Find a linear transform z = W x whose outputs are statistically independent, not merely uncorrelated. Gaussian data offers no handle beyond decorrelation, so ICA typically whitens the data first and then maximises the non-Gaussianity of the outputs (e.g. kurtosis or negentropy). Classic application: blind source separation (the cocktail-party problem)

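A sketch using scikit-learn's FastICA, assuming that library is acceptable here (an assumption, not implied by the card); the sources and mixing matrix are toy choices of mine:

import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]   # two independent sources
X = S @ np.array([[1.0, 0.5], [0.5, 1.0]])         # observed linear mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)   # recovered up to order, sign and scale
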
15
Q

Compare independence and correlation for PCA vs ICA

A

Uncorrelated: E[z_i z_j] = E[z_i] E[z_j] (second-order statistics only). Independent: p(z_i, z_j) = p(z_i) p(z_j) (statistics of all orders). Independence implies uncorrelatedness but not conversely; the two coincide only for jointly Gaussian data. PCA produces uncorrelated components, whereas ICA seeks fully independent ones, a strictly stronger requirement

16
Q

NNs for ICA

A

E.g. the Bell-Sejnowski Infomax network: a single layer with nonlinear (sigmoidal) outputs, trained by gradient ascent to maximise the entropy of the outputs; the learned weight matrix then performs the ICA unmixing. Networks with anti-Hebbian lateral connections are another route to independent outputs

17
Q

Random projections

A

Project the data onto k << d random directions, e.g. a matrix R with i.i.d. Gaussian entries: z = R^T x. No training is required, and by the Johnson-Lindenstrauss lemma pairwise distances are approximately preserved with high probability once k is on the order of log(n)/ε²
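A NumPy sketch of the distance-preservation property (the dimensions are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
n, d, k = 100, 1000, 50
X = rng.normal(size=(n, d))

R = rng.normal(size=(d, k)) / np.sqrt(k)   # random projection matrix
Z = X @ R

# Pairwise distances survive the 1000 -> 50 projection roughly intact:
print(np.linalg.norm(X[0] - X[1]), np.linalg.norm(Z[0] - Z[1]))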

18
Q

Extreme learning machines

A

A single-hidden-layer feedforward network whose input-to-hidden weights are random and fixed; only the hidden-to-output weights are trained, which reduces to linear least squares (a pseudoinverse) on the random hidden features. Training is very fast, at the price of needing many hidden units
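A NumPy sketch on a toy regression target of my own:

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2          # toy target

W = rng.normal(size=(2, 100))                   # random, never trained
b = rng.normal(size=100)
H = np.tanh(X @ W + b)                          # random hidden features

beta, *_ = np.linalg.lstsq(H, y, rcond=None)    # train output layer only
y_hat = H @ beta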

19
Q

Differences and similarities between random projections and sparse coding

A

Similar: both re-represent the data in terms of a dictionary of basis vectors and require no labels. Different: random projections use a fixed random dictionary and a single linear map, usually reducing dimensionality while roughly preserving distances; sparse coding learns its dictionary, is typically overcomplete (more atoms than input dimensions), and computes each code by optimisation so that only a few coefficients are non-zero

20
Q

How to - sparse coding

A

Pose it as optimisation: minimise ||x - D z||² + λ ||z||_1, i.e. reconstruct each input from a few dictionary atoms. Alternate between sparse inference (solve for the code z with D fixed, e.g. by LASSO/ISTA) and the dictionary update (adjust D with the codes fixed, keeping the atoms normalised)
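A sketch of the inference step only, via ISTA (iterative soft-thresholding) with a fixed dictionary; the function is my own illustration, not course code:

import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Minimise 0.5*||x - D z||^2 + lam*||z||_1 for a fixed dictionary D
    (columns of D are the atoms) by iterative soft-thresholding (ISTA)."""
    eta = 1.0 / np.linalg.norm(D, 2) ** 2        # step size <= 1/Lipschitz
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = z - eta * D.T @ (D @ z - x)          # gradient step on the fit
        z = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)  # shrink
    return z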

21
Q

Sparse coding - dictionary based

A

The dictionary D is learned from the data together with the codes, e.g. by alternating minimisation: infer sparse codes for the current D, then update the atoms (by gradient steps or K-SVD), keeping each atom at unit norm so that the scale lives in the codes

22
Q

Generally, dictionary-based methods

A

Represent each data point as a linear combination of atoms from a dictionary. The dictionary may be fixed (e.g. Fourier/wavelet bases, random projections) or learned (e.g. PCA, sparse coding), and the codes may be dense (PCA) or sparse (sparse coding); the choice trades off reconstruction quality, interpretability, and computation