Week 4 Flashcards

1
Q

Goal of NNs

A

Classify objects by learning non-linear decision boundaries

2
Q

Characteristics of NNs

A

Massive parallelism
Distributed representation and computation
Learning ability
Generalisation ability
Adaptivity

3
Q

Difference between FF NNs and Feedback NNs

A

Feedback NNs (AKA recurrent NNs): a loop exists (dynamic system)

FF NNs: no loop (static system)

4
Q

Define FF NN

A

One input layer, one or more hidden layers, one output layer; signals flow forward only

5
Q

Signum function

A

Symmetric hard-limit transfer function: f(n) = +1 for n ≥ 0, −1 for n < 0

6
Q

Which activation function is f(n) = n

A

Linear transfer function

7
Q

Symmetric sigmoid transfer function

A

f(n) = (e^n − e^(−n)) / (e^n + e^(−n)) = tanh(n); output in (−1, 1)

8
Q

Logarithmic sigmoid transfer function

A

f(n) = 1 / (1 + e^(−n)); output in (0, 1)

9
Q

Radial basis transfer function

A

f(n) = e^(−n²); output in (0, 1], maximum 1 at n = 0

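The transfer functions from the cards above can be sketched in NumPy as follows (a minimal sketch; the names hardlims/purelin/tansig/logsig/radbas follow MATLAB-style nomenclature and are not from the cards):

```python
import numpy as np

def hardlims(n):
    """Signum / symmetric hard-limit: +1 if n >= 0, else -1."""
    return np.where(n >= 0, 1.0, -1.0)

def purelin(n):
    """Linear transfer function: f(n) = n."""
    return n

def tansig(n):
    """Symmetric sigmoid: tanh(n), output in (-1, 1)."""
    return np.tanh(n)

def logsig(n):
    """Logarithmic sigmoid: 1 / (1 + e^-n), output in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-n))

def radbas(n):
    """Radial basis: e^(-n^2), peaks at 1 when n = 0."""
    return np.exp(-n ** 2)
```
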
10
Q

Usual requirement of activation function

A

Continuous and differentiable

11
Q

Initialisations for supervised learning

A

Choose the network size
Choose the number of hidden layers
Choose the activation functions
Initialise the weights with pseudo-random values

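The initialisation steps above can be sketched as follows (a minimal sketch; the layer sizes are hypothetical, and the weight range ±0.5 is an assumed choice of "small"):

```python
import numpy as np

rng = np.random.default_rng(seed=42)  # pseudo-random, reproducible

# Hypothetical network size: 4 inputs, one hidden layer of 8 units, 3 outputs
layer_sizes = [4, 8, 3]

# Initialise each weight matrix (and bias vector) with small pseudo-random values
weights = [rng.uniform(-0.5, 0.5, size=(n_out, n_in))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [rng.uniform(-0.5, 0.5, size=n_out) for n_out in layer_sizes[1:]]
```
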
12
Q

Network learning diagram (backprop)

A

(Diagram) Inputs are fed forward through the network to produce outputs; the error between output and target is propagated backwards to update the weights

13
Q

Stochastic backpropagation algorithm

A

Initialise the weights; then repeat: pick a training sample at random, compute the output, backpropagate the error and update the weights; stop when the stopping criterion is met

14
Q

Stopping criterion

A

Stochastic backprop terminates when the change in the criterion function J(w) is smaller than some preset value ε

15
Q

Batch backpropagation algorithm

A

This time the criterion function J is taken over all J_i for i = 1, 2, …, n samples in the batch, and the weights are updated once per pass through the batch

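The two update schedules and the ε stopping criterion can be sketched for a single linear unit (the delta/LMS rule, a deliberate simplification of full multilayer backprop; eta, eps and the data are assumed values):

```python
import numpy as np

def train(X, d, mode="stochastic", eta=0.1, eps=1e-6, max_epochs=10_000):
    """Delta-rule training of one linear unit, illustrating the two backprop
    schedules: per-sample (stochastic) vs per-epoch (batch) weight updates,
    stopping when the change in J(w) falls below eps."""
    rng = np.random.default_rng(0)
    w = rng.uniform(-0.5, 0.5, X.shape[1])
    J_prev = np.inf
    for _ in range(max_epochs):
        if mode == "stochastic":
            for i in rng.permutation(len(X)):   # one update per sample
                err = d[i] - X[i] @ w
                w += eta * err * X[i]
        else:                                   # batch: one update per epoch
            err = d - X @ w
            w += eta * (X.T @ err) / len(X)
        J = 0.5 * np.sum((d - X @ w) ** 2)      # criterion function J(w)
        if abs(J_prev - J) < eps:               # stopping criterion
            break
        J_prev = J
    return w
```

For example, fitting d = 3 + 2x (with a bias column of ones in X), both modes recover weights close to [3, 2].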
16
Q

How do we use the validation set to avoid overfitting

A

Stop training when we reach a minimum error on the validation set
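
A minimal sketch of the early-stopping rule (the validation-error curve is hypothetical):

```python
def early_stop_epoch(val_errors):
    """Return the epoch at which training should stop:
    the one with minimum error on the validation set."""
    return min(range(len(val_errors)), key=lambda e: val_errors[e])

# Hypothetical validation-error curve: falls, then rises as the net overfits
val_errors = [0.9, 0.5, 0.3, 0.25, 0.28, 0.4]
best = early_stop_epoch(val_errors)  # -> 3
```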

17
Q

Define RBF NN

A

Radial Basis Function NN (usually 3 layers)
Input unit: linear transfer function
Hidden unit: radial basis function
Output unit: any activation function

18
Q

Hidden units of RBF NN

A

Each hidden unit computes φ(||x − c_j||): a radial basis function of the distance between the input x and the unit's centre c_j, so the response is localised around the centre

19
Q

Commonly used basis functions for RBF NNs

A

Gaussian: φ(r) = e^(−r²/2σ²)
Multiquadric: φ(r) = √(r² + c²)
Inverse multiquadric: φ(r) = 1/√(r² + c²)
Thin-plate spline: φ(r) = r² log r

20
Q

Input-output mapping for RBF NNs

A

y(x) = Σ_j w_j φ(||x − c_j||) + b: a weighted sum of the hidden-unit basis functions (plus optional bias)

21
Q

RBF NN training goal

A

Find the centres, widths and output weights that minimise the sum of squared errors between network outputs and targets over the training samples

22
Q

2 phases of training RBF NNs

A

Phase 1: determine the hidden-layer parameters (centres and widths), often unsupervised
Phase 2: determine the output weights, supervised (e.g. least squares or gradient descent)

23
Q

Determination of centres for RBF NNs (fixed centres)

A

Select the centres randomly from the training samples and keep them fixed; for Gaussian units set the width σ = d_max / √(2M), where d_max is the maximum distance between the chosen centres and M is their number

24
Q

Determination of centres (clustering) for RBF NNs

A

Cluster the training inputs (e.g. with k-means) and place one centre at each cluster mean

25
Q

Determination of output weights for RBF NNs (LSE)

A

Minimise the sum of squared errors between network outputs and targets over the training set; solve the resulting normal equations for the output weights

26
Q

Determination of output weights for RBF NNs (matrices) LSE

A

With hidden-layer output matrix Φ (Φ_ij = φ(||x_i − c_j||)) and target vector d, the least-squares weights are w = (ΦᵀΦ)⁻¹Φᵀd = Φ⁺d

27
Q

Determination of output weights for RBF NNs (gradient descent / backprop)

A

Keep the centres fixed and update the output weights iteratively by gradient descent on the squared error, as in backprop applied to the output layer only

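The two-phase training (fixed random centres, then least-squares output weights) can be sketched as follows (a minimal sketch with Gaussian basis functions; M, the seed and the width heuristic σ = d_max/√(2M) are assumed choices):

```python
import numpy as np

def rbf_train(X, d, M=10, seed=0):
    """Two-phase RBF training sketch.
    Phase 1: fix centres at M randomly selected training samples.
    Phase 2: solve for the output weights by least squares, w = pinv(Phi) @ d."""
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=M, replace=False)]
    d_max = max(np.linalg.norm(a - b) for a in centres for b in centres)
    sigma = d_max / np.sqrt(2 * M)            # fixed-centres width heuristic
    Phi = np.exp(-np.linalg.norm(X[:, None] - centres[None], axis=2) ** 2
                 / (2 * sigma ** 2))          # Phi[i, j] = phi(||x_i - c_j||)
    w = np.linalg.pinv(Phi) @ d               # LSE: w = (Phi^T Phi)^-1 Phi^T d
    return centres, sigma, w

def rbf_predict(X, centres, sigma, w):
    """Input-output mapping: weighted sum of the hidden-unit basis functions."""
    Phi = np.exp(-np.linalg.norm(X[:, None] - centres[None], axis=2) ** 2
                 / (2 * sigma ** 2))
    return Phi @ w
```

For example, 10 Gaussian units fitted this way approximate sin(x) on [0, 2π] closely on the training samples.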
28
Compar similarities and differences of RBF and MLP networks