Week 4 Flashcards
Goal of NNs
Classify objects by learning non-linear input-output mappings
Characteristics of NNs
Massive parallelism
Distributed representation and computation
Learning ability
Generalisation ability
Adaptivity
Difference between FF NNs and Feedback NNs
Feedback NNs (AKA recurrent):
A loop exists (dynamic system)
FF NNs:
No loop (static system)
Define FF NN
One input layer, some hidden layers, one output layer
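A minimal sketch of one forward pass through such a network, assuming one log-sigmoid hidden layer and a linear output (the sizes and weights are illustrative, not from the cards):

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """One pass through a feed-forward net: input -> hidden -> output, no loops."""
    h = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # hidden layer, log-sigmoid
    y = W2 @ h + b2                           # output layer, linear
    return y

# Illustrative sizes: 3 inputs, 4 hidden units, 2 outputs
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)
print(forward(rng.standard_normal(3), W1, b1, W2, b2))
```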
Signum function
Symmetric hard limit transfer function
Which activation function is f(n) = n?
Linear transfer function
Symmetric sigmoid transfer function
Logarithmic sigmoid transfer function
Radial basis transfer function
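The transfer functions named on these cards, written out as a short sketch (these are the standard definitions; the grouping into one snippet is mine):

```python
import numpy as np

def signum(n):       return np.where(n >= 0, 1.0, -1.0)  # symmetric hard limit
def linear(n):       return n                             # f(n) = n
def sym_sigmoid(n):  return (1 - np.exp(-2 * n)) / (1 + np.exp(-2 * n))  # = tanh(n)
def log_sigmoid(n):  return 1.0 / (1.0 + np.exp(-n))
def radial_basis(n): return np.exp(-n ** 2)
```

All but the signum are continuous and differentiable, which is the usual requirement noted on the next card.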
Usual requirement of activation function
Continuous and differentiable
Initialisations for supervised learning
Network size
Number of hidden layers
Choose activation functions
Initialise weights with pseudo random values
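A sketch of the weight-initialisation step, assuming small uniform pseudo-random values (the range and seed are illustrative choices):

```python
import numpy as np

def init_layer(n_in, n_out, rng, scale=0.5):
    """Pseudo-random weights in [-scale, scale]; biases start at zero."""
    W = rng.uniform(-scale, scale, size=(n_out, n_in))
    b = np.zeros(n_out)
    return W, b

rng = np.random.default_rng(42)                          # seeded pseudo-random generator
layers = [init_layer(3, 4, rng), init_layer(4, 2, rng)]  # network size chosen up front
```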
Network learning diagram (backprop)
Stochastic back-prop algorithm
Weights are updated after every individual training sample is presented
Stopping criterion
Stochastic back-prop terminates when the change in the criterion function J(w) is smaller than some preset value ε
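A sketch of stochastic back-prop with exactly this stopping criterion, assuming one tanh hidden layer, linear outputs and squared error (all illustrative; the cards do not fix these details):

```python
import numpy as np

def stochastic_backprop(X, T, n_hidden=4, eta=0.1, eps=1e-6, max_epochs=5000):
    """Update the weights after every sample; stop when the change in J(w) < eps."""
    rng = np.random.default_rng(0)
    W1 = rng.uniform(-0.5, 0.5, (n_hidden, X.shape[1]))
    W2 = rng.uniform(-0.5, 0.5, (T.shape[1], n_hidden))
    J_old = np.inf
    for _ in range(max_epochs):
        for x, t in zip(X, T):
            h = np.tanh(W1 @ x)                    # hidden layer (symmetric sigmoid)
            y = W2 @ h                             # linear output layer
            delta2 = t - y                         # output error
            delta1 = (W2.T @ delta2) * (1 - h**2)  # error backpropagated to hidden layer
            W2 += eta * np.outer(delta2, h)        # update after *each* sample
            W1 += eta * np.outer(delta1, x)
        J = 0.5 * np.sum((T - np.tanh(X @ W1.T) @ W2.T) ** 2)  # criterion J(w)
        if abs(J_old - J) < eps:                   # stopping criterion from the card
            return W1, W2
        J_old = J
    return W1, W2
```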
Batch back-prop algorithm
Weights are updated once per pass through the batch; this time the criterion function J is taken over all J_i for i = 1, 2, …, n samples in the batch
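The batch variant changes only when the update happens; a sketch of one epoch, reusing the same illustrative network as in the stochastic sketch above:

```python
import numpy as np

def batch_epoch(X, T, W1, W2, eta=0.1):
    """Accumulate gradients over all n samples, then apply a single update."""
    dW1, dW2 = np.zeros_like(W1), np.zeros_like(W2)
    for x, t in zip(X, T):                    # J is summed over all J_i in the batch
        h = np.tanh(W1 @ x)
        y = W2 @ h
        delta2 = t - y
        delta1 = (W2.T @ delta2) * (1 - h**2)
        dW2 += np.outer(delta2, h)
        dW1 += np.outer(delta1, x)
    W1 += eta * dW1                           # one update per pass over the batch
    W2 += eta * dW2
    return W1, W2
```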
How do we use the validation set to avoid overfitting?
Stop training when the error on the validation set reaches its minimum
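A sketch of that early-stopping rule; train_epoch and val_error are assumed callables (one training pass, and the validation-set error), not names from the cards:

```python
def train_with_early_stopping(train_epoch, val_error, patience=10, max_epochs=1000):
    """Keep the weights from the epoch with the lowest validation error;
    stop once it has not improved for `patience` epochs."""
    best_err, best_epoch, best_w = float("inf"), 0, None
    for epoch in range(max_epochs):
        w = train_epoch()                # one pass over the training set
        err = val_error(w)               # error on the held-out validation set
        if err < best_err:
            best_err, best_epoch, best_w = err, epoch, w
        elif epoch - best_epoch >= patience:
            break                        # validation error has passed its minimum
    return best_w
```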
Define RBF NN
Radial Basis Function NN (usually 3 layers)
Input unit: linear transfer function
Hidden unit: radial basis function
Output unit: any activation function
Hidden units of RBF NN
Each hidden unit computes φ(||x − c_i||), a radial basis function of the distance between the input x and the unit's centre c_i
Commonly used basis functions for RBF NNs
The Gaussian φ(r) = e^(−r²/(2σ²)) is the most common; multiquadric and thin-plate spline functions are also used
Input-output mapping for RBF NNs
y_k(x) = Σ_j w_kj φ(||x − c_j||), i.e. each output is a weighted sum of the hidden-unit outputs
RBF NN training goal
Find centres, widths and output weights that minimise the error (e.g. sum of squares) between network outputs and targets
2 phases of training RBF NNs
1. Determine the centres (and widths) of the basis functions
2. Determine the output-layer weights (a linear problem once the centres are fixed)
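A sketch of the RBF mapping and of phase 2, assuming Gaussian basis functions and linear output units (with the centres fixed, the output weights reduce to linear least squares):

```python
import numpy as np

def rbf_design_matrix(X, centres, sigma):
    """Phi[i, j] = phi(||x_i - c_j||) with a Gaussian basis function."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma**2))

def train_rbf_weights(X, T, centres, sigma):
    """Phase 2: solve for the output weights by linear least squares."""
    Phi = rbf_design_matrix(X, centres, sigma)
    W, *_ = np.linalg.lstsq(Phi, T, rcond=None)
    return W

def rbf_predict(X, centres, sigma, W):
    """Input-output mapping: y(x) = sum_j w_j * phi(||x - c_j||)."""
    return rbf_design_matrix(X, centres, sigma) @ W
```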
Determination of centres for RBF NNs (fixed centres)
Centres are chosen at random from the training samples and then kept fixed
Determination of centres (clustering) for RBF NNs
Centres are found by clustering the training samples (e.g. k-means); each centre is the mean of its cluster
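Sketches of both centre-determination schemes (a plain k-means written out by hand so the snippet stays self-contained):

```python
import numpy as np

def centres_fixed(X, M, rng):
    """Fixed centres: M training samples chosen at random and then left untouched."""
    return X[rng.choice(len(X), size=M, replace=False)].astype(float)

def centres_kmeans(X, M, rng, n_iter=100):
    """Clustering: k-means; each centre moves to the mean of its cluster."""
    C = centres_fixed(X, M, rng)                             # random initialisation
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)                           # nearest centre per sample
        for j in range(M):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)
    return C
```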