Week 3: Introduction to Neural Networks Flashcards

1
Q

Neuron

A

An information processing unit.

2
Q

Action potential

A

The signal output by a biological neuron.

3
Q

Firing rate

A

The number of action potentials emitted during a defined time period.

4
Q

Synapse

A

The connection between two neurons.

5
Q

Artificial Neural Network

A

A parallel architecture composed of many simple processing elements, interconnected to achieve certain collective computational capabilities.

6
Q

De-noising Autoencoder

A

The network is trained so that the output, r, reconstructs the input, x. However, before encoding is performed, the input is corrupted with noise. This mitigates overfitting.
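
A minimal NumPy sketch of the idea, assuming Gaussian corruption, a single sigmoid hidden layer, and tied decoder weights (all names and sizes here are illustrative, not the course's implementation):

    import numpy as np

    rng = np.random.default_rng(0)
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

    x = rng.random(8)                                # clean input
    x_noisy = x + rng.normal(0.0, 0.3, x.shape)      # corrupt BEFORE encoding

    W = rng.normal(0.0, 0.1, (4, 8))                 # encoder weights; decoder is tied (W.T)
    y = sigmoid(W @ x_noisy)                         # hidden code computed from the noisy input
    r = W.T @ y                                      # reconstruction
    loss = np.mean((r - x) ** 2)                     # error measured against the CLEAN input
    print(loss)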

7
Q

Processing Units

A

In neural networks, processing units are organised into layers; each unit has an activation function and a weight for each incoming connection.

8
Q

Perceptron

A

A linear threshold unit.
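
A minimal NumPy sketch of a linear threshold unit, with the bias folded in as a weight on a constant input of 1 (the weight values, which happen to implement AND, are illustrative):

    import numpy as np

    def perceptron(w, x):
        # Linear threshold unit: output 1 if the weighted sum exceeds 0, else 0.
        return 1 if np.dot(w, x) > 0 else 0

    w = np.array([-1.5, 1.0, 1.0])    # bias weight, then one weight per input: implements AND
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        print((x1, x2), perceptron(w, np.array([1, x1, x2])))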

9
Q

Logical Functions

A

Can be AND, OR, XOR, or other arbitrary logical functions. Not all logical functions are linearly separable: AND and OR are, but XOR is not, so a single perceptron cannot compute it.

10
Q

Sequential (Online) Delta Learning Algorithm

A

For each sample, x_k, in the dataset in turn, update w <- w + eta (t_k - H(w x_k)) x_k^T, where H is the Heaviside step function. Repeat until the algorithm converges. One update of the parameters is based on a single sample. The order in which the samples are used may affect the speed of convergence, and sequential learning doesn't necessarily outperform batch learning. The update depends only on misclassified samples, since t_k - H(w x_k) is zero whenever a sample is classified correctly.
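
A hedged NumPy sketch of the rule, assuming samples are augmented with a constant 1 for the bias and that convergence means a full pass with no updates (the epoch cap is an illustrative safeguard):

    import numpy as np

    def H(a):
        # Heaviside step function used as the threshold.
        return 1.0 if a > 0 else 0.0

    def sequential_delta(X, t, eta=1.0, max_epochs=100):
        w = np.zeros(X.shape[1])
        for _ in range(max_epochs):
            changed = False
            for x_k, t_k in zip(X, t):
                delta = eta * (t_k - H(w @ x_k))   # zero for correctly classified samples
                if delta != 0:
                    w = w + delta * x_k            # update from this one sample
                    changed = True
            if not changed:                        # converged: nothing misclassified
                break
        return w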

11
Q

Batch Delta Learning Algorithm

A

For each iteration, go through all the samples, calculate eta (t - y) x^T for each, then sum up the results; that sum is the update for the weights. Keep iterating until the weights are unchanged. One update of the parameters is based on all n samples. Sample order is irrelevant in batch learning, and the update again depends only on misclassified samples. Batch learning can be faster than sequential learning because it processes the entire batch at once using vectorisation, which allows each sample in the batch to be computed in parallel.
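
A hedged NumPy sketch of one vectorised batch update per iteration, under the same Heaviside-threshold and augmented-input assumptions as the sequential version:

    import numpy as np

    def batch_delta(X, t, eta=0.1, max_iters=1000):
        w = np.zeros(X.shape[1])
        for _ in range(max_iters):
            y = (X @ w > 0).astype(float)     # thresholded outputs for ALL samples at once
            update = eta * (t - y) @ X        # summed update over the whole batch
            if not np.any(update):            # weights unchanged: stop
                break
            w = w + update
        return w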

12
Q

Hebbian Learning Rule

A

An unsupervised method which strengthens the connections between an active neuron and any active inputs. The sequential (online) update rule is w <- w + eta y x^T.
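
A minimal sketch of one online Hebbian step, assuming a single linear output neuron (note, as a practical aside, that the plain rule lets weights grow without bound unless some normalisation is added):

    import numpy as np

    def hebbian_step(w, x, eta=0.01):
        # Unsupervised: no target. An active output and active inputs strengthen the weight.
        y = w @ x               # output of a linear neuron (an assumption for this sketch)
        return w + eta * y * x  # w <- w + eta y x^T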

13
Q

Competitive Learning Algorithms

A

When output units compete for the right to respond to input, meaning that some neurons have their activity suppressed by other neurons.
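
One common concrete form is winner-take-all learning; a hedged sketch, assuming the winner is the unit with the largest weighted sum and that only the winner's weights move towards the input:

    import numpy as np

    def competitive_step(W, x, eta=0.1):
        # Each row of W is one output unit's weight vector.
        winner = np.argmax(W @ x)              # the most strongly responding unit wins
        W[winner] += eta * (x - W[winner])     # only the winner learns, moving towards x
        return W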

14
Q

Negative Feedback Networks

A

When output units compete to receive inputs, rather than compete to produce output.
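
A hedged sketch of one standard formulation, in which the residual e = x - W^T y is what the outputs see, so units compete for the input that is not yet "used up" (the step count and rate are illustrative):

    import numpy as np

    def negative_feedback_response(W, x, eta=0.1, steps=25):
        # Each row of W is one output unit's weight vector.
        y = np.zeros(W.shape[0])
        for _ in range(steps):
            e = x - W.T @ y         # input remaining after inhibitory feedback
            y = y + eta * (W @ e)   # outputs grow on whatever input is left
        return y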

15
Q

Autoencoder Networks

A

A network in which the output, r, is a reconstruction of the input produced by a separate neural population. Compared with a negative feedback network, the inhibitory feedback connections are removed, and the neural responses no longer have to be calculated iteratively. The network can still learn weights so as to minimise the error between the input and the reconstruction.

16
Q

Neural Networks

A

Feedforward Networks: Single-layer Perceptron, Multilayer Perceptrons, Radial Basis Function Nets. Feedback/Recurrent Networks: Competitive Networks, Kohonen's SOM (not covered), Hopfield Network (not covered), Negative Feedback Networks. For classification tasks, there should be one neuron per class in the output layer.

17
Q

Mini-batch Delta Learning Algorithm

A

This method divides the samples into smaller batches and trains batch-by-batch. It exploits the trade-off between the vectorisation speed-up of batch learning and the more frequent parameter updates of sequential learning, while averaging over several samples per update still gives a more accurate estimate of the gradient than a single sample would.
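
A hedged NumPy sketch, reusing the thresholded-output and augmented-input assumptions from the batch version; the batch size, per-epoch reshuffling, and epoch count are illustrative choices:

    import numpy as np

    def minibatch_delta(X, t, eta=0.1, batch_size=32, epochs=10, seed=0):
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        n = len(X)
        for _ in range(epochs):
            order = rng.permutation(n)                  # reshuffle each epoch
            for start in range(0, n, batch_size):
                idx = order[start:start + batch_size]
                y = (X[idx] @ w > 0).astype(float)      # vectorised over the mini-batch
                w = w + eta * (t[idx] - y) @ X[idx]     # one update per mini-batch
        return w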

18
Q

Softmax

A

A popular activation function used in output layers for classification tasks: y_i \leftarrow \frac{e^{\beta y_i}}{\sum_k e^{\beta y_k}}. It normalises the outputs to be positive and sum to 1, so they can be read as class probabilities.
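
A short NumPy version of the formula; subtracting the maximum before exponentiating is a standard numerical-stability trick that leaves the result unchanged:

    import numpy as np

    def softmax(y, beta=1.0):
        z = np.exp(beta * (y - np.max(y)))     # shift by max for numerical stability
        return z / np.sum(z)

    print(softmax(np.array([2.0, 1.0, 0.1])))  # non-negative, sums to 1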