8+9 Flashcards

1
Q

What is the main limit of perceptrons? What is a way to fix this?

A

A perceptron splits the input space into two half-spaces with a linear decision boundary, so it cannot represent problems that need a non-linear boundary (such as XOR). Making the network multi-layered gets around this limitation.
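As a sketch of this idea (not part of the card): XOR is not linearly separable, but a two-layer network with hand-picked, illustrative weights computes it, one hidden unit acting as OR and one as AND.

```python
import numpy as np

def step(x):
    # Hard-limiting activation: 1 if x >= 0, else 0.
    return (x >= 0).astype(int)

# XOR inputs and targets -- not separable by a single half-space.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Hand-picked weights (illustrative, not learned):
# hidden unit 1 computes OR, hidden unit 2 computes AND.
W1 = np.array([[1, 1], [1, 1]])
b1 = np.array([-0.5, -1.5])
# Output fires when OR is on but AND is off: exactly XOR.
W2 = np.array([1, -1])
b2 = -0.5

h = step(X @ W1 + b1)      # hidden layer
out = step(h @ W2 + b2)    # output layer matches y
```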

2
Q

What is a layer? What is a single layer network good for?

A

A number of neurons connected to the same inputs (each with its own weights) forms a layer. A single-layer network is good for multi-class classification.
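A minimal sketch of this (names and sizes are illustrative): one neuron per class, all seeing the same inputs through their own weight column, with the predicted class taken as the most active neuron.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_classes = 4, 3
# One weight column per class neuron; all neurons share the same inputs.
W = rng.normal(size=(n_inputs, n_classes))
b = np.zeros(n_classes)

x = rng.normal(size=n_inputs)
scores = x @ W + b                 # one activity value per class neuron
predicted = int(np.argmax(scores)) # most active neuron wins
```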

3
Q

How does error backpropagation work?

A

Take the error associated with the output and distribute it amongst the units that provided its input; the most strongly connected inputs take the most blame. This continues layer by layer until the input layer is reached. Once all blame has been distributed, each neuron changes its weights and biases to reduce its share of the blame.

4
Q

Can multi-layer networks be trained with a hard-limiting activation function?

A

No; they need a sigmoid activation function (or another continuous function with a non-zero derivative), since backpropagation relies on differentiating the activations.

5
Q

What are all the steps in the backpropagation algorithm?

A
  1. Make an initial guess for all parameter values.
  2. Forward phase: compute the network output, saving the output at each layer.
  3. Compute the cost from the network output and the desired output.
  4. Backpropagate the blame for the error through the network and update the parameters.
  5. Go back to step 2.
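The steps above can be sketched as a minimal NumPy training loop (a 2-4-1 sigmoid network on XOR; sizes, learning rate, and epoch count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy task: XOR with a 2-4-1 network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Step 1: initial guess for all parameter values.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
lr = 0.5
costs = []

for epoch in range(5000):
    # Step 2: forward phase, saving the output at each layer.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Step 3: cost from network output vs desired output (MSE here).
    costs.append(np.mean((out - y) ** 2))
    # Step 4: backpropagate blame and update parameters.
    d_out = (out - y) * out * (1 - out)   # output-layer blame
    d_h = (d_out @ W2.T) * h * (1 - h)    # blame passed back to hidden units
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    # Step 5: loop back to step 2.
```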
6
Q

What are the main activation functions?

A

Sigmoid (range (0, 1)), tanh (range (-1, 1)), linear, and the rectified linear unit (ReLU: linear where the value is not less than 0, and 0 otherwise).
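The four functions in one short sketch:

```python
import numpy as np

def sigmoid(x):
    # Squashes input to the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    # Squashes input to the range (-1, 1).
    return np.tanh(x)

def linear(x):
    # Identity: output equals input.
    return x

def relu(x):
    # Linear for x >= 0, zero otherwise.
    return np.maximum(0.0, x)
```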

7
Q

Which output neuron should be chosen for classification problems?

A

The one with the greatest confidence.

8
Q

What is the cross-entropy cost?

A

A cost function better suited than mean squared error (MSE) to classification problems: it heavily penalises confident wrong predictions.
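A small numerical sketch of why (binary case): for a confident wrong prediction, cross-entropy assigns a far larger cost than MSE, so the gradient pushes harder to correct it.

```python
import numpy as np

def cross_entropy(p, y):
    # y is the true label (0 or 1); p is the predicted probability of class 1.
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def mse(p, y):
    return (p - y) ** 2

# Confidently wrong: true label 1, predicted probability 0.01.
ce_wrong = cross_entropy(0.01, 1)   # large penalty
mse_wrong = mse(0.01, 1)            # bounded by 1
```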

9
Q

What is the softmax activation function?

A

Each output neuron's activity is normalised by the total activity of all output neurons, producing a probability distribution over the classes.
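A minimal sketch (the max-subtraction is a standard numerical-stability trick, not part of the card):

```python
import numpy as np

def softmax(z):
    # Subtracting the max leaves the result unchanged but avoids overflow.
    e = np.exp(z - np.max(z))
    # Each neuron's (exponentiated) activity, normalised by the total.
    return e / e.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))  # sums to 1
```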

10
Q

What is a feed-forward neural network? What about a recurrent neural network?

A

A feed-forward network is a directed acyclic graph, meaning its output depends only on the current input. A recurrent neural network includes cycles, allowing the model to exhibit dynamic temporal behaviour.

11
Q

What does an SRN (simple recurrent network) involve?

A

A tanh activation function in the hidden layer and softmax in the output layer. It is a recurrent network: the hidden layer feeds back into itself.
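A sketch of a single forward pass over a short sequence (layer sizes and weight names are illustrative assumptions): the recurrence is the `h @ W_hh` term, where the previous hidden state feeds back in.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

n_in, n_hidden, n_out = 3, 5, 2
W_xh = rng.normal(scale=0.1, size=(n_in, n_hidden))      # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # hidden -> hidden (recurrence)
W_hy = rng.normal(scale=0.1, size=(n_hidden, n_out))     # hidden -> output

h = np.zeros(n_hidden)  # initial hidden state
outputs = []
for x in rng.normal(size=(4, n_in)):   # a sequence of 4 input vectors
    # Hidden layer: tanh of current input plus its own previous state.
    h = np.tanh(x @ W_xh + h @ W_hh)
    # Output layer: softmax over class activities.
    outputs.append(softmax(h @ W_hy))
```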
