20. Artificial Neural Networks 1 Flashcards

(20 cards)

1
Q

Describe the McCulloch-Pitts neuron model.

A

Spikes are modeled as potentials and synaptic strengths as weights; excitatory synapses correspond to positive weights, inhibitory synapses to negative weights, and the neuron activates when the weighted sum of its inputs exceeds a threshold
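
A minimal Python sketch of such a threshold unit (the function name and example values are illustrative assumptions):

```python
# A McCulloch-Pitts unit: fires iff the weighted input sum reaches the threshold.
def mp_neuron(inputs, weights, threshold):
    potential = sum(x * w for x, w in zip(inputs, weights))
    return 1 if potential >= threshold else 0

# Excitation = positive weights, inhibition = negative weights:
print(mp_neuron([1, 1], [1, 1], threshold=2))   # 1: both excitatory inputs fire
print(mp_neuron([1, 1], [1, -1], threshold=1))  # 0: the inhibitory input blocks firing
```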

2
Q

What are the steps in the Perceptron learning algorithm?

A
  1. Random weight initialization
  2. Present input pattern
  3. Compute potential
  4. Compute output
  5. Calculate error
  6. Update weights
  7. Repeat
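
A minimal Python sketch of these steps, training a Perceptron on the logical AND function; the dataset, bias term, and step activation are illustrative assumptions:

```python
import random

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND truth table
eta = 0.1                                            # learning rate

random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]    # 1. random initialization
b = random.uniform(-0.5, 0.5)

for epoch in range(20):                              # 7. repeat
    for x, d in data:                                # 2. present input pattern
        p = sum(wi * xi for wi, xi in zip(w, x)) + b # 3. compute potential
        o = 1 if p >= 0 else 0                       # 4. compute output
        err = d - o                                  # 5. calculate error
        w = [wi + eta * err * xi for wi, xi in zip(w, x)]  # 6. update weights
        b += eta * err

print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
       for x, _ in data])                            # [0, 0, 0, 1] after convergence
```
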
3
Q

What are the three key elements of Artificial Neural Networks?

A
  1. Highly interconnected processing elements (neurons)
  2. Configurable topology
  3. Learning via synaptic adjustments
4
Q

How does information flow in biological neurons?

A
  1. Dendrites receive signals
  2. Soma integrates inputs
  3. Axon transmits outputs
  4. Synapses pass signals chemically
5
Q

What are the five key steps when working with ANNs?

A
  1. Data preparation
  2. Architecture design
  3. Neuron structure
  4. Parameter initialization
  5. Learning algorithm
6
Q

What are the key differences between biological and artificial neurons?

A

Biological: Analog, parallel, self-organizing. Artificial: Digital, often serial, designed architecture

7
Q

What is the weight update rule in Perceptrons?

A

w_i(t+1) = w_i(t) + η·x_{s,i}·(d_s − o_s), where η is the learning rate, x_{s,i} is input i of pattern s, d_s the desired output, and o_s the actual output
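
A single application of the rule in Python, with assumed values, showing how each weight moves toward producing the desired output:

```python
# One Perceptron weight update (all values assumed for illustration).
eta = 0.1            # learning rate
w = [0.5, -0.3]      # current weights w_i(t)
x = [1.0, 1.0]       # input pattern x_{s,i}
d, o = 1, 0          # desired vs. actual output for pattern s

w_next = [w_i + eta * x_i * (d - o) for w_i, x_i in zip(w, x)]
print(w_next)        # approximately [0.6, -0.2]: both weights increase
```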

8
Q

What is the XOR problem and why is it significant?

A

A classification problem that is not linearly separable: no single line can separate the inputs (0,1) and (1,0) (output 1) from (0,0) and (1,1) (output 0). It revealed that single-layer Perceptrons cannot solve such problems
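
A sketch showing how one hidden layer resolves the limitation: a two-layer network of threshold units with hand-set (assumed) weights computes XOR:

```python
# XOR built from linearly separable pieces: (x1 OR x2) AND NOT (x1 AND x2).
def step(p):
    return 1 if p >= 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)     # hidden unit acting as OR
    h2 = step(x1 + x2 - 1.5)     # hidden unit acting as AND
    return step(h1 - h2 - 0.5)   # output: OR and not AND

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 1, 1, 0]
```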

9
Q

What is the chain rule used for in backpropagation?

A

To calculate gradients through nested functions by multiplying local derivatives. For a squared-error loss L = (d_k − o_k)² at output neuron k: ∂L/∂w_{j,k} = −2(d_k − o_k)·o′(p_k)·o_j
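
A small Python check of this gradient against central finite differences, under the assumption of a squared-error loss and a sigmoid output unit:

```python
import math

def sigmoid(p):
    return 1.0 / (1.0 + math.exp(-p))

o_j, d_k, w_jk = 0.8, 1.0, 0.4   # upstream output, target, weight (assumed values)

def loss(w):
    o_k = sigmoid(w * o_j)       # p_k = w_{j,k} * o_j, then o_k = o(p_k)
    return (d_k - o_k) ** 2      # squared-error loss L

# Chain rule: dL/dw_{j,k} = -2 (d_k - o_k) * o'(p_k) * o_j, with o' = o(1 - o)
o_k = sigmoid(w_jk * o_j)
analytic = -2 * (d_k - o_k) * o_k * (1 - o_k) * o_j

# Central finite-difference approximation of the same derivative
eps = 1e-6
numeric = (loss(w_jk + eps) - loss(w_jk - eps)) / (2 * eps)

print(analytic, numeric)         # the two values agree to several decimal places
```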

10
Q

What is content-addressable memory in neural networks?

A

Memory retrieved by presenting (part of) its content rather than an explicit address, similar to human associative memory: a noisy or partial pattern recalls the complete stored pattern
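
A minimal sketch of content-addressable recall using a Hopfield-style network with a Hebbian storage rule; the model choice and pattern are assumptions for illustration, as the card names no specific network:

```python
import numpy as np

# Store one +/-1 pattern with the Hebbian outer-product rule, then recall it
# from a corrupted probe.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])

W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)          # no self-connections

probe = pattern.copy()
probe[:2] *= -1                   # corrupt two components of the content

state = probe
for _ in range(10):               # synchronous updates until a fixed point
    new_state = np.where(W @ state >= 0, 1, -1)
    if np.array_equal(new_state, state):
        break
    state = new_state

print(np.array_equal(state, pattern))  # True: the memory is recalled by content
```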

11
Q

What is the weight update rule in backpropagation?

A

w_{ij}(t+1) = w_{ij}(t) + η·o_i·δ_j, where η is the learning rate, o_i the output of the upstream neuron i, and δ_j the error term of neuron j

12
Q

What are three limitations of backpropagation?

A
  1. Vanishing gradient problem
  2. High computational cost
  3. Black-box nature
13
Q

What are three advantages of backpropagation?

A
  1. Can learn complex functions
  2. Handles noisy data
  3. Good generalization
14
Q

What are the main components of a biological neuron?

A

Dendrites (input), Soma (cell body), Axon (output), Synapses (connections)

15
Q

How is error calculated in output layer neurons during backpropagation?

A

δ_i = o′(p_i)·(d_i − o_i), where o′ is the derivative of the activation function evaluated at the potential p_i

16
Q

What limitation did Minsky and Papert discover about Perceptrons?

A

They can only solve linearly separable problems (cannot solve XOR)

17
Q

What are the three phases of backpropagation?

A
  1. Forward pass
  2. Error calculation
  3. Backward weight update
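
A minimal Python sketch of all three phases for a small sigmoid network trained on XOR; the 2-3-1 architecture, learning rate, and epoch count are illustrative assumptions:

```python
import math, random

def sigmoid(p):
    return 1.0 / (1.0 + math.exp(-p))

random.seed(1)
n_hidden = 3
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(n_hidden)]  # 2 inputs + bias
W2 = [random.uniform(-1, 1) for _ in range(n_hidden + 1)]                  # hidden + bias
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
eta = 0.5

for epoch in range(10000):
    for x, d in data:
        # Phase 1: forward pass
        xb = x + [1.0]
        h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1]
        hb = h + [1.0]
        o = sigmoid(sum(w * v for w, v in zip(W2, hb)))

        # Phase 2: error (delta) calculation, using o' = o(1 - o)
        delta_o = o * (1 - o) * (d - o)                     # output neuron
        delta_h = [h[i] * (1 - h[i]) * W2[i] * delta_o      # hidden neurons
                   for i in range(n_hidden)]

        # Phase 3: backward weight update, w_ij += eta * o_i * delta_j
        W2 = [w + eta * v * delta_o for w, v in zip(W2, hb)]
        W1 = [[w + eta * v * delta_h[i] for w, v in zip(W1[i], xb)]
              for i in range(n_hidden)]

def predict(x):
    xb = x + [1.0]
    hb = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W1] + [1.0]
    return round(sigmoid(sum(w * v for w, v in zip(W2, hb))))

print([predict(x) for x, _ in data])  # usually [0, 1, 1, 0]; an unlucky init can stall
```
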
18
Q

How is error calculated in hidden layer neurons during backpropagation?

A

δ_i = o′(p_i)·∑_j w_{ij}·δ_j, i.e. the weighted sum of downstream error terms δ_j, scaled by the derivative of the activation at the neuron's potential

19
Q

What is the sigmoid activation function formula?

A

o(p_i) = 1/(1 + e^{-p_i}), which outputs values between 0 and 1
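
A small sketch highlighting the identity o′(p) = o(p)·(1 − o(p)), which is what makes the sigmoid convenient in the δ formulas above:

```python
import math

def sigmoid(p):
    return 1.0 / (1.0 + math.exp(-p))   # o(p) = 1 / (1 + e^(-p))

s = sigmoid(0.3)
print(s)             # a value strictly between 0 and 1
print(s * (1 - s))   # its derivative at p = 0.3, via o'(p) = o(p) * (1 - o(p))
```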

20
Q

What is the Perceptron convergence theorem?

A

If the training patterns are linearly separable, the Perceptron learning algorithm is guaranteed to converge to a separating weight vector in a finite number of updates; for non-separable problems (e.g. XOR) it never converges