20. Artificial Neural Networks 1 Flashcards
(20 cards)
Describe the McCulloch-Pitts neuron model.
Spikes=potentials, synaptic strength=weights, excitation=positive weights, inhibition=negative weights, activation occurs when sum exceeds threshold
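The model above can be sketched in a few lines of Python (the AND-gate weights and threshold below are illustrative values, not part of the original model):

```python
# A minimal McCulloch-Pitts neuron: weighted inputs, step threshold.
def mcp_neuron(inputs, weights, threshold):
    # Excitatory synapses have positive weights, inhibitory ones negative.
    potential = sum(x * w for x, w in zip(inputs, weights))
    # The neuron "fires" (outputs 1) only when the potential reaches the threshold.
    return 1 if potential >= threshold else 0

# Example: a two-input AND gate (weights 1, 1; threshold 2).
print(mcp_neuron([1, 1], [1, 1], 2))  # 1 (fires)
print(mcp_neuron([1, 0], [1, 1], 2))  # 0 (does not fire)
```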
What are the steps in the Perceptron learning algorithm?
1. Random weight initialization 2. Present input pattern 3. Compute potential 4. Compute output 5. Calculate error 6. Update weights 7. Repeat
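The steps above can be sketched as a minimal Perceptron trainer in Python (the function name, zero initialization instead of random, and the AND-gate example are illustrative assumptions):

```python
def step(p):
    # Threshold activation: fire when the potential is non-negative.
    return 1 if p >= 0 else 0

def train_perceptron(samples, eta=1.0, epochs=20):
    n = len(samples[0][0])
    w = [0.0] * n                  # step 1: initialize weights (zeros for reproducibility)
    b = 0.0                        # bias, treated as an extra weight
    for _ in range(epochs):        # step 7: repeat
        for x, d in samples:       # step 2: present input pattern
            p = sum(wi * xi for wi, xi in zip(w, x)) + b   # step 3: compute potential
            o = step(p)            # step 4: compute output
            err = d - o            # step 5: calculate error d_s - o_s
            for i in range(n):     # step 6: update weights, w_i += eta * x_i * err
                w[i] += eta * err * x[i]
            b += eta * err
    return w, b

# Example: learn the linearly separable AND function.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
```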
What are the three key elements of Artificial Neural Networks?
1. Highly interconnected processing elements (neurons) 2. Configurable topology 3. Learning via synaptic adjustments
How does information flow in biological neurons?
1. Dendrites receive signals 2. Soma integrates inputs 3. Axon transmits outputs 4. Synapses pass signals chemically
What are the five key steps when working with ANNs?
1. Data preparation 2. Architecture design 3. Neuron structure 4. Parameter initialization 5. Learning algorithm
What are the key differences between biological and artificial neurons?
Biological: Analog, parallel, self-organizing. Artificial: Digital, often serial, designed architecture
What is the weight update rule in Perceptrons?
w_i(t+1) = w_i(t) + ηx_{s,i}[d_s - o_s] where η=learning rate, d=desired output, o=actual output
What is the XOR problem and why is it significant?
A non-linear classification problem that revealed limitations of single-layer Perceptrons
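A short experiment makes the limitation concrete: no matter how long the Perceptron rule runs, a single linear threshold unit misclassifies at least one XOR pattern (the training loop below is an illustrative sketch):

```python
# Try to fit XOR with one linear threshold unit; it cannot succeed.
def step(p):
    return 1 if p >= 0 else 0

xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

w, b = [0.0, 0.0], 0.0
for _ in range(100):                     # Perceptron rule, many passes
    for (x1, x2), d in xor_data:
        o = step(w[0] * x1 + w[1] * x2 + b)
        err = d - o
        w[0] += 0.1 * err * x1
        w[1] += 0.1 * err * x2
        b += 0.1 * err

errors = sum(step(w[0] * x1 + w[1] * x2 + b) != d for (x1, x2), d in xor_data)
print(errors)  # never 0: no single line separates XOR's two classes
```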
What is the chain rule used for in backpropagation?
To calculate gradients through nested functions: ∂L/∂w_{jk} = -2(d_k - o_k)o'(p_k)o_j
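The formula can be sanity-checked numerically. Assuming a squared error L = (d_k - o_k)^2, a sigmoid output, and a single incoming connection w_{jk} (all values below are illustrative), the analytic gradient should match a finite-difference estimate:

```python
import math

def sigmoid(p):
    return 1.0 / (1.0 + math.exp(-p))

o_j, w_jk, d = 0.8, 0.5, 1.0          # illustrative upstream output, weight, target

def loss(w):
    # L = (d_k - o_k)^2 with o_k = sigmoid(p_k), p_k = w * o_j
    return (d - sigmoid(w * o_j)) ** 2

# Analytic chain-rule gradient: -2(d_k - o_k) * o'(p_k) * o_j,
# using the sigmoid identity o'(p) = o(1 - o).
o_k = sigmoid(w_jk * o_j)
analytic = -2 * (d - o_k) * o_k * (1 - o_k) * o_j

# Central finite-difference estimate of the same gradient.
eps = 1e-6
numeric = (loss(w_jk + eps) - loss(w_jk - eps)) / (2 * eps)
```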
What is content-addressable memory in neural networks?
Memory recalled by content rather than address - similar to human associative memory
What is the weight update rule in backpropagation?
w_{ij}(t+1) = w_{ij}(t) + ηo_iδ_j where η=learning rate, o=neuron output, δ=error
What are three limitations of backpropagation?
1. Vanishing gradient problem 2. High computational cost 3. Black-box nature
What are three advantages of backpropagation?
1. Can learn complex functions 2. Handles noisy data 3. Good generalization
What are the main components of a biological neuron?
Dendrites (input), Soma (cell body), Axon (output), Synapses (connections)
How is error calculated in output layer neurons during backpropagation?
δ_i = o_i'(p_i)[d_i - o_i] where o' is the derivative of the activation function
What limitation did Minsky and Papert discover about Perceptrons?
They can only solve linearly separable problems (cannot solve XOR)
What are the three phases of backpropagation?
1. Forward pass 2. Error calculation 3. Backward weight update
How is error calculated in hidden layer neurons during backpropagation?
δ_i = o_i'(p_i)∑_j w_{ij}δ_j - the weighted sum of downstream errors, scaled by the activation derivative
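Putting the output-layer and hidden-layer delta formulas together, one backpropagation step on a tiny 2-2-1 sigmoid network might look like this (the weights, input, target, and learning rate are arbitrary illustrative values):

```python
import math

def sigmoid(p):
    return 1.0 / (1.0 + math.exp(-p))

x = [1.0, 0.0]                    # input pattern
W1 = [[0.5, -0.3], [0.2, 0.4]]    # W1[j][i]: input i -> hidden neuron j
W2 = [0.6, -0.1]                  # hidden neuron j -> single output
d, eta = 1.0, 0.5                 # target and learning rate

# Phase 1: forward pass.
h = [sigmoid(sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
o = sigmoid(sum(W2[j] * h[j] for j in range(2)))

# Phase 2: error calculation.
delta_o = o * (1 - o) * (d - o)                  # output layer: o'(p)[d - o]
delta_h = [h[j] * (1 - h[j]) * W2[j] * delta_o   # hidden layer: o'(p) * sum w * delta
           for j in range(2)]

# Phase 3: backward weight update, w_ij += eta * o_i * delta_j.
for j in range(2):
    W2[j] += eta * h[j] * delta_o
    for i in range(2):
        W1[j][i] += eta * x[i] * delta_h[j]

# After the update, the output moves toward the target.
h2 = [sigmoid(sum(W1[j][i] * x[i] for i in range(2))) for j in range(2)]
o2 = sigmoid(sum(W2[j] * h2[j] for j in range(2)))
```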
What is the sigmoid activation function formula?
O(p_i) = 1/(1 + e^{-p_i}) - outputs values between 0 and 1
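A quick sketch, including the derivative identity o'(p) = o(p)(1 - o(p)) that makes the sigmoid convenient in the backpropagation delta formulas:

```python
import math

def sigmoid(p):
    # Squashes any real potential into (0, 1).
    return 1.0 / (1.0 + math.exp(-p))

def sigmoid_prime(p):
    # The derivative can be computed from the output itself: o * (1 - o).
    o = sigmoid(p)
    return o * (1 - o)

print(sigmoid(0.0))        # 0.5, the midpoint
print(sigmoid_prime(0.0))  # 0.25, the maximum slope
```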
What is the Perceptron convergence theorem?
If the training data are linearly separable, the Perceptron learning algorithm is guaranteed to find a separating set of weights in a finite number of updates; no such guarantee exists for non-separable data such as XOR