Lecture 4 Lecture Notes Flashcards
(74 cards)
How do McCulloch and Pitts neurons function?
They sum the firing of incoming neurons multiplied by synapse weights and fire if the sum exceeds a threshold.
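This behavior can be sketched in a few lines. The weights and threshold below are illustrative values (not from the lecture), chosen so the neuron computes AND:

```python
# A minimal McCulloch-Pitts neuron: sum the binary inputs weighted by
# their synapse weights, and fire (output 1) only if the sum reaches
# the threshold.
def mp_neuron(inputs, weights, threshold):
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With weights (1, 1) and threshold 2, the neuron fires only when
# both inputs fire, i.e. it computes AND.
print(mp_neuron([1, 1], [1, 1], 2))  # -> 1
print(mp_neuron([1, 0], [1, 1], 2))  # -> 0
```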
What does a perceptron consist of?
Sensory (input) neurons connected by adjustable weights to motor (output) neurons.
What is the main learning rule for perceptrons?
Adjust weights based on the difference between actual and desired outputs.
What is the significance of the bias unit in a perceptron?
It shifts the decision boundary away from the origin, so the perceptron can place the dividing line anywhere in the input space.
What boolean functions can perceptrons learn?
- AND
- OR
- NAND
- NOR
What is a limitation of perceptrons?
They can only learn linearly separable boolean functions.
Which boolean function cannot be learned by a simple perceptron?
XOR.
What is the equation used in perceptrons for output calculation?
o_i = step(Σ_j W_ij x_j)
What is a common value for the learning rate in perceptrons?
0.1.
What happens if the output neuron incorrectly produces a 1?
Decrease the weights from the active input neurons to that output neuron.
What happens if the output neuron incorrectly produces a 0?
Increase the weights from the active input neurons to that output neuron.
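The learning rule on these cards can be sketched as a short training loop. AND is used as the example task, with the learning rate of 0.1 mentioned above; the bias is handled as an extra input fixed at 1:

```python
# Perceptron learning rule: for each training pair, nudge every weight
# by lr * (target - output) * input. This decreases weights when the
# output is incorrectly 1 and increases them when it is incorrectly 0.
def step(s):
    return 1 if s >= 0 else 0

def train_perceptron(data, lr=0.1, epochs=50):
    weights = [0.0, 0.0, 0.0]           # two inputs + bias weight
    for _ in range(epochs):
        for inputs, target in data:
            x = list(inputs) + [1]      # bias unit always outputs 1
            out = step(sum(w * xi for w, xi in zip(weights, x)))
            for i in range(len(weights)):
                weights[i] += lr * (target - out) * x[i]
    return weights

# AND is linearly separable, so the rule converges to a correct line.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = train_perceptron(and_data)
for inputs, target in and_data:
    x = list(inputs) + [1]
    print(inputs, step(sum(wi * xi for wi, xi in zip(w, x))))
```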
For the XOR function, what are the desired outputs for the inputs (0,1) and (1,0)?
1’s
For the XOR function, what are the desired outputs for the inputs (0,0) and (1,1)?
0’s
What type of functions can perceptrons learn?
Boolean functions which are linearly separable
For a 2-variable input, how can the 1’s and 0’s be divided?
With a straight line
For a 3-variable input, how can the 1’s and 0’s be divided?
With a plane
What did Minsky and Papert predict about advanced forms of perceptrons?
They were unlikely to escape the problem of linear separability
What was the impact of Minsky’s reputation on the field of neural networks?
It effectively shut down research in the field for over a decade.
Can multilayer neural networks learn functions beyond linear separable ones?
Yes; with enough hidden units they can approximate any continuous function, not just linearly separable ones.
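A small hand-wired example shows how adding a layer escapes linear separability. The weights below are chosen by hand (not learned) so the hidden units compute OR and NAND, and the output unit ANDs them, yielding XOR:

```python
# Two-layer network computing XOR, which no single perceptron can
# represent. Each unit is a step neuron with a built-in bias term.
def step(s):
    return 1 if s >= 0 else 0

def xor_net(x1, x2):
    h_or = step(x1 + x2 - 0.5)          # hidden unit 1: OR
    h_nand = step(-x1 - x2 + 1.5)       # hidden unit 2: NAND
    return step(h_or + h_nand - 1.5)    # output unit: AND of the two

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))      # 1 only for (0,1) and (1,0)
```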
What is the perceptron considered in terms of classification?
A binary classification algorithm
What does a neural network do beyond classification?
Learns a continuous function from one multidimensional space to another
What are the two generalizations made to the single-layer perceptron?
- Change the step function to a differentiable function
- Define a formal learning algorithm in terms of gradient descent
Why must the initial weights of a multilayer neural network be small random values?
If all weights start at the same value (e.g. 0), every hidden unit computes the same thing and receives the same update, so the network cannot learn; small random values break this symmetry.
What is the delta rule used for in neural networks?
Modifying weights based on their contribution to the final outcome
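The two generalizations and the delta rule can be sketched together for a single sigmoid unit (a minimal sketch under that assumption; multilayer backpropagation extends the same idea). The step function is replaced by a differentiable sigmoid, weights start as small random values, and each update follows the gradient: Δw = lr · (target − output) · sigmoid'(net) · input. OR is used as the example task:

```python
import math
import random

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def train_delta(data, lr=0.5, epochs=2000, seed=0):
    rng = random.Random(seed)
    # small random initial weights (2 inputs + bias)
    weights = [rng.uniform(-0.1, 0.1) for _ in range(3)]
    for _ in range(epochs):
        for inputs, target in data:
            x = list(inputs) + [1]                     # bias input
            net = sum(w * xi for w, xi in zip(weights, x))
            out = sigmoid(net)
            # delta rule: error scaled by the sigmoid's derivative
            delta = (target - out) * out * (1 - out)
            for i in range(len(weights)):
                weights[i] += lr * delta * x[i]
    return weights

or_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_delta(or_data)
for inputs, target in or_data:
    x = list(inputs) + [1]
    out = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    print(inputs, round(out, 2))
```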