Perceptron Flashcards

1
Q

What is the learning rate?

A

How large a step the AI takes when it updates its weights on each iteration

2
Q

What is the effect of a slower learning rate?

A

It takes longer for the model to converge

3
Q

What is the effect of a higher learning rate?

A

> The AI learns faster

> Learning may become unstable

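A minimal sketch of how the learning rate scales each step, assuming the standard perceptron update rule w ← w + η(t - y)x; the name update_weights and the example numbers are illustrative:

```python
import numpy as np

def update_weights(w, x, y, t, eta):
    """One perceptron-style weight update; the step size is scaled by eta."""
    # w: weights, x: inputs, y: perceptron output (0/1), t: target (0/1)
    return w + eta * (t - y) * x

w = np.array([0.2, -0.1])
x = np.array([1.0, 0.5])

# A small learning rate nudges the weights gently (slower but more stable) ...
print(update_weights(w, x, y=0, t=1, eta=0.1))  # [ 0.3  -0.05]
# ... while a large one takes a ten-times bigger step (faster, but can overshoot).
print(update_weights(w, x, y=0, t=1, eta=1.0))  # [ 1.2   0.4 ]
```
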
4
Q

How and why do human neurons fire?

A

> Neurons accumulate stimulus

> Neurons fire when the stimulus given to them exceeds a certain threshold

> Neurons are binary: they either fire or they don't

5
Q

What is the model for the McCulloch and Pitts neuron?

A

> Several inputs

> Each input has a weight associated with it which indicates the importance of that input

> If the weighted sum of the inputs is greater than a threshold, the neuron fires

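A minimal Python sketch of the McCulloch and Pitts model described above; the name mcp_neuron and the example weights are illustrative:

```python
import numpy as np

def mcp_neuron(x, w, theta):
    """McCulloch and Pitts neuron: fires (1) when the weighted sum of the
    inputs exceeds the threshold theta, otherwise stays silent (0)."""
    h_w = np.dot(w, x)  # sum of weights times inputs, i.e. w . x
    return 1 if h_w > theta else 0

# Two equally weighted inputs; the neuron fires only when both are on.
w = np.array([0.6, 0.6])
print(mcp_neuron(np.array([1, 1]), w, theta=1.0))  # 1  (1.2 > 1.0)
print(mcp_neuron(np.array([1, 0]), w, theta=1.0))  # 0  (0.6 <= 1.0)
```
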
6
Q

What is the equation for the weighted sum of inputs in the McCulloch and Pitts neuron?

A

hw = ∑ wᵢxᵢ = w ∙ x

7
Q

What is the threshold equation for the McCulloch and Pitts neuron?

A

o(hw) =

{ 1 if hw > θ

{ 0 if hw ≤ θ

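A quick worked example with illustrative numbers: for w = (0.5, 0.5), x = (1, 1) and θ = 0.7, hw = 0.5 + 0.5 = 1.0 > 0.7, so o(hw) = 1; for x = (1, 0), hw = 0.5 ≤ 0.7, so o(hw) = 0.
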
8
Q

What is the decision boundary of the McCulloch and Pitts neuron?

A

hw = θ

9
Q

What are the issues with the McCulloch and Pitts model of a neuron?

A

> Other neuron characteristics are not taken into account

> A weighted sum of the inputs is not necessarily what a real neuron computes

> Real neurons do not fire a single spike; instead they fire in spike trains

> Artificial networks are synchronous, whereas real biological networks are asynchronous

10
Q

What is a functional approximator and why is a neuron called one?

A

> It is something that can approximate any function

> Neurons are called this because they produce an approximate function that acts as a boundary between classes

11
Q

What are the two approaches to AI models?

A

> Generative model

> Discriminative model

12
Q

What is a generative model?

A

A model that learns the underlying distribution of the data so that it can generate new data points

13
Q

What are two examples of using a generative model?

A

> 4K upscaling

> Generating faces of people who do not exist

14
Q

What is a discriminative model?

A

A model that only learns the decision boundary between classes, not the distribution that underlies the data points

15
Q

What is an example of using a discriminative model?

A

> Image recognition

16
Q

What is a perceptron?

A

A collection (array) of neurons connected to a set of input variables

17
Q

When should a perceptron be used?

A

When the data is linearly separable by a hyperplane in n dimensions

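For example, the AND function on two binary inputs is linearly separable (a single line separates the points mapping to 1 from those mapping to 0), whereas XOR is not, so a single perceptron cannot learn XOR.
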
18
Q

How is the number of neurons decided?

A

By the number of classes being classified: one neuron per output class

19
Q

How do we calculate the number of parameters for a perceptron with multiple neurons?

A

Number of inputs × number of neurons = number of connections

Number of connections + number of outputs = number of parameters

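A small sketch of this calculation; the function name and the example sizes are illustrative:

```python
def perceptron_parameters(n_inputs, n_neurons):
    """Count weights (one per input-to-neuron connection) plus one
    bias/threshold per neuron (number of outputs = number of neurons)."""
    n_connections = n_inputs * n_neurons
    return n_connections + n_neurons

# Example: 4 inputs feeding 3 neurons:
# connections = 4 x 3 = 12, parameters = 12 + 3 = 15
print(perceptron_parameters(4, 3))  # 15
```
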
20
Q

What is the basic error function?

A

> The total number of mistakes

> E(X) = ∑ₙ | yₙ - tₙ |

  • E(X) = the total error
  • yₙ = the output of the perceptron for example n
  • tₙ = the target for example n
21
Q

How does the basic error function work for each example?

A

> E(X) = ∑ₙ | yₙ - tₙ |

> When the output and target differ:

  • yₙ - tₙ = 1 - 0 = 1
  • yₙ - tₙ = 0 - 1 = -1
  • | yₙ - tₙ | = 1

> When they are the same:

  • yₙ - tₙ = 0 - 0 = 0
  • yₙ - tₙ = 1 - 1 = 0
  • | yₙ - tₙ | = 0
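A minimal sketch of this error count, assuming the outputs and targets are 0/1 values; the names basic_error, y and t are illustrative:

```python
import numpy as np

def basic_error(y, t):
    """E(X) = sum over n of |y_n - t_n|: with 0/1 outputs and targets this
    is simply the number of misclassified examples."""
    return int(np.sum(np.abs(np.asarray(y) - np.asarray(t))))

y = [1, 0, 1, 1]  # perceptron outputs
t = [1, 1, 1, 0]  # targets
print(basic_error(y, t))  # 2 mistakes
```
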
22
Q

What do we want to do with the weights to optimise the perceptron?

A

Adjust the weights until the error is minimised

23
Q

How do we use a bias input?

A

Instead of associating the threshold with each neuron, we can include it as an extra input to the neuron. This makes the equation for the sum:

hw = ∑ wᵢxᵢ - θ

and the output:

o(hw) =

{ 1 if hw > 0

{ 0 if hw ≤ 0
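
A minimal sketch of this bias-as-input trick, assuming the extra input is fixed at -1 and its weight takes the role of θ; the name neuron_with_bias_input and the example numbers are illustrative:

```python
import numpy as np

def neuron_with_bias_input(x, w, theta):
    """Fold the threshold into the inputs: append a constant input of -1
    whose weight is theta, then compare the weighted sum against 0."""
    x_b = np.append(x, -1.0)   # the extra bias input, fixed at -1
    w_b = np.append(w, theta)  # its weight plays the role of the threshold
    h_w = np.dot(w_b, x_b)     # = sum(w_i * x_i) - theta
    return 1 if h_w > 0 else 0

# Same behaviour as thresholding the plain weighted sum at theta:
print(neuron_with_bias_input(np.array([1, 1]), np.array([0.6, 0.6]), theta=1.0))  # 1
print(neuron_with_bias_input(np.array([1, 0]), np.array([0.6, 0.6]), theta=1.0))  # 0
```

This behaves exactly like thresholding at θ, but the threshold can now be learned like any other weight.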