2 - We Are All Just Numbers Here... Flashcards

(89 cards)

1
Q

Who was William Rowan Hamilton?

A

An Irish mathematician known for his work on quaternions.

2
Q

What significant event happened on October 16, 1843?

A

Hamilton had a flash of inspiration for the quaternion formula while walking along the Royal Canal.

3
Q

What is the fundamental formula for quaternion multiplication?

A

i² = j² = k² = ijk = -1.

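A consequence worth noting (not stated on the card, but it follows directly from the formula): since k² = -1, multiplying ijk = -1 on the right by k gives -ij = -k, so ij = k; similar manipulations give jk = i and ki = j.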
4
Q

What did Hamilton etch on the stone of Brougham Bridge?

A

The fundamental formula for quaternion multiplication.

5
Q

Define a scalar quantity.

A

A stand-alone number that represents magnitude only.

6
Q

Define a vector.

A

A quantity that has both magnitude and direction.

7
Q

What are the components of a vector?

A

In two dimensions, the x-component and the y-component.

8
Q

How can the magnitude of a vector be calculated?

A

Using the Pythagorean theorem: √(x² + y²).

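A minimal sketch of this calculation in Python (the function name and sample values are just illustrative):

    import math

    def magnitude(x, y):
        # Pythagorean theorem: length of the vector (x, y)
        return math.sqrt(x**2 + y**2)

    print(magnitude(6, 9))  # ≈ 10.82, the walk from (0, 0) to (6, 9) in card 12
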
9
Q

What does Newton’s Second Law of Motion state?

A

The acceleration of an object is proportional to the net force acting on it, and the acceleration points in the same direction as the force.

10
Q

What geometrical shape is used to represent vector addition?

A

A parallelogram.

11
Q

What is the resultant vector in the example of a man walking from (0,0) to (6,9)?

A

The vector from (0,0) to (6,9).

12
Q

What is the net distance in the xy coordinate space from the origin to (6,9)?

A

√(6² + 9²) = √117 ≈ 10.82 miles.

13
Q

What happens when you subtract vectors?

A

The corresponding components are subtracted; the resulting difference vector shows, for example, whether one force is acting against another.

14
Q

What is the effect of multiplying a vector by a scalar?

A

It scales the vector’s magnitude.

15
Q

Define a unit vector.

A

A vector with a magnitude of 1.

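A small NumPy sketch tying cards 10 and 13-15 together (the specific vectors are made up for illustration):

    import numpy as np

    a = np.array([6.0, 9.0])
    b = np.array([2.0, 1.0])

    print(a + b)                   # addition (parallelogram / tip-to-tail rule)
    print(a - b)                   # subtraction gives the difference vector
    print(3 * a)                   # multiplying by a scalar rescales the magnitude
    unit_a = a / np.linalg.norm(a)
    print(np.linalg.norm(unit_a))  # a unit vector has magnitude 1.0
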
16
Q

What is the dot product of two vectors?

A

The magnitude of one vector multiplied by the projection of another onto it.

17
Q

What does a dot product of zero indicate?

A

The two vectors are orthogonal (at right angles).

18
Q

How is the dot product calculated using vector components?

A

a · b = a1b1 + a2b2.

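A short sketch of the component formula, with a second pair of vectors chosen to show the zero (orthogonal) case:

    import numpy as np

    a = np.array([3.0, 4.0])
    b = np.array([2.0, 1.0])
    print(np.dot(a, b))  # a1*b1 + a2*b2 = 3*2 + 4*1 = 10.0

    c = np.array([-4.0, 3.0])
    print(np.dot(a, c))  # 0.0: a and c are orthogonal (at right angles)
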
19
Q

What is the significance of Hamilton’s work on quaternions for machine learning?

A

It laid the mathematical foundations of vector analysis, which much of machine learning builds on.

20
Q

Fill in the blank: A ______ is a mathematical entity composed of four elements.

A

quaternion.

21
Q

True or False: The magnitude of a vector can be negative.

A

False.

22
Q

What does the projection of one vector onto another represent?

A

The ‘shadow cast’ by one vector onto another.

23
Q

What is the equation for the scalar quantity when dealing with vectors a and b?

A

a · b = a1b1 + a2b2

24
Q

What do the vectors i and j represent in the context of dot products?

A

Orthogonal unit vectors, where i · j = j · i = 0 and i · i = j · j = 1.

25
What does a perceptron output if the weighted sum of its inputs plus the bias term is greater than 0?
+1
26
What is the output of a perceptron if the weighted sum is less than or equal to 0?
-1
27
In the perceptron model, how can the weights be represented?
As a vector w = (w1, w2)
28
What geometrical concept does the perceptron use to separate data points into clusters?
A linearly separating hyperplane
29
What is the relationship between the weight vector w and the separating hyperplane?
The vector w is orthogonal to the hyperplane
30
What does the dot product of a data point vector and the weight vector indicate?
The distance of the data point from the hyperplane
31
What happens when a data point lies on the hyperplane?
The dot product with the weight vector equals zero
32
What is the significance of the bias term in a perceptron?
It moves the hyperplane away from the origin without changing its orientation
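
Cards 25-32 can be summarized in a few lines of Python; the weights, bias, and data point below are invented for illustration:

    import numpy as np

    w = np.array([0.5, -1.0])   # weight vector, orthogonal to the hyperplane
    w0 = 0.25                   # bias term: shifts the hyperplane off the origin
    x = np.array([2.0, 0.5])    # a data point

    score = np.dot(w, x) + w0   # weighted sum of the inputs plus the bias
    y = 1 if score > 0 else -1  # output +1 if the sum is > 0, otherwise -1
    print(score, y)
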
33
Fill in the blank: The perceptron learning algorithm guarantees to find one separating hyperplane, but not necessarily the _____ one.
best
34
What is the mathematical representation of a one-column matrix with two elements?
A 2 × 1 column matrix whose two entries are indexed 1 and 2
35
What is the process of flipping a column matrix on its side called?
Taking the transpose of a matrix
36
What is the notation for the transpose of matrix A?
A^T
37
In the context of matrices, what is a vector?
A particular form of matrix with either one row or one column
38
What is the relationship between the number of columns in the first matrix and the number of rows in the second for taking a dot product?
They must be equal
39
How can the weighted sum of inputs in a perceptron be concisely written?
As the dot product w^T x
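
The same weighted sum written with column matrices and a transpose, as in cards 34-39 (the values are again just illustrative):

    import numpy as np

    w = np.array([[0.5], [-1.0]])  # a 2 x 1 column matrix
    x = np.array([[2.0], [0.5]])   # the input, also a column matrix
    print(w.T)                     # transpose: the column flipped onto its side
    print(w.T @ x)                 # w^T x: a 1 x 1 matrix holding the weighted sum
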
40
What does the perceptron learn from a set of input data vectors?
The weight vector that represents a hyperplane separating the data into two clusters
41
What is the significance of the hyperplane in the context of classification?
It determines the classification of new data points based on their position relative to it
42
True or False: The perceptron can classify data points as 'obese' or 'not-obese' based on their position relative to the hyperplane.
True
43
What is the role of modern deep neural networks in relation to the perceptron?
They build upon the foundational concepts established by the perceptron
44
What is a perceptron learning algorithm?
A computationally viable algorithm for binary classification that involves finding a hyperplane to separate data into two groups.
45
What defines a 'solution' in the context of perceptrons?
A hyperplane that linearly separates the data into two groups.
46
Who developed a significant proof regarding the perceptron learning algorithm in 1962?
Henry David Block.
47
What did Block's proof establish?
Upper bounds for the number of mistakes made by the perceptron learning algorithm.
48
What is the focus of Minsky and Papert's book 'Perceptrons'?
A class of computations that make decisions by weighing evidence.
49
What was a notable criticism made by Block in his review of 'Perceptrons'?
He objected to Minsky and Papert's implication that cyberneticists should have known about earlier convergence proofs.
50
What is the significance of the term 'cybernetics'?
The study of control and communication in the animal and the machine.
51
What are the six variables used to categorize patients in the discussed pandemic scenario?
* x1 = age
* x2 = body mass index
* x3 = has difficulty breathing (yes = 1 / no = 0)
* x4 = has fever (yes/no)
* x5 = has diabetes (yes/no)
* x6 = chest CT scan (0 = clear, 1 = mild infection, 2 = severe infection)
52
What does the outcome 'y' represent for each patient?
y = -1 (did not need ventilator support) or y = 1 (needed ventilator support).
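
A tiny illustration of this encoding (the patient values below are entirely made up):

    import numpy as np

    # x = (age, BMI, difficulty breathing, fever, diabetes, CT score)
    x = np.array([67.0, 31.2, 1.0, 1.0, 0.0, 2.0])  # one hypothetical patient
    y = 1                                            # needed ventilator support
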
53
What is the goal of training a perceptron in this context?
To find a separating hyperplane for the data points.
54
What is the first step in the perceptron training algorithm?
Initialize the weight vector to zero: set w = 0.
55
What condition necessitates updating the weight vector in the perceptron algorithm?
If y w^T x ≤ 0.
56
How does the perceptron determine if the weights are correct?
If the expression y w^T x is positive.
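
A compact sketch of the training loop in cards 54-56 (repeated in 76-78), using the update rule from card 62; the toy data are made up, and the bias is folded into w by appending a constant 1 to each data point:

    import numpy as np

    # toy, linearly separable data; the last column is a constant 1 for the bias
    X = np.array([[2.0, 1.0, 1.0], [0.5, 1.0, 1.0],
                  [-1.0, 0.5, 1.0], [0.0, -2.0, 1.0]])
    y = np.array([1, 1, -1, -1])

    w = np.zeros(3)                      # step 1: initialize w = 0
    converged = False
    while not converged:
        converged = True
        for xi, yi in zip(X, y):
            if yi * np.dot(w, xi) <= 0:  # mistake: y w^T x <= 0
                w = w + yi * xi          # update: add y x
                converged = False
    print(w)
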
57
What does the convergence proof by Minsky and Papert establish?
The perceptron will converge to a solution in a finite number of steps if one exists.
58
What is the significance of the dot product of weight vectors during training?
It indicates how closely the weight vector aligns with the desired weight vector.
59
What does the term 'XOR problem' refer to in perceptrons?
A problem that cannot be solved by a single layer of perceptrons because its data points are not linearly separable.
60
What is the relationship between lower and upper bounds in computational complexity?
A lower bound establishes what is impossible (no algorithm can do better than it), while an upper bound states the most resources a known solution requires.
61
What does the weight vector w represent in the perceptron model?
The parameters that define the hyperplane separating the data.
62
Fill in the blank: The perceptron learning algorithm updates the weight vector by adding _______.
y x.
63
True or False: The perceptron algorithm guarantees a solution for all types of data.
False.
64
What major assumption is made about the data in the context of perceptrons?
The data are linearly separable.
65
What is the bias term in the perceptron model denoted as?
w0.
66
What is a perceptron?
A simple type of artificial neuron used in machine learning
67
What problem did Minsky and Papert prove that a single layer of perceptrons could not solve?
The XOR problem
68
What are the four data points involved in the XOR problem?
(0, 0), (1, 0), (1, 1), and (0, 1)
69
What must a perceptron output for the points (0, 0) and (1, 1)?
y = 1
70
What must a perceptron output for the points (1, 0) and (0, 1)?
y = -1
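
A quick way to see the contradiction behind cards 67-70, using the output rule from cards 25-26: a single perceptron would need w^T x + w0 > 0 at (0, 0) and (1, 1), but w^T x + w0 ≤ 0 at (1, 0) and (0, 1). Written out, that is w0 > 0, w1 + w2 + w0 > 0, w1 + w0 ≤ 0, and w2 + w0 ≤ 0. The first two imply w1 + w2 + 2w0 > 0, while adding the last two gives w1 + w2 + 2w0 ≤ 0, so no weight vector and bias can satisfy all four requirements.
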
71
What does the term 'multi-layer perceptrons' refer to?
Perceptrons stacked such that the output of one feeds into the input of another
72
What algorithm was published by Rumelhart, Hinton, and Williams in 1986?
Backpropagation
73
What does the backpropagation algorithm rely on?
Calculus and optimization theory
74
What is the significance of the year 1982 in neural network research?
A physicist's unique solution to a biological problem re-energized the field
75
What is the goal of the perceptron algorithm?
To find a linearly separating hyperplane
76
What is the initial step in the perceptron algorithm?
Initialize the weight vector to zero: set w = 0
77
When does the weight vector get updated in the perceptron algorithm?
If y w^T x ≤ 0
78
What is the equation for updating the weight vector in the perceptron?
w_new = w_old + y x
79
What does γ (gamma) represent in the perceptron algorithm?
The distance between the linearly separating hyperplane and the closest data point
80
What is the dot product of a vector with itself always greater than or equal to?
0
81
What happens to the dot product w^T w* after each update?
It grows by at least γ
82
What happens to the dot product w^T w after each update?
It grows by at most 1
83
How can the number of updates M required for the perceptron to converge be described?
M is always a finite quantity
84
What is the maximum number of updates required for convergence in the perceptron algorithm?
1/γ²
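
Cards 79-84 compress the standard convergence argument. A sketch, assuming the data are normalized (card 88) and the target hyperplane's weight vector w* is a unit vector with margin γ: after M updates, w^T w* has grown to at least Mγ, while w^T w has grown to at most M. Because a dot product is at most the product of the two magnitudes, w^T w* ≤ √(w^T w) ≤ √M. Combining the two gives Mγ ≤ √M, which rearranges to M ≤ 1/γ².
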
85
True or False: The perceptron algorithm guarantees convergence in a finite number of steps.
True
86
In the context of the perceptron, what does the term 'linearly separating hyperplane' refer to?
A hyperplane that separates different classes of data points
87
What is a key limitation of a single layer perceptron?
It cannot solve problems like XOR that are not linearly separable
88
What is the significance of normalization in the perceptron algorithm?
It ensures all input data points have magnitudes less than or equal to 1
89
Fill in the blank: The perceptron will converge without fail in a finite number of _______.
steps