Midterm 3 Flashcards

(105 cards)

1
Q

Diagonal matrix

A

a matrix whose only nonzero entries are on the main diagonal

2
Q

similar matrices

A

a matrix A is similar to a matrix D if A = PDP^-1 for some invertible matrix P
similar matrices have the same characteristic polynomial, and therefore the same eigenvalues (and the same determinant)!!!

3
Q

If two matrices have the same eigenvalues, does that necessarily mean they are similar to each other?

A

No; only the converse is true: similar matrices always have the same eigenvalues, but two matrices with the same eigenvalues need not be similar

4
Q

Diagonalization

A

factoring a matrix A as A = PDP^-1, where D is a diagonal matrix and P is an invertible matrix
- very useful for computing A^k with large k:
A^k = PD^kP^-1

5
Q

Algebraic multiplicity

A

the number of times an eigenvalue is repeated as a root of the characteristic polynomial

6
Q

Geometric multiplicity

A

the number of linearly independent eigenvectors for a given eigenvalue
= the dimension of Nul(A - λI) for that specific λ

7
Q

Singular

A

NOT INVERTIBLE
free variables, linearly dependent columns
Nonsingular = invertible!

8
Q

Diagonalization Formula

A

A = PDP^-1
P: its columns are the linearly independent eigenvectors of A
D: a diagonal matrix of the corresponding eigenvalues (in the same order as the columns of P)
Allows us to compute A^k for large k:
A^k = PD^kP^-1

9
Q

The Diagonalization Theorem

A

An nxn matrix A is diagonalizable if and only if A has n linearly independent eigenvectors
(P is then nxn, the same size as A)
A is diagonalizable if and only if there are enough eigenvectors to form a basis of Rn: an eigenvector basis

10
Q

Steps to Diagonalize a Matrix

A
  1. find the eigenvalues using the characteristic polynomial
    det(A - λI) = 0
  2. find the linearly independent eigenvectors of A
    (A - λI)v = 0, plugging in each λ
    - solve for the null space in parametric vector form
    IF the total number of linearly independent eigenvectors is NOT equal to the number of columns of A, then A is not diagonalizable
  3. Construct P from the eigenvectors
  4. Construct D using the corresponding eigenvalues (see the numpy sketch below)
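
Not part of the original cards: a minimal numpy sketch of these steps, using a made-up 2x2 matrix that happens to be diagonalizable (np.linalg.eig does steps 1-2 for us).

```python
import numpy as np

# Hypothetical example matrix (not from the cards); assumed diagonalizable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Steps 1-2: eigenvalues and linearly independent eigenvectors (columns of P).
eigvals, P = np.linalg.eig(A)

# Steps 3-4: P from the eigenvectors, D from the corresponding eigenvalues.
D = np.diag(eigvals)

# Check A = P D P^-1, and compute A^5 as P D^5 P^-1.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))                 # True
print(np.allclose(np.linalg.matrix_power(A, 5),
                  P @ np.diag(eigvals**5) @ np.linalg.inv(P)))  # True
```
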
11
Q

Theorem - Eigenvalues and Diagonalizable

A

An nxn matrix with n distinct eigenvalues is diagonalizable
- if v1, ..., vn are eigenvectors corresponding to the n distinct eigenvalues of a matrix A, then {v1, ..., vn} is linearly independent, therefore A is diagonalizable
BUT it is not necessary for an nxn matrix to have n distinct eigenvalues to be diagonalizable

12
Q

Theorem - Matrices whose Eigenvalues are Not Distinct

A

the geometric multiplicity of each eigenvalue λ is less than or equal to its algebraic multiplicity
a matrix is diagonalizable IF AND ONLY IF the sum of the dimensions of the eigenspaces (Nul(A - λI)) equals n (the number of columns)
- in that case the total geometric multiplicity is n, so the geometric multiplicity of each eigenvalue must equal its algebraic multiplicity
- equivalently, the characteristic polynomial of A factors completely into linear factors (the roots can be real or complex) and the multiplicities match

13
Q

DIAGONALIZABILITY AND INVERTIBILITY

A

they have no correlation with each other
- a matrix can be diagonalizable but not invertible, because it can have an eigenvalue of 0
- a matrix can be invertible but not diagonalizable, e.g.
[1 1]
[0 1]

14
Q

Complex number

A

a + bi
i = sqrt(-1)

15
Q

Complex eigenvalue

A

eigenvalue that is a complex number a + bi
if b = 0, then λ is a real eigenvalue

16
Q

Complex eigenvector

A

an eigenvector with complex entries, corresponding to a complex eigenvalue

17
Q

Complex number Space ℂn

A

the space of all vectors with n complex entries (for n = 1, just the complex numbers themselves)

18
Q

ℂ2

A

the space of vectors with 2 complex entries
(the entries are allowed, but not required, to be non-real)

19
Q

Conjugate of a complex number

A

the conjugate for (a+bi) is (a-bi)

20
Q

Complex conjugate of a vector x

A

xbar: x with a bar on top of it - the vector whose entries are the complex conjugates of the entries of x

21
Q

Re x

A

the real parts of a complex vector x
an entry CAN be 0

22
Q

Im x

A

the imaginary parts of a complex vector x
an entry can be 0

23
Q

We can identify ℂ with R2

A

a + bi <-> (a,b)

24
Q

we can add and multiply complex numbers

A

Add: componentwise, like matrix addition: (2 - 3i) + (-1 + i) = 1 - 2i
Multiply: FOIL!!! (using i^2 = -1) - this is not matrix multiplication

25

Q

absolute value of a complex number a + bi

A

sqrt(a^2 + b^2)

26

Q

we can write complex numbers in polar form

A

(a, b) = a + bi = r(cosφ + i sinφ)
a is the real part, b is the imaginary part, and r = |a + bi|

27

Q

Argument of λ = a + bi

A

the angle φ that the point (a, b) makes with the positive real axis (Re x horizontal, Im x vertical)

28

Q

Finding complex eigenvalues and complex eigenvectors

A

  1. solve det(A - λI) = 0; the complex roots are the complex eigenvalues λ
  2. solve (A - λI)x = 0 for x to get an eigenvector (you should get one "free variable")
  3. the eigenvector for the conjugate eigenvalue is the conjugate of the eigenvector you just found (see the numpy sketch below)

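Not a card: a small numpy check of this procedure on a made-up matrix with no real eigenvalues (a 90-degree rotation), just to see the conjugate pair appear.

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])      # hypothetical example: rotation by 90 degrees

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)        # [0.+1.j 0.-1.j] -- the complex eigenvalues come as a conjugate pair
print(eigvecs[:, 0])  # eigenvector for i; its entrywise conjugate is an eigenvector for -i
```
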
29

Q

Re x and Im x

A

xbar = the vector whose entries are the complex conjugates of the entries in x
for example: x = (3 - i, i, 2) = (3, 0, 2) + i(-1, 1, 0)
Re x = (3, 0, 2) (the first part), Im x = (-1, 1, 0) (the second part)
xbar = (3 + i, -i, 2)

30

Q

Properties of Complex Conjugate Matrices

A

conjugates can be taken factor-by-factor before multiplying, e.g. for rx, Bx, BC, rB
(r a scalar, uppercase letters matrices, x and y vectors)
conjugate of (x + y) = xbar + ybar
for a real matrix A, the conjugate of Av is A vbar
Im(z zbar) = 0 for a complex number z
(zw)bar = (zbar)(wbar)

31

Q

Complex Eigenvalues and Complex Eigenvectors Come in Pairs!!!

A

for a real matrix, complex (non-real) eigenvalues come in conjugate pairs, so there is never an odd number of them

32

Q

Rotation-Dilation Matrix

A

a matrix of the form
[a -b]
[b  a]
its eigenvalues are a + bi and a - bi
the length of the eigenvalue is r = sqrt(a^2 + b^2)
the angle of the eigenvalue is φ = tan^-1(b/a)

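A quick numpy check of the rotation-dilation facts above, with made-up values a = b = 1.

```python
import numpy as np

a, b = 1.0, 1.0                       # hypothetical values
C = np.array([[a, -b],
              [b,  a]])               # rotation-dilation matrix

print(np.linalg.eigvals(C))           # [1.+1.j 1.-1.j] -> a + bi and a - bi
print(np.hypot(a, b))                 # r = sqrt(a^2 + b^2), the scaling factor
print(np.degrees(np.arctan2(b, a)))   # 45.0 -> the rotation angle phi
```
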
33

Q

Euler's Formula

A

e^(iφ) = cosφ + i sinφ
multiplying two complex numbers multiplies their lengths and adds their angles:
(r1 e^(iφ1))(r2 e^(iφ2)) = r1 r2 e^(i(φ1 + φ2))

34

Q

Complex Numbers and Polynomials

A

if λ is a complex root of the (real) characteristic polynomial of A, then λbar is also a root of that polynomial
so λbar is an eigenvalue of A as well, with eigenvector vbar

35

Q

Inner Product or Dot Product

A

a scalar: u*v = u^T v

36

Q

Vector Length

A

||v|| = sqrt(v*v) = sqrt(v1^2 + v2^2 + ... + vn^2)

37

Q

Unit Vector

A

a vector whose length is 1

38

Q

Vector Normalization

A

dividing a nonzero vector by its length to make it a unit vector: (1/||v||)v

39

Q

Distance between two vectors

A

dist(u, v) = ||u - v||

40

Q

Orthogonal vectors

A

two vectors are orthogonal if their dot product equals 0

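One small numpy sketch covering the last few cards (dot product, length, normalization, distance, orthogonality); the vectors are made up.

```python
import numpy as np

u = np.array([3.0, 4.0])             # hypothetical vectors
v = np.array([-4.0, 3.0])

length = np.sqrt(u @ u)              # ||u|| = sqrt(u*u) = 5
unit = u / length                    # normalization: a unit vector in the direction of u
dist = np.linalg.norm(u - v)         # dist(u, v) = ||u - v||

print(length, np.linalg.norm(unit))  # 5.0 1.0
print(dist)                          # sqrt(50)
print(u @ v)                         # 0.0 -> u and v are orthogonal
```
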
41

Q

Orthogonal complements

A

the set of all vectors that are orthogonal to every vector in a subspace W
whether it is a line, a plane, etc. depends on the dimension of W (dim W⊥ = n - dim W)

42

Q

For a subspace to be in Rn

A

a subspace contains the zero vector and is closed under addition and scalar multiplication
every vector in a subspace of Rn has n entries
R1 means the vectors have one entry; R1 is the span of just [1]

43

Q

Dot Product vs Cross Product

A

the dot product gives you a number, while the cross product gives you a vector

44

Q

Theorem: Dot Product Properties

A

u*v = v*u (symmetry)
(u + v)*w = u*w + v*w (linearity), and likewise in the other argument
for a scalar c: (cu)*v = c(u*v) = u*(cv) - you can take the dot product of the two vectors first and then multiply by the scalar
u*u >= 0 (positivity!), and u*u = 0 only if u = 0

45

Q

Vector Length Properties

A

vector length is always nonnegative (and positive unless v = 0)
||cv|| = |c| ||v||
||cv||^2 = c^2 ||v||^2

46

Q

Normalizing a Vector

A

u = (1/||v||)v is a unit vector
u is in the same direction as v BUT generally has a different magnitude (its length is now 1)

47

Q

Finding the Distance between Two Vectors

A

  1. subtract the two vectors: u - v
  2. find the length of the resulting vector: ||u - v||

48

Q

Orthogonality Basics

A

two vectors are orthogonal = the two vectors are perpendicular to each other
u and v are orthogonal exactly when ||u - (-v)|| = ||u - v||, i.e. when u*v = 0
the zero vector is orthogonal to every vector in Rn

49

Q

The Pythagorean Theorem

A

two vectors u and v are orthogonal if and only if ||u + v||^2 = ||u||^2 + ||v||^2

50

Q

Orthogonal Complements Basics

A

the set of all vectors that are orthogonal to every vector in a subspace W
the orthogonal complement of W is written W⊥

51

Q

W⊥

A

a vector x is in W⊥ if and only if x is orthogonal to every vector in a set that spans W
- check the dot product of x with each spanning vector to prove orthogonality
W⊥ is a subspace of Rn just like W
- vectors in both subspaces have n entries, but the two do not necessarily have the same dimension
- dim(W⊥) = n - dim(W); for example, (Row A)⊥ = Nul A

52

Q

Theorem: Perps of Subspaces

A

Let A be an mxn matrix:
(Row A)⊥ = Nul A
(Col A)⊥ = Nul A^T
Proof idea: Av = 0 takes the dot product of every row of A with the vector v, so v is in Nul A exactly when v is orthogonal to every row of A, i.e. orthogonal to Row A

53

Q

Rank Theorem Expanded and Row A

A

Row A: the space spanned by the rows of matrix A (the pivot rows of an echelon form give a basis)
dim(Row A) = dim(Col A) - the number of pivot rows equals the number of pivot columns
Row A^T = Col A
THEREFORE, with n = the number of columns of A:
dim(Col A) + dim(Nul A) = n
dim(Row A) + dim(Nul A) = n

54

Q

Orthogonal Set

A

a set of vectors in Rn where each pair of distinct vectors from the set is orthogonal
ui*uj = 0 whenever i ≠ j

55

Q

Orthogonal Basis

A

a basis for a subspace W that is also an orthogonal set

56

Q

Orthogonal Projection

A

projecting a vector y onto a line/plane to get the closest vector to y in that subspace
yhat = proj_L y = (y*u/u*u)u
with L being the subspace spanned by u (subspaces must include the 0 vector)

57

Q

Orthonormal set

A

an orthogonal set where every vector is a unit vector

58

Q

Orthonormal basis

A

a basis for a subspace W that is also an orthonormal set

59

Q

Orthogonal Matrix

A

a SQUARE matrix whose columns form an orthonormal set

60

Q

Theorem: Orthogonal Sets and Linear Independence

A

if S = {u1, ..., up} is an orthogonal set of nonzero vectors in Rn, then S is linearly independent and is a basis for the subspace spanned by S

61

Q

All Orthogonal Sets are Linearly Independent Sets

A

TRUE, but only if there are no zero vectors - REMEMBER to omit the zero vector from an orthogonal set!
BUT not all linearly independent sets are orthogonal

62

Q

Theorem: Finding the weights for a linear combination of an orthogonal basis

A

let {u1, ..., up} be an orthogonal basis for a subspace W of Rn
for every y in W, the weights in the linear combination y = c1u1 + ... + cpup are given by
cj = (y*uj)/(uj*uj)
FOR ORTHOGONAL BASES

63

Q

How to find an Orthogonal Projection

A

yhat = proj_L y = (y*u/u*u)u
y = yhat + z (z is the component of y orthogonal to u)

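A tiny numpy version of this projection-onto-a-line computation (u and y are made up).

```python
import numpy as np

u = np.array([2.0, 1.0])           # hypothetical vector spanning the line L
y = np.array([7.0, 6.0])

y_hat = (y @ u) / (u @ u) * u      # yhat = proj_L y = (y*u/u*u)u
z = y - y_hat                      # component of y orthogonal to u
print(y_hat, z, z @ u)             # [8. 4.] [-1.  2.] 0.0
```
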
64

Q

Orthogonal Projections can be written as a Linear Combination of a Vector's Components

A

y = (y*u1/u1*u1)u1 + (y*u2/u2*u2)u2

65

Q

Orthonormal Sets vs Orthogonal Sets

A

all orthonormal sets are orthogonal, while not all orthogonal sets are orthonormal

66

Q

Theorem: Transpose of a Matrix with Orthonormal Columns

A

an mxn matrix U has orthonormal columns if and only if U^T U = I
- the transpose of a matrix with orthonormal columns, multiplied by the original matrix, ALWAYS gives the identity matrix (even if U is NOT square!)
Why: entry (i, j) of U^T U is ui*uj, which is 1 when i = j (a unit vector dotted with itself is its length squared, which is 1) and 0 otherwise (distinct columns are orthogonal)

67

Q

A^T A where A is a matrix with orthogonal columns

A

produces a diagonal matrix whose diagonal entries are the squared lengths of the columns of A

68

Q

Theorem: Properties of a Matrix with Orthonormal Columns

A

||Ux|| = ||x|| - the linear mapping x -> Ux preserves length
(Ux)*(Uy) = x*y
(Ux)*(Uy) = 0 if and only if x and y are orthogonal to each other - preserves orthogonality

69

Q

Difference between an Orthogonal Matrix and a Matrix with Orthonormal Columns

A

orthogonal matrices must be square

70

Q

U^-1 = U^T

A

the inverse of an orthogonal matrix is its transpose
orthogonal matrices have linearly independent columns

71

Q

Determinant of an Orthogonal Matrix

A

if A is an orthogonal matrix, then det A is equal to 1 or -1
the converse is NOT TRUE

72

Q

Orthogonal Projection vs Orthogonal Component of y onto W

A

yhat vs z, where z = y - yhat

73

Q

Best Approximation

A

||y - yhat|| < ||y - v|| for every other v in W
||y - yhat|| is the perpendicular distance between the vector and the subspace it is projected onto
ANY path between a vector and a subspace that is not perpendicular to the subspace is automatically not the shortest distance

74

Q

Properties of an orthogonal projection onto a subspace W of Rn

A

given a vector y and a subspace W in Rn, there is a vector yhat in W that is the UNIQUE vector in W for which y - yhat is orthogonal to W
yhat is the unique vector in W closest to y

75

Q

Theorem: Orthogonal Decomposition Theorem

A

let W be a subspace of Rn; each y in Rn can be written uniquely in the form y = yhat + z, where yhat is in W and z is in W⊥

76

Q

if {u1, ..., up} is any orthogonal basis of W, then yhat is

A

yhat = (y*u1/u1*u1)u1 + ... + (y*up/up*up)up (the fun equation we all know about)
we assume that W is not the zero subspace, because everything projected onto the zero subspace is just the zero vector

77

Q

Properties of Orthogonal Projections

A

if y is in W = Span{u1, ..., up}, then proj_W y = y
if y is already in the subspace, then projecting it onto the same subspace does nothing

78

Q

The Best Approximation Theorem

A

||y - yhat|| < ||y - v|| for every v in W other than yhat
yhat is the closest point in W to y

79

Q

Theorem: Orthonormal Basis and Projections

A

if {u1, ..., up} is an orthonormal basis for a subspace W of Rn, then
proj_W y = (y*u1)u1 + (y*u2)u2 + ... + (y*up)up
proj_W y = UU^T y for all y, where U = [u1 ... up]

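A numpy sketch of proj_W y = UU^T y; here the orthonormal columns of U come from a QR factorization of two made-up spanning vectors.

```python
import numpy as np

X = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])              # hypothetical vectors spanning a subspace W
U, _ = np.linalg.qr(X)                  # columns of U: an orthonormal basis for W

y = np.array([1.0, 2.0, 3.0])
proj = U @ U.T @ y                      # proj_W y = U U^T y
print(np.allclose(U.T @ U, np.eye(2)))  # True: U^T U = I
print(proj)
print(np.allclose(U.T @ (y - proj), 0)) # True: y - proj is orthogonal to W
```
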
80

Q

Theorem: Matrix with Orthonormal Columns vs Orthogonal Matrix

A

if U is an nxp matrix with orthonormal columns and W is the column space of U, then
U^T U x = I_p x = x for all x in Rp
U U^T y = proj_W y for all y in Rn
if U is an nxn matrix with orthonormal columns, then U is an orthogonal matrix and
U U^T y = I y = y for all y in Rn

81

Q

Gram-Schmidt

A

an algorithm for producing an orthogonal/orthonormal basis for any nonzero subspace of Rn

82

Q

The actual algorithm for Gram-Schmidt

A

v1 = x1
v2 = x2 - (x2*v1/v1*v1)v1
v3 = x3 - (x3*v1/v1*v1)v1 - (x3*v2/v2*v2)v2
... (a Python sketch follows below)

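Not a card: a short Python implementation of exactly this algorithm, assuming the input columns are linearly independent (the example matrix is made up).

```python
import numpy as np

def gram_schmidt(X):
    """Columns of X are assumed linearly independent {x1, ..., xp}.
    Returns a matrix whose columns {v1, ..., vp} form an orthogonal basis for the same span."""
    V = []
    for x in X.T:                          # walk through the columns x1, x2, ...
        v = x.astype(float)
        for w in V:                        # subtract the projection onto each earlier v
            v = v - (x @ w) / (w @ w) * w
        V.append(v)
    return np.column_stack(V)

X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                 # hypothetical linearly independent columns
V = gram_schmidt(X)
print(V.T @ V)                             # off-diagonal entries are 0 -> columns are orthogonal
```
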
83

Q

{v1, ..., vp} is an orthogonal basis for W... what about its span in relation to the original vectors?

A

the spans are the same: Span{v1, ..., vk} = Span{x1, ..., xk} for each k

84

Q

What is required for Gram-Schmidt

A

a linearly independent set (a basis) to start from
any nonzero subspace has an orthogonal basis, because an ordinary basis {x1, ..., xp} is always available to run Gram-Schmidt on

85

Q

Orthonormal Bases

A

normalize all vectors in the orthogonal basis

86

Q

QR Factorization

A

if A is an mxn matrix with linearly independent columns, then A can be factored as A = QR
Q: an mxn matrix whose columns form an orthonormal basis for Col A
R: an nxn upper triangular invertible matrix with positive entries on its diagonal

87

Q

How to QR Factorize

A

  1. use Gram-Schmidt on the columns of A (and normalize, if needed, to make them orthonormal) to get Q
  2. then solve A = QR, or just compute R = Q^T A (see the numpy sketch below)
if the columns of A were linearly dependent, then R would not be invertible

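numpy has this built in; a quick sketch checking A = QR and R = Q^T A on a made-up A. (Note numpy's R may have negative diagonal entries, a different sign convention than the positive-diagonal form in the card above.)

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])        # hypothetical matrix with linearly independent columns

Q, R = np.linalg.qr(A)            # Q: orthonormal columns spanning Col A; R: upper triangular
print(np.allclose(A, Q @ R))      # True
print(np.allclose(R, Q.T @ A))    # True, since Q^T Q = I
```
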
88

Q

General least-squares problem

A

finding the x that makes ||b - Ax|| as small as possible

89

Q

Normal Equations

A

A^T A x = A^T b

90

Q

Difference between x and xhat

A

x refers to a general/unknown solution, while xhat is the specific solution that solves the least-squares problem (the normal equations)

91

Q

Least-squares error

A

the distance from b to Axhat, where xhat is the least-squares solution of Ax = b: ||b - Axhat||

92

Q

Why do we solve least-squares problems

A

we want a closest-possible approximate solution to Ax = b when the system is inconsistent

93

Q

if A is mxn and b is in Rm, a least-squares solution of Ax = b is an xhat in Rn such that

A

||b - Axhat|| <= ||b - Ax|| for all x in Rn
(there can be more than one such xhat when the columns of A are linearly dependent)
if Ax = b is already consistent, then ||b - Axhat|| = 0

94

Q

Solution of the General Least-Squares Problem

A

use the normal equations!! A^T A x = A^T b

95

Q

Theorem: Least-Squares Solutions and Normal Equations

A

the set of least-squares solutions of Ax = b coincides with the nonempty set of solutions of the normal equations A^T A x = A^T b
it is POSSIBLE TO HAVE more than one least-squares solution - this happens when there is a free variable, i.e. the columns of A are linearly dependent

96

Q

Theorem: Logically equivalent statements

A

  1. the equation Ax = b has a unique least-squares solution for each b in Rm
  2. the columns of A are linearly independent
  3. the matrix A^T A is invertible
when these statements are true, the least-squares solution xhat is given by xhat = (A^T A)^-1 A^T b (see the numpy sketch below)

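A numpy sketch of solving a made-up inconsistent system by the normal equations.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                  # hypothetical A with linearly independent columns
b = np.array([6.0, 0.0, 0.0])               # hypothetical b making Ax = b inconsistent

# A^T A is invertible here, so the least-squares solution is unique.
x_hat = np.linalg.solve(A.T @ A, A.T @ b)   # solve A^T A xhat = A^T b
error = np.linalg.norm(b - A @ x_hat)       # least-squares error ||b - A xhat||
print(x_hat, error)                         # [ 5. -3.] sqrt(6)
```
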
97

Q

Least-Squares Error

A

||b - Axhat||

98

Q

Theorem: Finding the Least-Squares Solution using QR Factorization

A

given an mxn matrix A with linearly independent columns, let A = QR be a QR factorization
then for each b in Rm, the equation Ax = b has a unique least-squares solution, given by
xhat = R^-1 Q^T b, i.e. solve R xhat = Q^T b

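Same made-up A and b as in the normal-equations sketch above, now via QR.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([6.0, 0.0, 0.0])

Q, R = np.linalg.qr(A)
x_hat = np.linalg.solve(R, Q.T @ b)   # solve R xhat = Q^T b instead of forming R^-1
print(x_hat)                          # [ 5. -3.], matching the normal-equations answer
```
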
99

Q

if b is orthogonal to the columns of A, what can we say about the least-squares solution?

A

if b is orthogonal to Col A, then the projection of b onto Col A is 0
so a least-squares solution xhat of Ax = b satisfies Axhat = 0

100

Q

Least-Squares Lines

A

y = B0 + B1x

101

Q

Residual

A

the difference between the actual y-value and the predicted y-value

102

Q

Least-Squares Line

A

the line of best fit for a set of data
it minimizes the sum of the squares of the residuals - this is what the least-squares solution gives

103

Q

Objective in Least-Squares Lines

A

finding the B0 and B1 that create the least-squares line
plug in the x-values from the data points, with the Betas as your variables (the design matrix X has a column of 1s and a column of x-values)
can use the normal equations to solve (see the numpy sketch below)

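A numpy sketch of fitting a least-squares line y = B0 + B1x to made-up data points via the normal equations.

```python
import numpy as np

x = np.array([2.0, 5.0, 7.0, 8.0])        # hypothetical data
y = np.array([1.0, 2.0, 3.0, 3.0])

X = np.column_stack([np.ones_like(x), x]) # design matrix: a column of 1s and a column of x-values
beta = np.linalg.solve(X.T @ X, X.T @ y)  # normal equations X^T X B = X^T y
residuals = y - X @ beta                  # observed y minus predicted y
print(beta)                               # [B0 B1]
print(residuals)
```
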
104

Q

Mean-Deviation Procedure

A

  1. find the average xbar of all the x-values
  2. calculate x* = x - xbar for each data point
  3. then solve XB = y as usual, but build X using the x* values

105

Q

General Linear Model

A

y = XB + ε (ε is the residual vector)
solve the normal equations X^T X B = X^T y