Linear Algebra Flashcards

1
Q

Properties of regular Markov Chains

A

There is a unique SSV or SSPV (if the Markov chain is not regular then there can be multiple steady-state vectors)
All initial state vectors x0 converge to the unique SSV x, i.e. xk –> x as k approaches infinity
As k approaches infinity, the powers P^k of the regular transition matrix approach a matrix whose columns are all equal to the SSPV

(Only for regular Markov Chains)
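A minimal numerical sketch of the last property, using a made-up regular transition matrix (assumes numpy is available):

```python
import numpy as np

# Made-up regular transition matrix: columns are probability vectors,
# and every entry is already positive.
P = np.array([[0.8, 0.3],
              [0.2, 0.7]])

# Powers of P converge to a matrix whose columns are both the SSPV.
Pk = np.linalg.matrix_power(P, 50)
print(Pk)  # both columns are approximately [0.6, 0.4], the SSPV
```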

2
Q

Regular Markov Chain

A

A Markov Chain where the transition matrix is regular

3
Q

Regular Transition Matrix

A

A regular transition matrix is a stochastic matrix P such that some power P^k is positive, i.e. every entry of P^k is strictly greater than 0 (entries cannot include 0; to be positive they must be greater than 0)
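A small sketch of the definition, with a made-up stochastic matrix that has a zero entry but is still regular because its square is positive (assumes numpy):

```python
import numpy as np

# Made-up stochastic matrix: P has a zero entry, so P itself is not positive...
P = np.array([[0.5, 1.0],
              [0.5, 0.0]])
print(np.all(P > 0))   # False

# ...but P^2 has every entry > 0, so P is regular.
P2 = P @ P
print(np.all(P2 > 0))  # True
```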

4
Q

When must you add a parameter when solving

A

When there is no leading entry for that column

5
Q

Why does Ax = λx never have a unique solution

A

From the definition of an eigenvector and eigenvalue, Ax = λx if and only if (A-λI)x = 0. We know that A-λI is invertible if and only if (A-λI)x = 0 has the unique solution x = 0. But x cannot equal 0, as that contradicts the definition of an eigenvector. Therefore (A-λI)x = 0 cannot have a unique solution.

Equivalently: det(A-λI) = 0, which means A-λI is not invertible, which means (A-λI)x = 0 does not have the unique solution x = 0. So since A-λI is not invertible, (A-λI)x = 0 does not have a unique solution.

6
Q

Diagonalisation Theorem

A

The following statements are equivalent:
A is diagonalisable, i.e. D = P^-1AP for some invertible P and diagonal D
A has n linearly independent eigenvectors
Each eigenvalue of A has algebraic multiplicity equal to its geometric multiplicity

7
Q

What does it mean to diagonalise a matrix

A

It is to express a diagonalisable matrix A in the form A = PDP^-1

8
Q

Why is diagonalisation useful

A

Because it makes calculating the powers of matrices much more efficient: if A is diagonalisable with D = P^-1AP, then for all k ≥ 1, A^k = PD^kP^-1
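A quick numerical check of A^k = PD^kP^-1, using a made-up diagonalisable matrix (assumes numpy):

```python
import numpy as np

# Made-up matrix with eigenvalues 5 and 2: two distinct eigenvalues,
# so A is diagonalisable.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D = np.diag(eigvals)            # D = P^-1 A P

# A^5 via P D^5 P^-1: only the diagonal entries get raised to the power.
A5 = P @ np.linalg.matrix_power(D, 5) @ np.linalg.inv(P)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```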

9
Q

When is a matrix diagonalisable

A

An n by n matrix A is diagonalisable if there exists a diagonal matrix D and an invertible matrix P so that D = P^-1AP

10
Q

Can you scale an eigenspace

A

Yes. Any non-zero scalar multiple of an eigenvector is still an eigenvector. This is useful when the eigenvector you calculated does not have whole-number entries and whole numbers would make later calculations easier. A good time to do this is when you build the matrix P from matrix A's eigenvectors. It works because P^-1 changes correspondingly.

11
Q

Stochastic Matrix

A

An n by n matrix P is a stochastic matrix if its columns are probability vectors.

12
Q

Can diagonal matrices have zeros on the diagonals

A

Yes

13
Q

Eigenvalues of a triangular matrix and powers of a diagonal matrix

A

The eigenvalues of a triangular matrix are its diagonal entries.

If D is diagonal with diagonal entries d11, d22, …, dnn, then D^k has eigenvalues (d11)^k, (d22)^k, …, (dnn)^k.

This is because D^k is also diagonal.

14
Q

When are matrices similar

A

Let A and B be n by n matrices. A is similar to B if there is an invertible matrix P such that P^-1AP = B. Equivalently, if P is invertible and AP = PB holds, then P^-1AP = B, meaning that A is similar to B.

15
Q

Properties of similar matrices

A

If A is similar to B then B is similar to A
If A is similar to B and B is similar to C then A is similar to C
A is similar to A

16
Q

Determinant of the inverse matrix

A

det(A^-1) = 1/det(A)

17
Q

Properties of matrices that are similar

A

If A is similar to B then
det(A) = det(B)
det(A-λI) = det(B-λI)
This means A and B have the same eigenvalues as they have the same characteristic polynomial.
A is invertible <=> B is invertible
If A is similar to B they have the same trace
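These invariants can be checked numerically on a made-up similar pair B = P^-1AP (assumes numpy):

```python
import numpy as np

# Made-up example: B = P^-1 A P is similar to A by construction.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
Pm = np.array([[1.0, 1.0],
               [0.0, 1.0]])
B = np.linalg.inv(Pm) @ A @ Pm

print(np.isclose(np.linalg.det(A), np.linalg.det(B)))  # True: same determinant
print(np.isclose(np.trace(A), np.trace(B)))            # True: same trace
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(B))))      # True: same eigenvalues
```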

18
Q

What is proof by contradiction

A

To prove that something is true, you assume the opposite and show that it leads to a contradiction.
For example, suppose we want to prove that a matrix is not diagonalisable. We first assume it is diagonalisable; if that assumption leads to a contradiction, we can conclude that the matrix is not diagonalisable.

19
Q

Properties of triangular matrices

A

The determinant of a triangular matrix is the product of its diagonal entries. This implies that the eigenvalues of a triangular matrix are its diagonal entries: the roots of det(A-λI) = 0 are the diagonal entries of A. (The roots of the characteristic equation are the eigenvalues.)

20
Q

Trace of a matrix

A

It is the sum of all the eigenvalues, including multiplicities, of an n by n matrix

21
Q

Determinant of a matrix from eigenvalues

A

The product of all the eigenvalues, including multiplicities, of an n by n matrix gives its determinant
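Both eigenvalue identities (sum = trace, product = determinant) can be sketched on a made-up matrix (assumes numpy):

```python
import numpy as np

# Made-up matrix with eigenvalues 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals = np.linalg.eigvals(A)
print(np.isclose(eigvals.sum(), np.trace(A)))        # True: 1 + 3 = trace = 4
print(np.isclose(eigvals.prod(), np.linalg.det(A)))  # True: 1 * 3 = det = 3
```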

22
Q

How do row operations affect determinants

A

If B is obtained from A by swapping 2 rows then det(B) = -det(A)

If B is obtained from A by multiplying one row by a scalar c then det(B) = cdet(A)

If B is obtained from A by adding a multiple of one row of A to another row of A then det(B) = det(A)
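The three rules can be checked numerically on a made-up matrix (assumes numpy):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # made-up matrix, det(A) = -2

B = A[[1, 0], :]                    # swap the two rows
print(np.linalg.det(B))             # 2: sign flips

C = A.copy(); C[0] *= 5             # multiply row 1 by c = 5
print(np.linalg.det(C))             # -10: determinant scales by c

E = A.copy(); E[1] += 2 * E[0]      # add 2 * row 1 to row 2
print(np.linalg.det(E))             # -2: determinant unchanged
```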

23
Q

Do you need to bracket when expanding cofactors for determinants

A

Yes. For a 2 by 2 matrix A with entries a, b, c, d: 3|A| = 3(ad - bc), not 3ad - bc.

24
Q

N distinct eigenvalues and diagonalisability

A

If an n by n matrix A has n distinct eigenvalues then A is diagonalisable. (No repeated eigenvalues.)

Proof: If A has n distinct eigenvalues then the corresponding eigenvectors are linearly independent and if A has n linearly independent eigenvectors then the matrix A is diagonalisable.

Note: A matrix can be diagonalisable even if its eigenvalues are not all distinct.

25
Q

Definition of trace

A

It is the sum of the diagonal entries of an n by n matrix

26
Q

Geometric multiplicity

A

The geometric multiplicity of an eigenvalue is the dimension of its eigenspace, i.e. the number of distinct parameters appearing in the general solution of (A-λI)x = 0

27
Q

Invertibility and eigenvalues

A

An n by n matrix A is invertible if and only if 0 is not an eigenvalue of A. In other words, A is invertible only if all of its eigenvalues are non-zero.

If A is invertible then det(A - 0I) = det(A) ≠ 0, so 0 is not a solution to det(A-λI) = 0 and therefore 0 is not an eigenvalue.

28
Q

Eigenvalue and eigenvector definition

A

Let A be an n by n matrix. A scalar λ is called an eigenvalue of A if there is a non-zero vector x so that Ax = λx. Such a vector x is called an eigenvector of A corresponding to λ

29
Q

Properties of transpose

A

The transpose of a transpose gives the original matrix
The transpose of the sum of two matrices is the sum of the individual transposes of the matrices
(kA)^T = kA^T
(AB)^T = B^T A^T
(A^m)^T = (A^T)^m
det(A) = det(A^T)
(This also means that the characteristic polynomial is the same, ie they have the same eigenvalues)

Remember that each of these identities can be applied to larger, more complex matrix expressions.

30
Q

Symmetric matrix

A

A is symmetric if A=A^T ie A equals its own transpose

31
Q

Further properties of determinants

A

det(cA) = c^n det(A)
det(AB) = det(A)det(B)
If A is invertible, then det(A^-1) = 1/det(A)
det(A) = det(A^T)
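All four properties can be sketched on made-up 2 by 2 matrices, so n = 2 in det(cA) = c^n det(A) (assumes numpy):

```python
import numpy as np

# Made-up 2 by 2 matrices.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[1.0, 4.0],
              [2.0, 1.0]])

print(np.isclose(np.linalg.det(3 * A), 3**2 * np.linalg.det(A)))              # True
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
print(np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A)))      # True
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))                       # True
```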

32
Q

Invertibility equivalent theorems

A
A is invertible
Ax = b has a unique solution for every b in Rn
Ax = 0 has the unique solution x = 0
The reduced row echelon form of A is In
A is a product of elementary matrices
33
Q

What do we need to know when a proof references a unique something/solution

A
Reference its invertibility (main thing)
Whether Ax = b has a unique solution for all b in Rn
Whether Ax = 0 has the unique solution x = 0
Whether the reduced row echelon form of A is In
Whether A is a product of elementary matrices
34
Q

How to prove that something can or can’t equal 0 or when numbers are distinct or indistinct

A

Factor expressions like (a2-a1)(a3-a1)(a2-a3). If a1, a2, and a3 are distinct then each factor is non-zero, so (a2-a1)(a3-a1)(a2-a3) cannot equal 0.

35
Q

Methods of finding the trace

A

It is the sum of the diagonal entries of an n by n matrix; it is also the sum of the eigenvalues (including multiplicities) of an n by n matrix

36
Q

Diagonal entries and eigenvalues

A

To clear things up, the product of the eigenvalues including multiplicities of any n by n matrix is its determinant.

The diagonal entries of a triangular matrix ARE its eigenvalues.

37
Q

How to find SSPV and SSV

A

Since a transition matrix for a Markov chain always has 1 as an eigenvalue, we find the 1-eigenspace of the transition matrix, then scale the eigenvector by a suitable scalar. For the SSV we need a given population: the entries of the eigenvector must sum to the population. For the SSPV we need a probability vector, so the entries of the eigenvector must sum to 1.
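A sketch of the procedure, with a made-up transition matrix and a made-up population of 300 (assumes numpy):

```python
import numpy as np

# Made-up transition matrix (columns sum to 1).
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])

# 1 is always an eigenvalue of a transition matrix; take its eigenvector.
eigvals, eigvecs = np.linalg.eig(P)
v = eigvecs[:, np.argmin(np.abs(eigvals - 1))]

sspv = v / v.sum()     # scale so the entries sum to 1 -> SSPV
ssv = sspv * 300       # scale to the given population -> SSV
print(sspv)            # [2/3, 1/3]
print(ssv)             # [200, 100]
```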

38
Q

Consistent or inconsistent

A

A system is consistent if it has solutions (a unique solution or infinitely many along one line). A system is inconsistent if it has no solutions (no points of intersection, or no point where all the lines meet).

39
Q

The inverse of an elementary matrix

A

It comes from performing the inverse row operation on the identity matrix. It is itself an elementary matrix, as it is obtained from the identity by a single row operation.

40
Q

Row echelon form

A

Any rows which consist entirely of 0s are at the bottom

In each row that isn’t all zeros, the first non-zero entry in that row is in a column to the left of any leading entries in rows further down the matrix.

41
Q

Reduced row echelon form

A

It is in row echelon form
The leading entry in each non-zero row is 1
Each column containing a leading 1 has zeros everywhere else.

42
Q

Length of vectors identities

A
||v|| = √(v.v)
||cv|| = |c| ||v||

Proofs may involve combining both of these identities and squaring both sides of an equation, since we are dealing with non-negative numbers
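Both identities can be checked numerically on a made-up vector and scalar (assumes numpy):

```python
import numpy as np

v = np.array([3.0, 4.0])   # made-up vector
c = -2.0                   # made-up scalar

print(np.linalg.norm(v))                       # 5.0, i.e. sqrt(v.v)
print(np.isclose(np.linalg.norm(c * v),
                 abs(c) * np.linalg.norm(v)))  # True: ||cv|| = |c| ||v||
```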

43
Q

Equations for planes

A

Normal form: n.(x - p) = 0 (the vectors n, x, and p should be written with tildes under them)

General form: the normal form expanded out, in the form ax + by + cz = d

Vector form: x = p + t(direction vector) + s(direction vector). Note that the direction vectors cannot be parallel.

Parametric form: the vector form expanded

44
Q

Coincide

A

It is when two lines lie perfectly on top of each other, meaning the system has infinitely many solutions.

45
Q

When are there no solutions to a system

A

When one of the rows of the augmented matrix (or one of the linear equations in the system) says that zero equals a non-zero number

46
Q

Inverse properties

A
(cA)^-1 = c^-1 A^-1
(AB)^-1 = B^-1 A^-1
(ABC)^-1 = C^-1 B^-1 A^-1
A^T is invertible and (A^T)^-1 = (A^-1)^T
(A^n)^-1 = (A^-1)^n
47
Q

When is a set of vectors linearly dependent

A

A set of vectors in Rn (the entire set) is linearly dependent if and only if at least one of them can be expressed as a linear combination of the others.

48
Q

Definition of linear independence

A

A set of vectors v1, v2, …, vk is linearly independent if the only solution to the equation c1v1 + c2v2 + … + ckvk = 0 is c1 = c2 = … = ck = 0. (All of the scalars are equal, and equal to 0.)
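The definition can be tested numerically: the equation c1v1 + … + ckvk = 0 has only the trivial solution exactly when the matrix with the vectors as columns has rank k. A sketch with made-up vectors (assumes numpy):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])   # made-up vectors in R3
v2 = np.array([0.0, 1.0, 1.0])
print(np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2)  # True: independent

v3 = 2 * v1   # a scalar multiple of v1, so {v1, v3} is dependent
print(np.linalg.matrix_rank(np.column_stack([v1, v3])) == 2)  # False
```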

49
Q

Definition of a span

A

If set S = {v1, v2, …vk} is a set of vectors (not points) in Rn, then the set of all linear combinations of v1, v2,…, vk is denoted by span(v1, v2, …, vk) or span(S)

50
Q

How to prove that a point p is in a span

A

Are there scalars c1, c2, …, cn such that p = c1v1 + c2v2 + … + cnvn? (If any linear combination of the vectors in the set can represent the point, then the point is in the span.)

51
Q

An alternative way of saying how two vectors are linearly independent

A

Two vectors v1 and v2 in Rn are linearly independent if and only if they are not scalar multiples of each other.

This relates back to the formal definition of linear independence.
eg. Suppose c1v1 = v2 for some c1 in R
Then c1v1 - v2 = 0
Since the coefficient of v2 is non-zero they are linearly dependent
(For independence the coefficients all have to be zero)

52
Q

Are the direction vectors of the vector form of a plane linearly independent

A

Yes, because the direction vectors are parallel to the plane but cannot be scalar multiples of each other, so they are linearly independent.

53
Q

What is a homogeneous system and why can they never be inconsistent

A

A system of linear equations is homogeneous if all the constant terms are 0. (In other words, all the numbers in the augmented column are zeros.)

Homogeneous systems always have at least one solution, since x1 = 0, x2 = 0, …, xn = 0 always works. So they have either a unique solution or infinitely many.

54
Q

When are there unique, infinite, and no solutions

A

If each column has a leading entry then there is a unique solution

If there is a zero equal to a non zero then there are no solutions

If there is a column with no leading entries then there are infinite solutions

(There are unique solutions if the system cannot have infinite nor no solutions, process of elimination)

55
Q

If there is a zero vector in the set of vectors is the set linearly independent

A

No, because any non-zero number can go in front of the zero vector while all the other coefficients are zero, giving a non-trivial solution.

56
Q

Row reducing when solving

A

Don’t forget to apply the row operation to the augmented part of the matrix

57
Q

Scalars of row operations

A

When you multiply a row by a scalar, the scalar cannot be 0.

58
Q

Matrix Properties

A
Associativity A(BC) = (AB)C
Distributivity A(B+C) = AB + AC
k(AB) = (kA)B
(A+B)C = AC + BC
59
Q

What does having n distinct eigenvalues mean for an n by n matrix

A

It means that the corresponding eigenvectors are linearly independent.

(Compare diagonalisation theorem
A is diagonalisable <=> A has n linearly independent eigenvectors)

60
Q

When does AP = PD hold

A

If P is invertible AP = PD <=> D = P^-1AP

61
Q

Relationship between diagonalisability and similarity

A

A matrix A is diagonalisable if and only if it is similar to a diagonal matrix, since D = P^-1AP

62
Q

Powers of stochastic and regular matrices

A

If P is stochastic then P^m is stochastic

If P is positive then P^m is positive

63
Q

Definition of a steady state vector

A

Let P be the transition matrix of a Markov chain. A steady-state vector is any vector x with non-negative entries such that Px = x, with the entries summing to the total number of objects in the Markov chain. (Ie it can sum to 1 if its entries are probabilities, or it can sum to a population.)