Linear Algebra 2 Flashcards

(58 cards)

1
Q

What is a determinantal mapping?

A

A mapping D : Mn(R) → R is determinantal if it is
(a) multilinear in the columns:
D[· · · , bᵢ + cᵢ, · · · ] = D[· · · , bᵢ, · · · ] + D[· · · , cᵢ, · · · ]
D[· · · , λaᵢ, · · · ] = λD[· · · , aᵢ, · · · ] for λ ∈ R
(b) alternating:
D[· · · , aᵢ, aᵢ₊₁, · · · ] = 0 when aᵢ = aᵢ₊₁
(c) and D(Iₙ) = 1 for Iₙ the n × n identity matrix.

2
Q

Let D : Mn(R) → R be a determinantal map. Then

(1) D[· · · , aᵢ, aᵢ₊₁, · · · ] = [ ]
(2) D[· · · , aᵢ, · · · , aⱼ, · · · ] = 0 when [ ]
(3) D[· · · , aᵢ, · · · , aⱼ, · · · ] = [ ]

A

(1) D[· · · , aᵢ, aᵢ₊₁, · · · ] = −D[· · · , aᵢ₊₁, aᵢ, · · · ]
(2) D[· · · , aᵢ, · · · , aⱼ, · · · ] = 0 when aᵢ = aⱼ, i ≠ j.
(3) D[· · · , aᵢ, · · · , aⱼ, · · · ] = −D[· · · , aⱼ, · · · , aᵢ, · · · ] when i ≠ j

3
Q

Let n ∈ N. What is a permutation? What is Sn?

A

A permutation σ is a bijective map from the set {1, 2, · · · , n} to itself. The set of all such permutations is denoted Sₙ.

4
Q

What is a transposition?

A

An element σ ∈ Sn which switches two elements 1 ≤ i < j ≤ n and fixes the others is called a transposition

5
Q

For each n ∈ N there exists a unique determinantal function D : Mn(R) → R and
it is given explicitly by the expansion [ ]
We write this unique function as det(·) or sometimes | · |

A

D[a₁, · · · , aₙ] = Σσ∈Sₙ sign(σ) a_{σ(1),1} · · · a_{σ(n),n}
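As a sanity check, the expansion can be evaluated directly for small n. A minimal Python sketch (the names `perm_sign` and `det_leibniz` are illustrative, not from the notes):

```python
from itertools import permutations
from math import prod

def perm_sign(p):
    # sign(sigma) = (-1)^(number of inversions)
    inv = sum(1 for i in range(len(p))
              for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_leibniz(a):
    # det(A) = sum over sigma in S_n of sign(sigma) * a_{sigma(1),1} ... a_{sigma(n),n}
    n = len(a)
    return sum(perm_sign(p) * prod(a[p[j]][j] for j in range(n))
               for p in permutations(range(n)))

# For the 2x2 case this reduces to a11*a22 - a21*a12
print(det_leibniz([[1, 2], [3, 4]]))  # -2
```

The n! terms make this impractical beyond small n; it is only a check of the definition.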

6
Q

For σ ∈ Sn, we have sign(σ) = sign(σ⁻¹)

Prove it

A

Follows since σ ◦ σ⁻¹ is the identity map, which can be written as a product of 0 transpositions, an even number. So if σ is a product of k transpositions and σ⁻¹ a product of m, then k + m is even, i.e. k and m have the same parity, and hence sign(σ) = sign(σ⁻¹).

7
Q
Write det(A) in terms of Aᵀ.
Prove it
A

det(A) = det(Aᵀ)

Proof:
det(Aᵀ) = Σσ∈Sₙ sign(σ) a_{1,σ(1)} · · · a_{n,σ(n)}
= Σσ∈Sₙ sign(σ) a_{σ⁻¹(1),1} · · · a_{σ⁻¹(n),n} (reordering the factors)
= Σσ⁻¹∈Sₙ sign(σ⁻¹) a_{σ⁻¹(1),1} · · · a_{σ⁻¹(n),n} (since sign(σ) = sign(σ⁻¹))
= det(A)
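The identity is also easy to spot-check numerically (assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # an arbitrary 4x4 real matrix
# transposing a matrix does not change its determinant
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))  # True
```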

8
Q

The map det : Mn(R) → R is [ ] and alternating in the rows of a matrix.

A

multilinear

9
Q

det(A) = Σσ∈Sn [ ]

A

Σσ∈Sₙ sign(σ) a_{1,σ(1)} · · · a_{n,σ(n)}

10
Q

Let A, B ∈ Mn(R). Then

(i) det(A) ≠ 0 ⇔
(ii) det(AB) =

A

(i) det(A) ≠ 0 ⇔ A is invertible

(ii) det(AB) = det(A) det(B)

11
Q

Let A ∈ Mn(R) and let E be an elementary matrix. Then det(EA) = [ ]

A

det(E) det(A)

12
Q

What is the determinant of an upper triangular matrix?

A

The product of its diagonal entries
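A quick numerical check of the triangular case (NumPy assumed):

```python
import numpy as np

U = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 7.0],
              [0.0, 0.0, 4.0]])
# det of an upper triangular matrix = product of its diagonal entries
print(np.isclose(np.linalg.det(U), 2.0 * 3.0 * 4.0))  # True
```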

13
Q

Let V be a vector space of dimension n over R.
Let T : V → V be a linear transformation, B a basis for V, and M_B^B(T) the matrix for T with respect to initial and final basis B. We define
det(T) := [ ]

A

det(T) := det(M_B^B(T))

14
Q

Let V be a vector space of dimension n over R.
Let T : V → V be a linear transformation, B a basis for V
The determinant of T is independent of [ ]

A

the choice of basis B

15
Q

Let S, T : V → V be linear transformations. Then

(i) [ ] ⇔ T is invertible.
(ii) [ ] = det(S) det(T)

A

(i) det(T) ≠ 0 ⇔ T is invertible.

(ii) det(ST) = det(S) det(T)

16
Q

Define eigenvector

A

Let V be a vector space over R and T : V → V be a linear transformation
A vector v ∈ V is called an eigenvector of T if v ≠ 0 and T v = λv for some
λ ∈ R.

17
Q

Define eigenvalue

A

We call λ ∈ R an eigenvalue of T if T v = λv for some nonzero v ∈ V

18
Q

λ is an eigenvalue of T ⇔ [ ]

A

λ is an eigenvalue of T ⇔ Ker(T − λI) ≠ {0}

19
Q

λ is an eigenvalue of T ⇔ Ker(T − λI) ≠ {0}

Prove it

A

λ is an eigenvalue of T ⇔ ∃v ∈ V, v ≠ 0, T v = λv

⇔ ∃v ∈ V, v ≠ 0, (T − λI)v = 0 ⇔ Ker(T − λI) ≠ {0}

20
Q

The following statements are equivalent:

(a) λ is an eigenvalue of T
(b) Ker(T − λI) ≠ [ ]
(c) T − λI is not [ ]
(d) det(T − λI) = [ ]

A

(a) λ is an eigenvalue of T
(b) Ker(T − λI) ≠ {0}
(c) T − λI is not invertible
(d) det(T − λI) = 0.

21
Q

For A ∈ Mn(R). What is the characteristic polynomial of A?

A

the characteristic polynomial of A is defined as det(A−xIₙ)

22
Q

For T : V → V a linear transformation, let A be the matrix for T with respect to some basis B
What is the characteristic polynomial of T?
We denote the characteristic polynomial of T by χT (x), and of a matrix A by χA(x)

A

The characteristic polynomial of T is defined as det(A − xIₙ).

23
Q

Describe the link between eigenvalues and characteristic polynomials

A

Let T : V → V be a linear transformation. Then λ is an eigenvalue of T if and
only if λ is a root of the characteristic polynomial χT (x) of T

24
Q

Let T : V → V be a linear transformation. Then λ is an eigenvalue of T if and
only if λ is a root of the characteristic polynomial χT (x) of T

Prove it

A

(⇒) Suppose λ is an eigenvalue of T. Then by the equivalence above, det(T − λI) = 0, and thus det(A − λIₙ) = 0 for any matrix A for T. (If A is a matrix for T, then A − λIₙ is the corresponding matrix for T − λI.) So λ is a root of χT(x) = det(A − xIₙ).
(⇐) Suppose λ is a root of χT(x) = det(A − xIₙ) for some matrix (equivalently, all matrices) A for T. Then det(A − λIₙ) = 0, and so det(T − λI) = 0. By the same equivalence, λ is an eigenvalue of T.

25
Q

For T : V → V a linear transformation, the trace tr(T) is defined to be [ ]

A

tr(A), where A is any matrix for T.
26
Q

For A ∈ Mn(R), χA(x) = (−1)ⁿxⁿ + (−1)ⁿ⁻¹tr(A)xⁿ⁻¹ + · · · + [ ]. Similarly χT(x) = [ ]

A

χA(x) = (−1)ⁿxⁿ + (−1)ⁿ⁻¹tr(A)xⁿ⁻¹ + · · · + det(A)
χT(x) = (−1)ⁿxⁿ + (−1)ⁿ⁻¹tr(T)xⁿ⁻¹ + · · · + det(T)
27
Q

Let A ∈ Mn(C) have eigenvalues λ₁, λ₂, · · · , λₙ ∈ C (not necessarily distinct). Then tr(A) = [ ] and det(A) = [ ]

A

tr(A) = λ₁ + λ₂ + · · · + λₙ and det(A) = λ₁ · · · λₙ
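Both identities can be checked numerically (NumPy assumed; `eigvals` returns the n complex eigenvalues counted with multiplicity):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))     # an arbitrary real 5x5 matrix
evals = np.linalg.eigvals(A)        # eigenvalues in C, with multiplicity
# trace = sum of eigenvalues, determinant = product of eigenvalues
print(np.isclose(np.sum(evals), np.trace(A)))        # True
print(np.isclose(np.prod(evals), np.linalg.det(A)))  # True
```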
28
Q

Let λ₁, · · · , λₘ (m ≤ n) be the distinct eigenvalues of T and v₁, · · · , vₘ be corresponding eigenvectors. Then v₁, · · · , vₘ are linearly [ ]

A

independent
29
Q

When is a linear map T : V → V diagonalisable?

A

If V has a basis consisting of eigenvectors for T
30
Q

When is a matrix A ∈ Mn(R) diagonalisable?

A

If the map it defines by acting on (column) vectors in Rⁿ is diagonalisable.
31
Q

A matrix A ∈ Mn(R) is diagonalisable if and only if there exists an invertible matrix P such that [ ]

A

B := P⁻¹AP is a diagonal matrix (in which case the diagonal entries of B are the eigenvalues, and the columns of P the corresponding eigenvectors)
32
Q

A matrix A ∈ Mn(R) is diagonalisable if and only if there exists an invertible matrix P such that B := P⁻¹AP is a diagonal matrix. Prove it.

A

Assume A is diagonalisable, and let v₁, . . . , vₙ be the basis of eigenvectors and λ₁, . . . , λₙ the eigenvalues (possibly with repetition). Define P = [v₁, · · · , vₙ] and let B be the diagonal matrix with entries λ₁, · · · , λₙ. Then P is invertible since its columns are linearly independent, and the equation [λ₁v₁, · · · , λₙvₙ] = [Av₁, · · · , Avₙ] is the same as PB = AP, that is, B = P⁻¹AP. Conversely, given that B := P⁻¹AP is diagonal, the columns of P must be n linearly independent eigenvectors of A and the entries of B the corresponding eigenvalues (since PB = AP).
33
Q

Let V be a vector space of dimension n. Suppose a linear map T : V → V (matrix A ∈ Mn(R), respectively) has n distinct eigenvalues. Then T (A, respectively) is [ ]

A

diagonalisable
34
Q

Let V be a vector space of dimension n. Suppose a linear map T : V → V (matrix A ∈ Mn(R), respectively) has n distinct eigenvalues. Then T (A, respectively) is diagonalisable. Prove it.

A

Assume T has n distinct eigenvalues. For each of the n distinct eigenvalues λᵢ there is at least one eigenvector vᵢ (by definition). The n eigenvectors v₁, · · · , vₙ are linearly independent, and thus form a basis for V. (The statement for matrices A follows by viewing A as a map on Rⁿ.)
35
Q

Suppose χT(x) (χA(x), respectively) has n distinct roots in R. Then T (A, respectively) is [ ]

A

diagonalisable over R
36
Q

How do you diagonalise a matrix?

A

Let A ∈ Mn(R).
(1) Compute χA(x) = det(A − xI) and find its roots λ ∈ R (the real eigenvalues).
(2) For each eigenvalue λ, find a basis for Ker(A − λI) using, for example, row-reduction (this gives linearly independent eigenvectors for each eigenvalue).
(3) Collect together all these eigenvectors. If you have n of them, put them as the columns of a matrix P and the corresponding eigenvalues as the diagonal entries of a matrix B. Then B = P⁻¹AP and you have diagonalised A. If you have fewer than n eigenvectors, you cannot diagonalise A (over R).
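The steps above can be sketched with NumPy, which bundles steps (1) and (2) into `np.linalg.eig`; the matrix A below is an arbitrary example with distinct real eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
# Steps (1)-(2): eigenvalues, and a matrix P whose columns are eigenvectors
evals, P = np.linalg.eig(A)
# Step (3): B = P^{-1} A P is diagonal with the eigenvalues on the diagonal
B = np.linalg.inv(P) @ A @ P
print(np.allclose(B, np.diag(evals)))  # True
```

Here χA(x) = x² − 7x + 10 has roots 5 and 2, so A is diagonalisable over R.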
37
Q

Let T : V → V be a linear transformation. What is an eigenspace?

A

Let λ be an eigenvalue for T. Then Eλ := Ker(T − λI) = {v ∈ V : Tv = λv} is called the eigenspace for λ. (This is just the set of all eigenvectors of T with eigenvalue λ, together with the zero vector.)
38
Q

Is the eigenspace Eλ a subspace of V?

A

Yes, since it is the kernel of the map T − λI
39
Q

Define geometric multiplicity

A

Let λ be an eigenvalue of T. The dimension of Eλ is called the geometric multiplicity of λ
40
Q

Define algebraic multiplicity

A

The multiplicity of λ as a root of the characteristic polynomial χT(x) is called the algebraic multiplicity of λ
41
Q

Let λ be an eigenvalue of T. The geometric multiplicity of λ is [ ] the algebraic multiplicity of λ

A

less than or equal to
42
Q

Let λ be an eigenvalue of T. The geometric multiplicity of λ is less than or equal to the algebraic multiplicity of λ. Prove it.

A

Denote these multiplicities gλ and aλ respectively. Extend a basis for Eλ to one for V. With respect to this basis, the matrix for T has the block form

( λI_gλ  ∗ )
(   0    C )

Hence the matrix for T − xI has the block form

( (λ − x)I_gλ    ∗    )
(      0      C − xI  )

and so det(T − xI) = (λ − x)^gλ h(x), where h(x) := det(C − xI) ∈ R[x]. We must then have gλ ≤ aλ.
43
Q

Let λ₁, · · · , λᵣ (r ≤ n) be the distinct eigenvalues of T. Then the eigenspaces Eλ₁, · · · , Eλᵣ form a [ ]

A

direct sum Eλ₁ ⊕ · · · ⊕ Eλᵣ
44
The Gram-Schmidt procedure
see pg17 of the notes
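The procedure itself is not reproduced in these cards; a standard sketch of classical Gram-Schmidt in Python with the Euclidean inner product (the function name is illustrative):

```python
import numpy as np

def gram_schmidt(vectors):
    # Orthonormalise a list of linearly independent vectors: subtract
    # from each vector its projections onto the already-built orthonormal
    # vectors, then normalise the remainder.
    basis = []
    for v in vectors:
        w = v - sum(np.dot(v, e) * e for e in basis)
        basis.append(w / np.linalg.norm(w))
    return basis

e1, e2 = gram_schmidt([np.array([1.0, 1.0]), np.array([1.0, 0.0])])
print(np.isclose(np.dot(e1, e2), 0.0))      # True: orthogonal
print(np.isclose(np.linalg.norm(e2), 1.0))  # True: unit length
```

(In floating point the modified variant is numerically preferable; the classical version matches the textbook formulas.)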
45
Q

Let A ∈ Mn(R) be a symmetric matrix, that is, Aᵀ = A. Now A may be thought of as a linear transformation on Cⁿ and so in particular has (counting multiplicities) n eigenvalues in C. The eigenvalues of A all lie in [ ]

A

The eigenvalues of A all lie in R
46
Q

Let A ∈ Mn(R) be a symmetric matrix, that is, Aᵀ = A. Now A may be thought of as a linear transformation on Cⁿ and so in particular has (counting multiplicities) n eigenvalues in C. The eigenvalues of A all lie in R. Prove it.

A

Let λ ∈ C be an eigenvalue of A with eigenvector v ∈ Cⁿ, so Av = λv with v ≠ 0. Compute (Av)ᵀv̄ in two ways:

(Av)ᵀv̄ = vᵀAᵀv̄ = vᵀAv̄ (as Aᵀ = A) = λ̄ vᵀv̄ (as Av̄ = conjugate of Av = λ̄v̄)
(Av)ᵀv̄ = λ vᵀv̄ (as Av = λv)

Writing vᵀ = (v₁, · · · , vₙ) we see vᵀv̄ = v₁v̄₁ + · · · + vₙv̄ₙ = |v₁|² + · · · + |vₙ|² > 0 since v ≠ 0. Thus we can cancel vᵀv̄, and one gets λ̄ = λ, that is, λ ∈ R.
47
Q

Let A ∈ Mn(R) be symmetric. Then the space Rⁿ has an orthonormal basis consisting of eigenvectors of A. That is, there exists an orthogonal real matrix R (so Rᵀ = R⁻¹) such that R⁻¹AR is [ ]

A

diagonal with real entries.
48
Q

What is the Spectral theorem for real symmetric matrices?

A

A real symmetric matrix A ∈ Mn(R) has real eigenvalues and there exists an orthonormal basis for Rⁿ consisting of eigenvectors for A
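NumPy's `eigh` routine implements exactly this decomposition for symmetric input; a quick check (NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M + M.T  # a real symmetric matrix
evals, R = np.linalg.eigh(A)  # eigh is for symmetric/Hermitian matrices
# R is orthogonal (R^T = R^{-1}) and R^{-1} A R is diagonal with real entries
print(np.allclose(R.T @ R, np.eye(4)))           # True
print(np.allclose(R.T @ A @ R, np.diag(evals)))  # True
```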
49
Q

Now let V be a real vector space with inner product ⟨·, ·⟩. We call a linear map T : V → V self-adjoint (or symmetric) if....

A

⟨Tu, v⟩ = ⟨u, Tv⟩ for all u, v ∈ V
50
Q

What is the Spectral theorem for self-adjoint operators on a real inner product space?

A

A self-adjoint map T on a finite dimensional real inner product space V has real eigenvalues and there exists an orthonormal basis for V consisting of eigenvectors of T.
51
Q

What is a quadratic form in n variables x₁, . . . , xₙ over R?

A

A homogeneous degree-2 polynomial

Q(x₁, · · · , xₙ) = ⁿΣᵢ,ⱼ₌₁ aᵢⱼxᵢxⱼ = (x₁ · · · xₙ) A (x₁ · · · xₙ)ᵀ, A = (aᵢⱼ),

with real coefficients. We can and do assume A is symmetric. We can find an orthogonal change of variable (y₁ · · · yₙ) = (x₁ · · · xₙ)P, Pᵀ = P⁻¹, so that Q(y₁, · · · , yₙ) = λ₁y₁² + · · · + λₙyₙ², where λ₁, · · · , λₙ ∈ R are the (all real) eigenvalues of the symmetric matrix A.
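The orthogonal change of variables can be sketched with NumPy: taking y = Pᵀx (the column-vector form of (y₁ · · · yₙ) = (x₁ · · · xₙ)P), the form becomes Σ λᵢyᵢ². The matrix A below is an arbitrary example:

```python
import numpy as np

# Q(x1, x2) = x1^2 + 4*x1*x2 + x2^2 has symmetric matrix A
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])
evals, P = np.linalg.eigh(A)  # P orthogonal, columns = eigenvectors of A
x = np.array([1.0, 2.0])
y = P.T @ x  # the new coordinates
# x^T A x = lambda_1 * y1^2 + lambda_2 * y2^2
print(np.isclose(x @ A @ x, np.sum(evals * y**2)))  # True
```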
52
Q

What is a quadric?

A

A quadric is the set of points in R³ satisfying a degree-2 equation

f(x₁, x₂, x₃) = ³Σᵢ,ⱼ₌₁ aᵢⱼxᵢxⱼ + ³Σᵢ₌₁ bᵢxᵢ + c = 0

with A = (aᵢⱼ) ∈ M₃(R) symmetric and non-zero, and b₁, b₂, b₃, c ∈ R
53
Q

Classify a quadric to be: an ellipsoid

A

μ₁Y₁² + μ₂Y₂² + μ₃Y₃² = 1 (with μ₁, μ₂, μ₃ > 0)

54
Q

Classify a quadric to be: ∅

A

μ₁Y₁² + μ₂Y₂² + μ₃Y₃² = −1 (with μ₁, μ₂, μ₃ > 0)

55
Q

Classify a quadric to be: {0}

A

μ₁Y₁² + μ₂Y₂² + μ₃Y₃² = 0 (with μ₁, μ₂, μ₃ > 0)

56
Q

Classify a quadric to be: a 1-sheet hyperboloid

A

μ₁Y₁² + μ₂Y₂² − μ₃Y₃² = 1 (with μ₁, μ₂, μ₃ > 0)

57
Q

Classify a quadric to be: a 2-sheet hyperboloid

A

μ₁Y₁² + μ₂Y₂² − μ₃Y₃² = −1 (with μ₁, μ₂, μ₃ > 0)

58
Q

Classify a quadric to be: a cone

A

μ₁Y₁² + μ₂Y₂² − μ₃Y₃² = 0 (with μ₁, μ₂, μ₃ > 0)