Linear Algebra 1 Flashcards

(171 cards)

1
Q

Given m, n ≥ 1, what is an m × n matrix?

A

A rectangular array with m rows and n columns.

2
Q

What is a row vector?

A

A 1 × n matrix

3
Q

What is a column vector?

A

An m × 1 matrix

4
Q

What is a square matrix?

A

An n × n matrix

5
Q

What is a diagonal matrix?

A

If A = (aᵢⱼ) is a square matrix and aᵢⱼ = 0 whenever i ≠ j, then we say that A is a diagonal matrix.

6
Q

What is F ? (Fancy F)

A

The field from which the entries (scalars) of a matrix come.

Usually F = ℝ (the reals) or ℂ (the complex numbers).

7
Q

What does Mₘₓₙ(F) mean?

A

Mₘₓₙ(F) = {A : A is an m × n matrix with entries from F}

8
Q

What does Fⁿ mean? (Fancy F)

A

Fⁿ is shorthand for M₁ₓₙ(F), the space of row vectors with n entries.

Similarly for Fᵐ.

9
Q

Is matrix addition associative and commutative?

A

Yes to both

10
Q

What is the formula for entry (i, j) with matrix multiplication?

A

(AB)ᵢⱼ = Σₖ₌₁ⁿ aᵢₖbₖⱼ, where A = (aᵢⱼ) is m × n and B = (bᵢⱼ) is n × p.
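A concrete illustration (my own sketch, not from the notes), computing a single entry of AB in Python with 0-based indices:

```python
# Hypothetical illustration: entry (i, j) of AB is the sum over k of a[i][k] * b[k][j].
# A is m x n and B is n x p; Python indices are 0-based, unlike the 1-based maths.

def matmul_entry(A, B, i, j):
    n = len(B)  # number of rows of B = number of columns of A
    return sum(A[i][k] * B[k][j] for k in range(n))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_entry(A, B, 0, 1))  # 1*6 + 2*8 = 22
```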

11
Q

Is matrix multiplication associative?

A

Yes

12
Q

Is matrix multiplication distributive?

A

Yes

13
Q

When do two matrices commute?

A

If AB=BA

Not true for most A and B

14
Q

What is an upper triangular matrix?

A

Let A = (aᵢⱼ) ∈ Mₙₓₙ(F).

We say A is upper triangular if aᵢⱼ = 0 whenever i > j.

15
Q

What is a lower triangular matrix?

A

Let A = (aᵢⱼ) ∈ Mₙₓₙ(F).

We say A is lower triangular if aᵢⱼ = 0 whenever i < j.

16
Q

We say that A ∈ Mₙₓₙ(F) is invertible if …..

A

there exists B ∈ Mₙₓₙ(F) such that AB = Iₙ = BA.

17
Q

If A ∈ Mₙₓₙ(F) is invertible, is the inverse unique?

Prove it

A
Yes.
Proof:
Suppose that B, C ∈ Mₙₓₙ(F) are both inverses for A.
Then AB = BA = Iₙ and AC = CA = Iₙ,
so B = BIₙ = B(AC) = (BA)C = IₙC = C.
18
Q

Let A, B be invertible n×n matrices. Is AB invertible?

A

Yes

19
Q

Let A, B be invertible n×n matrices.
What is (AB)⁻¹ ??
Prove it

A

(AB)⁻¹ = B⁻¹A⁻¹

Proof: (AB)(B⁻¹A⁻¹) = A(BB⁻¹)A⁻¹ = AIₙA⁻¹ = AA⁻¹ = Iₙ, and similarly (B⁻¹A⁻¹)(AB) = Iₙ. So B⁻¹A⁻¹ is the (unique) inverse of AB.

20
Q

What is the transpose of A = (aᵢⱼ) ∈ Mₘₓₙ(F)?

A

the n × m matrix Aᵀ with (i, j) entry aⱼᵢ

21
Q

What is an orthogonal matrix?

A

We say that A ∈ Mₙₓₙ(R) is orthogonal if AAᵀ = Iₙ = AᵀA

Equivalently, A is invertible and Aᵀ = A⁻¹

22
Q

What is a unitary matrix?

A

We say that A ∈ Mₙₓₙ(C) is unitary if AĀᵀ = Iₙ = ĀᵀA.
By Ā (A bar) we mean the matrix obtained from A by replacing each entry by its complex conjugate.

23
Q

What is the general strategy for solving a system of m equations in variables x1, …, xn by Gaussian elimination?

A

• Swap equations if necessary to make the coefficient of x₁ in the first equation nonzero.
• Divide through the first equation by the coefficient of x₁.
• Subtract appropriate multiples of the first equation from all other equations to eliminate x₁ from all but the first equation.
• Now the first equation will tell us the value of x₁ once we have determined the values of x₂, ..., xₙ, and we have m − 1 other equations in n − 1 variables.
• Use the same strategy to solve these m − 1 equations in n − 1 variables.
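A rough Python sketch of this strategy (my illustration, not the notes' algorithm). It assumes a square system with a unique solution, and it eliminates each variable from every other equation, so it in fact reaches reduced row echelon form directly:

```python
# Minimal sketch of Gaussian elimination for a square system Ax = b with a
# unique solution; assumes a nonzero pivot can always be found by swapping rows.

def solve(A, b):
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix A|b
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]       # swap a usable row into place
        p = M[col][col]
        M[col] = [x / p for x in M[col]]          # divide through by the pivot
        for r in range(n):                        # eliminate the variable elsewhere
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

print(solve([[2, 1], [1, 3]], [5, 10]))  # [1.0, 3.0]
```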

24
Q

What are the 3 elementary row operations on the augmented matrix A|b?

A

• for some 1 ≤ r < s ≤ m, interchange rows r and s;
• for some 1 ≤ r ≤ m and λ ≠ 0, multiply (every entry of) row r by λ;
• for some 1 ≤ r, s ≤ m with r ≠ s and λ ∈ F, add λ times row r to row s.

25
Are the EROs invertible?
Yes
26
We say that an m × n matrix E is in echelon form if...
(i) if row r of E has any nonzero entries, then the first of these is 1;
(ii) if 1 ≤ r < s ≤ m and rows r, s of E contain nonzero entries, the first of which are eᵣⱼ and eₛₖ respectively, then j < k (the leading entries of lower rows occur to the right of those in higher rows);
(iii) if row r of E contains nonzero entries and row s does not (that is, eₛⱼ = 0 for 1 ≤ j ≤ n), then r < s (zero rows, if any exist, appear below all nonzero rows).
27
Let E | d be the m × (n + 1) augmented matrix of a system of equations, where E is in echelon form. We say that variable xⱼ is determined if ..... What is the alternative to being determined?
There is i such that eᵢⱼ is the leading entry of row i of E (so eᵢⱼ = 1). Otherwise we say that xⱼ is free.
28
Gaussian Elimination: | What shows that the equations are inconsistent?
When a row of the reduced system reads 0 = 1.
29
What is reduced row echelon form?
We say that an m × n matrix is in reduced row echelon form (RRE form) if it is in echelon form and if each column containing the leading entry of a row has all other entries 0.
30
Can all matrices in echelon form be reduced to RRE form?
Yes
31
An invertible n × n matrix can be reduced to Iₙ using [ ] | Prove it
EROs. Proof: Take A ∈ Mₙₓₙ(F) with A invertible. Let E be an RRE form of A. We can obtain E from A by EROs, and EROs do not change the solution set of the system of equations Ax = 0. If Ax = 0, then x = Iₙx = (A⁻¹A)x = A⁻¹(Ax) = A⁻¹0 = 0, so the only n × 1 column vector x with Ax = 0 is x = 0. (Here 0 is the n × 1 column vector of zeros.) So the only solution of Ex = 0 is x = 0. We can read off solutions to Ex = 0: we could choose arbitrary values for the free variables, but the only solution is x = 0, so there are no free variables. So all the variables are determined, so each column must contain the leading entry of a row (which must be 1). Since the leading entry of a row comes to the right of leading entries of rows above, it must be the case that E = Iₙ.
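A quick SymPy check of this theorem on a small example (my own illustration; `rref` computes the RRE form):

```python
# The RRE form of an invertible matrix should be the identity.
from sympy import Matrix, eye

A = Matrix([[2, 1], [1, 1]])   # invertible (determinant 1)
E, _ = A.rref()                # reduced row echelon form via EROs
print(E == eye(2))             # True
```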
32
What is an elementary matrix?
For an ERO on an m × n matrix, we define the corresponding elementary matrix to be the result of applying that ERO to Iₘ.
33
Is the inverse of an ERO an ERO?
Yes
34
Is the inverse of an elementary matrix an elementary matrix?
Yes
35
Let A be an m × n matrix, let B be obtained from A by applying an ERO. Then B = EA, where E is ..... Prove it
E is the elementary matrix for that ERO
36
Let A be an invertible n × n matrix. Let X₁, X₂, ..., Xₖ be a sequence of EROs that take A to Iₙ. Let B be the matrix obtained from Iₙ by this same sequence of EROs. Then B = ? Prove it
B = A⁻¹. Proof: Let Eᵢ be the elementary matrix corresponding to ERO Xᵢ. Then applying X₁, X₂, ..., Xₖ to A gives the matrix Eₖ···E₂E₁A = Iₙ, and applying X₁, X₂, ..., Xₖ to Iₙ gives the matrix Eₖ···E₂E₁ = B. So BA = Iₙ, so B = A⁻¹.
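This theorem underlies the usual hand algorithm for inverting a matrix: row-reduce the augmented matrix [A | Iₙ] until the left block is Iₙ, and the right block is then A⁻¹. A minimal Python sketch (my own, assuming A really is invertible):

```python
# Sketch: apply the EROs that reduce A to I_n to I_n as well,
# by row-reducing the augmented matrix [A | I_n].

def invert(A):
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]              # augment A with the identity
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))  # partial pivoting
        M[col], M[pivot] = M[pivot], M[col]
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [row[n:] for row in M]                 # right-hand block is A^{-1}

print(invert([[2.0, 1.0], [1.0, 1.0]]))  # [[1.0, -1.0], [-1.0, 2.0]]
```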
37
The sequence of EROs X₁, X₂, ..., Xₖ that take A to Iₙ exists. Prove it
See the proof of Theorem 6: an invertible n × n matrix can be reduced to Iₙ using EROs.
38
What is a vector space?
Let F be a field. A vector space over F is a non-empty set V together with a map V × V → V given by (v, v′) ↦ v + v′ (called addition) and a map F × V → V given by (λ, v) ↦ λv (called scalar multiplication) that satisfy the vector space axioms.
39
What are the vector space axioms? | Addition ones
• u + v = v + u for all u, v ∈ V (addition is commutative);
• u + (v + w) = (u + v) + w for all u, v, w ∈ V (addition is associative);
• there is 0ᵥ ∈ V such that v + 0ᵥ = v = 0ᵥ + v for all v ∈ V (existence of additive identity);
• for all v ∈ V there exists w ∈ V such that v + w = 0ᵥ = w + v (existence of additive inverses).
40
What are the vector space axioms? | Multiplication ones
• λ(u + v) = λu + λv for all u, v ∈ V, λ ∈ F (distributivity of scalar multiplication over vector addition);
• (λ + µ)v = λv + µv for all v ∈ V, λ, µ ∈ F (distributivity of scalar multiplication over field addition);
• (λµ)v = λ(µv) for all v ∈ V, λ, µ ∈ F (scalar multiplication interacts well with field multiplication);
• 1v = v for all v ∈ V (identity for scalar multiplication).
41
For m, n ≥ 1, is the set Mₘₓₙ(R) a real vector space?
Yes
42
Elements of V are called [ ]. Elements of F are called [ ]. If V is a vector space over R, then we say that V is a [ ] vector space. If V is a vector space over C, then we say that V is a [ ] vector space. If V is a vector space over F, then we say that V is an [ ] vector space.
Vectors; scalars; real; complex; F
43
Let V be a vector space over F | Then there is a [ ] additive identity element 0ᵥ
unique
44
Prove that: Let V be a vector space over F. Take v ∈ V . Then there is a unique additive inverse for v.
Proof: Suppose w, w′ ∈ V are both additive inverses for v. Then w = w + 0ᵥ = w + (v + w′) = (w + v) + w′ = 0ᵥ + w′ = w′.
45
Let V be a vector space over F. Take v ∈ V . | What is the unique additive inverse of v?
-v
46
Let V be a vector space over a field F. Take v ∈ V , λ ∈ F. Then λ0ᵥ = 0ᵥ Prove it
We have λ0ᵥ = λ(0ᵥ + 0ᵥ) (definition of additive identity) = λ0ᵥ + λ0ᵥ (distributivity of scalar · over vector +). Adding -(λ0ᵥ) to both sides, we have λ0ᵥ + (-(λ0ᵥ)) = (λ0ᵥ + λ0ᵥ) + (-(λ0ᵥ)) so 0ᵥ = λ0ᵥ (using definition of additive inverse, associativity of addition, definition of additive identity).
47
Let V be a vector space over a field F. Take v ∈ V. Then 0v = 0ᵥ (the first 0 is the scalar zero of F; 0ᵥ is the zero vector of V). Prove it
We have 0v = (0 + 0)v = 0v + 0v (distributivity of scalar multiplication over field addition). Adding −(0v) to both sides gives 0ᵥ = 0v.
48
Let V be a vector space over a field F. Take v ∈ V, λ ∈ F. Then (−λ)v = −(λv) = λ(−v). Prove it
We have λv + λ(−v) = λ(v + (−v)) (distributivity of scalar · over vector +) = λ0ᵥ (definition of additive inverse) = 0ᵥ So λ(−v) is the additive inverse of λv (by uniqueness), so λ(−v) = −(λv). Similarly, we see that λv + (−λ)v = 0ᵥ and so (−λ)v = −(λv).
49
Let V be a vector space over a field F. Take v ∈ V, λ ∈ F. If λv = 0ᵥ then λ = 0 or v = 0ᵥ. Prove it
Suppose that λv = 0ᵥ and that λ ≠ 0. Then λ⁻¹ exists in F, and λ⁻¹(λv) = λ⁻¹0ᵥ, so (λ⁻¹λ)v = 0ᵥ (scalar multiplication interacts well with field multiplication, and λ0ᵥ = 0ᵥ), so 1v = 0ᵥ, so v = 0ᵥ (identity for scalar multiplication).
50
What is a subspace?
Let V be a vector space over F. A subspace of V is a non-empty subset of V that is closed under addition and scalar multiplication, that is, a subset U ⊆ V such that (i) U ≠ ∅ (U is non-empty); (ii) u₁ + u₂ ∈ U for all u₁, u₂ ∈ U (U is closed under addition); (iii) λu ∈ U for all u ∈ U, λ ∈ F (U is closed under scalar multiplication).
51
Is {0ᵥ} a subspace of V?
Always | The zero/trivial subspace
52
Is V a subspace of V?
Always
53
What do we call a subspace of V other than V itself?
A proper subspace.
54
What is the subspace test?
Let V be a vector space over F, let U be a subset of V . Then U is a subspace if and only if (i) 0ᵥ ∈ U; and (ii) λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U and λ ∈ F
55
Prove the subspace test
Assume that U is a subspace of V.
• 0ᵥ ∈ U: since U is a subspace, it is non-empty, so there exists u₀ ∈ U. Since U is closed under scalar multiplication, 0u₀ = 0ᵥ ∈ U.
• λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U and λ ∈ F: λu₁ ∈ U because U is closed under scalar multiplication, so λu₁ + u₂ ∈ U because U is closed under addition.
For the other direction, assume that 0ᵥ ∈ U and that λu₁ + u₂ ∈ U for all u₁, u₂ ∈ U and λ ∈ F.
• U is non-empty: we have 0ᵥ ∈ U.
• U is closed under addition: for u₁, u₂ ∈ U we have u₁ + u₂ = 1u₁ + u₂ ∈ U.
• U is closed under scalar multiplication: for u ∈ U and λ ∈ F, we have λu = λu + 0ᵥ ∈ U.
So U is a subspace of V.
56
What does the notation U ≤ V mean? What is the difference between that and U ⊆ V?
If U is a subspace of the vector space V , then we write U ≤ V . (Compare with U ⊆ V , which means that U is a subset of V but we do not know whether it is a subspace.)
57
Let V be a vector space over F, and let U ≤ V . Then (i) U is a vector space over F; Prove it
We need to check the vector space axioms, but first we need to check that we have legitimate operations. Since U is closed under addition, the operation + restricted to U gives a map U × U → U. Since U is closed under scalar multiplication, that operation restricted to U gives a map F × U → U. Now for the axioms. Commutativity and associativity of addition are inherited from V . There is an additive identity (by the Subspace Test). There are additive inverses: if u ∈ U then multiplying by −1 ∈ F and applying [(−λ)v = −(λv) = λ(−v)] shows that −u ∈ U. The other four properties are all inherited from V .
58
Let V be a vector space over F, and let U ≤ V . Then (ii) if W ≤ U then W ≤ V (“a subspace of a subspace is a subspace”). Prove it
This is immediate from the definition of a subspace
59
Let V be a vector space over F. Take A, B ⊆ V and take λ ∈ F | Define A+B and λA
A + B := {a + b : a ∈ A, b ∈ B} | λA := {λa : a ∈ A}.
60
Let V be a vector space. Take U, W ≤ V Is U+W a subspace of V? Is U ∩ W a subspace of V? Prove it
Yes to both: U + W ≤ V and U ∩ W ≤ V (each can be checked with the subspace test).
61
Does R, the reals have any proper subspaces, and if so what are they?
No (apart from the zero subspace). Proof: Let V = R and let U be a non-zero subspace of V. Then there exists u₀ ∈ U with u₀ ≠ 0. Take x ∈ R and let λ = x/u₀. Then x = λu₀ ∈ U, because U is closed under scalar multiplication. So U = V. So R has no non-zero proper subspaces.
62
Let V be a vector space over F, take u₁, u₂, ..., uₘ∈ V . Define U := {α₁u₁ + ... + αₘuₘ : α₁, ..., αₘ ∈ F}. Then U ≤ V . Prove it
Subspace test | pg29
63
Let V be a vector space over F, take u₁, u₂, ..., uₘ∈ V. What is a linear combination of u₁, u₂, ..., uₘ
a vector α₁u₁ + ... + αₘuₘ for some α₁, ..., αₘ ∈ F
64
Define the span of u₁, u₂, ..., uₘ
Span(u₁, u₂, ..., uₘ) := {α₁u₁ + ... + αₘuₘ : α₁, ..., αₘ ∈ F}. It is the smallest subspace of V that contains u₁, u₂, ..., uₘ.
65
What are the different notations for the span of u₁, u₂, ..., uₘ
Span(u₁, u₂, ..., uₘ) or Sp(u₁, u₂, ..., uₘ)
66
Define the span of a set S ⊆ V (even a potentially infinite set S)
Span(S) := {α₁s₁ + ... + αₘsₘ : m ≥ 0,s₁, ..., sₘ ∈ S, α₁, ..., αₘ ∈ F}
67
Can a linear combination involve infinitely many elements? Say if S is infinite
No; a linear combination only ever involves finitely many elements of S, even if S is infinite.
68
What is the empty sum? | And what is the span of the empty set?
Σᵢ∈∅ αᵢuᵢ = 0ᵥ (the 'empty sum'), so Span(∅) = {0ᵥ}.
69
For any S ⊆ V, what is the relationship between Span(S) and V
Span(S) ≤ V
70
What is a spanning set?
Let V be a vector space over F. If S ⊆ V is such that V = Span(S), then we say that S spans V, and that S is a spanning set for V.
71
Define linear dependence
Let V be a vector space over F. We say that v₁, ..., vₘ ∈ V are linearly dependent if there are α₁, ..., αₘ ∈ F, not all 0, such that α₁v₁ + ... + αₘvₘ = 0.
72
Define linear independence
If v₁, ..., vₘ ∈ V are not linearly dependent, then we say that they are linearly independent.
73
When is S ⊆ V linearly independent?
We say that S ⊆ V is linearly independent if every finite subset of S is linearly independent
74
So v₁, ..., vₘ ∈ V are linearly independent if and only if ...
So v₁, ..., vₘ ∈ V are linearly independent if and only if the only linear combination of them that gives 0ᵥ is the trivial combination, that is, if and only if α₁v₁ + ... + αₘvₘ = 0 implies α₁ = ... = αₘ = 0
75
Let v₁, ..., vₘ be linearly independent in an F-vector space V . Let vₘ₊₁∈ V be such that vₘ₊₁ ∉ Span(v₁, ..., vₘ). Then v₁, ..., vₘ, vₘ₊₁ are linearly [ ] Prove it
Independent. Proof: Take α₁, ..., αₘ₊₁ ∈ F such that α₁v₁ + ... + αₘ₊₁vₘ₊₁ = 0. If αₘ₊₁ ≠ 0, then we have vₘ₊₁ = −(1/αₘ₊₁)(α₁v₁ + ... + αₘvₘ) ∈ Span(v₁, ..., vₘ), which is a contradiction. So αₘ₊₁ = 0, so α₁v₁ + ... + αₘvₘ = 0. But v₁, ..., vₘ are linearly independent, so this means that α₁ = ... = αₘ = 0.
76
Let V be a vector space. | What is a basis of V
A basis of V is a linearly independent spanning set.
77
Define a finite dimensional vector space
A vector space with a finite basis
78
What is the standard basis of Rⁿ?
For 1 ≤ i ≤ n, let eᵢ be the row vector with 1 in the ith entry and 0 elsewhere. Then e₁, ..., eₙ are linearly independent: if α₁e₁ + ... + αₙeₙ = 0 then by looking at the ith entry we see that αᵢ = 0 for all i. Also, e₁, ..., eₙ span Rⁿ, because (α₁, ..., αₙ) = α₁e₁ + ... + αₙeₙ. So e₁, ..., eₙ is a basis for Rⁿ, called the standard basis.
79
What is the standard basis of Mₘₓₙ?
Consider V = Mₘₓₙ(R). For 1 ≤ i ≤ m and 1 ≤ j ≤ n, let Eᵢⱼ be the matrix with a 1 in entry (i, j) and 0 elsewhere. Then {Eᵢⱼ : 1 ≤ i ≤ m, 1 ≤ j ≤ n} is a basis for V , called the standard basis of Mₘₓₙ(R)
80
Let V be a vector space over F, let S = {v₁, ..., vₙ} ⊆ V . Then S is a basis of V if and only if every vector in V has a unique expression as a linear combination of elements of S. Prove it
Proof pg32 | Prop 17
81
Let V be a vector space over F. Suppose that V has a finite spanning set S. Then S contains a linearly independent [ ]
Spanning set
82
if V has a finite spanning set, then V has a [ ] | Prove it
basis Proof: Let S be a finite spanning set for V . Take T ⊆ S such that T is linearly independent, and T is a largest such set (any linearly independent subset of S has size ≤ |T|). Suppose, for a contradiction, that Span(T) ≠ V . Then, since Span(S) = V , there must exist v ∈ S \ Span(T). Now by Lemma 16 we see that T ∪ {v} is linearly independent, and T ∪ {v} ⊆ S, and |T ∪ {v}| > |T|, which contradicts our choice of T. So T spans V , and by our choice is linearly independent.
83
What is Steinitz Exchange Lemma?
Let V be a vector space over F. Take X ⊆ V . Suppose that u ∈ Span(X) but that u ∉ Span(X \ {v}) for some v ∈ X. Let Y = (X \ {v}) ∪ {u} (“exchange u for v”). Then Span(Y ) = Span(X).
84
Prove Steinitz Exchange Lemma
pg34
85
Let V be a vector space. Let S, T be finite subsets of V . If S is linearly independent and T spans V , then |S| .....
Let V be a vector space. Let S, T be finite subsets of V . If S is linearly independent and T spans V , then |S| ≤ |T|. “linearly independent sets are at most as big as spanning sets”
86
Let V be a vector space. Let S, T be finite subsets of V . If S is linearly independent and T spans V , then |S| ≤ |T|. “linearly independent sets are at most as big as spanning sets” Prove it
Proof pg 35 (top)
87
Let V be a finite-dimensional vector space. Let S, T be bases of V. Then S and T are finite, and |S| = ?
Then S and T are finite, and |S| = |T|
88
Let V be a finite-dimensional vector space. Let S, T be bases of V . Then S and T are finite, and |S| = |T| Prove it
Since V is finite-dimensional, it has a finite basis B. Say |B| = n. Now B is a spanning set and |B| = n, so by Theorem 20 any finite linearly independent subset of V has size at most n. Since S is a basis of V , it is linearly independent, so every finite subset of S is linearly independent. So in fact S must be finite, and |S| ≤ n. Similarly, T is finite and |T| ≤ n. Now S is linearly independent and T is spanning, so by Theorem 20 |S| ≤ |T|. Applying Theorem 20 with the roles of S and T reversed shows that |S| ≥ |T|. So |S| = |T|
89
What is the dimension of a finite-dimensional vector space?
Let V be a finite-dimensional vector space. The dimension of V , written dim V , is the size of any basis of V
90
What is the dimension of Rⁿ?
n. The standard basis e₁, e₂, ..., eₙ has n elements, so dim Rⁿ = n.
91
What is the dimension of the vector space Mₘₓₙ?
mn
92
What is row space?
Let A be an m×n matrix over F. We define the row space of A to be the span of the subset of Fⁿ consisting of the rows of A, and we denote it by rowsp(A).
93
What is row rank?
We define the row rank of A to be rowrank(A) := dim rowsp(A)
94
Let A be an m ×n matrix, and let B be a matrix obtained from A by a finite sequence of EROs. Then rowsp(A) = ??? Rowrank(A) = ???
Then rowsp(A) = rowsp(B). In particular, rowrank(A) = rowrank(B).
95
Let U be a subspace of a finite-dimensional vector space V . Then (a) U is finite-dimensional, and dim U .... ; and (b) if dim U = dim V , then ...
(a) U is finite-dimensional, and dim U ≤ dim V ; and | (b) if dim U = dim V , then U = V
96
Let U be a subspace of a finite-dimensional vector space V (a) U is finite-dimensional, and dim U ≤ dim V ; prove it
Let n = dim V. By Theorem 20, every linearly independent subset of V has size at most n. Let S be a largest linearly independent set contained in U, so |S| ≤ n. [Secret aim: S spans U.] Suppose, for a contradiction, that Span(S) ≠ U. Then there exists u ∈ U \ Span(S). Now by Lemma 16, S ∪ {u} is linearly independent and |S ∪ {u}| > |S|, which contradicts our choice of S. So U = Span(S) and S is linearly independent, so S is a basis of U, and as we noted earlier |S| ≤ n.
97
Let U be a subspace of a finite-dimensional vector space V (b) if dim U = dim V , then U = V prove it
If dim U = dim V, then there is a basis S of U with dim U elements. Then S is a linearly independent subset of V of size dim V. Adding any vector to S must give a linearly dependent set (every linearly independent subset of V has size at most dim V), so S must span V. So V = Span(S) = U.
98
In an n-dimensional vector space, any linearly independent set of size n is a [ ]. Similarly, any spanning set of size n is a [ ] .
basis | basis
99
Let U be a subspace of a finite-dimensional vector space V Can a basis of U be extended to a basis of V? Explain
Yes: every basis of U can be extended to a basis of V. That is, if u₁, ..., uₘ is a basis of U, then there are vₘ₊₁, ..., vₙ ∈ V such that u₁, ..., uₘ, vₘ₊₁, ..., vₙ is a basis of V. This does not say that if U ≤ V and we have a basis of V then some subset of it is a basis of U; in general that is false.
100
Let U be a subspace of a finite-dimensional vector space V . Then every basis of U can be extended to a basis of V prove it
Proof pg 38 | Idea: start with a basis of U and add vectors until we reach a basis of V
101
Let S be a finite set of vectors in Rⁿ. How can we (efficiently) find a basis of Span(S)?
Let m = |S|. Write the m elements of S as the rows of an m × n matrix A. Use EROs to reduce A to a matrix E in echelon form. Then rowsp(E) = rowsp(A) = Span(S), by Lemma 22. The nonzero rows of E are certainly linearly independent. So the nonzero rows of E give a basis for Span(S).
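A quick sketch of this procedure with SymPy (my own illustration; `rref` returns the RRE form, whose nonzero rows give the basis):

```python
# Find a basis of Span(S) by row-reducing the matrix whose rows are the vectors of S.
from sympy import Matrix

S = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # the second vector is twice the first
E, _ = Matrix(S).rref()                 # reduced row echelon form
basis = [list(E.row(i)) for i in range(E.rows) if any(E.row(i))]
print(basis)  # [[1, 0, 1], [0, 1, 1]]
```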
102
What is the dimension formula?
Let U, W be subspaces of a finite-dimensional vector space V over F. Then dim(U + W) + dim(U ∩ W) = dim U + dim W
103
Prove the dimension formula
Take a basis v₁, ..., vₘ of U ∩ W. Now U ∩ W ≤ U and U ∩ W ≤ W, so by Theorem 24 we can extend this basis to a basis v₁, ..., vₘ, u₁, ..., uₚ of U, and a basis v₁, ..., vₘ, w₁, ..., wᵩ of W. With this notation, we see that dim(U ∩ W) = m, dim U = m + p and dim W = m + q. (The proof is completed by the claim of the next card: these vectors together form a basis of U + W.)
104
Let U, W be subspaces of a finitedimensional vector space V over F Claim. v₁, ..., vₘ, u₁, ..., uₚ, w₁, ..., wᵩ is a basis of U + W Prove it
Call this collection of vectors S. Note that all these vectors really are in U+W (eg, u₁ = u₁ + 0ᵥ ∈ U + W). spanning: Take x ∈ U + W. Then x = u + w for some u ∈ U, w ∈ W. Since v₁, ..., vₘ, u₁, ..., uₚ span U, there are α₁, ..., αₘ, α'₁, ..., α'ₚ ∈ F such that u = α₁v₁ + ... + αₘvₘ + α'₁u₁ + ... + α'ₚuₚ Similarly, there are β₁, ..., βₘ, β'₁, ..., β'ᵩ ∈ F such that w = β₁v₁ + ... + βₘvₘ + β'₁w₁ + ... + β'ᵩwᵩ Then x = u + w = (α₁+β₁)v₁ + ... + (αₘ+βₘ)vₘ + α'₁u₁ + ... + α'ₚuₚ + β'₁w₁ + ... + β'ᵩwᵩ ∈ Span(S). And certainly Span(S) ⊆ U + W. So Span(S) = U + W Proof continues pg41
105
What is a direct sum of two subspaces?
Let U, W be subspaces of a vector space V . If U ∩ W = {0ᵥ} and U + W = V , then we say that V is the direct sum of U and W, and we write V = U ⊕ W
106
What is a direct complement?
Let U, W be subspaces of a vector space V. If U ∩ W = {0ᵥ} and U + W = V, so that V = U ⊕ W, then we say that W is a direct complement of U in V (and vice versa).
107
Let U, W be subspaces of a finite-dimensional vector space V . The following are equivalent: (i) V = U ⊕ W; (ii) every v ∈ V has a unique expression as u+w where u ∈ U and w ∈ W; (iii) dim V = dim U + dim W and V = [ ] ; (iv) dim V = dim U + dim W and [ ] = {0ᵥ}; (v) if u₁, ..., uₘ is a basis for U and w₁, ..., wₙ is a basis for W, then [ ] is a basis for V
(i) V = U ⊕ W; (ii) every v ∈ V has a unique expression as u+w where u ∈ U and w ∈ W; (iii) dim V = dim U + dim W and V = U + W; (iv) dim V = dim U + dim W and U ∩ W = {0ᵥ} (v) if u₁, ..., uₘ is a basis for U and w₁, ..., wₙ is a basis for W, then u₁, ..., uₘ, w₁, ..., wₙ is a basis for V
108
What is a linear map/transformation?
Let V , W be vector spaces over F. We say that a map T : V → W is linear if (i) T(v₁ + v₂) = T(v₁) + T(v₂) for all v₁, v₂ ∈ V (preserves additive structure); and (ii) T(λv) = λT(v) for all v ∈ V and λ ∈ F (respects scalar multiplication). We call T a linear transformation or a linear map.
109
Let V, W be vector spaces over F, let T : V → W be linear. Then T(0ᵥ) = ?
T(0ᵥ) = 0𝓌
110
If T : V → W and T(0ᵥ) ≠ 0𝓌, then can T ever be linear?
No
111
That is, if T is any map that preserves additive structure then T(0ᵥ) = 0𝓌, and if T is any map that respects scalar multiplication then T(0ᵥ) = 0𝓌 Prove it
Let z = T(0ᵥ) ∈ W. Then z + z = T(0ᵥ) + T(0ᵥ) = T(0ᵥ + 0ᵥ) = T(0ᵥ) = z (using the assumption to see that T(0ᵥ) + T(0ᵥ) = T(0ᵥ + 0ᵥ)), so z = 0𝓌. If instead T respects scalar multiplication, then T(0ᵥ) = T(0·0ᵥ) = 0T(0ᵥ) = 0𝓌.
112
Let V , W be vector spaces over F, let T : V → W. The following are equivalent: (i) T is linear; (ii) T(αv₁ + βv₂) = [ ] for all v₁, v₂ ∈ V and α, β ∈ F; (iii) for any n ≥ 1, if v₁, ..., vₙ ∈ V and α₁, ..., αₙ∈ F then [ ] = α₁T( v₁) + ... + αₙT(vₙ)
(ii) T(αv₁ + βv₂) = αT(v₁) + βT(v₂) for all v₁, v₂ ∈ V and α, β ∈ F; (iii) for any n ≥ 1, if v₁, ..., vₙ ∈ V and α₁, ..., αₙ∈ F then T(α₁v₁ + ...+ αₙvₙ) = α₁T( v₁) + ... + αₙT(vₙ)
113
What is the identity map?
Let V be a vector space. Then the identity map idᵥ : V → V given by idᵥ (v) = v for all v ∈ V is a linear map
114
What is the zero map
Let V , W be vector spaces. The zero map 0 : V → W that sends every v ∈ V to 0𝓌 is a linear map. (In particular, there is at least one linear map between any pair of vector spaces.)
115
Do linear transformations themselves form a vector space?
Yes: with pointwise addition and scalar multiplication, the linear maps V → W form a vector space, with the zero map as additive identity.
116
Let V, W be vector spaces over F. For S, T : V → W and λ ∈ F, define S + T : V → W by [ ] for v ∈ V, and define λS : V → W by [ ] for v ∈ V
Let V , W be vector spaces over F. For S, T : V → W and λ ∈ F, define S + T : V → W by (S + T)(v) = S(v) + T(v) for v ∈ V , and define λS : V → W by (λS)(v) = λS(v) for v ∈ V
117
Let U, V , W be vector spaces over F. Let S : U → V and T : V → W be linear. Then is T ◦ S U → W linear? Prove or disprove it
Yes | proof top of page 46
118
Let V , W be vector spaces, let T : V → W be linear. We say that T is invertible if????
Let V , W be vector spaces, let T : V → W be linear. We say that T is invertible if there is a linear transformation S : W → V such that ST = idᵥ and T S = id𝓌 (where idᵥ and id𝓌 are the identity maps on V and W respectively). In this case, we call S the inverse of T, and write it as T⁻¹
119
Let V , W be vector spaces. Let T : V → W be linear. | Then T is invertible if and only if T is injective/surjective/bijective
Bijective
120
Let V , W be vector spaces. Let T : V → W be linear. Then T is invertible if and only if T is bijective Prove it
Proof bottom of pg 46
121
Let U, V , W be vector spaces. Let S : U → V and T : V → W be invertible linear transformations. Then T S : U → W is [ ] , and (TS)⁻¹ = Prove it
invertible | (TS)⁻¹ = S⁻¹T⁻¹
122
Let V , W be vector spaces. Let T : V → W be linear | Define the kernel (or null space) of T
ker T := {v ∈ V : T(v) = 0𝓌}
123
Let V , W be vector spaces. Let T : V → W be linear | Define the image of T
Im T := {T(v) : v ∈ V }
124
Let V , W be vector spaces. Let T : V → W be linear. For v₁, v₂ ∈ V , T(v₁) = T(v₂) iff [ ]
v₁ - v₂ ∈ ker T
125
Let V , W be vector spaces. Let T : V → W be linear. For v₁, v₂ ∈ V , T(v₁) = T(v₂) iff v₁ - v₂ ∈ ker T Prove it
For v₁, v₂ ∈ V, we have T(v₁) = T(v₂) ⇔ T(v₁) - T(v₂) = 0𝓌 ⇔ T(v₁ - v₂) = 0𝓌 ⇔ v₁ - v₂ ∈ ker T
126
Let V , W be vector spaces. Let T : V → W be linear. Then | T is injective if and only if [ ]
kerT = {0ᵥ}
127
Let V , W be vector spaces. Let T : V → W be linear. Then T is injective if and only if kerT = {0ᵥ} Prove it
Proof. (⇐) Assume that ker T = {0ᵥ}. Take v₁, v₂ ∈ V with T(v₁) = T(v₂). Then v₁ - v₂ ∈ ker T (previously proved), so v₁ = v₂. So T is injective. (⇒) Assume that ker T ≠ {0ᵥ}. Then there is v ∈ ker T with v ≠ 0ᵥ. Then T(v) = T(0ᵥ), so T is not injective.
128
Let V , W be vector spaces over F. Let T : V → W be linear. Then (i) ker T is a subspace of [ ] and Im T is a subspace of [ ]; (ii) if A is a spanning set for V, then T(A) is a spanning set for [ ]; and (iii) if V is finite-dimensional, then ker T and [ ] are finite-dimensional.
Let V , W be vector spaces over F. Let T : V → W be linear. Then (i) ker T is a subspace of [V] and Im T is a subspace of [W]; (ii) if A is a spanning set for V , then T(A) is a spanning set for [ImT]; and (iii) if V is finite-dimensional, then ker T and [ImT] are finite-dimensional.
129
Define nullity
Let V , W be vector spaces with V finite-dimensional. Let T : V → W be linear. We define the nullity of T to be null(T) := dim(ker T)
130
Define Rank
Let V, W be vector spaces with V finite-dimensional. Let T : V → W be linear. We define the rank of T to be rank(T) := dim(Im T)
131
What is the rank-nullity theorem?
Let V , W be vector spaces with V finite-dimensional. Let T : V → W be linear. Then dim V = rank(T) + null(T).
132
Prove the rank-nullity theorem
Take a basis v₁, ..., vₙ for ker T, where n = null(T). Since ker T ≤ V, by Theorem 24 this can be extended to a basis v₁, ..., vₙ, v'₁, ..., v'ᵣ of V. Then dim V = n + r. For 1 ≤ i ≤ r, let wᵢ = T(v'ᵢ). One then checks that w₁, ..., wᵣ is a basis of Im T, so rank(T) = r and dim V = n + r = null(T) + rank(T).
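A numeric sanity check of the theorem (my own example) for the map T(x) = Ax, whose rank is rank(A) and whose kernel is the null space of A:

```python
# Check dim V = rank(T) + null(T) for T(x) = Ax with A a 3 x 4 matrix, so V = F^4.
from sympy import Matrix

A = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 3, 1, 1]])            # third row = first row + second row
rank = A.rank()
nullity = len(A.nullspace())          # dimension of {x : Ax = 0}
print(rank, nullity, rank + nullity)  # 2 2 4, and 4 = dim of the domain
```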
133
Let V be a finite-dimensional vector space. Let T : V → V be linear. The following are equivalent: (i) T is invertible; (ii) rank T = [ ]; (iii) null T = [ ]
(i) T is invertible; (ii) rank T = dim V ; (iii) null T = 0
134
Let V be a finite-dimensional vector space. Let T : V → V be linear. Are any one-sided inverses two-sided? Prove it
Then any one-sided inverse of T is a two-sided inverse, and so is unique. Proof pg50
135
Let V and W be vector spaces, with V finite-dimensional. Let T : V → W be linear. Let U ≤ V . Then dim U−null T ≤ [ ] ≤ dim U. In particular, if T is [ ] then dim T(U) = dim U prove it
dim T(U); injective. Proof: end of pg 50.
136
Let V be an n-dimensional vector space over F, let v₁, ..., vₙ be a basis of V . Let W be an m-dimensional vector space over F, let w₁, ..., wₘ be a basis of W. Let T : V → W be a linear transformation. We define an m × n matrix for T as follows..... (basis form)
For 1 ≤ j ≤ n, T(vⱼ) ∈ W, so T(vⱼ) is uniquely expressible as a linear combination of w₁, ..., wₘ: there are unique aᵢⱼ (for 1 ≤ i ≤ m) such that T(vⱼ) = a₁ⱼw₁ + ... + aₘⱼwₘ. That is, T(v₁) = a₁₁w₁ + ... + aₘ₁wₘ, T(v₂) = a₁₂w₁ + ... + aₘ₂wₘ, ..., T(vₙ) = a₁ₙw₁ + ... + aₘₙwₘ. We say that M(T) = (aᵢⱼ) is the matrix for T with respect to these ordered bases for V and W.
137
Let V be an n-dimensional vector space over F, let Bᵥ be an ordered basis for V . Let W be an m-dimensional vector space over F, let B𝓌 be an ordered basis for W. Then (i) the matrix of 0 : V → W is [ ] (ii) the matrix of idᵥ : V → V is [ ] (iii) if S : V → W, T : V → W are linear and α, β ∈ F, then M(αS+βT) = [ ] Moreover, let T : V → W be linear, with matrix A with respect to Bᵥ and B𝓌. Take v ∈ V with coordinates xᵀ = (x₁, ..., xₙ)ᵀ with respect to Bᵥ. Then Ax is the [ ] of T(v) with respect to [ ]
(i) the matrix of 0 : V → W is 0ₘₓₙ (ii) the matrix of idᵥ : V → V is Iₙ (iii) if S : V → W, T : V → W are linear and α, β ∈ F, then M(αS+βT) = αM(S) + βM(T) Then Ax is the coordinate vector of T(v) with respect to B𝓌
138
Let U, V, W be finite-dimensional vector spaces over F, with ordered bases Bᵤ, Bᵥ, B𝓌 respectively. Say Bᵤ has size m, Bᵥ has size n, B𝓌 has size p. Let S : U → V and T : V → W be linear. Let A be the matrix of S with respect to Bᵤ and Bᵥ. Let B be the matrix of T with respect to Bᵥ and B𝓌. Then the matrix of T ◦ S with respect to Bᵤ and B𝓌 is [ ]
BA
139
Prove that matrix multiplication is associative
Proof, end of pg 54
140
Let V be a finite-dimensional vector space. Let T : V → V be an invertible linear transformation. Let v₁, ..., vₙ be a basis of V . Let A be the matrix of T with respect to this basis (for both domain and codomain). Is A invertible? If so, what does the inverse represent?
Then A is invertible, and A⁻¹ is the matrix of T⁻¹ with respect to this basis.
141
What is the change of basis theorem?
Let V, W be finite-dimensional vector spaces over F. Let T : V → W be linear. Let v₁, ..., vₙ and v'₁, ..., v'ₙ be bases for V. Let w₁, ..., wₘ and w'₁, ..., w'ₘ be bases for W. Let A = (aᵢⱼ) ∈ Mₘₓₙ(F) be the matrix for T with respect to v₁, ..., vₙ and w₁, ..., wₘ, and let B be the matrix for T with respect to v'₁, ..., v'ₙ and w'₁, ..., w'ₘ. Take pᵢⱼ, qᵢⱼ ∈ F such that v'ᵢ = Σⱼ₌₁ⁿ pᵢⱼvⱼ and w'ᵢ = Σⱼ₌₁ᵐ qᵢⱼwⱼ. Let P = (pᵢⱼ) ∈ Mₙₓₙ(F) and Q = (qᵢⱼ) ∈ Mₘₓₘ(F). Then B = Q⁻¹AP.
142
Let V be a finite dimensional vector space. Let T : V → V be linear. What is the second version of change of basis theorem (only one vector space)?
Let V be a finite-dimensional vector space. Let T : V → V be linear. Let v₁, ..., vₙ and v'₁, ..., v'ₙ be bases for V. Let A be the matrix of T with respect to v₁, ..., vₙ. Let B be the matrix of T with respect to v'₁, ..., v'ₙ. Let P be the change of basis matrix, that is, the n × n matrix (pᵢⱼ) such that v'ᵢ = Σⱼ₌₁ⁿ pᵢⱼvⱼ. Then B = P⁻¹AP.
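A numeric illustration (my own, not from the notes): if the new basis vectors are written as the columns of a matrix M, then coordinates x' in the new basis satisfy v = Mx', and the matrix of T in the new basis is M⁻¹AM. (Depending on the notes' row/column convention, this M corresponds to P or to its transpose.)

```python
# Sanity check that B = M^{-1} A M represents the same map T in the new basis.
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # matrix of T in the old basis
M = np.array([[1.0, 1.0], [0.0, 1.0]])   # columns = new basis vectors (old coordinates)
B = np.linalg.inv(M) @ A @ M             # matrix of T in the new basis

x_new = np.array([1.0, 2.0])             # coordinates of some v in the new basis
v_old = M @ x_new                        # the same v in old coordinates
Tv_new = np.linalg.inv(M) @ (A @ v_old)  # new coordinates of T(v)
print(np.allclose(B @ x_new, Tv_new))    # True
```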
143
change of basis theorem: The change of basis matrix P is the matrix of the identity map idᵥ : V → V with respect to the basis [ ] for V as domain and the basis [ ] as codomain
v'₁, ..., v'ₙ as domain; v₁, ..., vₙ as codomain.
144
When are two matrices similar?
Take A, B ∈ Mₙₓₙ(F). If there is an invertible n × n matrix P such that P⁻¹AP = B, then we say that A and B are similar.
145
rowsp(Aᵀ) = ? rowrank(Aᵀ) = ? colsp(Aᵀ) = ? colrank(Aᵀ) = ?
rowsp(Aᵀ) = colsp(A); rowrank(Aᵀ) = colrank(A); colsp(Aᵀ) = rowsp(A); colrank(Aᵀ) = rowrank(A)
146
Take A ∈ Mₘₓₙ(F), let r = colrank(A). Then there are invertible matrices P ∈ Mₙₓₙ(F) and Q ∈ Mₘₓₘ(F) such that Q⁻¹AP has the block form
( Iᵣ   0ᵣₓₛ )
( 0ₜₓᵣ 0ₜₓₛ )
where s = n − r and t = m − r. Prove it
Proof pg59
147
Take A ∈ Mₘₓₙ (F). Let R be an invertible m × m matrix, let P be an invertible n × n matrix. Then (i) rowsp(RA) = [ ] and so rowrank(RA) = [ ]; (ii) colrank(RA) = [ ]; (iii) colsp(AP) = [ ] and so colrank(AP) = colrank([ ]); (iv) rowrank(AP) = rowrank([ ]).
(i) rowsp(RA) = rowsp(A) and so rowrank(RA) = rowrank(A); (ii) colrank(RA) = colrank(A); (iii) colsp(AP) = colsp(A) and so colrank(AP) = colrank(A); (iv) rowrank(AP) = rowrank(A).
148
Let A be an m × n matrix. | Then what is colrank(A) = ??
colrank(A) = rowrank(A)
149
What is the rank of a matrix?
Let A be an m × n matrix. The rank of A, written rank(A), is the row rank of A (which we have just seen is also the column rank of A).
150
Let T : V → W be linear. Let Bᵥ, B𝓌 be ordered bases of V , W respectively. Let A be the matrix for T with respect to Bᵥ and B𝓌 . Then rank(A) = [ ](T).
rank(A) = rank(T)
151
Let A be an m × n matrix. Let x be the n × 1 column vector of variables x₁, ..., xₙ. Let S be the solution space of the system Ax = 0 of m homogeneous linear equations in x₁, ..., xₙ, that is, S = {v ∈ Fⁿcol : Av = 0}. Then dim S = [ ]
dim S = n − colrank A
152
Let V be a vector space over F | What is a bilinear form on V?
A bilinear form on V is a function of two variables from V taking values in F, often written ⟨−, −⟩ : V × V → F, such that (i) ⟨α₁v₁ + α₂v₂, v₃⟩ = α₁⟨v₁, v₃⟩ + α₂⟨v₂, v₃⟩ for all v₁, v₂, v₃ ∈ V and α₁, α₂ ∈ F; and (ii) ⟨v₁, α₂v₂ + α₃v₃⟩ = α₂⟨v₁, v₂⟩ + α₃⟨v₁, v₃⟩ for all v₁, v₂, v₃ ∈ V and α₂, α₃ ∈ F.
153
What is a Gram matrix?
Let V be a vector space over F. Let ⟨−, −⟩ be a bilinear form on V. Take v₁, ..., vₙ ∈ V. The Gram matrix of v₁, ..., vₙ with respect to ⟨−, −⟩ is the n × n matrix (⟨vᵢ, vⱼ⟩) ∈ Mₙₓₙ(F).
154
Let V be a finite-dimensional vector space over F. Let ⟨−, −⟩ be a bilinear form on V. Let v₁, ..., vₙ be a basis for V. Let A ∈ Mₙₓₙ(F) be the associated Gram matrix. For u, v ∈ V, let x = (x₁, ..., xₙ) ∈ Fⁿ and y = (y₁, ..., yₙ) ∈ Fⁿ be the unique coordinate vectors such that u = x₁v₁ + ... + xₙvₙ and v = y₁v₁ + ... + yₙvₙ. Then ⟨u, v⟩ = ?
⟨u, v⟩ = xAyᵀ
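A small numeric check of the formula (my own example, taking ⟨−, −⟩ to be the standard dot product on R²):

```python
# Check <u, v> = x A y^T, where A is the Gram matrix of a basis and x, y are the
# row coordinate vectors of u and v with respect to that basis.
import numpy as np

v1, v2 = np.array([1.0, 1.0]), np.array([0.0, 2.0])  # a basis of R^2
A = np.array([[v1 @ v1, v1 @ v2],
              [v2 @ v1, v2 @ v2]])                   # Gram matrix (<v_i, v_j>)

x, y = np.array([2.0, 1.0]), np.array([1.0, -1.0])   # coordinates of u and v
u, v = x[0]*v1 + x[1]*v2, y[0]*v1 + y[1]*v2
print(np.isclose(x @ A @ y, u @ v))                  # True
```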
155
What is a symmetric bilinear form?
We say that a bilinear form ⟨−, −⟩ : V × V → F is symmetric if ⟨v₁, v₂⟩ = ⟨v₂, v₁⟩ for all v₁, v₂ ∈ V.
156
What is a positive definite bilinear form?
Let V be a real vector space. We say that a bilinear form ⟨−, −⟩ : V × V → R is positive definite if ⟨v, v⟩ ≥ 0 for all v ∈ V, with ⟨v, v⟩ = 0 if and only if v = 0.
157
What is an inner product on a real vector space?
An inner product on a real vector space V is a positive definite symmetric bilinear form on V
158
What is an inner product space?
We say that a real vector space is an inner product space if it is equipped with an inner product. Unless otherwise specified, we write the inner product as ⟨−, −⟩.
159
Let V be a real inner product space | What is the norm/magnitude/length of v for v ∈ V?
||v|| := √⟨v, v⟩
160
Define the angle between any two vectors for any inner product space
The angle between nonzero vectors x, y ∈ V is cos⁻¹(⟨x, y⟩ / (||x|| ||y||)), where this is taken to lie in the interval [0, π].
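For the standard inner product on Rⁿ this is easy to compute directly (a sketch of mine with NumPy; the clip guards against rounding pushing the cosine just outside [−1, 1]):

```python
# Angle between nonzero vectors under the standard inner product <x, y> = x . y.
import numpy as np

def angle(x, y):
    cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

x, y = np.array([1.0, 0.0]), np.array([1.0, 1.0])
print(angle(x, y))  # pi/4 ≈ 0.785398
```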
161
Let V be a finite-dimensional real inner product space. Take u ∈ V \ {0}. Define u⊥ (the ⊥ should be superscript). dim(u⊥) = ? V = (in terms of u⊥) ?
u⊥ := {v ∈ V : ⟨u, v⟩ = 0}, a subspace of V. dim(u⊥) = dim V − 1, and V = Span(u) ⊕ u⊥.
162
Let V be an inner product space. We say that {v₁, . . . , vₙ} ⊆ V is an orthonormal set if ...
We say that {v₁, . . . , vₙ} ⊆ V is an orthonormal set if for all i, j we have ⟨vᵢ, vⱼ⟩ = δᵢⱼ, which is 1 if i = j and 0 if i ≠ j.
163
Let {v₁, . . . , vₙ} be an orthonormal set in an inner product space V. Are v₁, . . . , vₙ linearly dependent or independent?
linearly independent
164
So a set of n orthonormal vectors in an n-dimensional vector space is a [ ]
basis
165
Let V be an n-dimensional real inner product space. | Is there an orthonormal basis of V?
Yes; V has an orthonormal basis v₁, . . . , vₙ.
166
Take X ∈ Mₙₓₙ(R). Consider Rⁿ equipped with the usual inner product ⟨x, y⟩ = x · y. The following are equivalent: (i) XXᵀ = [ ]; (ii) [ ]X = Iₙ; (iii) the [ ] of X form an orthonormal basis of Rⁿ; (iv) the [ ] of X form an orthonormal basis of Rⁿcol; (v) for all x, y ∈ Rⁿ, we have xX · yX = [ ]
(i) XXᵀ = Iₙ; (ii) XᵀX = Iₙ; (iii) the rows of X form an orthonormal basis of Rⁿ; (iv) the columns of X form an orthonormal basis of Rⁿcol; (v) for all x, y ∈ Rⁿ, we have xX · yX = x · y
167
X is orthogonal iff the map Rₓ is an [ ]
Isometry
168
What is the Cauchy-Schwarz Inequality?
Let V be a real inner product space. Take v₁, v₂ ∈ V. Then |⟨v₁, v₂⟩| ≤ ||v₁|| ||v₂||, with equality if and only if v₁, v₂ are linearly dependent.
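A numeric illustration (my own example) of the inequality, and of the equality case for linearly dependent vectors:

```python
# Cauchy-Schwarz under the standard inner product on R^2.
import numpy as np

v1, v2 = np.array([1.0, 2.0]), np.array([3.0, 1.0])
print(abs(v1 @ v2) <= np.linalg.norm(v1) * np.linalg.norm(v2))  # True

w = 2 * v1  # linearly dependent on v1, so equality holds
print(np.isclose(abs(v1 @ w), np.linalg.norm(v1) * np.linalg.norm(w)))  # True
```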
169
What is a complex inner product space?
A complex inner product space is a complex vector space equipped with a positive definite sesquilinear form
170
What are Hermitian forms and spaces?
Hermitian form = Positive definite sesquilinear form | Hermitian Space = complex inner product space
171
Let V be a complex vector space | What is a sesquilinear form?
Let V be a complex vector space. A function ⟨−, −⟩ : V × V → C is a sesquilinear form if (i) ⟨α₁v₁ + α₂v₂, v₃⟩ = α₁⟨v₁, v₃⟩ + α₂⟨v₂, v₃⟩ for all v₁, v₂, v₃ ∈ V and α₁, α₂ ∈ C; and (ii) ⟨v₂, v₁⟩ is the complex conjugate of ⟨v₁, v₂⟩ for all v₁, v₂ ∈ V.