Mata22 Flashcards Preview


Flashcards in Mata22 Deck (98):
1

Euclidean n-space

The Euclidean n-space, Rn, is the collection of all ordered n-tuples of real numbers. There are two types of n-tuples: points and vectors.

2

a vector

A vector a = [a1, ..., an] in its standard position starts at the origin and ends at the point (a1, ..., an).

3

a linear combination

A linear combination of vectors v1, ..., vk in Rn with scalars s1, ..., sk is defined as y = s1v1 + ... + skvk.

4

a span of vectors

The span of v1, ..., vk is the set of all linear combinations r1v1 + r2v2 + ... + rkvk; for example, sp(u, v) is the set of all possible linear combinations of u and v.

5

magnitude of a vector

Let u = [u1, ..., un] be in Rn; then the magnitude is ||u|| = sqrt(u1^2 + ... + un^2).
If the magnitude is 1, then u is a unit vector.

6

dot product of vectors

v · w = v1w1 + ... + vnwn
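The norm and dot product formulas above can be sketched in a few lines of Python (a minimal illustration; the function names are my own, not from the course):

```python
import math

def dot(v, w):
    # v . w = v1*w1 + ... + vn*wn
    return sum(vi * wi for vi, wi in zip(v, w))

def norm(u):
    # ||u|| = sqrt(u1^2 + ... + un^2) = sqrt(u . u)
    return math.sqrt(dot(u, u))

print(dot([3.0, 4.0], [1.0, 2.0]))  # 3*1 + 4*2 = 11.0
print(norm([3.0, 4.0]))             # sqrt(9 + 16) = 5.0
```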

7

Associativity

(U + V) +W = U + (V + W)

8

commutativity

U+V=V+U

9

Cauchy-Schwarz inequality

|w · v| ≤ ||w|| ||v||

10

triangle inequality

||v + w|| ≤ ||v|| + ||w||

11

Vectors in Euclidean n-space

If n is a positive integer, the Euclidean n-space, Rn, is the collection of all ordered n-tuples of real numbers.
(a1, ..., an), where the ai are real numbers, is a point; the n-tuple [a1, ..., an] is a vector.

12

parallel vectors

v || w iff v = rw for some scalar r

13

linear combination

Given v1, ..., vk in Rn with scalars s1, ..., sk in R, the vector y defined by y = s1v1 + ... + skvk is a linear combination of the vectors with weights s1, ..., sk.

14

Span

The set of all linear combinations of vectors v1, ..., vk is denoted by sp(v1, ..., vk):
sp(v1, ..., vk) = {r1v1 + ... + rkvk | ri in R, 1 ≤ i ≤ k}

15

Magnitude or norm

Let u be a vector in Rn. The norm is ||u|| = sqrt(u1^2 + ... + un^2).
Any vector with magnitude 1 is a unit vector.

16

Dot product

For v and w in Rn, the dot product is defined to be the real number v · w = v1w1 + ... + vnwn

17

angle between two nonzero vectors in Rn

θ = arccos(v · w / (||v|| ||w||))
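A quick numeric check of the angle formula (a sketch; the helper names are my own):

```python
import math

def dot(v, w):
    return sum(vi * wi for vi, wi in zip(v, w))

def angle(v, w):
    # arccos(v . w / (||v|| ||w||)), for nonzero v and w
    return math.acos(dot(v, w) / (math.sqrt(dot(v, v)) * math.sqrt(dot(w, w))))

# perpendicular axes: the angle should be pi/2
print(angle([1.0, 0.0], [0.0, 1.0]))
```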

18

perpendicular or orthogonal

Two vectors v, u in Rn are orthogonal if v · u = 0.

19


distance between v and u

||v − u|| is defined to be the distance between v and u

20

orthogonal projection

Given a nonzero vector a in Rn, any vector b can be decomposed into a sum of two other vectors
b = p + v, with p parallel to a and v perpendicular to a. p is the orthogonal projection of b on a; we call v the vector component of b orthogonal to a.
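The decomposition b = p + v can be computed directly, since p = ((a · b)/(a · a)) a (a sketch; helper names are my own):

```python
def dot(u, w):
    return sum(x * y for x, y in zip(u, w))

def decompose(b, a):
    # p = ((a . b) / (a . a)) a  is the orthogonal projection of b on a;
    # v = b - p  is the component of b orthogonal to a
    c = dot(a, b) / dot(a, a)
    p = [c * ai for ai in a]
    v = [bi - pi for bi, pi in zip(b, p)]
    return p, v

p, v = decompose([2.0, 3.0], [4.0, 0.0])
print(p, v)                 # [2.0, 0.0] [0.0, 3.0]
print(dot(v, [4.0, 0.0]))   # 0.0: v is orthogonal to a
```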

21

N

natural

22

Z

integers

23

Q

rational

24

diagonal matrix

a square matrix with aij = 0 for all i != j

25

upper triangular

aij =0 for all i > j

26

lower triangular

aij = 0 for all i < j

27

skew symmetric

A^T = -A

28

commutative

the operands can be reordered (order doesn't matter)

29

associative

the bracketing (grouping) doesn't matter

30

additive inverse (matrix)

B is the additive inverse of A if A + B = O

31

r(sA) = (rs)A

associative law of scalar multiplication

32

(r+s)A = rA+sA

distributive law: scalar multiplication distributes over scalar addition

33

r(A+B) = rA+rB

distributive law: scalar multiplication distributes over matrix addition

34

A(BC) = (AB)C

matrix multiplication associative law

35

ImA = AIn = A, for the unique identity matrices Im and In

multiplicative identity

36

Definition of trace

Let A be an n × n matrix; then the trace of A is defined as Tr(A) = the sum of the diagonal entries aii

37

m x n linear system of equations

is a system of m linear equations in n variables

38

REF

all rows containing only zeros are below rows with nonzero entries

the first nonzero entry in each row is to the right of the first nonzero entry in any row above it

39

RREF

REF, plus:
every pivot is 1
every pivot is the only nonzero entry in its column

40

consistent linear system

has one or more solutions

41

row equivalents

If [H|c] is obtained by performing row operations on [A|b], then [A|b] ~ [H|c]

42

Gauss reduction with back substitution

The method of solving Ax = b by reducing [A|b] to a row echelon form and then using back substitution.

The Gauss-Jordan method refers to solving the system Ax = b by reducing [A|b] to a reduced row echelon form.
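A minimal sketch of Gauss reduction with back substitution in Python (the function name and the partial-pivoting choice are my own, not the course's method):

```python
def gauss_solve(A, b):
    # Reduce the augmented matrix [A|b] to row echelon form,
    # then back-substitute from the last row upward.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for i in range(n):
        # partial pivot: bring the largest-magnitude entry into row i
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            M[r] = [mr - f * mi for mr, mi in zip(M[r], M[i])]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):  # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```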

43

−v as the additive inverse of v

for each x in Rn, x + (−x) = 0

44

preservation of scale

1x = x for all x in Rn

45

homogeneity

||rv|| = |r| ||v||

46

triangle inequality

||v + w|| ≤ ||v|| + ||w||

47

Cauchy-Schwarz inequality

|u.v| <= ||u||||v||

48

invertible

An n x n matrix A is invertible if there exists an n x n matrix C (called the inverse of A) such that AC = CA = I. If A is not invertible, it is singular.
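For a 2 × 2 matrix the inverse can be written down directly and the defining property AC = CA = I checked numerically (a sketch with my own helper names; assumes det ≠ 0):

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    # inverse of [[a, b], [c, d]] is (1/det) * [[d, -b], [-c, a]]
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [1.0, 1.0]]   # det = 1
C = inv2(A)
print(matmul(A, C))  # [[1.0, 0.0], [0.0, 1.0]]
print(matmul(C, A))  # [[1.0, 0.0], [0.0, 1.0]]
```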

49

Every consistent system with fewer equations than unknowns has

infinitely many solutions

50

Let A and C be n × n matrices. Then CA = I

if and only if AC = I.

51

conditions for A inverse to exist

1. A is invertible.
2. A is row-equivalent to the n × n identity matrix I.
3. Ax = b has a solution for each column vector b ∈ Rn.
4. A can be expressed as a product of elementary matrices.
5. The span of the column vectors of A is Rn.
6. The span of the row vectors of A is Rn.


54

Defn: homogeneous system

Ax = 0


55

Defn: Null space, rowspace, and columnspace

null(A) is the set of all solutions to Ax = 0. In set notation, N(A) = {x in Rn | Ax = 0}

the row space is the span of the row vectors of A

the column space is the span of the column vectors of A

56

let Ax = b be a linear system with particular solution p, then

if h is in the null space of A, then p + h is also a solution of Ax = b

if q is any solution to Ax = b, then q = p + h for some h in the null space

57

Linearly independent

Two vectors are linearly independent if they are nonzero and not parallel; in general, v1, ..., vk are linearly independent if r1v1 + ... + rkvk = 0 only when every ri = 0.

58

W ⊂ Rn. W is a Subspace of Rn if

1. W is non-empty
2. if vectors u and v are in W, then u + v is in W
3. if u is in W and r is in R, then ru is in W

59

definition of W

if W = sp{w1, ..., wk}, then we say the vectors w1, ..., wk span or generate W

60

Definition of basis

Let W be a subspace of Rn. If B = {b1, b2, ..., bk} is a subset of W, then we say that B is a basis for W if every vector in W can be written uniquely as a linear combination of the vectors in B. The plural of the word basis is "bases".

61

let A be n x n; there are 4 equivalent statements to
1. The linear system Ax = b has a unique solution for each b ∈ Rn.

1. The linear system Ax = b has a unique solution for each b ∈ Rn.
2. The matrix A is row equivalent to the n × n identity, I.
3. The matrix A is invertible.
4. The column vectors of A form a basis for Rn.

62

Let A be an m × n matrix (m > n). The following are
equivalent: 1. Each consistent system Ax = b has a unique solution.

1. Each consistent system Ax = b has a unique solution.
2. The reduced row-echelon form of A consists of the n × n identity matrix
on top followed by m − n rows of zeroes.
3. The column vectors of A form a basis for the column space of A.

63

Defn of dimension

Let W be a subspace of Rn. The number of elements in a basis for W is called the dimension of W, denoted by dim(W).

64

theorems: existence and determination of bases

i) Every subspace W of Rn has a basis and dim(W) ≤ n.
ii) Every linearly independent set of vectors in Rn can be enlarged, if necessary, to become a basis for Rn.
iii) If W is a subspace of Rn and dim(W) = k, then
a) every independent set of k vectors in W is a basis for W, and
b) every set of k vectors that spans W is a basis for W

65

Rank + nullity

n, the number of columns of A

66

how to tell if diagonalizable

If A is symmetric (i.e. A^T = A), then A is diagonalizable.

If A has n distinct eigenvalues, then A is diagonalizable.

A is diagonalizable iff the algebraic multiplicity of each eigenvalue is equal to its geometric multiplicity.

67

The Cayley-Hamilton Theorem

The Cayley-Hamilton Theorem is useful for finding the inverse of any given n × n matrix. In short, it states that every matrix satisfies its characteristic polynomial P(λ), i.e. P(A) = O. Boxed below is the statement written in mathematical notation.

Let A be an n × n matrix with characteristic polynomial
P(λ) = an λ^n + an−1 λ^(n−1) + ... + a1 λ + a0
Then
P(A) = an A^n + an−1 A^(n−1) + ... + a1 A + a0 I = O
where O is the n × n zero matrix.
Notice that by solving for I, we get an expression of the form AB = I, where the matrices A and B are inverses of each other.
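The theorem is easy to verify numerically for a 2 × 2 matrix, where P(λ) = λ^2 − Tr(A)λ + det(A) (a sketch; the variable names are my own):

```python
def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0], [3.0, 4.0]]
tr = A[0][0] + A[1][1]                        # 5.0
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # -2.0
A2 = matmul(A, A)
# P(A) = A^2 - tr*A + det*I should be the zero matrix
P_A = [[A2[i][j] - tr * A[i][j] + (det if i == j else 0.0)
        for j in range(2)] for i in range(2)]
print(P_A)  # [[0.0, 0.0], [0.0, 0.0]]
```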

68

The determinant of any n × n matrix can be obtained via what we call a cofactor expansion.

The cofactor of any given entry aij in a matrix is given by the formula
a′ij = (−1)^(i+j) det(Aij)
where Aij is A with row i and column j removed.
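The cofactor expansion can be written as a short recursive function (a sketch that expands along the first row; the naming is my own):

```python
def det(M):
    # base case: 1x1 matrix
    if len(M) == 1:
        return M[0][0]
    # expand along row 0: sum over j of (-1)^j * M[0][j] * det(minor)
    total = 0.0
    for j in range(len(M)):
        # minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1.0, 2.0], [3.0, 4.0]]))        # 1*4 - 2*3 = -2.0
print(det([[1.0, 2.0, 3.0],
           [4.0, 5.0, 6.0],
           [7.0, 8.0, 10.0]]))              # -3.0
```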

69

rank

the number of dimensions in the output (the dimension of the column space)

70

set of all possible outputs

column space

71

null space

the vectors that are mapped to 0

72

dot product

projection

73

det of (vectors) equals what

the cross product

75

product of a matrix to be zero

ch 5 and 6

76

matrix of a linear transformation

Determinants shouldn't be a bother; it's just a matter of doing a few. Always row reduce the matrix to get some zeros in there. Remember, adding a multiple of a row to another does not change the determinant. Then do cofactor expansion along the row/column with the most zeros, so you only have a few terms. If you feel like determinants popped out of nowhere and you really don't get what they mean, look up the definition of determinants using permutations. This should make it a bit clearer, especially if you're down with some combinatorics.

Proving something is a vector space: again, just do a few examples and memorize the list of properties. Remember, if you are asked to prove W is a subspace of V, all you have to do is check:

the zero vector is in W

if a, b are in W, then c*a + b is in W, where c is a scalar

all of the other properties are consequences of W being a subset of V.

Don't know if this helped; for more review I really suggest looking at "Paul's Online Notes", they are super clear and probably cover most of the stuff you did in class.

77

Alternatively, one could equally proceed by finding the adjoint matrix of A and using the formula
A^(-1) = (1/det(A)) adj(A)

In order to prove that a subset A is a subspace of Rn, check to see if it contains the 0 vector. If so, based on the conditions defined in the Subspaces section, assume that u and v are both in A, then use allowable algebra to show that (u + v) and ru are equally in A. This is usually done using the "general formula" of the set. The steps to determine whether a function is a linear transformation are quite similar.
• Because the determinant of an upper or lower triangular matrix is just the product of the entries along the main diagonal, an easier and more convenient way of computing the determinant of a matrix would be to reduce the matrix to REF or RREF.


79

• det(AT ) = det(A)
• det(AB) = det(A) det(B)
• If A is invertible, det(A−1) = 1/det(A)
• If A is a triangular matrix, its determinant is the product of all the entries along the main diagonal
• If A has 2 identical or proportional rows or columns, then det(A) = 0
• A is invertible ⇔ det(A) ≠ 0
• Performing a row addition on a matrix does not affect its determinant
• Every row interchange operation applied on a matrix negates its determinant
• Row scaling by a constant r magnifies the determinant of the matrix by a factor of r

4.1 The Cross Product
The cross product is ONLY defined for vectors in R3. Finding the cross product of two vectors a and b in R3 is equivalent to finding the determinant of the matrix
| i  j  k  |
| a1 a2 a3 |
| b1 b2 b3 |
The cross product comes in handy when trying to find the area of a parallelogram in R3 or the volume of a parallelepiped in the same dimension. Boxed below are tips to compute the area of a parallelogram in R2 and R3, as well as the volume of a parallelepiped in R3.
• The area of a parallelogram formed by two nonzero vectors a and b in R2 is given by the absolute value of the determinant
| a1 a2 |
| b1 b2 |
• The area of a parallelogram formed by two nonzero vectors a and b in R3 is given by ||a × b||
• The volume of a parallelepiped formed by three vectors a, b and c in R3 is given by the absolute value of the determinant
| a1 a2 a3 |
| b1 b2 b3 |
| c1 c2 c3 |
The cross product of two vectors a and b is denoted (a × b), which is not to be confused with the dot product of the same vectors, denoted by (a · b) instead. One should also remember that the cross product of two vectors yields another vector, while the dot product is actually a real number.
Note: The cross product (a × b) is always perpendicular to both a and b.
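Expanding the i, j, k determinant gives the component formula for the cross product, and the perpendicularity note can be checked with a dot product (a sketch; names are my own):

```python
def cross(a, b):
    # components from expanding the i, j, k determinant
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(u, w):
    return sum(x * y for x, y in zip(u, w))

a = [1.0, 0.0, 0.0]
b = [0.0, 1.0, 0.0]
c = cross(a, b)
print(c)                     # [0.0, 0.0, 1.0]
print(dot(c, a), dot(c, b))  # both 0.0: c is perpendicular to a and b
```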

80

codomain

the space the outputs (results) live in

81

det(A − λI)

set equal to 0 and factor to find the eigenvalues

82

values and vectors

find the eigenvectors as the null space of A − λI; subtracting an eigenvalue makes the matrix singular

83

(A-li)x=0

for x, find the null space of A − λI

84

adding rI to A makes the eigenvalues what

(A + rI)x = λx + rx = (λ + r)x, so each eigenvalue increases by r (eigenvectors unchanged)

85

Ax = lx

b has eigenvalues l1

86

trace is

the sum of the eigenvalues
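For a 2 × 2 matrix the eigenvalues are the roots of λ^2 − Tr(A)λ + det(A) = 0, so the fact is easy to check numerically (a sketch; variable names are my own):

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]
tr = A[0][0] + A[1][1]                        # 4.0
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 3.0
# roots of l^2 - tr*l + det = 0 via the quadratic formula
disc = math.sqrt(tr * tr - 4 * det)           # sqrt(4) = 2.0
l1 = (tr + disc) / 2                          # 3.0
l2 = (tr - disc) / 2                          # 1.0
print(l1 + l2, tr)  # 4.0 4.0: the trace equals the sum of the eigenvalues
```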

87

det

the product of the eigenvalues

88

degenerate matrix

if its determinant is 0 (i.e. the matrix is singular)

89

determinant

the matrix is singular when the determinant is 0



92

3 prop

det(I) = 1

exchanging two rows reverses the sign of the determinant

3a. multiplying a row by t multiplies the determinant by t

3b. the determinant is linear in each row, so a row can be split into a sum of rows

93

det

ad − bc for a 2 × 2 matrix

3 × 3: expand as a1 − a2 + a3, each entry times the det of the matrix with its row and col taken out

cofactor of aij = (−1)^(i+j) × det of A with row i and col j removed

other approaches: split up the rows (linearity), or use elimination

94

cofactors

a signed sum of smaller determinants

95

if A is invertible

det(A^-1) = 1/det(A)

96

projection of a on b

proj_b a = ((a · b)/(b · b)) b

97

cramers rule

x = A^(-1)b = (1/det(A)) C^T b, where C is the cofactor matrix

98

xk

det(Bk)/det(A), where Bk is A with column k replaced by the vector b
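For a 2 × 2 system, Cramer's rule reads directly as code (a sketch; the function name is my own, and det(A) ≠ 0 is assumed):

```python
def cramer2(A, b):
    # x_k = det(B_k) / det(A), where B_k is A with column k replaced by b
    det_A = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    det_B1 = b[0] * A[1][1] - A[0][1] * b[1]   # column 1 replaced by b
    det_B2 = A[0][0] * b[1] - b[0] * A[1][0]   # column 2 replaced by b
    return [det_B1 / det_A, det_B2 / det_A]

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
print(cramer2([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0]))  # [1.0, 3.0]
```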