What are the three ways of multiplying vectors, and what are their respective outcomes?

###
- Multiplying a vector by a scalar (also known as scaling)
- Multiplying two vectors to obtain a scalar (dot product)
- Multiplying two vectors to obtain a new vector (vector product)

What is the expression for the dot product?

It is defined as the product of the magnitudes of the two vectors multiplied by the cosine of the angle 𝜃 between them: A·B = |A||B|cos𝜃.

What is the vector product/cross product?

The magnitude of the vector product of two vectors is defined as the product of their magnitudes multiplied by the **sine** of the angle 𝜃 between them: |A×B| = |A||B|sin𝜃.

The vector product is a vector that acts perpendicular to both vectors A and B.

If you curl the fingers of your right hand from the first vector A towards the second vector B, the direction of the cross product is given by your thumb.
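As a minimal Python sketch of both products for 3-component vectors (the helper names `dot` and `cross` are my own, not from these notes):

```python
def dot(a, b):
    # A . B = |A||B|cos(theta) = sum of component-wise products
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    # A x B is perpendicular to both A and B (right-hand rule)
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

i, j = [1, 0, 0], [0, 1, 0]
print(dot(i, j))    # perpendicular vectors: cos 90 degrees = 0
print(cross(i, j))  # i x j = k = [0, 0, 1]
```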

Before you add matrices, what do both matrices need to have?

They need to have the same dimensions.

What conformability do the two matrices need before multiplying?

If there is conformability what are the dimensions of the product matrix?

The number of columns of the 'lead' (1st) matrix must equal the number of rows of the 'lag' (2nd) matrix.

It is equal to the number of rows of the 'lead' matrix by the number of columns of the 'lag' matrix.

If two matrices conform for multiplication, what are the values in the product?

Each individual value is equal to the sum of the products of the corresponding values in a row (lead matrix) and a column (lag matrix).
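The conformability rule and the row-times-column sum can be sketched in Python (the helper name `matmul` is my own):

```python
def matmul(A, B):
    # Conformability: columns of the lead matrix A must equal
    # rows of the lag matrix B
    assert len(A[0]) == len(B)
    # Each entry is the sum of products across a row of A
    # and down a column of B
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

# A 2*3 matrix times a 3*2 matrix gives a 2*2 product
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]
print(matmul(A, B))  # [[58, 64], [139, 154]]
```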

Is matrix multiplication commutative and associative?

It is not commutative, but it is associative and distributive.

Determinants are only defined for...

square matrices.

How is the determinant of a 2x2 matrix determined?

How would you determine the determinant of this 3x3 matrix?

Using the most intuitive method.

First, append the first two columns of the matrix as two new columns, 4 and 5, on the right.

All the terms you add come from the diagonals running down to the right, starting from a_{11}, a_{12} and a_{13}.

All the terms you subtract come from the diagonals running down to the left, starting from a_{13}, a_{14} and a_{15} (the appended copies of a_{11} and a_{12}).

What is the cofactor and how is it defined?

M_{ij} is the minor obtained by deleting the row and column defined.

The cofactor is the minor multiplied by -1 raised to the power of the row plus the column, C_{ij} = (-1)^{i+j} M_{ij}. It has the same sign as the minor if the sum of i and j is **even**, and the opposite sign if the sum is **odd**.

How can the third order determinant be expressed through the Laplace expansion?

Plug in the values for a_{11} to a_{13}

Determine the cofactors for each of the minors.
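A Laplace expansion along the top row, with the alternating cofactor signs, might be sketched like this (helper names are my own):

```python
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3_laplace(m):
    total = 0
    for j in range(3):
        # Minor: delete the top row and column j
        minor = [[row[c] for c in range(3) if c != j] for row in m[1:]]
        # Cofactor sign (-1)^(i+j): alternating +, -, + along the top row
        total += (-1) ** j * m[0][j] * det2(minor)
    return total

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det3_laplace(M))  # -3
```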

What does interchanging any two rows (or columns) do to the value of the determinant?

It will change the sign but not the magnitude.

What does multiplying any row (or column) by a scalar do to the determinant?

It will multiply the determinant k-fold.

If one row (or column) is a multiple of another row (or column), what is the value of the determinant?

zero

Under what conditions can an inverse matrix be defined for a matrix?

It can be defined if it is a square matrix.

What does a matrix require to be __non-singular__?

its rows (or columns) must be linearly independent.

What is the rank of a matrix?

The rank is defined as the __maximum number of linearly independent rows or columns in the matrix__.

An n*n __non-singular__ matrix is of rank n.

For an m*n matrix, the rank can be at most m or n, whichever is smaller.

Another way of determining the rank would be to __find the largest non-vanishing determinant__ that can be constructed from the rows and columns of the matrix.
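The rank can also be computed by row reduction, a standard alternative to checking determinants; here is a sketch in pure Python (the function name and tolerance are my own):

```python
def rank(M, eps=1e-9):
    # Row-reduce a copy; the number of non-zero pivots equals the
    # maximum number of linearly independent rows
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if abs(M[i][c]) > eps), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

print(rank([[1, 2], [2, 4]]))        # 1: row 2 is a multiple of row 1
print(rank([[1, 0], [0, 1]]))        # 2: non-singular, full rank
print(rank([[1, 2, 3], [4, 5, 6]]))  # 2: at most min(2, 3)
```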

Can you determine the cofactor for each element in a matrix?

Yes.

What's really important to remember when multiplying matrices?

You need to be multiplying __across the row of one matrix__ and __the column of another__.

One row stays fixed as you move across the columns of the other matrix.

Keep focused! It's easy to mess up!

What is the general formula for the inverse of a matrix?

Where the determinant is that of the original matrix.

The cofactor matrix needs to be transposed to obtain the adjoint.
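For a 2x2 matrix the adjoint-over-determinant formula can be sketched directly (the function name and example matrix are my own):

```python
def inverse2(m):
    # Inverse = adjoint / determinant, where the adjoint is the
    # transposed matrix of cofactors
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    adj = [[ m[1][1], -m[0][1]],
           [-m[1][0],  m[0][0]]]
    return [[a / det for a in row] for row in adj]

A = [[4, 7],
     [2, 6]]
print(inverse2(A))  # [[0.6, -0.7], [-0.2, 0.4]]
```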

How do you find the cartesian equation from a vector equation?

Separate all the vector components into their respective cartesian formats.

From the resultant equations, try to combine them to obtain one cartesian equation.

What is the scalar product form?

Explain why this is the case.

###
- When the 1st derivative is 0 but the 2nd derivative is positive, the gradient is increasing as we move from left to right. If the gradient goes from negative to positive across the critical point, we know that the critical point is a minimum.
- If the 2nd derivative is negative, the gradient is becoming more negative as we move from left to right. A gradient decreasing in value happens at a maximum.

What is the integral of this expression?

What is d(y^{2})/dx?

Using the chain rule the outcome would be: 2y(dy/dx)

Integration by parts formula.

Remember the value you get by integrating is in both terms.

What is the order of an ODE?

It is the order of the highest derivative in the equation.

What defines an ODE as linear or non-linear?

Is the attached expression linear or non-linear?

A differential equation is linear if the terms involving the __dependent variable__ (y in dy/dx) and its derivatives are all __linear terms__.

An ODE is linear if the unknown function (e.g. y) and its derivatives appear only to the power one, and there is no product of the unknown function and its derivatives.

It's non-linear because of sin(y) as well as y(dy/dx).

Linear or non-linear?

Non-linear, because of the absolute-value term.

What is the degree of an ODE?

What is the degree of the attached derivative?

The degree is the highest power the highest derivative is raised to.

First degree; the degree is determined by the power of the highest derivative.

What makes an ODE homogeneous or non-homogeneous?

The homogeneity is determined by putting all the terms containing the dependent variable on the LHS and all the rest of the variables on the right-hand side.

If the terms on the RHS equate to zero, the ODE is said to be homogeneous. If it isn't equal to zero, then it's non-homogeneous.

Homogenous or non-homogeneous?

Homogeneous, because all the x terms are attached to the dependent variable y, so they stay on the LHS.

Does the differential equation have to be linear to apply the integration factor method?

Yes it does.

If you're going to use the integration factor method, can the derivative term have any coefficients?

No it can't, any coefficients need to be divided out.

How do you obtain the general solution of a 2nd order **homogeneous** ODE?

For real distinct roots.

The general solution is of the form 𝑦 = 𝐴𝑒^{𝑚1𝑥}+ 𝐵𝑒^{𝑚2𝑥} .

The value of m can be found through the auxiliary equation, 𝒂𝒎^{𝟐} + 𝒃𝒎 + 𝒄 = 𝟎

The values of a, b, c are the coefficients of the respective terms in the ODE.

The values m1 and m2 are plugged back into the general solution equation.

To find the constants A and B, initial conditions are required.
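The steps above can be sketched in Python for the real distinct roots case (the function name and example ODE are my own):

```python
import math

def auxiliary_roots(a, b, c):
    # Roots of the auxiliary equation a*m^2 + b*m + c = 0
    # (assumes real distinct roots, i.e. b^2 > 4ac)
    disc = math.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# Example: y'' - 3y' + 2y = 0  ->  m^2 - 3m + 2 = 0
m1, m2 = auxiliary_roots(1, -3, 2)
print(m1, m2)  # 2.0 1.0, so y = A e^{2x} + B e^{x}
```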

How do you obtain the general solution of a 2nd order homogeneous ODE with real coincident roots?

Similar to real distinct roots; however, the general solution required is slightly different.

𝑦 = 𝐴𝑒^{𝑚𝑥} + 𝐵𝑥𝑒^{𝑚𝑥 }

Compared to the general solution for real distinct roots, one of the 𝑒^{𝑚𝑥} terms has an x in front of it. This is the only difference.

What is the general solution for a 2nd order homogeneous ODE with complex roots?

𝒚 = 𝒆^{𝒑𝒙}( 𝑪*𝐜𝐨𝐬(𝒒𝒙) + 𝑫*𝐬𝐢𝐧(𝒒𝒙) ), where the roots of the auxiliary equation are m = p ± qi.

How do you solve non-homogeneous 2nd order ODEs?

The general solution for a non-homogeneous 2nd order ODE is the __sum of the complementary function and the particular integral__.

The complementary function is based on the auxiliary equations used for homogeneous 2nd order ODEs: to find it, set the RHS of the ODE to zero.

To find the particular integral, you first need to know the form. This can be given, or it has to be found. Once the form is obtained, find the first and second derivatives of the form and plug back into the equation.

To solve, equate the 'x' terms on the LHS to those on the RHS, and likewise equate the leftover terms on the LHS to those on the RHS.

How do you find the form of the particular integral?

First, try the same form as Q(x).

If this form is the same as any of the terms in the complementary function, try x*Q(x).

If this still doesn't work, try x^{2}*Q(x).

For each of these complementary functions and forms of Q(x), what is the most suitable first choice for the particular integral?

What makes a set of vectors linearly independent?

A set of vectors is **linearly independent** if __no vector in the set is a scalar multiple of another vector__ in the set.

It is also required that no vector __is a linear combination of any of the other vectors__ (by addition or subtraction).
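For three 3-component vectors, independence can be checked with a determinant: it vanishes exactly when the set is dependent. A sketch (the function name and example vectors are my own):

```python
def det3(m):
    # Rule of Sarrus for a 3x3 matrix whose rows are the vectors
    return (m[0][0] * m[1][1] * m[2][2] + m[0][1] * m[1][2] * m[2][0]
            + m[0][2] * m[1][0] * m[2][1] - m[0][2] * m[1][1] * m[2][0]
            - m[0][0] * m[1][2] * m[2][1] - m[0][1] * m[1][0] * m[2][2])

a, b = [1, 0, 0], [0, 1, 0]
c = [1, 1, 0]   # c = a + b: a linear combination
d = [0, 0, 1]
print(det3([a, b, c]))  # 0: the set {a, b, c} is linearly dependent
print(det3([a, b, d]))  # 1: the set {a, b, d} is linearly independent
```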

What is the relation between a and b?

What is the relation between a and d?

What is the relation between a, b and c?

a and b are linearly independent

a and d are linearly dependent

c is a linear combination of a and b, so the vectors are linearly dependent.

What are the two types of random variables?

And what are the two probability distribution functions?

A random variable can either be discrete or continuous.

The distribution of a **discrete** random variable can be specified by a __probability mass function__.

The distribution of a **continuous** random variable is specified by a __probability density function__.

What is the expected value of a random variable X? And what does it represent?

The expected value (denoted by E(X) ) of a random variable X is a **‘weighted average’** of X with respect to its underlying probability distribution.

How would you find the expectation value of a **discrete** random variable X?

How would you find the expectation value of a **continuous** random variable X?
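For the discrete case, the weighted average E(X) = Σ x·P(X = x) can be sketched as (the function name and die example are my own):

```python
def expectation(pmf):
    # E(X) = sum over x of x * P(X = x): a probability-weighted average
    return sum(x * p for x, p in pmf.items())

fair_die = {k: 1 / 6 for k in range(1, 7)}
print(expectation(fair_die))  # 3.5 (up to float rounding)
```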

What is the rank of a n*n __non-singular__ matrix?

An n*n non-singular matrix is of rank n.

How do you determine the stationary points of a multivariate function?

[x refers to the first coordinate and y to the 2nd coordinate, (x_{0}, y_{0})]

First, find the partial derivatives.

By equating the partial derivatives to zero, the stationary points can be found.

To determine the nature of these stationary points, the discriminant needs to be determined.

The discriminant consists of the product of the two 2nd-order partial derivatives minus the __square of the mixed 2nd-order partial derivative__: ∆ = f_{xx}·f_{yy} − (f_{xy})².

If ∆ > 0 and the 2nd partial derivative of the first coordinate *f*_{xx } < 0 then it's a __maximum__.

If ∆ > 0 and the 2nd partial derivative of the first coordinate *f*_{xx } > 0 then it's a __minimum__.

If ∆ < 0, it's a __saddle point__: thus it is neither a maximum nor a minimum.

If ∆= 0, it’s not possible to classify stationary points using this method.
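The classification rules above can be sketched as a small function (the function name and example functions are my own):

```python
def classify(fxx, fyy, fxy):
    # Discriminant test: delta = f_xx * f_yy - (f_xy)^2
    delta = fxx * fyy - fxy ** 2
    if delta > 0:
        return "minimum" if fxx > 0 else "maximum"
    if delta < 0:
        return "saddle point"
    return "inconclusive"

# f(x, y) = x^2 + y^2 at (0, 0): f_xx = 2, f_yy = 2, f_xy = 0
print(classify(2, 2, 0))   # minimum
# f(x, y) = x^2 - y^2 at (0, 0): f_xx = 2, f_yy = -2, f_xy = 0
print(classify(2, -2, 0))  # saddle point
```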

How would you determine the stationary points of partial derivative equations coming from a multivariate equation?

For each partial derivative, you need to determine the values of BOTH variables that make it equal 0.

What z score range do you use to get the 95% confidence interval for a normal distribution?

z = ±1.96 (the central 95% of a standard normal lies between −1.96 and +1.96).

If you need to divide the LHS by a matrix on the RHS, what do you do?

You can't actually divide, but you can multiply by the inverse of the matrix.

The inverted matrix is equal to C^{T} (the transposed cofactor matrix, i.e. the adjoint), divided by the determinant.

Remember how to take the determinant of a 3x3 matrix.

What is the matrix of cofactors for a 3x3?

What does the determinant of a 3x3 matrix look like?

What do you need to remember?

You expand across the top row of values.

The determinants are obtained by crossing out the row and column the top value is in; the determinant is then applied to the remaining 2x2 set of values.

A key thing to remember is that the middle term is negative, in the same way it is in the cross product.

For a normal distribution, what do these values represent?

The mean and the standard deviation.