Optimisation Flashcards

(23 cards)

1
Q

What’s a feasible direction

A

At a feasible point x, a direction d is a feasible direction if an arbitrarily small move from x in direction d remains feasible.
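In symbols (a standard way of stating this; the notation x, d, S and epsilon is mine, not from the card):

```latex
% d is a feasible direction at a feasible point x of the feasible set S if
\exists\, \varepsilon > 0 \ \text{ such that }\ x + \alpha d \in S \quad \text{for all } 0 < \alpha \le \varepsilon
```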

2
Q

What’s a stationary point

A

A point x at which the gradient is zero, i.e. grad f(x) = 0.

3
Q

What’s unimodality

A

A function is unimodal if it has a single minimum.

4
Q

What’s strongly unimodal

A

A function is strongly unimodal if, along the straight line from any point to the minimum, the gradient along that line is negative, i.e. the function decreases monotonically all the way to the minimum.

5
Q

What makes a matrix positive definite

A

All eigenvalues are positive

6
Q

What makes a function convex?

A

If the line segment between any two points on the graph of the function lies above or on the curve.

Mathematically, for a differentiable function, the value of the function at any point is greater than or equal to the first two terms of its Taylor expansion about any other point (the function lies above its tangents).
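Both statements can be written out explicitly; a sketch in standard notation (x, y and lambda are mine, not from the card):

```latex
% Chord definition of convexity:
f(\lambda x + (1-\lambda)y) \;\le\; \lambda f(x) + (1-\lambda) f(y), \qquad 0 \le \lambda \le 1
% First-order (tangent / truncated Taylor) condition for differentiable f:
f(y) \;\ge\; f(x) + \nabla f(x)^\top (y - x) \qquad \text{for all } x, y
```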

7
Q

How can you tell if a hessian matrix is a positive definite matrix

A

The determinant of every leading principal minor of H is positive (Sylvester's criterion).
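A small NumPy sketch of both tests, the eigenvalue check from card 5 and the leading-principal-minor check from this card; the matrix H below is just an illustration:

```python
import numpy as np

H = np.array([[2.0, -1.0],
              [-1.0, 2.0]])   # illustrative symmetric Hessian

# Test 1 (card 5): all eigenvalues positive.
eig_test = np.all(np.linalg.eigvalsh(H) > 0)

# Test 2 (this card): every leading principal minor has positive determinant.
minor_test = all(np.linalg.det(H[:k, :k]) > 0 for k in range(1, H.shape[0] + 1))

print(eig_test, minor_test)   # True True for this matrix
```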

8
Q

What’s the order of convergence for steepest descent

A

1 (linear)
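For reference, "order of convergence p" is usually defined by the asymptotic error bound below (notation mine, not from the card); p = 1 with C < 1 is linear convergence, p = 2 is quadratic:

```latex
\|x_{k+1} - x^{*}\| \;\le\; C\,\|x_{k} - x^{*}\|^{\,p} \qquad \text{for all sufficiently large } k
```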

9
Q

What’s the order of convergence for Newton’s method (when it works)

A

2 (quadratic)

10
Q

For the steepest descent method, what can you assume about the Hessian?

A

That it is good enough that the step size can be taken as 1.

11
Q

What is the condition number of a function

A

The condition number of a function with respect to an argument measures how much the output value of the function can change for a small change in that input argument, i.e. how sensitive the function is to changes or errors in the input.
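For a scalar differentiable function, one common way to make this precise is the relative condition number (the formula is standard but not taken from the card):

```latex
\kappa(x) \;=\; \left|\frac{x\, f'(x)}{f(x)}\right|
\qquad \text{(relative change in output per relative change in input)}
```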

12
Q

What’s the order of convergence for conjugate gradient method

A

1 (linear)

13
Q

What is linear programming

A

An optimization problem in which both the objective and constraint functions are linear.
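Written out, one common form of a linear program is (notation mine; x are the control variables):

```latex
\min_{x}\; c^{\top} x
\qquad \text{subject to} \qquad
A x \le b, \quad x \ge 0
```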

14
Q

In linear programming where do we know the optimum lies

A

At an extremal point (a vertex) of the feasible region.

15
Q

What is the number of control variables that will be zero at a feasible region’s extremal points

A

n - m

where n is the number of control variables and m is the number of constraints

16
Q

What’s the simplex algorithm

A

Starts with an extremal feasible solution (a vertex of the feasible region), i.e. a particular set of nonzero control variables.
Then moves along an edge of the feasible set to another vertex where the objective is smaller.
Eventually it finds a vertex where every edge leading away would increase the objective; this is the optimum.
It turns out that finding an extremal feasible solution to start with can be nontrivial (this is called Phase 1).
Moving from vertex to vertex towards the best one is called Phase 2.
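A minimal sketch of solving a small LP with SciPy's off-the-shelf solver (the problem data is illustrative, and the HiGHS backend may use its own simplex variant rather than the textbook tableau):

```python
from scipy.optimize import linprog

# Illustrative problem: minimise -3*x1 - 2*x2
# subject to x1 + x2 <= 4 and 2*x1 + x2 <= 6, with x1, x2 >= 0.
c = [-3, -2]
A_ub = [[1, 1],
        [2, 1]]
b_ub = [4, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2, method="highs")
print(res.x, res.fun)   # optimum at the vertex x = [2, 2], objective = -10
```

The same example is worked by hand with a tableau under card 19.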

17
Q

What’s the particular solution to a differential equation

A

The solution obtained by assigning particular values to the arbitrary constants in the general solution.
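A quick worked example (my own, not from the card):

```latex
\frac{dy}{dx} = y \;\Rightarrow\; y = C e^{x} \ \text{(general solution)};
\quad y(0) = 2 \;\Rightarrow\; C = 2, \ \text{so the particular solution is } y = 2 e^{x}.
```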

18
Q

What happens in the simplex method when we have a choice of variables to increase (increasing either of them would reduce the objective function)?

A

Look at their coefficients in the objective function equation and choose the variable with the most negative coefficient.

19
Q

How do you construct the tableau for the simplex method

A

The tableau is [A b] in the top rows and [c^T 0] in the bottom row. The first n columns correspond to the variables (the final column is the right-hand side), and the first m rows correspond to the constraints.

Then perform Gaussian elimination as before to bring it into canonical form, with the columns of the basic (non-zero) variables forming an identity matrix.

Then eliminate the basic-variable coefficients from the last row: the bottom-right entry becomes the negative of the objective value, and the remaining entries of the last row are the reduced costs, which let you choose the entering variable.
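A NumPy sketch of these steps on the same illustrative LP as in card 16, with slack variables x3 and x4 added; it also uses the most-negative-coefficient rule from card 18 to pick the entering variable:

```python
import numpy as np

# Illustrative problem: minimise -3*x1 - 2*x2
# subject to x1 + x2 + x3 = 4 and 2*x1 + x2 + x4 = 6, all variables >= 0
# (x3 and x4 are slack variables).
A = np.array([[1., 1., 1., 0.],
              [2., 1., 0., 1.]])
b = np.array([4., 6.])
c = np.array([-3., -2., 0., 0.])

# Tableau: [A b] in the top rows, [c^T 0] in the bottom row.
T = np.vstack([np.hstack([A, b[:, None]]), np.append(c, 0.0)])
basis = [2, 3]                                   # the slack variables form the initial basis

while True:
    reduced_costs = T[-1, :-1]
    if reduced_costs.min() >= -1e-12:            # no negative reduced cost left: optimal
        break
    col = int(np.argmin(reduced_costs))          # entering variable (most negative coefficient)
    mask = T[:-1, col] > 1e-12
    ratios = np.full(len(b), np.inf)             # ratio test to pick the leaving variable
    ratios[mask] = T[:-1, -1][mask] / T[:-1, col][mask]
    row = int(np.argmin(ratios))
    T[row] /= T[row, col]                        # Gaussian elimination back to canonical form:
    for r in range(T.shape[0]):                  # basic columns again form an identity matrix
        if r != row:
            T[r] -= T[r, col] * T[row]
    basis[row] = col

x = np.zeros(len(c))
x[basis] = T[:-1, -1]
print("optimum x =", x)                          # [2. 2. 0. 0.]
print("objective =", -T[-1, -1])                 # bottom-right entry is minus the objective: -10
```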

20
Q

What are the optimality conditions for a minimum in the multi-variate case

A

The gradient is zero and the Hessian is positive definite (the determinants of the leading principal minors are positive).
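Compactly, the standard sufficient conditions for a strict local minimum at x* (notation mine):

```latex
\nabla f(x^{*}) = 0
\qquad \text{and} \qquad
\nabla^{2} f(x^{*}) \succ 0 \;\; \text{(positive definite)}
```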

21
Q

How does Newton’s method perform on quadratics?

A

It converges in one iteration.
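A tiny NumPy check on an illustrative quadratic: because the Hessian model is exact for a quadratic, one full Newton step (step size 1) lands on the minimiser.

```python
import numpy as np

# Illustrative quadratic f(x) = 0.5 * x^T A x - b^T x with symmetric positive definite A;
# its unique minimiser is the solution of A x = b.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0])

x0 = np.array([10.0, -7.0])              # arbitrary starting point
grad = A @ x0 - b                        # gradient of f at x0
hess = A                                 # Hessian of f (constant for a quadratic)
x1 = x0 - np.linalg.solve(hess, grad)    # one Newton step with step size 1

print(np.allclose(x1, np.linalg.solve(A, b)))   # True: the minimiser is reached in one step
```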

22
Q

How does the conjugate gradient method perform on quadratic problems

A

It converges in a number of iterations equal to the number of control variables.
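A small sketch of the textbook conjugate gradient iteration on an illustrative SPD quadratic; in exact arithmetic it minimises f(x) = 0.5 x^T A x - b^T x (equivalently solves A x = b) in at most n iterations, where n is the number of control variables:

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # an illustrative symmetric positive definite matrix
b = rng.standard_normal(n)

x = np.zeros(n)                      # starting point
r = b - A @ x                        # residual = negative gradient of the quadratic
p = r.copy()                         # first search direction
for _ in range(n):                   # at most n iterations for n control variables
    if np.linalg.norm(r) < 1e-12:    # already converged
        break
    Ap = A @ p
    alpha = (r @ r) / (p @ Ap)       # exact line search along p
    x = x + alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p             # next direction, conjugate to the previous ones
    r = r_new

print(np.allclose(A @ x, b))         # True (up to rounding) after n steps
```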

23
Q

Tip: To make optimisation problems easier, you can set a constant numerator or denominator equal to 1, since rescaling the objective by a positive constant does not change where the optimum lies.

A

You’re welcome