Optimisation Flashcards
(23 cards)
What’s a feasible direction
At a feasible point x, a direction d is a feasible direction if an arbitrarily small move from x in direction d remains feasible.
What’s a stationary point
A point x at which the gradient vanishes: ∇f(x) = 0.
What’s unimodality
A function is unimodal if it has a single minimum.
What’s strongly unimodal
A function is strongly unimodal if, moving along the straight line from any point towards the minimum, the function decreases monotonically (the directional derivative along that line is negative).
What makes a matrix positive definite
All of its eigenvalues are positive (equivalently, xᵀAx > 0 for every nonzero vector x).
What makes a function convex?
If the line segment between any two points on the graph of the function lies on or above the graph.
Mathematically, the value of the function is greater than or equal to the first two terms of its Taylor expansion at any point: f(y) ≥ f(x) + ∇f(x)ᵀ(y − x) for all x, y.
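A tiny numerical check of that first-order condition (a minimal sketch; the convex function f(x) = eˣ and the sample points are illustrative assumptions):

import numpy as np

f = np.exp                                   # a convex function
df = np.exp                                  # its derivative
xs = np.linspace(-2.0, 2.0, 9)
for x0 in xs:
    tangent = f(x0) + df(x0) * (xs - x0)     # first two Taylor terms at x0
    assert np.all(f(xs) >= tangent - 1e-12)  # the graph lies above every tangent
print("first-order convexity condition holds at all sampled points")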
How can you tell if a hessian matrix is a positive definite matrix
The determinant of each leading principal minor of H is positive (Sylvester's criterion).
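A minimal sketch (assuming numpy) of checking positive definiteness both ways, via Sylvester's criterion and via the eigenvalues:

import numpy as np

def is_positive_definite(H):
    # Sylvester's criterion: every leading principal minor has positive determinant.
    H = np.asarray(H, dtype=float)
    return all(np.linalg.det(H[:k, :k]) > 0 for k in range(1, H.shape[0] + 1))

H = np.array([[2.0, 1.0],
              [1.0, 3.0]])                    # example Hessian (assumed)
print(is_positive_definite(H))                # True
print(np.all(np.linalg.eigvals(H) > 0))       # True: all eigenvalues positive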
What’s the order of convergence for steepest descent
1 (linear)
What’s the order of convergence for Newton’s method (when it works)
2 (quadratic)
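To see the two rates side by side, here is a minimal sketch (the test function f(x) = x² + eˣ, the starting point and the fixed step size are illustrative assumptions, not from the cards):

import numpy as np

# f(x) = x**2 + exp(x): convex but not quadratic, so the rates differ.
grad = lambda x: 2*x + np.exp(x)
hess = lambda x: 2 + np.exp(x)

x_sd = x_nt = 1.0                       # same starting point for both methods
alpha = 0.2                             # fixed steepest-descent step size (assumed)
for k in range(8):
    x_sd = x_sd - alpha * grad(x_sd)            # steepest-descent step
    x_nt = x_nt - grad(x_nt) / hess(x_nt)       # Newton step
    print(k, abs(grad(x_sd)), abs(grad(x_nt)))
# The steepest-descent gradient shrinks by a roughly constant factor per
# iteration (order 1); the Newton gradient is roughly squared (order 2).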
For the steepest descent method, what can you assume about the hessian
That it is well approximated by the identity matrix: the steepest-descent direction −∇f is the Newton direction with H ≈ I, and if that approximation is good the step size can be taken as 1.
What is the condition number of a function
The condition number of a function with respect to an argument measures how much the output of the function can change for a small change in the input argument, i.e. how sensitive the function is to changes or errors in its input.
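For the linear map f(x) = Ax this sensitivity is κ(A) = ‖A‖‖A⁻¹‖; a minimal sketch (assuming numpy, with an illustrative matrix):

import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1e-3]])            # badly scaled example matrix (assumed)
kappa = np.linalg.cond(A)              # ratio of largest to smallest singular value
print(kappa)                           # ~1000: relative input errors can be amplified ~1000x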
What’s the order of convergence for conjugate gradient method
1 (linear)
What is linear programming
An optimization problem in which both the objective and constraint functions are linear.
In linear programming where do we know the optimum lies
At an extremal point (a vertex) of the feasible region.
What is the number of control variables that will be zero at a feasible region’s extremal points
n − m, where n is the number of control variables and m is the number of constraints.
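A minimal sketch (the problem data are an illustrative assumption) using scipy.optimize.linprog; at the optimum exactly n − m of the variables are zero, i.e. the solution sits at a vertex:

import numpy as np
from scipy.optimize import linprog

# minimise c^T x  subject to  A_eq x = b_eq, x >= 0  (n = 4 variables, m = 2 constraints)
c = np.array([-1.0, -2.0, 0.0, 0.0])       # i.e. maximise x1 + 2*x2
A_eq = np.array([[1.0, 1.0, 1.0, 0.0],     # x1 +   x2 + s1 = 4
                 [1.0, 3.0, 0.0, 1.0]])    # x1 + 3*x2 + s2 = 6
b_eq = np.array([4.0, 6.0])

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x, res.fun)                      # [3. 1. 0. 0.] -5.0: n - m = 2 variables are zero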
What’s the simplex algorithm
Starts with an extremal feasible solution (a vertex of the feasible region), i.e. a particular set of nonzero control variables.
Then moves along an edge of the feasible set to another vertex where the objective is smaller.
Eventually finds a vertex where every edge leading away would increase the objective; this is the optimum.
It turns out that finding an extremal feasible solution to start with can be nontrivial (this is called Phase 1).
Moving from vertex to vertex to reach the best one is called Phase 2.
What’s the particular solution to a differential equation
The solution obtained by assigning particular values to the arbitrary constants in the general solution.
What happens in simplex method when we have a choice of variables to increase (increasing both of the variables yields a reduction in the objective function)
Look at their coefficients in the objective-function row and choose the variable with the most negative coefficient.
How do you construct the tableau for the simplex method
The top m rows are [A b] and the bottom row is [cᵀ 0]. The first n columns of the tableau correspond to the variables (the final column is the right-hand side) and the first m rows correspond to the constraints.
Then perform Gaussian elimination as before to bring the tableau into canonical form, with the columns of the basic (non-zero) variables forming the identity matrix.
Then eliminate the basic-variable coefficients from the last row: this leaves the negative of the objective value in the bottom-right entry and the reduced costs, which are used to choose the entering variable.
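A minimal sketch of this tableau procedure (assuming numpy, and that a starting basic feasible solution is already known, i.e. Phase 1 is done; the data reuse the small LP above):

import numpy as np

def simplex(A, b, c, basis):
    # Tableau simplex for: min c^T x  s.t.  A x = b, x >= 0,
    # starting from the basic feasible solution given by `basis`
    # (the indices of the m basic variables, one per constraint row).
    m, n = A.shape
    T = np.zeros((m + 1, n + 1))
    T[:m, :n], T[:m, n] = A, b              # top rows:    [A  b]
    T[m, :n] = c                            # bottom row:  [c^T 0]
    while True:
        # Bring into canonical form: basic columns become unit vectors and
        # the basic variables are eliminated from the objective row.
        for row, j in enumerate(basis):
            T[row, :] /= T[row, j]
            for r in range(m + 1):
                if r != row:
                    T[r, :] -= T[r, j] * T[row, :]
        # Reduced costs: the most negative one picks the entering variable.
        j_in = int(np.argmin(T[m, :n]))
        if T[m, j_in] >= -1e-12:
            break                           # no edge decreases the objective: optimal
        # Ratio test picks the leaving variable (keeps the iterate feasible).
        ratios = [T[i, n] / T[i, j_in] if T[i, j_in] > 1e-12 else np.inf
                  for i in range(m)]
        basis[int(np.argmin(ratios))] = j_in
    x = np.zeros(n)
    x[basis] = T[:m, n]
    return x, -T[m, n]                      # bottom-right holds minus the objective value

A = np.array([[1.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])
print(simplex(A, b, c, basis=[2, 3]))       # slacks start basic -> x = [3, 1, 0, 0], f = -5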
What are the optimality conditions for a minimum in the multi-variate case
The gradient is zero and its Hessian is positive definite (the determinants of the leading principal minors are positive).
How does Newton’s method perform on quadratics?
It converges in one iteration.
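A quick numerical check (the quadratic data and the starting point are illustrative assumptions): for f(x) = ½xᵀHx − bᵀx a single Newton step from anywhere lands exactly on the minimiser H⁻¹b.

import numpy as np

H = np.array([[4.0, 1.0],
              [1.0, 3.0]])              # positive-definite Hessian (assumed)
b = np.array([1.0, 2.0])
grad = lambda x: H @ x - b              # gradient of f(x) = 0.5*x^T H x - b^T x

x0 = np.array([10.0, -7.0])             # arbitrary starting point
x1 = x0 - np.linalg.solve(H, grad(x0))  # one Newton step
print(np.allclose(x1, np.linalg.solve(H, b)))   # True: already at the minimiser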
How does the conjugate gradient method perform on quadratic problems
It converges in a number of iterations equal to the number of control variables.
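A minimal linear conjugate-gradient sketch (assuming numpy; the 3 × 3 quadratic is an illustrative assumption) showing the residual reaching (numerically) zero by the n-th iteration:

import numpy as np

# Minimising f(x) = 0.5*x^T H x - b^T x is the same as solving H x = b.
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])         # positive definite, n = 3 (assumed)
b = np.array([1.0, 2.0, 3.0])

x = np.zeros(3)
r = b - H @ x                           # residual = minus the gradient
p = r.copy()                            # first search direction
for k in range(3):                      # at most n = 3 iterations needed
    alpha = (r @ r) / (p @ H @ p)       # exact line search along p
    x = x + alpha * p
    r_new = r - alpha * (H @ p)
    print(k, np.linalg.norm(H @ x - b)) # residual ~0 by the third iteration
    if np.linalg.norm(r_new) < 1e-12:
        break
    beta = (r_new @ r_new) / (r @ r)    # keeps the new direction H-conjugate to p
    p, r = r_new + beta * p, r_new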
Tip: To make optimisation problems easier you can set a constant numerator or denominator to 1, since rescaling by a positive constant does not move the minimum.
You’re welcome