Constrained Optimization Flashcards

1
Q

What is the formulation for standard form in constrained optimization?

(hint: any particular problem can be arranged into this form)

(German’s Notes)

A
2
Q
A
3
Q
A
4
Q
A
5
Q
A
6
Q

What is the Lagrangian function?

A
7
Q
A
8
Q

What are the KKT conditions?

A
9
Q

What are the implications of…

A
10
Q

If there is only one active inequality constraint, the convex cone collapses to a…

A

line, so the objective function’s gradient must point in exactly the opposite direction to the inequality constraint’s gradient.
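
In symbols, as a minimal sketch (assuming the deck’s standard form with g(x) ≤ 0 and stationarity written as ∇f + μ∇g = 0):

    \nabla f(\mathbf{x}^*) = -\mu \, \nabla g(\mathbf{x}^*), \qquad \mu \ge 0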

11
Q

What is the drawing of f, g1, and g2 from this example problem?

A
12
Q

What is the Lagrangian of this example problem?

A
13
Q

What are the second and third KKT conditions of this example problem?

A
14
Q

If given two points that satisfy KKT conditions 2 and 3, which, if either, could be the constrained minimum? Draw it out.

A
15
Q

Even if we can “solve” a constrained optimization problem using the KKT conditions to find a point 𝒙*, we are not guaranteed that the point is a local optimum. Why is that?

A

The reason is that the KKT conditions are only “necessary” conditions for a constrained local optimum. We would need to meet the “sufficient” conditions to guarantee a local optimum.
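
A minimal illustrative example (not from the notes): for

    \min_x \; f(x) = -x^2 \quad \text{s.t.} \quad -1 \le x \le 1,

the point x* = 0 satisfies the KKT conditions (∇f(0) = 0, no constraints active, all multipliers zero), yet it is a local maximum; the constrained minima are at x = ±1.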

16
Q

Do the KKT Conditions Always Apply?

A

No.

The KKT Conditions apply only if the constraints satisfy some regularity conditions called “constraint qualifications.”

A commonly applied condition is that the gradients of the active constraints must be linearly independent at the point at which the KKT conditions are being checked.

Other, less restrictive conditions also exist.
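
Stated in symbols (one common way to write this qualification, assuming the deck’s g_i and h_j notation):

    \{\nabla h_j(\mathbf{x}^*)\}_{j} \;\cup\; \{\nabla g_i(\mathbf{x}^*) : g_i(\mathbf{x}^*) = 0\} \quad \text{is linearly independent (LICQ)}.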

17
Q

How would you draw this?

A
18
Q

What are the two general types of approaches for solving constrained optimization problems

A
  1. Indirect methods: modify the objective function with a penalty function to account for the influence of the constraints. The problems are then solved with unconstrained optimization techniques.
  2. Direct methods: enforce the constraints explicitly to achieve a solution. These methods therefore depart considerably from unconstrained optimization techniques.
19
Q

For indirect methods, what is the pseudo-objective function?

A
20
Q

What are Sequential Unconstrained Minimization Techniques (SUMTs)?

A

SUMTs are indirect methods for constrained optimization.

They are used to address the issue that the steepness of the penalty function may cause numerical ill-conditioning when unconstrained algorithms are applied.
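
A minimal Python sketch of the SUMT idea, assuming an exterior-type pseudo-objective Φ(x, r) = f(x) + r·p(x); the function names and default parameters below are illustrative, not from the notes:

    import numpy as np
    from scipy.optimize import minimize  # any unconstrained minimizer works here

    def sumt(f, penalty, x0, r0=1.0, gamma=10.0, tol=1e-6, max_outer=20):
        # Solve a sequence of unconstrained subproblems min_x f(x) + r*penalty(x),
        # increasing r gradually so each subproblem stays well conditioned and the
        # previous solution warm-starts the next one.
        x, r = np.asarray(x0, dtype=float), r0
        for _ in range(max_outer):
            res = minimize(lambda xk, r=r: f(xk) + r * penalty(xk), x)
            if np.linalg.norm(res.x - x) < tol:
                return res.x
            x, r = res.x, gamma * r  # warm start next subproblem, steepen penalty
        return x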

21
Q

What are the three general types of indirect methods?

A
  • Exterior penalty function methods
  • Interior penalty function methods
  • Augmented Lagrangian methods
22
Q

In the exterior penalty function method, the penalty function 𝑝(x) is formulated as…

A
23
Q

In the exterior penalty function method, what is the corresponding pseudo-objective?

A
24
Q

Describe the Exterior Penalty Function Method

A

The exterior penalty function increases the objective only when the search proceeds in the region exterior to the feasible region, i.e. within the infeasible region.
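
A commonly used quadratic form of this penalty (stated as an assumption, since the deck’s own formula cards are blank):

    p(\mathbf{x}) = \sum_i \big[\max\!\big(0,\, g_i(\mathbf{x})\big)\big]^2 + \sum_j \big[h_j(\mathbf{x})\big]^2,
    \qquad
    \Phi(\mathbf{x}, r_p) = f(\mathbf{x}) + r_p\, p(\mathbf{x}),

so the penalty is zero everywhere in the feasible region and grows quadratically outside it.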

25
Q

Pros and Cons of the Exterior Penalty Method?

A
26
Q

Describe the Interior Penalty Function Method

A
27
Q

What is the common form of the interior penalty function?

A
28
Q

What is the logic for the interior penalty function method?

A
29
Q

What are the Pros/Cons of Interior Penalty Method?

A
30
Q

What is the Extended Interior Penalty Function Method?

A
31
Q

What is the Linear Extended Interior Penalty Function?

A
32
Q

What is the pseudocode for the Extended Interior Penalty Function Method?

A
33
Q

In penalty function methods, it is advantageous to scale the constraints such that the gradient of the constraints is of the same order of magnitude as the gradient of the objective function. What are the advantages?

A
34
Q

What is the augmented Lagrangian function?

A
35
Q

What are the steps for the augmented Lagrangian method for equality-constrained problems?

A
36
Q

What are the advantages of the Augmented Lagrangian Method?

A
37
Q

What is the “Standard Form” of a Linear Programming Problem

A
38
Q

For the conversion of a Linear Problem to Standard Form, the standard form may appear to be very restrictive because it seems not to allow for:

A
  • Negative design variables
  • Inequality constraints

However, it turns out that we can convert arbitrary linear problems to standard form rather easily.
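
For illustration (a standard trick, not taken from the deck’s own conversion card): an unrestricted variable can be split into two nonnegative parts,

    x_i = x_i^{+} - x_i^{-}, \qquad x_i^{+} \ge 0, \; x_i^{-} \ge 0,

and inequality constraints are handled with slack variables, as described in the next card.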

39
Q

What are slack variables?

A

Slack variables are so named because they “take up the slack” of an inactive inequality constraint and create an equivalent active equality constraint.
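
A small worked example (illustrative):

    3x_1 + 2x_2 \le 12 \;\;\Longrightarrow\;\; 3x_1 + 2x_2 + s = 12, \quad s \ge 0,

where s = 0 exactly when the original inequality is active.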

40
Q

Linear programming problems can be written in standard matrix form as…

A
41
Q

What are the four types of possible outcomes when solving LP problems?

A
  1. Unique solution
  2. Non-unique solution
  3. Unbounded solution
  4. No solution
42
Q

Two major classes of algorithms for solving LP problems

A
  1. Basis exchange methods, e.g. the simplex method
  2. Interior point methods, e.g. Karmarkar’s algorithm
43
Q

What is the Simplex Method?

A
44
Q

What are Direct Methods?

A
45
Q

What are some types of direct methods?

A
  • Sequential linear programming
  • Method of feasible directions
  • Generalized reduced gradient method
  • Sequential quadratic programming
46
Q

What is sequential linear programming (SLP)?

A

SLP repeatedly solves linear programming problems to march toward a solution to a constrained nonlinear optimization problem.

47
Q

The general approach of sequential linear programming is as follows:

A
  1. Linearize the optimization problem around a baseline point in the design space to obtain a linear program (LP).
  2. Add additional side constraints called “move limits” around the baseline point to prevent the LP problem from being unbounded (solutions at ±∞ for some of the design variables).
  3. Solve the LP with any appropriate technique such as simplex.
  4. Set the new baseline point as the optimum found in step 3.
  5. Repeat steps 1 to 4 until a convergence criterion is met (a rough sketch of this loop follows below).
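
A rough Python sketch of this loop; the helpers linearize_at (returning LP data about the current point) and solve_lp (e.g. a simplex routine) are assumed placeholders, not routines from the notes:

    import numpy as np

    def slp(x0, linearize_at, solve_lp, move_limit=0.5, shrink=0.7,
            tol=1e-6, max_iter=50):
        # Sequential Linear Programming (sketch): linearize about x, bound the
        # step with move limits so the LP stays bounded, solve the LP, and move
        # the baseline point to the LP optimum until the step becomes negligible.
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            c, A, b = linearize_at(x)                      # local LP data (step 1)
            lower, upper = x - move_limit, x + move_limit  # move limits (step 2)
            x_new = solve_lp(c, A, b, lower, upper)        # e.g. simplex (step 3)
            if np.linalg.norm(x_new - x) < tol:            # convergence (step 5)
                return x_new
            x = x_new                                      # new baseline (step 4)
            move_limit *= shrink  # optionally tighten move limits as we converge
        return x
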
48
Q

What is the Method of Feasible Directions (MoFD)?

A
49
Q

What are Usable Directions?

A
50
Q

What are feasible directions?

A
51
Q

What are the requirements for usability and feasibility?

A
52
Q

For MoFD Direction Finding, the “fastest feasible improvement” could be achieved if we could select the search direction 𝒔 such that…

A
53
Q

The direction of “fastest feasible improvement” can be formulated as a “direction-finding” optimization problem:

A
54
Q

MoFD specifies that the search direction obey the following requirement for all active constraints:

A
55
Q

For MoFD, when is a constraint active?

A
56
Q

For MoFD, what is the Push-Off Factor?

A
57
Q

For Method of Feasible Direction, the two common forms for bounds on the magnitude of the search direction (s) are…

A
58
Q

For MoFD, the direction-finding problem with side constraints on 𝒔:

A
59
Q

The direction-finding problem with 2-norm constraints on 𝒔 is:

A
60
Q

For MoFD line searches: to do a line search along the search direction found in the direction-finding problem, we can proceed as follows:

A
61
Q

What is the MoFD Convergence Criteria?

A

In order to determine if the algorithm has reached a minimum, we have to check two possibilities:

  • A minimum occurs at a point at which one or more of the constraints are active, i.e. the minimum is on the constraint boundary
  • A minimum occurs without any constraints active, i.e. the minimum is in the interior of the feasible region
62
Q

MoFD overall algorithm is…

A
63
Q

What is the Generalized Reduced Gradient Method?

A

The concept of the generalized reduced gradient method is to move in search directions along active constraints, presuming that the constraints stay active during the move.

If a constraint becomes inactive during the move because of nonlinearity, we step back to the constraint with Newton’s method for root finding.

The formulation begins by converting a general constrained optimization problem to a problem with only equality constraints by introducing slack variables.
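
In symbols (one common statement of the reduced gradient, hedged since the deck’s derivation cards are blank): after slacks make every constraint an equality h(x) = 0, split x into independent variables z and dependent variables y with ∂h/∂y nonsingular; then

    \nabla_{\!R} f \;=\; \nabla_{\mathbf{z}} f \;-\; \left(\frac{\partial \mathbf{h}}{\partial \mathbf{z}}\right)^{\!\mathsf T} \left(\frac{\partial \mathbf{h}}{\partial \mathbf{y}}\right)^{\!-\mathsf T} \nabla_{\mathbf{y}} f,

which plays the role of the objective gradient in the space of the independent variables.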

64
Q

What are the pros/cons for the Generalized Reduced Gradient Method?

A
65
Q

What is Sequential Quadratic Programming?

A

Sequential Quadratic Programming (SQP) develops quadratic approximations to the objective function.

The constraints are linearized.

These approximations are solved sequentially, in a manner similar to SLP, to converge to the solution.

Because SQP involves a quadratic approximation, certain implementations of SQP share similarities with Newton’s method and quasi-Newton methods such as BFGS.
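
One common form of the resulting QP subproblem at iterate x_k (stated as an assumption, since the deck’s own formula card is blank):

    \min_{\mathbf{s}} \;\; \nabla f(\mathbf{x}_k)^{\mathsf T}\mathbf{s} + \tfrac{1}{2}\,\mathbf{s}^{\mathsf T}\mathbf{B}_k\,\mathbf{s}
    \quad \text{s.t.} \quad
    g_i(\mathbf{x}_k) + \nabla g_i(\mathbf{x}_k)^{\mathsf T}\mathbf{s} \le 0, \qquad
    h_j(\mathbf{x}_k) + \nabla h_j(\mathbf{x}_k)^{\mathsf T}\mathbf{s} = 0,

where B_k is the Hessian of the Lagrangian or a quasi-Newton (e.g. BFGS) approximation to it, and s is the search direction.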

66
Q

What is the SQP Overall Line Search Algorithm?

A
67
Q

For Sequential Quadratic Programming, what is the basic form of the quadratic approximation?

A
68
Q

Methods for solving inequality-constrained QPs include:

A
  • Active set methods
  • Interior point methods

As their name implies, active set methods attempt to identify the “active set” of inequality constraints.

This active set can be added to the set of equality constraints.

The equality-constrained problem can then be solved as a linear system with efficient algorithms.

69
Q

For Equality-Constrained Quadratic Programming, what is the QP formulation?

A
70
Q

The linear system of KKT conditions for equality-constrained QPs can be solved by several different approaches:

A

  • Direct solution approaches: symmetric indefinite factorization, Schur-complement method, and null-space method
  • Iterative solution approaches: conjugate gradient method and projected conjugate gradient method
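
For reference, the linear system these approaches solve (one standard sign convention, stated as an assumption): for min ½xᵀGx + cᵀx subject to Ax = b, with Lagrangian L = ½xᵀGx + cᵀx − λᵀ(Ax − b), the KKT conditions give

    \begin{bmatrix} \mathbf{G} & -\mathbf{A}^{\mathsf T} \\ \mathbf{A} & \mathbf{0} \end{bmatrix}
    \begin{bmatrix} \mathbf{x}^{*} \\ \boldsymbol{\lambda}^{*} \end{bmatrix}
    =
    \begin{bmatrix} -\mathbf{c} \\ \mathbf{b} \end{bmatrix}.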

71
Q

Practical Considerations for SQP are:

A
  • There are many algorithms for SQP. Most leverage equality-constrained QP algorithms by determining the “active set” or a “working set” of constraints at each iteration.
  • Both line search and trust region implementations are possible.
  • Most methods “modify” the simple quadratic approximation we examined earlier to account for practical considerations.
72
Q

In the Vanderplaats line search implementation, the SQP direction-finding problem is as follows:

A