Chapter 13 - Nonlinear Programming Flashcards

1
Q

Why is there a need for nonlinear programming?

A

Because many real-world problems cannot be modeled well using only linear functions.

2
Q

Formally define a nonlinear programming problem.

A

We want to find x = (x1, x2, …, xn) so as to maximize f(x),

subject to the constraints

gi(x) <= bi, for i = 1, 2, …, m,

and

x >= 0,

where f(x) and/or at least one of the gi(x) are nonlinear functions.

3
Q

Give the function for profit.

Is it linear?

A

P(x) = xp(x) - cx, where p(x) is the price at which x units can be sold and c is the unit production cost.

It is nonlinear, because the price p(x) depends on the quantity x.
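
As a small illustration (my own, assuming a simple linear demand curve that is not given on the card): if p(x) = a - bx, then

P(x) = x(a - bx) - cx = (a - c)x - bx^2,

which is quadratic in x and therefore nonlinear.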

4
Q

What would be the objective function for a case where we want to produce different products at different quantities?

A

We need to sum the individual profit functions.

Let Pj(xj) be the profit from producing and selling xj units of product j. Then

f(x) = ∑ Pj(xj), summed over j = 1, …, n.

5
Q

Name some reasons for nonlinearity.

A

Marginal costs can decrease or increase as production changes; this relationship requires a nonlinear formulation.

Learning-curve effects resemble the marginal-cost case and also represent nonlinear relationships.

In essence, nonlinearity arises in every scenario where the price or cost per unit is not the same regardless of how many units we sell or produce. It is about change.

6
Q

What is a source of nonlinearity in transportation problems?

A

Volume discounts: the cost per unit shipped decreases as the volume shipped increases.

7
Q

What happens with CPF (corner-point feasible) solutions in nonlinear programming?

A

The optimal solution is no longer guaranteed to be a CPF solution, so we cannot rely on that simplification. Searching only the finite set of CPF solutions (as the simplex method does) is extremely easy compared to searching the entire feasible region.

8
Q

Can optimal solutions to nonlinear programming problems lie in the interior of the feasible region, and not just on the boundary?

A

Yes.

9
Q

Elaborate on local vs global maxima

A

In general, OR algorithms cannot distinguish between a local maximum and a global maximum. Therefore, it is crucial that we understand the conditions under which a local maximum is guaranteed to also be the global maximum.

10
Q

How can we use calculus to tell whether a maximum we find is global or merely local?

A

If the second derivative is smaller than or equal to 0 for all x, we know that any maximum we find is guaranteed to be a global maximum.

This is about concave vs. convex functions: concave functions always bend downward ("not smiling"), while convex functions bend upward ("smiling").
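
A minimal worked example (mine, not from the card): f(x) = 12x - 3x^2 has f''(x) = -6 <= 0 for all x, so f is concave; setting f'(x) = 12 - 6x = 0 gives x = 2, which is therefore guaranteed to be a global maximum.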

11
Q

What is the concave “sum” theorem?

A

The sum of concave functions will always be concave.
This also applies to convex functions.

12
Q

What is the substitution method?

A

When we have only a single equality constraint, we can substitute it into the objective function.

max Z = 10x1 - 0.02x1^2 + 12x2 - 0.03x2^2
s.t. 0.2x1 + 0.1x2 = 40

We can rewrite the constraint as:

0.2x1 = 40 - 0.1x2
x1 = 200 - 0.5x2

Then we substitute this into the objective function:

Z = 10(200 - 0.5x2) - 0.02(200 - 0.5x2)^2 + 12x2 - 0.03x2^2

Now we can differentiate with respect to x2:

dZ/dx2 = -5 + 0.02(200 - 0.5x2) + 12 - 0.06x2
       = 11 - 0.07x2

Setting the derivative equal to 0 gives x2 = 11/0.07 ≈ 157.14.

We find x1 through the substitution we used earlier: x1 = 200 - 0.5(157.14) ≈ 121.43.

Then we can compute Z ≈ 2064.29.

IMPORTANT: We do not yet know what kind of point this is; we must check whether it is a maximum, a minimum or a saddle point by using the second derivative:

d^2Z/dx2^2 = -0.07, which is strictly negative, so the reduced objective is strictly concave and the point is a global maximum.
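
A quick machine check of this result (a sketch of my own using sympy, not part of the card):

# Verify the substitution-method example with sympy (assumes sympy is available).
import sympy as sp

x2 = sp.symbols('x2')
x1 = 200 - sp.Rational(1, 2) * x2                       # from 0.2*x1 + 0.1*x2 = 40
Z = 10*x1 - sp.Rational(2, 100)*x1**2 + 12*x2 - sp.Rational(3, 100)*x2**2

dZ = sp.diff(Z, x2)                                     # derivative of the reduced objective
x2_opt = sp.solve(dZ, x2)[0]                            # stationary point, 1100/7 ≈ 157.14
print(x2_opt, x1.subs(x2, x2_opt), Z.subs(x2, x2_opt))  # ≈ 157.14, 121.43, 2064.29
print(sp.diff(Z, x2, 2))                                # -7/100 < 0, so concave: global max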

13
Q

What defines a concave function?

A

A curve whose slope is decreasing (non-increasing); if the slope is strictly decreasing, the function is strictly concave.

14
Q

What defines a convex function?

A

A curve whose slope is increasing (non-decreasing); if the slope is strictly increasing, the function is strictly convex.

15
Q

What is a convex combination?

A

A convex combination is:

x''' = lambda x'' + (1 - lambda)x', with lambda between 0 and 1.

In other words, x''' can be any point on the segment between x' and x'', determined by the value of lambda.

16
Q

Is a linear function convex or concave?

A

It satisfies both definitions: a linear function is both convex and concave (but neither strictly).

17
Q

What is the requirement for a function to have only one extreme point?

A

The function must be strictly concave or strictly convex.

18
Q

Explain briefly what the Hessian is.

A

The Hessian is a matrix of second-order derivatives. The number of entries equals the number of combinations of variables we can take second derivatives with respect to. The function itself is always differentiated twice; what varies between the entries is which variable(s) we differentiate with respect to.

A convexity test for a function of two variables can be done by looking at the determinant of the Hessian.

19
Q

How can we perform a convexity test for a function of several variables?

A

First we form the Hessian. Then we apply the two tests:

1) The determinant must be greater than zero.
2) The entries on the main diagonal of the Hessian must all be greater than or equal to 0 (convex) or all less than or equal to 0 (concave).
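
A small numerical sketch (my own example, not from the card): for f(x1, x2) = x1^2 + x1*x2 + x2^2 the Hessian is constant, and we can run both tests with numpy.

# Convexity check of f(x1, x2) = x1^2 + x1*x2 + x2^2 via its Hessian.
import numpy as np

H = np.array([[2.0, 1.0],      # d2f/dx1^2,    d2f/dx1 dx2
              [1.0, 2.0]])     # d2f/dx2 dx1,  d2f/dx2^2

print(np.linalg.det(H))        # 3.0 > 0
print(np.diag(H))              # [2. 2.]: all diagonal entries positive -> convex, not concave
print(np.linalg.eigvalsh(H))   # [1. 3.]: all eigenvalues >= 0 confirms convexity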

20
Q

Why are nonlinear programming models difficult?

A

For one thing, there is no universal algorithm. For LP problems, the simplex method could solve them all; for nonlinear programming, different procedures work for different types of problems.

21
Q

Name some classes of nonlinear programming problems.

A

Unconstrained optimization: No constraints; the goal is simply to optimize a function. The function can have a single variable, but is typically multivariable. We set the (partial) derivatives equal to 0 to find an optimal solution.

Linearly constrained optimization: The objective function is nonlinear, but all of the constraints are linear.

Quadratic programming: Linear constraints, but f(x) must be both quadratic and concave, and we maximize (a small numerical sketch follows below).
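
A minimal numerical sketch (my own toy problem, not from the chapter) of a quadratic program solved with scipy's SLSQP solver; the objective and constraint values are invented for illustration:

# Maximize the concave quadratic f(x) = 10*x1 - x1^2 + 8*x2 - x2^2
# subject to x1 + x2 <= 6 and x >= 0, by minimizing -f.
import numpy as np
from scipy.optimize import minimize

def neg_f(x):
    return -(10*x[0] - x[0]**2 + 8*x[1] - x[1]**2)

constraints = [{'type': 'ineq', 'fun': lambda x: 6 - x[0] - x[1]}]  # feasible when >= 0
bounds = [(0, None), (0, None)]                                     # non-negativity

res = minimize(neg_f, x0=np.array([0.0, 0.0]), method='SLSQP',
               bounds=bounds, constraints=constraints)
print(res.x, -res.fun)   # roughly x = [3.5, 2.5], f(x) = 36.5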

22
Q

What is difficult about unconstrained nonlinear problems?

A

First, we must understand what an unconstrained nonlinear problem involves. By setting the derivatives equal to zero, we reduce the optimization problem to solving a system of (only slightly simpler) equations. However, these equations may themselves be nonlinear, in which case we can only hope that an analytical solution exists; otherwise we must resort to numerical methods.

23
Q

We usually solve unconstrained nonlinear problems by setting ∂f/∂x = 0. Are there cases where this does not apply?

A

Consider the case where the variable has a non-negativity constraint, so we are restricted to x >= 0 (the right half-plane).
If the function is such that its maximum over that range lies at the very start (x = 0), then we will obviously not find a point where the derivative equals 0. If the function is concave and its slope is already negative at x = 0, the maximum over x >= 0 is at x = 0, and to find it we have to accept ∂f/∂x <= 0 there instead of requiring ∂f/∂x = 0.
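
A small concrete example (mine, not from the card): f(x) = -(x + 1)^2 is concave with f'(x) = -2(x + 1). There is no x >= 0 where f'(x) = 0 (the unconstrained maximizer is x = -1), but f'(0) = -2 <= 0, so x = 0 is the maximum over the feasible region x >= 0.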

24
Q

If we have an unconstrained nonlinear problem, what is the necessary and sufficient condition for a particular solution x=x' to be optimal, when the function is concave?

A

∂f/∂x = 0, at x=x’.

Note that a numerical solution may be required.

25
Q

When is a function convex?

A

A function is convex if its second derivative is greater than or equal to 0 for all values of x.

26
Q

What is the difference between convex and strictly convex?

A

Convex requires that the second derivative is greater than or equal to 0 for all values of x.

Strictly convex requires that the second derivative is greater than 0 for all values of x.

27
Q

When is a function concave?

A

A function is concave if the second derivative is smaller than or equal to 0 for all values of x.

A function is called “strictly concave” if its second derivative is smaller than 0 for all x.

28
Q

In a nonlinear problem, will the optimal point be on the boundary?

A

Not necessarily. It may lie on the boundary, but it can just as well lie in the interior of the feasible region.

29
Q

Why are we so concerned with convex problems?

A

A convex problem has the property that every local optimum is also a global optimum.

30
Q

Explain the x-hat construction and how it is used.

A

We define x-hat = lambda x'' + (1 - lambda)x'.

This means that x-hat can be any point between x' and x''; by varying lambda between 0 and 1 we cover the whole segment. Lambda also acts as a percentage: if lambda = 0.5 we are exactly midway between the two points, and so on.

We use this construction for a simple convexity test. Picture a function f(x): if f(x-hat) always lies below (or on) the straight line between (x', f(x')) and (x'', f(x'')), then f(x) is convex.

Mathematically:

f(lambda x'' + (1 - lambda)x') <= lambda f(x'') + (1 - lambda) f(x')

The LHS corresponds to x-hat, i.e. f(x-hat).
The RHS corresponds to the y-value (the f-value) on the straight line.

We can therefore use this result as a simple convexity test in 2D, and the analogous condition with >= characterizes concavity.
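
A quick numerical spot-check (my own illustration) of the inequality for a function we know is convex, f(x) = x^2:

# Sample-based check of f(lambda*x2 + (1-lambda)*x1) <= lambda*f(x2) + (1-lambda)*f(x1).
import numpy as np

def f(x):
    return x**2

x1, x2 = -3.0, 5.0
for lam in np.linspace(0.0, 1.0, 11):
    x_hat = lam * x2 + (1 - lam) * x1          # a point on the segment between x1 and x2
    chord = lam * f(x2) + (1 - lam) * f(x1)    # the corresponding value on the straight line
    assert f(x_hat) <= chord + 1e-12           # holds for every lambda, since f is convex
print("convexity inequality holds at all sampled points")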

31
Q

What can we say about the Hessian?

A

It is a symmetric matrix consisting of second-order derivatives.

Since the Hessian consists of second derivatives, we can use it to check convexity. A necessary condition is that all entries along the main diagonal are greater than or equal to 0. This makes sense if we picture the single-variable case: a positive second derivative gives a "smiling" (upward-bending) original function, hence a convex one.

32
Q

Explain the Hessian convexity test in terms of determinants and eigenvalues.

A

We can check convexity using the determinant of H (and of its principal submatrices when H is larger than 2x2). BUT this is not very scalable, so for larger matrices the eigenvalues are often used instead: all eigenvalues non-negative means convex, all non-positive means concave.

Of course, we must also check that the diagonal entries are greater than or equal to 0 for the function to be convex.

33
Q

Relationship between eigenvalues and determinants?

A

The product of all eigenvalues of a matrix is equal to its determinant.
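
A quick check (my own, not from the card):

# The product of the eigenvalues equals the determinant (up to floating-point error).
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

print(np.prod(np.linalg.eigvals(A)))   # product of eigenvalues, ≈ 8.0
print(np.linalg.det(A))                # determinant, ≈ 8.0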

34
Q

Why is the convexity test different between 2x2 and nxn cases?

A

In the 2x2 case, convexity of a function can effectively be assessed from the determinant of its Hessian. The determinant of a 2x2 matrix equals the product of its two eigenvalues, so a non-negative determinant means the eigenvalues are either both non-negative or both non-positive; the function is then convex or concave, respectively. To determine which of the two it is, we look beyond the determinant, e.g. at the sign of the diagonal entries.

This shortcut does not extend directly to larger matrices. With more than two eigenvalues, a positive determinant no longer implies that all eigenvalues have the same sign, since an even number of negative eigenvalues also multiplies to a positive product. For matrices larger than 2x2 we therefore need a more detailed analysis of the Hessian's eigenvalues, or of its leading principal minors, to establish convexity or concavity.
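
A concrete counterexample (my own) showing why the determinant alone is not enough beyond 2x2:

# A 3x3 matrix with positive determinant that is neither positive nor negative semidefinite.
import numpy as np

H = np.diag([1.0, -1.0, -1.0])

print(np.linalg.det(H))        # 1.0 > 0, yet...
print(np.linalg.eigvalsh(H))   # [-1. -1.  1.]: mixed signs, so the matrix is indefinite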

35
Q

What exactly is the Lagrange method?

A

We form the Lagrangian and solve a system of equations that includes the equality constraints. This way, we force the equality constraints to hold exactly.

The other equations in the system, obtained by differentiating the Lagrangian and setting the partial derivatives to zero, ensure that the solution is a candidate optimum.
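
As a worked illustration (applying the method to the same problem as in the substitution-method card, with the sign convention from the next card):

L(x1, x2, lambda) = 10x1 - 0.02x1^2 + 12x2 - 0.03x2^2 - lambda(0.2x1 + 0.1x2 - 40)

∂L/∂x1 = 10 - 0.04x1 - 0.2 lambda = 0
∂L/∂x2 = 12 - 0.06x2 - 0.1 lambda = 0
∂L/∂lambda = -(0.2x1 + 0.1x2 - 40) = 0

Solving this system gives x2 ≈ 157.14 and x1 ≈ 121.43, the same solution found by substitution.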

36
Q

What are the requirements for using the KKT conditions?

A

A convex problem.

We obtain a convex problem by maximizing a concave objective function subject to convex constraints (gi(x) <= bi with every gi convex), or by minimizing a convex objective function subject to constraints that form a convex feasible region (e.g. gi(x) >= bi with every gi concave).

Recall that a convex feasible region is one where, for any two points in the region, every point on the line segment between them also lies inside the region.

37
Q

What determines the sign of the lambda term (whether we add or subtract it) when using the Lagrange method?

A

With this convention, maximization problems use a negative sign (we subtract the lambda term) and minimization problems use a positive sign (we add it).

38
Q

Why does the Lagrange method with inequalities only work reliably for convex problems?

A

In a convex problem there is only one global maximum (for a concave objective) or one global minimum (for a convex objective), and the feasible region defined by the constraints is convex, so every local optimum is also a global optimum. Therefore, the first solution we find that satisfies the conditions (whether it comes from solving the unconstrained problem directly or from adjusting for active constraints) is guaranteed to be the global optimum.

In contrast, non-convex problems can have multiple local optima. Even if we find a solution that satisfies the conditions, we cannot be sure it is the best possible solution without exploring the other candidate optima.
