Equal Interval Search, Golden Section Search, and Gradient Method Flashcards

1
Q

Which of the following statements is incorrect regarding the Equal Interval Search and Golden Section Search methods?

A. Both methods require an initial boundary region to start the search

B. The number of iterations in both methods is affected by the size of ε

C. Everything else being equal, the Golden Section Search method should find an optimal solution faster

D. Everything else being equal, the Equal Interval Search method should find an optimal solution faster

A

D
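To make the comparison concrete, here is a minimal sketch of one common version of the Equal Interval Search for a unimodal function on a bracketed interval; the name equal_interval_search, the test function, and the tolerances are illustrative choices, not taken from the cards.

```python
def equal_interval_search(f, a, b, eps=1e-6, tol=1e-4):
    """Minimize a unimodal f on [a, b] by shrinking the bracket around its midpoint.

    eps is kept well below tol so the loop is guaranteed to terminate.
    """
    while (b - a) > tol:
        mid = (a + b) / 2.0
        x1, x2 = mid - eps / 2.0, mid + eps / 2.0   # two probes straddling the midpoint
        if f(x1) < f(x2):
            b = x2          # the minimum lies in the left part of the bracket
        else:
            a = x1          # the minimum lies in the right part of the bracket
    return (a + b) / 2.0

# Example: minimum of (x - 2)^2 on [0, 5]
print(equal_interval_search(lambda x: (x - 2) ** 2, 0.0, 5.0))
```

Each pass discards roughly half of the bracket but costs two new function evaluations; the Golden Section Search (sketched under card 5) reuses one of its interior points, which is why, everything else being equal, it reaches a given bracket width with fewer evaluations.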

2
Q

Which of the following statements is incorrect regarding the Golden Section Search method?

A. The Golden Section Search method is an iterative algorithm.

B. The Golden Section Search method is used to find the minimum of a unimodal function.

C. The Golden Section Search method is a gradient-based optimization method.

D. The Golden Section Search method is a derivative-free optimization method.

A

C

3
Q

Which of the following is a requirement for the Golden Section Search method to work?

A. Convex function

B. Multimodal function

C. Discontinuous function

D. Unimodal function

A

D

4
Q

Which of the following is true about the Golden Section Search method?

A. It is an optimization algorithm used to find the minimum or maximum of a unimodal function within a given interval.

B. It is a graph traversal algorithm used to find the shortest path between two nodes.

C. It is a machine learning algorithm used for classification tasks.

D. It is a sorting algorithm used to arrange elements in ascending order.

A

A

5
Q

In the Golden Section Search method, what is the ratio used to divide the search space?

A. 0.618

B. 0.145

C. 0.236

D. 0.382

A

A
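A minimal sketch of the Golden Section Search for a unimodal function on an interval [a, b], using the 0.618 ratio from this card; the name golden_section_search, the test function, and the tolerance are illustrative choices.

```python
INV_PHI = 0.6180339887498949  # 1/phi, the 0.618 ratio from the card

def golden_section_search(f, a, b, tol=1e-5):
    """Minimize a unimodal f on [a, b]; each step keeps 61.8% of the bracket."""
    x1 = b - INV_PHI * (b - a)   # interior point at the 0.382 position
    x2 = a + INV_PHI * (b - a)   # interior point at the 0.618 position
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 < f2:
            b, x2, f2 = x2, x1, f1     # discard the right part, reuse x1
            x1 = b - INV_PHI * (b - a)
            f1 = f(x1)
        else:
            a, x1, f1 = x1, x2, f2     # discard the left part, reuse x2
            x2 = a + INV_PHI * (b - a)
            f2 = f(x2)
    return (a + b) / 2.0

# Example: minimum of (x - 2)^2 on [0, 5]
print(golden_section_search(lambda x: (x - 2) ** 2, 0.0, 5.0))
```

Each iteration keeps about 61.8% of the bracket and reuses one interior evaluation, so only one new function evaluation is needed per step.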

6
Q

In the Fibonacci Search method, what is the ratio used to divide the search space?

A. 0.236

B. 0.618

C. 0.145

D. 0.382

A

B
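The 0.618 figure can be seen directly from the Fibonacci numbers: each Fibonacci Search step shrinks the interval by a ratio of consecutive Fibonacci numbers, and those ratios tend to 1/φ ≈ 0.618. A small illustrative sketch:

```python
def fib_ratios(n):
    """Print the ratios of consecutive Fibonacci numbers, which approach 0.618."""
    fib = [1, 1]
    while len(fib) < n:
        fib.append(fib[-1] + fib[-2])
    for k in range(1, n):
        print(f"F({k})/F({k+1}) = {fib[k-1] / fib[k]:.6f}")

fib_ratios(12)   # the printed ratios approach 0.618034
```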

7
Q

Which of the following parameters is not required to use the Golden Section Search method for optimization?

A. None

B. Lower bound

C. Objective function

D. Initial guess

A

A

8
Q

What is the gradient method?

A. An optimization algorithm used to find the minimum or maximum of a function.

B. A programming technique used to sort arrays.

C. A statistical method used to analyze data trends.

D. A mathematical equation used to calculate the slope of a line.

A

A

9
Q

Name one application of the gradient method.

A. Regression analysis

B. Optimization

C. Data visualization

D. Machine learning

A

B

10
Q

What is convergence analysis in the context of the gradient method?

A. Determining whether the method will converge to the optimal solution or not.

B. Evaluating the stability of the method during the convergence process.

C. Determining the number of iterations required for the method to converge.

D. Analyzing the rate at which the method approaches the optimal solution.

A

A

11
Q

How does the gradient method solve optimization problems?

A. By randomly selecting parameters until the objective function is minimized.

B. By calculating the average of the objective function values at different parameter values.

C. By using a fixed step size to update the parameters in the direction of the steepest ascent of the objective function.

D. By iteratively updating the parameters in the direction of the steepest descent of the objective function.

A

D
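A minimal sketch of that update rule, x_{k+1} = x_k − α ∇f(x_k), applied to an illustrative quadratic; the function names, the step size α, and the stopping tolerance are assumptions for the example, not part of the card.

```python
import numpy as np

def gradient_descent(grad, x0, alpha=0.1, tol=1e-6, max_iter=1000):
    """Iteratively step in the direction of steepest descent: x <- x - alpha * grad(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # (near-)null gradient: stationary point reached
            break
        x = x - alpha * g
    return x

# Example: f(x, y) = (x - 1)^2 + 2*(y + 3)^2, whose gradient is (2(x - 1), 4(y + 3))
grad_f = lambda v: np.array([2 * (v[0] - 1), 4 * (v[1] + 3)])
print(gradient_descent(grad_f, [0.0, 0.0]))   # approaches (1, -3)
```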

12
Q

What are some advantages of the gradient method compared to other optimization algorithms?

A. Faster convergence, simplicity of implementation, and efficient handling of large datasets.

B. Less prone to getting stuck in local minima, ability to handle non-differentiable functions, and robustness to noisy data.

A

A

13
Q

What is the main disadvantage of the gradient method?

A. Local optima

B. Overfitting

C. Underfitting

D. Slow convergence

A

A

14
Q

What is the purpose of the gradient method in optimization?

A. To find the minimum or maximum of a function.

B. To solve a system of equations.

C. To determine the derivative of a function.

D. To calculate the average of a function.

A

A

15
Q

What are some alternative optimization algorithms to the gradient method?

A. Stochastic gradient descent

B. Hill climbing algorithm

C. Newton’s method, Conjugate gradient method, Quasi-Newton methods (e.g., BFGS, L-BFGS), Nelder-Mead method, and Simulated annealing.

D. Genetic algorithm

A

C

16
Q

Explain the concept of step size in the gradient method.

A. Magnitude of the gradient at each iteration.

B. Number of iterations required to converge.

C. Size of the steps taken in each iteration to update the parameters.

D. Rate of change of the objective function.

A

C
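An illustrative sketch of how the step size (often written α) controls the size of each update, using f(x) = x² with gradient 2x; the specific values of α are chosen only to show slow convergence, fast convergence, and divergence.

```python
def descend(alpha, steps=20):
    """Minimize f(x) = x^2 (gradient 2x) starting from x = 1 with a fixed step size alpha."""
    x = 1.0
    for _ in range(steps):
        x = x - alpha * 2 * x
    return x

for alpha in (0.01, 0.1, 0.9, 1.1):
    print(f"alpha = {alpha:>4}: x after 20 steps = {descend(alpha):+.6f}")
# A small alpha converges slowly, a moderate alpha converges quickly,
# and alpha > 1 makes the iterates diverge for this function.
```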

17
Q

The Gradient method algorithm has ___ steps

A. 2

B. 3

C. 4

D. 5

A

D

18
Q

A gradient is

A. a vector

B. a scalar

C. a partial derivative

D. a sum of partial derivatives

A

A

19
Q

A Hessian is

A. A matrix that contains the second-order partial derivatives

B. A determinant of a Gradient matrix

C. Any triangular matrix

D. A matrix of gradient components (one for each dimension of the solution space)

A

A
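A small sketch that makes both definitions concrete for an illustrative two-variable function, assuming sympy is available: the gradient comes out as a vector of first partial derivatives and the Hessian as a matrix of second-order partial derivatives.

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 * y + 3 * x * y**2          # an illustrative two-variable function

# Gradient: a vector whose components are the first partial derivatives
grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)])

# Hessian: a matrix containing the second-order partial derivatives
hess = sp.Matrix([[sp.diff(f, a, b) for b in (x, y)] for a in (x, y)])

print(grad)   # vector of first partials: (2xy + 3y^2, x^2 + 6xy)
print(hess)   # matrix of second partials: [[2y, 2x + 6y], [2x + 6y, 6x]]
```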

20
Q

If the determinant of a Hessian is zero, we

A. have a saddle point

B. have a local minimum

C. have a local maximum

D. have an indeterminate case (the test is inconclusive)

A

D

21
Q

If the first leading submatrix (the top-left entry) of the Hessian is positive and the determinant of the Hessian is positive, we have

A. a local minimum

B. a local maximum

C. an absolute minimum

D. an absolute maximum

A

A

22
Q

We have reached a point where the gradient is the null vector. This means we have

A. reached a local minimum or maximum

B. reached an absolute minimum or maximum

C. made a mistake somewhere

D. reached a saddle point

A

A
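The last three cards fit together in a short sketch: where the gradient is the null vector we have a stationary point, and the sign of the Hessian's first leading submatrix together with its determinant classifies it (a zero determinant leaves the test inconclusive). The name classify_stationary_point and the example function are illustrative.

```python
import numpy as np

def classify_stationary_point(hess):
    """Second-derivative test for a 2x2 Hessian evaluated where the gradient is the null vector."""
    det = np.linalg.det(hess)
    h11 = hess[0, 0]                   # first (1x1) leading submatrix
    if det > 0 and h11 > 0:
        return "local minimum"
    if det > 0 and h11 < 0:
        return "local maximum"
    if det < 0:
        return "saddle point"
    return "indeterminate (det = 0)"

# Example: f(x, y) = x^2 - y^2 has gradient (2x, -2y), which is the null vector at (0, 0).
H = np.array([[2.0, 0.0],
              [0.0, -2.0]])            # constant Hessian of f
print(classify_stationary_point(H))    # saddle point
```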