Midterm grind Flashcards

1
Q

<p>In Game Playing Algorithms, what is the evaluation function f(n) (or utility value), where n is a board configuration?</p>

A

<p>The "goodness" of the board configuration n: the number of possible wins for the player that are not blocked by the opponent, minus the number of possible wins for the opponent that are not blocked by the player:<br></br>
<br></br>f(n) = m(n) - o(n)<br></br>
<br></br>"m" is the player's possible winning states, "o" is the opponent's possible winning states</p>

2
Q

What is the open list at a particular state in a search algorithm?

A

The set of nodes in the ready queue AFTER the node has been visited and expanded

3
Q


<p>Briefly describe DLS</p>

A


<p>Just like DFS but limits the search to a maximum depth parameter "l". Once the algorithm discovers nodes below depth "l", it does not expand them.</p>

4
Q


<p>What is the process of "searching"</p>

A


<p>Moving from state-to-state in the problem space to find a goal (or to terminate)</p>

5
Q


<p>Describe the Simulated Annealing Algorithm Step-by-step</p>

A


<p>Set the current solution S0, the initial temperature T0, and select a temperature reduction function</p>

<p>Repeat:</p>

<p> Repeat:</p>

<p>  Select a solution Si from the neighborhood N(S)</p>

<p>  Calculate the change in cost ΔC for the new solution</p>

<p>  If ΔC < 0, then accept this solution (since it is improving): S = Si</p>

<p>  Else, generate a random number x in the range (0, 1)</p>

<p>   If x < e^(-ΔC/T), then accept this new solution: S = Si</p>

<p> Until the inner termination criterion is met</p>

<p> Decrease temperature T using the reduction function</p>

<p>Until the stopping criterion is met</p>
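The steps above can be sketched in Python; the neighbourhood move, cooling schedule, and termination tests below are illustrative assumptions, not the only valid choices:

```python
import math
import random

# Minimal SA sketch minimizing a one-dimensional cost function.
def simulated_annealing(cost, s0, t0=10.0, alpha=0.95, inner_iters=50, t_min=1e-3):
    s, t = s0, t0
    best = s
    while t > t_min:                        # outer loop: until "frozen"
        for _ in range(inner_iters):        # inner loop at this temperature
            s_i = s + random.uniform(-1, 1)         # neighbour of S
            delta_c = cost(s_i) - cost(s)
            if delta_c < 0:                 # improving: always accept
                s = s_i
            elif random.random() < math.exp(-delta_c / t):
                s = s_i                     # worse, accepted probabilistically
            if cost(s) < cost(best):
                best = s
        t *= alpha                          # geometric temperature reduction
    return best

random.seed(0)
result = simulated_annealing(lambda x: (x - 3) ** 2, s0=-10.0)
```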

6
Q


<p>Give a reason to use 2 heuristics simultaneously in a searching algorithm</p>

A


<p>Use 1 heuristic as primary, and 1 as secondary to break ties</p>

7
Q


<p>What does SA do to escape from local optimums?</p>

A


<p>Accepts non-improving solutions depending on what the temperature is</p>

8
Q


<p>What is the motivation behind alpha-beta pruning?</p>

A


<p>In the game tree, some branches are never played since rational players don't pick sub-optimal moves. Therefore, there is no need to generate the subtree for such moves.</p>

9
Q


<p>What is the difference between trajectory methods and population-based methods?</p>

A


<p>Trajectory methods handle one solution at a time, whereas population-based methods handle multiple solutions at a time</p>

10
Q


<p>Under what conditions does UCS become Dijkstra's single-source shortest path algorithm?</p>

A


<p>If g(n) is the overall path cost from the source node, and the search computes this cost for all nodes rather than stopping at a single goal</p>

11
Q


<p>What are Meta-Heuristic algorithms</p>

A


<p>Meta-Heuristics are algorithms that combine heuristics in a higher-level framework aimed at efficiently and effectively exploring the search space.
<br></br>
<br></br>They are strategies that "guide" the search process</p>

12
Q


<p>Are Meta-Heuristics problem specific? are they at risk of getting trapped in confined areas of the search space?</p>

A


<p>Meta-Heuristics are not problem-specific
<br></br>
<br></br>They utilize mechanisms (depending on the method) to avoid getting trapped in dead ends</p>

13
Q


<p>Describe the difference between "A Search" and "A* Search"</p>

A


<p>"A*" is admissible, in which h(n) <= h*(n). Whereas "A" is not admissible, h(n) has no upper bound in the A algorithm.</p>

14
Q

Is DLS complete?
Does DLS always terminate?
Is DLS optimal?

A


<p>As long as the solution is within the depth limit "l", DLS is guaranteed to find it, since the limit caps the depth that it can search to.
<br></br>
<br></br>DLS always terminates, and it is not optimal</p>

15
Q


<p>How does the min/max algorithm relate to game playing</p>

A


<p>The game tree alternates between the 2 opponent's moves, and the min/max algorithm can be used to optimize for the best outcome in this game tree.</p>

16
Q


<p>What are search trees, and how are they abstracted from graphs.</p>

A


<p>Search trees are superimposed over the graph representation of a problem. The graph might be finite, but the search tree can be either finite or infinite: infinite if we allow repeated states due to reversible actions or cycles.</p>

17
Q


<p>What do Heuristics Aim to do?</p>

A


<p>Heuristics aim to efficiently generate good solutions, but do not guarantee optimality.

They guide the algorithm
</p>

18
Q


<p>What are the 2 general optimization methods in searching algorithms?</p>

A


<p>Constructive methods: start from scratch and build the solution one component at a time. Ex: Sudoku solver, N-Queens
<br></br><br></br>Local search methods: start from an initial solution and iteratively try to replace the current solution with a better one. Ex: TSP</p>

19
Q


<p>What does a heuristic function do?</p>

A


<p>Heuristic functions give an estimate of the distance to the goal based on domain-specific information. They guide the algorithm to a better solution by estimating the "goodness" of a node/state.</p>

20
Q


<p>Under what condition is A* complete?</p>

A

A* is complete if the branching factor is finite, and every operator has a fixed positive cost

21
Q


<p>What is the difference between well-structured problems and ill-structured problems</p>

A


<p>Well-Structured: the existing state and desired state are clearly identified, and the methods to reach the desired state are fairly obvious
<br></br>
<br></br>Ill-Structured: the existing state and desired state are unclear, and hence the methods of reaching the desired state cannot readily be found.</p>

22
Q


<p>What is the purpose of the minimax or min/max algorithm</p>

A


<p>To determine the best next move and the cost of such a move</p>

23
Q


<p>What does "admissible" mean with respect to heuristics?</p>

A


<p>A heuristic is said to be "admissible" if it NEVER overestimates the distance to the goal node.
<br></br>
<br></br>It is also known as an "optimistic" heuristic</p>

24
Q


<p>List the 5 types of informed search algorithms</p>

A
Hill Climbing Search
Greedy Best-first Search
Beam Search
A Search
A* Search
25

Under what condition does A* act like UCS?

When h(n) = 0 for all n. This is the null heuristic.

26

What is a rational agent

An agent that, for each perceived sequence of events, does what is expected to maximize performance on the basis of its perceptual history and built-in knowledge

27

Describe Fully observable vs Partially observable environments

Fully Observable environment:
sensors detect all aspects that are relevant to the action

Partially Observable environment:
Noisy and inaccurate sensors, or parts of the state simply missing

28

What does Behaving rationally mean?

Perceiving the “world” and acting to achieve
some goals given a set of beliefs.

29

Agents work within an environment with certain characteristics. What are these 6 characteristics/dimensions?

```
Fully Observable vs Partially Observable
Deterministic vs Stochastic
Episodic vs Sequential
Static vs Dynamic
Discrete vs Continuous
Competitive vs Co-operative
```
30

Briefly describe UCS

Expand the node with the lowest cost in the fringe; use a heap/sorted list and always take the min-cost node. If there are numerous nodes with the same cost, take the most recently added one.

Always finds the minimum-cost path to the goal node.
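A minimal UCS sketch using a heap-ordered fringe; the weighted graph is an assumed example, not from the card:

```python
import heapq

def ucs(graph, start, goal):
    """Return (cost, path) of the minimum-cost path, or None if unreachable."""
    fringe = [(0, [start])]                  # priority queue ordered by path cost g(n)
    visited = set()
    while fringe:
        cost, path = heapq.heappop(fringe)   # always take the min-cost node
        node = path[-1]
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph.get(node, []):
            heapq.heappush(fringe, (cost + w, path + [nbr]))
    return None

graph = {'A': [('B', 1), ('C', 4)],
         'B': [('C', 1), ('D', 5)],
         'C': [('D', 1)]}
```

Here the cheapest A-to-D path is A, B, C, D with total cost 3, even though A, B, D has fewer edges.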

31

What is the branching factor in a search Tree?

The maximum number of children possible for a node. Denoted with "b"

32

There are 2 types of Games:

Perfect information & Imperfect information

Describe and Contrast them

Perfect information:
Each player has complete information about the opponent's position and available choices (ex: chess, monopoly)

Imperfect information:
Each player does NOT have complete information about the opponent's position and available choices (ex: poker)

33
In Simulated Annealing, what is the cost function
The quantity which is trying to be minimized as much as possible. It is a function of the current state
34

What is the purpose of using memory structure in trajectory methods?

To escape local minima

To improve explorative strategy by avoiding visited nodes

35

Describe the "Problem Formulation" of Game Playing as Search

State Description: the configuration of the board

Initial State: initial board position & whose move it is

Operators/Action State: the legal moves that a player can make

Goal: is the game over?

Utility Function: Measures an outcome of the game and its desirability

36
What are the 2 categories of engineering problems in terms of the structure of the problem

Well-Structured Problems & Ill-Structured Problems

37

Does SA find an approximation or exact value? What size search space does it work well in?

SA works very well in large search spaces, and provides a good approximation

38

Briefly describe BFS

Traverse layer-by-layer, using a FIFO structure (queue) as the fringe.

Always finds the shallowest path, but takes a long time.
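A minimal BFS sketch with a FIFO fringe, over an assumed example graph:

```python
from collections import deque

def bfs(graph, start, goal):
    fringe = deque([[start]])        # FIFO queue of paths
    visited = {start}
    while fringe:
        path = fringe.popleft()      # expand the oldest (shallowest) path first
        node = path[-1]
        if node == goal:
            return path              # first path found is the shallowest
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                fringe.append(path + [nbr])
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': ['E']}
```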

39

What is cooperation

A group working to solve a joint problem or perform a common task, based on sharing the responsibility for reaching a goal.

OR

Cooperation is the practice of software/hardware entities working together in order to achieve a certain objective

40

Alpha-Beta Pruning is an optimization of the minimax searching algorithm.

What are the tradeoffs? does it affect the completeness of minimax?

Alpha-Beta is guaranteed to return exactly the same value as the min/max algorithm.

It is a pure optimization of min/max without any tradeoffs. It does not affect the completeness, but reduces the number of computations
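A minimal sketch illustrating that alpha-beta returns the same value as plain minimax while skipping branches; the nested-list tree encoding is an assumed example:

```python
import math

def minimax(node, maximizing=True):
    """Plain minimax over a nested-list tree whose leaves are utilities."""
    if isinstance(node, (int, float)):
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, alpha=-math.inf, beta=math.inf, maximizing=True):
    if isinstance(node, (int, float)):
        return node
    if maximizing:
        value = -math.inf
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break          # prune: the MIN ancestor will never allow this
        return value
    value = math.inf
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break              # prune: the MAX ancestor will never allow this
    return value

tree = [[3, 5], [2, 9], [0, 1]]    # depth-2 game tree, MAX to move
```

Both functions return 3 on this tree; alpha-beta simply evaluates fewer leaves.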

41

What are the 4 properties of search algorithms?

Completeness: whether or not the algorithm is guaranteed to find a goal node (if one exists).

Optimality: is the algorithm guaranteed to find the BEST goal node, with the cheapest cost?

Time Complexity: how many nodes are expanded?

Space Complexity: what is the maximum number of nodes stored in memory?
42

Are meta-heuristics exhaustive?

Meta-Heuristics are non-exhaustive, and they use their own internal heuristic

43

What are strong vs weak methods?

Strong methods are those designed to address a specific type of problem.

Weak methods are general approaches that may be applied to many types of problems. Weak methods are extremely general and not tailored to a specific situation. They are called "weak" because they do not take advantage of more powerful domain-specific heuristics.

44
Suppose "h" is a heuristic function. What can you say about the current state if: 1. h(n) = 0 2. h(n) = infinity

h(n)=0 means that the state is a goal node. h(n) = infinity means that the state "n" is a dead-end that can never lead to a goal

45

Is Greedy Best-first search Complete? is it Optimal?

No and No, it purely depends on the heuristic that it uses

46

What happens to the expanded nodes in A* search if h(n) is too high or too low?

h(n) too high: Optimal solution will be overlooked (non admissible heuristic)

h(n) too low: More nodes than needed are expanded

47

Compare Hill Climbing Search to Beam Search

Hill Climbing search is equivalent to Beam search with β=1. Beam is like the middle ground between Hill Climbing and Greedy Best-First Search

48

At what point do you prune in minimax alpha beta pruning

If a parent node has an alpha/beta value assigned, and ANY grandchild node falls below/above that cutoff, then the remaining grandchildren are pruned

49

Under what condition does A* perform like a perfect algorithm, only expanding the nodes which lead to the solution?

When h(n) = h*(n) for all n. This means that h(n) = actual cost to goal node. In this case, the heuristic gives a perfect distance to the solution for all nodes

50

Under what circumstances is DFS incomplete? is DFS optimal?

spaces with loops, or infinite-depth spaces.

DFS is not optimal for finding the shortest path

51

What is the strategy for solving ill-structured problems?

Retrieve a solution/answer; start from a guess and improve it; search among alternatives; explore surrounding states.

52

in SA, what is the acceptance probability?

If ΔC <= 0: P = 1

If ΔC > 0: P = exp(-ΔC / T)
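As a sketch, the acceptance rule written as a small Python function (the function name is an assumption):

```python
import math

def acceptance_probability(delta_c, t):
    """SA acceptance probability for a cost change delta_c at temperature t."""
    if delta_c <= 0:
        return 1.0                   # improving moves are always accepted
    return math.exp(-delta_c / t)    # worse moves accepted probabilistically
```

Note that for a fixed worsening ΔC > 0, the probability grows with temperature.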

53

What is Greedy Best-First search?

A searching algorithm in which the next state is picked using the evaluation function f(n)=h(n). The fringe is a priority queue (heap or self-sorted list).

54
What does the min/max algorithm assume for the opposing player's strategy?

Assumes the opponent is rational and optimizes their behaviour & considers their best response/move

55

Is the min/max search complete? is it optimal?

If so, under what conditions

It is complete if the tree is finite,

it is optimal assuming that the opponent plays rationally & optimally

56

What do α and β parameters denote in alpha-beta pruning?

Why do these matter?

α: the best value for MAX seen so far. This is the lower bound on the value that a MAX node can be assigned.

β: the best value for MIN seen so far. This is the upper bound on the value that a MIN node can be assigned.

We use these values to prune cousin nodes and assign cutoff values

57

What is Beam Search

A greedy best-first search variant which tries to minimize the amount of memory which greedy best-first search uses by only storing β nodes in the fringe, and using a heuristic. The evaluation function is f(n) = h(n), and the algorithm will not add new nodes to the fringe if it is full.

58

Briefly describe IDS

Iteratively increases the depth limit of the search. The algorithm performs DLS for d=0; if no solution is found, it increases to d=1 and restarts the DLS; if still no solution is found, it increases again to d=2 and restarts. It keeps iteratively increasing the maximum depth and restarting the DLS until a solution is found
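A minimal sketch of DLS plus the iterative-deepening driver, over an assumed adjacency-dict tree:

```python
def dls(graph, node, goal, limit):
    """Depth-limited DFS: do not expand nodes below depth `limit`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None                        # depth limit reached: cut off
    for child in graph.get(node, []):
        result = dls(graph, child, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def ids(graph, start, goal, max_depth=20):
    for depth in range(max_depth + 1):     # d = 0, 1, 2, ... restart each time
        result = dls(graph, start, goal, depth)
        if result is not None:
            return result
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': ['F']}
```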

59

Is Beam search Optimal? is it Complete?

No and No, it purely depends on the heuristic that it uses

60

What are the 3 variations of Hill Climbing Search? Describe them

Stochastic Hill Climbing: random selection among the uphill moves

First-Choice Hill Climbing: keeps generating successors until a better one is found & takes it (good when there are many successors)

Random-Restart Hill Climbing: tries to avoid getting stuck in local maxima
61

Why do we limit the depth in expanding the game tree for min/max search?

Typically, the number of states (board configurations) is too large, leading to an unreasonable depth in the game tree. We optimize this by limiting the depth of the game tree in the traversal.

62


Describe the step-by-step process of using the min/max algorithm


Generate the game tree (alternating max & min)



Apply the utility function to all leaf nodes to give them their utility values



From bottom to top; use the leaf nodes' utility values to determine the utility values of their parents. Repeat this until you get to the root node.



Pick the most optimal move from the root node
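The four steps can be sketched on a small assumed example tree (root is MAX, its children MIN, and the leaves already carry utility values):

```python
# Steps 1-2: game tree as nested lists, with utility values at the leaves.
leaf_utilities = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]

def node_value(node, maximizing):
    """Step 3: back the leaf utilities up from the bottom to the top."""
    if isinstance(node, (int, float)):
        return node
    values = [node_value(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Step 4: the root (MAX) picks the move with the best backed-up value.
move_values = [node_value(child, maximizing=False) for child in leaf_utilities]
best_move = max(range(len(move_values)), key=move_values.__getitem__)
```

Each MIN child backs up its minimum leaf (3, 2, 2), so the root picks the first move with value 3.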

63

Describe Hill-Climbing Search

Among all successors, it only moves to a state whose heuristic value is strictly lower (not even equal) than the current state's, and lower than all the other neighbouring states'. It picks this state and ignores all other options.

Hill-Climbing search strictly looks just 1 step ahead to determine if a successor is better than the current state, and moves to the best successor. Very memory efficient.

It is like climbing a mountain with a thick fog around you
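A minimal sketch of this one-step-lookahead loop, minimizing an assumed heuristic over integer states:

```python
def hill_climbing(h, start, neighbours):
    """Move to the best successor while it is strictly better; else stop."""
    current = start
    while True:
        best = min(neighbours(current), key=h)   # look only one step ahead
        if h(best) >= h(current):
            return current          # no strictly better successor: stop
        current = best              # move, ignoring all other options

h = lambda x: abs(x - 7)            # distance-to-goal style heuristic
neighbours = lambda x: [x - 1, x + 1]
result = hill_climbing(h, 0, neighbours)
```

Starting at 0, the search walks one step at a time until it reaches 7, where no neighbour is strictly better.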

64

In SA, what happens when a neighbouring state has higher cost?

A random number is generated between 0 and 1. If P is greater than this random number, it is accepted. In this case P = exp(-1*delta C / T)

Note that in this case, the probability of accepting depends on the magnitude of change in cost, and the temperature

65

What are game playing problems?

Problems which have well-defined rules with moves that can be searched intelligently

66

What is adaptation

The ability to improve behaviour, evolve, adjust to new or different situations

67

What condition does UCS need to work?

No zero-cost edges or negative-cost edges

68

Describe the Process of goal formation

We decide which aspects we are interested in, and which aspects can be ignored.

We need goals to limit search and allow termination.

69
What are the 2 types of optimization Algorithms with respect to solving a non-discrete problem

Exact Algorithms and Approximate Algorithms

70

What is Artificial Intelligence

The Science and Engineering of making intelligent machines, especially intelligent computer programs. Examples: Visual perception, speech recognition, decision making, language translation

71

In the minimax algorithm, how do min and max nodes behave in terms of inheriting their children's values?

MAX: label node will take the maximum value of its children

MIN: label node will take the minimum value of its children

72

What 5 aspects does Problem formulation consist of?

Note - Problem formulation is the compact representation of the problem space. Search space is represented by states

```
A State Space
Initial State
Goal State, and Goal Test
Set of Actions
Concept of Cost
```
73

under what condition is IDS complete?

What is a drawback of IDS?

IDS is complete if "b" (branching factor) is finite. The drawback is that nodes on levels above "d" are generated multiple times
74

Briefly describe DFS

Expand nodes in pre-order fashion (root -> left -> right), use a LIFO data structure for the fringe (stack)

75

Under what condition is UCS the same as BFS

when the costs of all edges in the search space are equal. g(n) is constant

76
What are Greedy/best-first search algorithms
They use either g(n) (the cost from the start node), h(n) (the heuristic estimate of the distance to the goal), or both to evaluate the next move in their evaluation function: f(n) = g(n) + h(n), or f(n) = h(n), or f(n) = g(n). Ex: UCS, Hill-Climbing, A*, Best-First
77

In SA, what happens when a neighbouring state has lower cost?

It is always accepted

78

What are Heuristics?

Heuristics are solution strategies or rules, developed by trial and error, that produce acceptable (optimal or suboptimal) solutions to complex problems in a reasonable amount of time.

79

What is Alpha-Beta pruning?

A way to reduce the number of nodes that need to be generated and evaluated by pruning.

Avoid processing subtrees that have no effect on the result

80

What is Intelligence

The ability to acquire and apply knowledge and skills

81

Do Meta-heuristic algorithms find exact solutions or approximates?

Are they deterministic?

Meta-Heuristics find approximate solutions, and are non-deterministic

The goal with Meta-Heuristics is to efficiently explore the search space in order to find near-optimal solutions

82

Describe the difference between static and dynamic environments

If the environment can change while the agent is deliberating, then it is dynamic. Otherwise, it is static

83

Why is the "A Search" Algorithm incomplete?

If h(n) is infinite, then the algorithm will be stuck in a dead end, and will not be able to find a goal node, making it an incomplete algorithm

84

Describe the difference between Discrete and Continuous environments

State is describable in a discrete way, or in a continuous way

85

What are the 2 classes of local search strategies? What is the difference between them

Uninformed search strategies have no heuristics, and no information about the likely direction of the goal node.

Informed search strategies have a sense of direction, and use heuristics to guide them on their path.

86

What defines a well-defined state-space formulation. Go into detail for each of the 5 aspects.

State Space: Either a complete configuration of a problem, or a partial configuration of a problem.
Initial State: The search starts here

Goal State, and Goal Test: the Search terminates here.

Set of Actions: This allows movement between states (successor function)

Concept of Cost: Allows to create a costing solution

87

In game theory, what is a utility function?

Assigns a player/state a number denoting the outcome of the game and its desirability

88

Compare Hill Climbing Search to Greedy Best-First Search

Hill Climbing Search does not allow for backtracking, whereas Greedy Best-First Search does, and it jumps to alternative paths & stores the children of previously visited nodes.

89
What is a good rule of thumb for choosing a tabu tenure in the non-cooperative version of tabu search
sqrt(n), where n is the number of states
90

What does Thinking rationally mean?

Using logic to achieve goals via logical inferencing

91

What does the min/max value of a node in the min/max algorithm denote with respect to a game theory?

The utility of being in the corresponding state, assuming that both players play optimally from that state till the end of the game

92

Formal definition of a meta-heuristic

An iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space, learning strategies used to structure information in order to efficiently find near-optimal solutions

93

In SA, describe the relationship between temperature, and probability of accepting a state. What happens to the algorithm at high temperatures vs low temperatures?

Probability of accepting a worse state increases with temperature.

At high temperatures, the algorithm explores the parameter space; at lower temperatures, exploration is restricted.

94

Under what condition is A/A* Search admissible? What happens when we perform such a search without an admissible heuristic?

If h(n) <= h*(n), then it is admissible. A* is always admissible.

Using an admissible heuristic guarantees that the first solution found will be an optimal one

95

What is the analogy for simulated annealing

Physical annealing: if a material cools and anneals too quickly, it solidifies into a sub-optimal configuration; if it cools slowly, the crystals within the material solidify optimally.

96

Describe Deterministic vs Stochastic Environments

If the next state of the environment is completely determined by the current state, and the action is executed by the agent, then it is deterministic. Otherwise, it is Stochastic

97

What are the 2 types of meta-heuristics?

Trajectory methods, and Population-based methods

98

List the 5 types of uninformed search algorithms

Depth-First Search
Breadth-First Search
Uniform-Cost Search
Depth-Limiting Search
Iterative-Deepening Search

99

How are min and max nodes allocated in the min/max algorithm?

Root node is always MAX (maximizes) player's utility value, and nodes alternate between MIN and MAX layer by layer

100

What type of systems are we interested in for engineering

Rational Systems

101

Compare the performance of Depth-First and Breadth-first searching strategies

Depth-First:
- Low memory requirements
- Can get stuck exploring infinite paths
- Better if there are many solutions and you only need one of them

Breadth-First:
- Uses high amount of memory
- Doesn't get stuck
- Finds the shortest path always

102

If you have A1* with h1(n), A2* with h2(n) > h1(n), then how can you compare A2* with A1* in terms of the performance of the algorithm?

Note that h2(n) will be closer to h*(n)

A2* will expand less nodes, and hence take less time to find the solution than A1*. The closer a heuristic is to h*(n), the better its performance.

The fewer nodes expanded, the better the heuristic

103

Give a template for a generic searching algorithm for a graph/tree

Use a data structure to store nodes in a fringe. This will depend on the algorithm/strategy.

Repetitively Choose a node from the list, test it, then expand its children.

A particular search strategy will influence how we choose the next node to consider in the search (depending on what algorithm we're using)

104

Describe Episodic vs Sequential Environments

In Episodic environments, the choice of action in each state depends only on the state itself.

In Sequential environments, the current decision could affect all future decisions - the agent needs to think ahead. Ex: chess

105

In Simulated Annealing, what does the acceptance probability depend on?

1. The current Temperature



2. The cost difference between the current and next state. This is next cost minus current cost

106
Define Tabu tenure. What impact does it have on search performance? What is the rule of thumb for choosing a value for this control variable?
Tabu Tenure is the length of time (number of iterations) for which a move is forbidden. If tabu tenure is too small, it poses a risk of cycling in the algorithm. If tabu tenure is too large, it may restrict the search too much. Rule of thumb: Tabu Tenure = sqrt(n)
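A tabu list with a fixed tenure can be sketched with a bounded deque; the state space, cost function, and neighbourhood here are assumptions for illustration:

```python
from collections import deque
import math

n_states = 49
tenure = int(math.sqrt(n_states))        # rule of thumb: tenure = sqrt(n)
tabu_list = deque(maxlen=tenure)         # oldest tabu move expires automatically

def tabu_step(current, neighbours, cost):
    """One move: best non-tabu neighbour, even if it is non-improving."""
    candidates = [s for s in neighbours(current) if s not in tabu_list]
    nxt = min(candidates, key=cost)
    tabu_list.append(current)            # forbid revisiting for `tenure` steps
    return nxt

cost = lambda x: abs(x - 3)
neighbours = lambda x: [x - 1, x + 1]
state = 0
for _ in range(3):
    state = tabu_step(state, neighbours, cost)
```

Because previously visited states are tabu, the walk cannot cycle back toward 0 and reaches 3 in three steps.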
107
Why do we need Aspiration Criteria? Under what condition is the aspiration criteria typically met?
Tabus may sometimes be too powerful: they may prohibit attractive moves, even when there is no danger of cycling, or they may lead to an overall stagnation of the searching process. It may, therefore, become necessary to revoke tabus at times. Typically, if a move results in a solution with an objective value better than that of the current best-known solution, then it meets the aspiration criteria
108

In Simulated Annealing, does temperature matter if delta C is negative?

No, the temperature will not have an effect on the acceptance of the neighbouring state if delta C is negative. It will always be accepted in that case

109

What 4 factors influence the annealing schedule in simulated annealing?

The initial temperature



The final temperature



The temperature decrement rule



The number of iterations at each temperature

110

In Simulated Annealing, what determines a good initial temperature? What can be used as a guide to set the initial temperature?

It must be high enough to allow a move to possibly any state in the search space. Must not be too hot since SA would behave like a random number generator in that case. A good initial temperature is one that accepts 60% of worse moves






The Maximum change in the cost function could be used as a guide to set the value of initial temperature

111

In SA, does the final temperature have to reach zero?

No. It might take a very long time to reach zero when some decrement approaches are used

112

When do you stop the Simulated Annealing algorithm?

- Whenever the temperature is reasonably low

- The system is frozen, and there are no better moves generated, and worse moves are not getting accepted

- Number of iterations reaches a threshold (maybe)

113

What are the 3 temperature decrement rules?

Linear: T = T-a

Geometric: T=T*a

Slow Decrease:
T = T/(1+bT)

a and b are constants
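The three rules written as small Python functions; the constants a and b are assumed sample values, not prescribed ones:

```python
def linear(t, a=0.5):
    """Linear rule: T = T - a."""
    return t - a

def geometric(t, a=0.9):
    """Geometric rule: T = T * a."""
    return t * a

def slow_decrease(t, b=0.1):
    """Slow decrease rule: T = T / (1 + b*T)."""
    return t / (1 + b * t)
```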

114

In SA, how many iterations for a given temperature?

Enough iterations should be allowed at every temperature for the system to be stable at that temperature.

115

When does the SA algorithm converge? What other concept can it be likened to?

SA converges when temperature is reduced to zero slowly enough. It can be likened to a Markov chain but non-homogeneous

116

What are the 2 pros and 2 cons of Simulated Annealing?

Pros:
- Easy to use
- Provides good solutions for a wide range of problems

Cons:
- Takes a long time to run
- There are a lot of tuneable parameters

117

What is the heuristic function in Simulated annealing?

The Cost function / fitness function (which is what the algorithm is trying to minimize)

118

Suppose SA is running with T=0. Which algorithm would this be identical to?

Hill Climbing search

119

In Simulated Annealing, what are drawbacks for when temperature is reduced too fast or too slow?

If temperature is reduced too fast, then optima are not found

If temperature is reduced too slow, then time is wasted

120
How can SA be made adaptive?
An algorithm is adaptive if its parameters change as it traverses as a result of feedback from the environment. SA can be made adaptive by making the temperature change depending on the feedback from the environment. These parameters are the initial temperature, the cooling schedule, and the number of iterations per temperature.
121

What type of cost functions should be avoided in Simulated Annealing?

Cost functions which return the same value for many different states

122

How can constraints be added to Simulated Annealing?

How can such constraints be used to make SA more adaptive?

They can be represented in the cost function as "penalty terms"

SA can be made more adaptive by having dynamically changing weights of the penalty terms

123

What is the major difference between normal SA and Cooperative SA

Cooperative SA implements synchronous concurrent runs of numerous SA processes.

Cooperative SA manipulates multiple solutions (a population of solutions) at once.

Instead of singular states, Cooperative SA uses a population of states as its nodes.

124

In Cooperative SA, how are new solutions produced using neighbours from numerous solutions within the population?

New solutions are iteratively produced from 2 randomly selected states in the population S1 and S2:

The algorithm looks for neighbours of S1 that are closer to S2 than S1 itself.

The CLOSER set is generated, and the next solution is randomly selected from it

Note that sometimes 2 solutions are used, and sometimes more; this depends on the algorithm. They can even pick the best out of all the solutions
125

In Cooperative SA, what is the CLOSER set defined with?

What significance does the CLOSER set have to the algorithm?

The CLOSER set is all the members S_k of S1's neighbourhood such that the distance between S_k and S2 is less than the distance between S1 and S2.

S1 and S2 are members of the current population in Cooperative SA

If the CLOSER set is not empty, then the next solution is randomly selected from it

126

How is temperature updated in Cooperative SA?

Temperature is updated based on the difference of mean cost value of the new and old populations

If the difference in mean cost between the current and previous populations is less than zero (the population improved), the temperature is not updated. Otherwise, it is changed
127

Describe Cooperative SA algorithm step-by-step

Initialize the population POP_0 and set the initial temperature

For a predetermined number of transitions K:

  For i = 1 to POP:

      Select a random cooperator S_j from the population POP_k-1

      Generate S_new = cotrans(S_i, S_j)

      Accept the new solution if it is better

      Otherwise, probabilistically decide whether to move to the new solution, and update the solution accordingly

  Update the temperature
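The steps above can be sketched as follows. The geometric cooling factor `alpha` is an assumed reduction function, and `cotrans` is passed in as the combination operator; worsening moves are accepted with the usual probability exp(-ΔC/T).

```python
import math
import random

def cooperative_sa(init_pop, cost, cotrans, t0, alpha=0.95, transitions=100):
    """Cooperative SA sketch: each member is combined with a random cooperator;
    improving moves are accepted, worsening ones with probability exp(-dC/T)."""
    pop, temp = list(init_pop), t0
    for _ in range(transitions):
        for i in range(len(pop)):
            j = random.randrange(len(pop))        # random cooperator S_j
            candidate = cotrans(pop[i], pop[j])
            d_cost = cost(candidate) - cost(pop[i])
            if d_cost < 0 or random.random() < math.exp(-d_cost / temp):
                pop[i] = candidate
        temp *= alpha                             # temperature reduction (assumed geometric)
    return min(pop, key=cost)
```

With cost(x) = x^2 and a cotrans that averages the two states plus a little noise, the population drifts toward 0.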

 

128

What is Tabu Search?

Tabu Search is a meta-heuristic, a general strategy for guiding and controlling inner heuristics.

It combines a local search strategy with memory structures

129

What does Tabu Search use memory structures for?

- To escape local minima
- To implement an explorative strategy & avoid revisiting nodes

130

In Trajectory Methods such as Tabu Search, why can't algorithms consider all the possible neighbours of the current solution?

It is very computationally demanding to process all possible neighbours.

131

Why does Tabu search accept non-improving solutions

In order to escape from local optima where all the neighbouring solutions are non-improving (dead end)

132

How does Tabu Search exploit memory?

Classifies a subset of moves in the neighbourhood which are not permitted (tabu)

133

What are the 2 uses (types) of memory structures in Tabu Search?

- Short term memory which is based on recency of occurrence of a component. This is used to prevent the search from revisiting previously visited solutions

- Long term memory which is based on frequency of occurrence. This is used to diversify the search and explore unvisited areas of the solution space by avoiding frequently visited components

134

What is the name of the short-term memory structure used in Tabu Search.

What is the purpose of this memory structure

The Tabu List. Recent moves which were used on current solutions are added to the Tabu list. It is used to prevent reverse moves, and promote exploration.
135

In Tabu Search, what is "tabu tenure", and where is it stored?

Tabu Tenure is the number of iterations in which a certain move and its attributes are stored in the tabu list.

Recent states/components are "tabu-active" during their tabu-tenure
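A minimal sketch of a tabu list with tenure (the move representation is hypothetical; any hashable move attribute works):

```python
class TabuList:
    """Short-term memory: a move stays tabu-active for `tenure` iterations."""

    def __init__(self, tenure):
        self.tenure = tenure
        self.expiry = {}  # move attribute -> iteration at which it stops being tabu

    def add(self, move, iteration):
        self.expiry[move] = iteration + self.tenure

    def is_tabu(self, move, iteration):
        return self.expiry.get(move, -1) > iteration
```

A move added at iteration 0 with tenure 3 is tabu-active at iterations 1 and 2 and free again at iteration 3.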

136

In Trajectory methods, what is the purpose of the neighbourhood?

To identify adjacent solutions/states that can be reached from the current solution/states

137

What are stopping conditions in Tabu Search?

- No feasible solution in the neighbourhood of the current solution
- Reached the maximum number of iterations allowed
- The number of iterations since the last improvement is larger than a threshold

138

What is the idea behind the aspiration criteria in Tabu Search?

Sometimes it is useful to allow certain moves despite being Tabu - which prevents stagnation.

139

In Tabu Search, what are the approaches used to cancel Tabus referred to as?

Aspiration Criteria

140

What is the Basic algorithm in Tabu Search?

Firstly, generate an initial solution.

While the termination criteria are not met {
- Choose the best neighbour "s_next" from N(s) = N(s) - T(s) + A(s)
- Store s_next in A(s) if it improves the best known solution
- Update T(s) and A(s)
}
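A runnable sketch of this loop on a toy problem. Here `move_of(a, b)` (a hypothetical helper) extracts the attribute of the move from a to b that gets stored in the tabu list T(s), and the improved-best aspiration stands in for A(s):

```python
def tabu_search(s0, cost, neighbours, move_of, tenure=5, max_iters=100):
    """Basic Tabu Search: pick the best neighbour that is not tabu-active,
    unless it satisfies the improved-best aspiration criterion."""
    best = current = s0
    tabu = {}  # move attribute -> iteration at which it stops being tabu
    for it in range(max_iters):
        candidates = [
            n for n in neighbours(current)
            if tabu.get(move_of(current, n), -1) <= it  # not tabu-active
            or cost(n) < cost(best)                     # aspiration: improved-best
        ]
        if not candidates:
            break                                       # stopping condition
        prev, current = current, min(candidates, key=cost)
        tabu[move_of(prev, current)] = it + tenure      # record the move as tabu
        if cost(current) < cost(best):
            best = current
    return best
```

Minimizing (x - 7)^2 over the integers with neighbours x±1 and the step direction as the tabu attribute, the search walks from 0 up to the optimum at 7 without reversing direction.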

141

In Tabu Search, are the Tabu solutions included in the neighbourhood function?

Under what 2 conditions would they be brought back?

Tabu Solutions are excluded from N(s) and not revisited during the tabu tenure.

They are added back if:
1. The aspiration conditions are met
2. The tabu tenure is finished (number of iterations in tabu tenure is completed)

142

In Tabu Search, what is the common Aspiration Criteria?

Improved-best or best solution: If a tabu solution encountered at the current iteration is better than the best solution found so far, then it can be added to the aspiration list

143

In Tabu Search, what are 2 examples of Tabu Restrictions

- A move that involves the same exchange of positions of a tabu move

- A move that involves any of the positions that were involved in a tabu move

144

In Tabu Search, what is static vs dynamic Tabu Tenure, T

Static: choose T to be a constant, or set it based on the problem size

Dynamic: choose T to vary randomly between some T_min and T_max
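A one-line sketch of the dynamic variant (the bounds T_min = 5 and T_max = 10 are invented):

```python
import random

T_MIN, T_MAX = 5, 10  # assumed tenure bounds

def dynamic_tenure():
    """Dynamic tabu tenure: redrawn uniformly in [T_min, T_max] for each new move."""
    return random.randint(T_MIN, T_MAX)
```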

145
In Tabu Search, does the tabu list always prevent cycling? What must happen to the tabu list to improve it?
A fixed-length tabu list cannot always prevent cycling. To improve it, the length of the tabu list must be allowed to vary dynamically
146

Under what condition does a Tabu move become admissible?

A tabu move becomes admissible if it yields a solution that is better than any obtained solution so far (or better than the aspiration value)

147

In Tabu Search, what is Intensification?

The process of exploiting a small portion of the search space, and penalizing solutions that are far from the current solution

148

In Tabu Search, what is Diversification?

Forcing the search into previously unexplored areas of the search space and penalizing solutions that are close to the current solution

149

What type of memory is diversification based on?

Long-term memory, i.e. frequency-based memory

150

In Tabu Search, what are the two major approaches for applying the diversification process? Describe them

Restart diversification: Finds components that rarely appear in the solution & restarts the search from these points

Continuous Diversification: Bias the evaluation of possible moves by adding a term which depends on the neighbour's frequency to the evaluation function
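A sketch of the continuous-diversification bias; the frequency table and the weight are assumptions, and a solution is treated as an iterable of components:

```python
def biased_cost(solution, cost, frequency, weight=1.0):
    """Continuous diversification: add a frequency-dependent penalty term so that
    solutions built from frequently visited components evaluate worse."""
    return cost(solution) + weight * sum(frequency.get(c, 0) for c in solution)
```

For example, with the component count as the base cost, a frequency table {'a': 3}, and weight 2, the solution ('a', 'b') evaluates to 2 + 2*3 = 8.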

151

What are the pros and cons of Tabu Search?

Pros:
- Accepts non-improving solutions in order to escape from local optima
- Uses a Tabu List
- Can be applied to both discrete and continuous problem spaces
- For larger and more difficult problems (ex: scheduling), tabu search is very efficient

Cons:
- Too many parameters to be determined
- Number of iterations is large
- Global optimum might not be found, and it depends on the parameter settings

152

What is the Linear Assignment problem?

How many distinct combinations exist in the solution space of the assignment problem?

Involves the assignment of n objects to n sites. Each assignment of object "i" to site "j" has an associated cost C(i,j)

There are n! distinct combinations in the solution space

153

What are some applications of the linear assignment problem?

Assignment of offices or services in buildings

Assignment of departure gates at an airport

Distribution of files in a database

154
Describe the Tabu Search Flowchart in detail
155
In Tabu Search, what is the consequence of the length of the tabu list being too small? or too large?
If the length is too small, the search can be trapped in cycles. If the length is too large, too many moves may be prevented at each iteration
156
What is a common approach for adding adaptation into Tabu Search?
Allow the length of the short term memory (tabu list) to vary dynamically
157
In adaptive Tabu Search, what are different ways that the tabu list size can be changed adaptively?
Restrict the length of the tabu list to lie between L_min and L_max. If the current solution is better than the best-so-far, the tabu list length is set to 1. If the move is improving, the length is decreased by 1. If the move is not improving, the length is increased by 1
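The three rules above can be sketched as a single update function (the bounds L_min = 5 and L_max = 20 are invented defaults):

```python
def adapt_list_length(length, improved_best, improving, l_min=5, l_max=20):
    """Adaptive tabu list length: reset to 1 on a new best-so-far,
    shrink by 1 on improving moves, grow by 1 on non-improving moves."""
    if improved_best:
        return 1
    if improving:
        return max(l_min, length - 1)
    return min(l_max, length + 1)
```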
158
What is the aim behind adaptation methods in Tabu Search with respect to the tabu list size?
The idea is to increase and decrease the tabu list length dynamically in order to draw the solution away from bad regions and towards good regions
159
Describe how Cooperation can be achieved in Tabu Search?
Set up P independent Tabu Searches working concurrently, and exchanging information as they go
160
In Cooperative Tabu Search, how do the concurrent tabu searches communicate? How does it assist the algorithm?
A central memory handles the information exchange. Each search process sends its best-so-far solution to the central memory whenever it is updated. The central memory can also have its own limits on size and tenure. The concurrent processes can use the best solution in the central memory as an aspiration condition
161
In SA, What effect do cooling schedules that drop the temperature very quickly or very slowly have on the algorithm?
Too quickly: the algorithm may settle on a suboptimal solution. Too slowly: the algorithm searches for much longer and may waste time at a given temperature
162
In Cooperative Tabu Search, how does increasing the number of iterations between synchronization steps affect the algorithm performance?
It reduces the message-passing overhead, since the processes exchange information less often, at the cost of less frequent sharing of best-so-far solutions
163
When is BFS preferable over DFS?
- Abundant memory space to store the search tree is available
- A shallow solution is preferred
164
Can Trajectory Optimization algorithms find the exact optimal point to a solution?
No. They find accurate approximations
165
When is DFS preferable over BFS?
When we have a tree with a fixed depth, the solutions lie at the leaves, and we want to find ANY solution, not caring about the minimum depth
166
In what case is it better to use an approximate algorithm rather than an exact algorithm?
Whenever the cost of finding an exact solution is extremely high.
167
What are the elements that should be specified in a problem formulation?
- State space
- Initial state
- Goal state
- Set of actions
- Cost
168
Are the following uninformed search strategies complete?
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
1. BFS: Yes, if the tree is finite
2. UCS: Yes, if the tree is finite
3. DFS: Yes, if the tree is finite
4. DLS: Yes, if l >= d
5. IDS: Yes, if the tree is finite
169
Are the following uninformed search strategies optimal?
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
1. BFS: Yes, if costs are all constant
2. UCS: Yes
3. DFS: No
4. DLS: No
5. IDS: Yes, if costs are all constant
170
What is the time complexity of the following uninformed search strategies?
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
Assume branching factor "b", depth "d", maximum depth "m", and depth limit "l" (for DLS)
1. BFS: b^d
2. UCS: b^d if costs are the same
3. DFS: b^m
4. DLS: b^l
5. IDS: b^d
171
What is the space complexity of the following uninformed search strategies?
1. Breadth-first search
2. Uniform-cost search
3. Depth-first search
4. Depth-limited search
5. Iterative deepening search
Assume branching factor "b", depth "d", maximum depth "m", and depth limit "l" (for DLS)
Note that "space complexity" is the scale at which memory usage changes in the search strategy
1. BFS: b^d
2. UCS: b^d if costs are the same
3. DFS: b*m
4. DLS: b*l
5. IDS: b*d
172
Are the following informed search strategies complete?
1. Greedy best-first search
2. Beam search
3. Hill climbing search
1. GBFS: No; it can get stuck in loops
2. BS: No
3. HCS: No
173
Are the following informed search strategies optimal?
1. Greedy best-first search
2. Beam search
3. Hill climbing search
1. GBFS: No
2. BS: No
3. HCS: No
Optimality will purely depend on the quality of the heuristic
174
What is the time complexity of the following informed search strategies?
1. Greedy best-first search
2. Beam search
3. Hill climbing search
Assume branching factor "b", depth "d", maximum depth "m", beam width "ß"
1. GBFS: b^m, but a good heuristic can give a dramatic improvement
2. BS: ß*m
3. HCS: d
175
What is the space complexity of the following informed search strategies?
1. Greedy best-first search
2. Beam search
3. Hill climbing search
Assume branching factor "b", maximum depth "m", beam width "ß"
Note that "space complexity" is the scale at which memory usage changes in the search strategy
1. GBFS: b^m; keeps all nodes in memory
2. BS: ß*m
3. HCS: 1; there is no memory structure involved