Midterm grind Flashcards

1
Q

<p>In Game Playing Algorithms, what is the evaluation function f(n) (or utility value), where n is a board configuration measure?</p>

A

<p>The "goodness" of the board configuration n. It is the number of possible winning lines for the player that are not blocked by the opponent, minus the number of possible winning lines for the opponent that are not blocked by the player:<br></br>
<br></br>f(n) = m(n) - o(n)<br></br>
<br></br>"m(n)" counts the player's unblocked winning lines, "o(n)" counts the opponent's unblocked winning lines</p>
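As a rough illustration (not from the cards), f(n) = m(n) - o(n) can be sketched in Python for tic-tac-toe; the `LINES` table and the flat-list board encoding are assumptions:

```python
# Sketch of f(n) = m(n) - o(n) for tic-tac-toe, evaluated for player 'X'.
# The board is a flat list of 9 cells holding 'X', 'O', or None (assumed encoding).

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def open_lines(board, player):
    """Count possible winning lines for `player`: lines not blocked by the other player."""
    other = 'O' if player == 'X' else 'X'
    return sum(1 for line in LINES if all(board[i] != other for i in line))

def f(board):
    """f(n) = m(n) - o(n), from X's perspective."""
    return open_lines(board, 'X') - open_lines(board, 'O')
```

For the empty board f(n) = 8 - 8 = 0; with only X in the center, f(n) = 8 - 4 = 4.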

2
Q

What is the open list at a particular state in a search algorithm?

A

The set of nodes in the ready queue AFTER the node has been visited and expanded

3
Q

<p>Briefly describe DLS</p>

A

<p>Just like DFS but limits the search to a maximum depth parameter "l". Once the algorithm discovers nodes below depth "l", it does not expand them.</p>
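A minimal sketch of the idea, assuming a dict-of-lists graph (the `dls` helper and its path-returning behaviour are illustrative, not from the cards):

```python
def dls(graph, node, goal, limit):
    """Depth-limited DFS: return a path to `goal` within depth `limit`, else None.

    Nodes at depth `limit` are goal-tested but not expanded further."""
    if node == goal:
        return [node]
    if limit == 0:                      # depth limit "l" reached: do not expand
        return None
    for child in graph.get(node, []):
        path = dls(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None
```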

4
Q

<p>What is the process of "searching"</p>

A

<p>Moving from state-to-state in the problem space to find a goal (or to terminate)</p>

5
Q

<p>Describe the Simulated Annealing Algorithm Step-by-step</p>

A

<p>Set current solution S0, initial temperature T0, and select a temperature reduction function</p>

<p>Repeat:</p>

<p> Repeat:</p>

<p>  Select a solution Si from the neighborhood N(S)</p>

<p>  Calculate the change in cost ΔC for the new solution</p>

<p>  If ΔC < 0, then accept this solution (since it is improving): S = Si</p>

<p>  Else, generate a random number x in the range (0, 1)</p>

<p>   If x < e^(-ΔC/T), then accept this new solution: S = Si</p>

<p> Until the inner (equilibrium) termination criterion is met</p>

<p> Decrease temperature T using the reduction function</p>

<p>Until the stopping criterion is met</p>
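The steps above can be sketched as code (a minimal illustration; the geometric cooling schedule, the loop bounds, and the toy cost function are assumptions):

```python
import math
import random

def simulated_annealing(s0, cost, neighbor, t0=1.0, alpha=0.95,
                        inner_iters=20, t_min=1e-3, rng=random.Random(0)):
    """Minimize `cost` starting from solution s0, following the card's steps."""
    s, t = s0, t0
    while t > t_min:                    # outer loop: stop when T is tiny
        for _ in range(inner_iters):    # inner loop: sample the neighborhood
            candidate = neighbor(s, rng)
            delta = cost(candidate) - cost(s)          # change in cost, ΔC
            if delta < 0:                              # improving: always accept
                s = candidate
            elif rng.random() < math.exp(-delta / t):  # worse: accept with e^(-ΔC/T)
                s = candidate
        t *= alpha                      # temperature reduction function
    return s

# Toy usage: minimize f(x) = x^2 over the integers, stepping +/-1.
best = simulated_annealing(s0=10,
                           cost=lambda x: x * x,
                           neighbor=lambda x, rng: x + rng.choice([-1, 1]))
```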

6
Q

<p>Give a reason to use 2 heuristics simultaneously in a searching algorithm</p>

A

<p>Use 1 heuristic as primary, and 1 as secondary to break ties</p>

7
Q

<p>What does SA do to escape from local optimums?</p>

A

<p>Accepts non-improving solutions depending on what the temperature is</p>

8
Q

<p>What is the motivation behind alpha-beta pruning?</p>

A

<p>In the game tree, some branches are never played since rational players don't pick sub-optimal moves. Therefore, there is no need to generate the subtree for such moves.</p>

9
Q

<p>What is the difference between trajectory methods and population-based methods?</p>

A

<p>Trajectory methods handle one solution at a time, whereas population-based methods handle multiple solutions at a time</p>

10
Q

<p>Under what conditions does UCS become Dijkstra's single-source shortest path algorithm?</p>

A

<p>When g(n) is the cumulative path cost from the source node, and the search is run until the cheapest path from the source to every node is found, UCS is equivalent to Dijkstra's single-source shortest path algorithm</p>

11
Q

<p>What are Meta-Heuristic algorithms</p>

A

<p>Meta-Heuristics are algorithms that combine heuristics in a higher level framework aimed at efficiently and effectively exploring the search space.
<br></br>
<br></br>They are strategies that "guide" the search process</p>

12
Q

<p>Are Meta-Heuristics problem specific? are they at risk of getting trapped in confined areas of the search space?</p>

A

<p>Meta-Heuristics are not problem-specific
<br></br>
<br></br>They utilize mechanisms (depending on the method) to avoid getting trapped in dead ends</p>

13
Q

<p>Describe the difference between "A Search" and "A* Search"</p>

A

<p>"A*" is admissible, in which h(n) <= h*(n). Whereas "A" is not admissible, h(n) has no upper bound in the A algorithm.</p>

14
Q

Is DLS complete?
Does DLS always terminate?
Is DLS optimal?

A

<p>As long as the solution is within the depth limit "l", DLS is guaranteed to be complete, since the solution lies within the depth it is allowed to search.
<br></br>
<br></br>DLS always terminates, and it is not optimal</p>

15
Q

<p>How does the min/max algorithm relate to game playing</p>

A

<p>The game tree alternates between the 2 opponent's moves, and the min/max algorithm can be used to optimize for the best outcome in this game tree.</p>

16
Q

<p>What are search trees, and how are they abstracted from graphs.</p>

A

<p>Search trees are superimposed on top of the graph representation of a problem. The graph might be finite, but the search tree can be either finite or infinite: infinite if we allow repeated states due to reversible actions or cycles.</p>

17
Q

<p>What do Heuristics Aim to do?</p>

A

<p>Heuristics aim to efficiently generate good solutions, but do not guarantee optimality.

They guide the algorithm
</p>

18
Q

<p>What are the 2 general optimization methods in searching algorithms?</p>

A

<p>Constructive methods: start from scratch and build the solution one component at a time. Ex: Sudoku solver, N-Queens
<br></br><br></br>Local Search Methods: start from an initial solution and iteratively try to replace the current solution with a better one. Ex: TSP</p>

19
Q

<p>What does a heuristic function do?</p>

A

<p>Heuristic functions give an estimate of the distance to the goal based on domain-specific information. They guide the algorithm to a better solution by estimating the "goodness" of a node/state.</p>

20
Q

<p>Under what condition is A* complete?</p>

A

A* is complete if the branching factor is finite, and every operator (action) has a fixed positive cost

21
Q

<p>What is the difference between well-structured problems and ill-structured problems</p>

A

<p>Well Structured: The existing state and desired state are clearly identified, and the methods to reach the desired state are fairly obvious
<br></br>
<br></br>Ill Structured: The existing state and desired state are unclear, and hence the methods of reaching the desired state cannot be readily found.</p>

22
Q

<p>What is the purpose of the minimax or min/max algorithm</p>

A

<p>To determine the best next move and the cost of such a move</p>

23
Q

<p>What does "admissible" mean with respect to heuristics?</p>

A

<p>A heuristic is said to be "admissible" if it NEVER overestimates the distance to the goal node.
<br></br>
<br></br>It is also known as an "optimistic" heuristic</p>

24
Q

<p>List the 5 types of informed search algorithms</p>

A
Hill Climbing Search
Greedy Best-first Search
Beam Search
A Search
A* Search
25
Q

<p>Under what condition does A* act like UCS?</p>

A

<p>When h(n) = 0 for all n. This is the null heuristic.</p>

26
Q

<p>What is a rational agent</p>

A

<p>An agent that, for each perceived sequence of events, does what is expected to maximize performance on the basis of perceptual history and built-in knowledge</p>

27
Q

<p>Describe Fully observable vs Partially observable environments</p>

A

<p>Fully Observable environment:<br></br>sensors detect all aspects that are relevant to the action
<br></br><br></br>Partially Observable environment:<br></br>sensors are noisy and inaccurate, or parts of the state are missing from the sensor data</p>

28
Q

<p>What does Behaving rationally mean?</p>

A

<p>Perceiving the “world” and acting to achieve
<br></br>some goals given a set of beliefs.</p>

29
Q

<p>Agents work within an environment with certain characteristics. What are these 6 characteristics/dimensions?</p>

A
Fully Observable vs Partially Observable
<br>Deterministic vs Stochastic
<br>Episodic vs Sequential
<br>Static vs Dynamic
<br>Discrete vs Continuous
<br>Competitive vs Co-operative
30
Q

<p>Briefly describe UCS</p>

A

<p>Expand the node with the lowest cost in the fringe, use a heap/sorted list, always take the min-cost node. If there are numerous nodes with the same cost, take the most recently added one.
<br></br>
<br></br>Always finds the minimum-cost path to the goal node.</p>
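The description above can be sketched with a heap as the fringe (assumed `{node: [(neighbor, cost), ...]}` graph representation; the path-tracking detail is illustrative):

```python
import heapq

def ucs(graph, start, goal):
    """Return (cost, path) of the cheapest path from start to goal, or None."""
    fringe = [(0, [start])]                  # priority queue keyed on path cost
    visited = set()
    while fringe:
        cost, path = heapq.heappop(fringe)   # always take the min-cost entry
        node = path[-1]
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, w in graph.get(node, []):
            heapq.heappush(fringe, (cost + w, path + [nbr]))
    return None
```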

31
Q

<p>What is the branching factor in a search Tree?</p>

A

<p>The maximum number of children possible for a node. Denoted with "b"</p>

32
Q

<p>There are 2 types of Games:
<br></br>
<br></br>Perfect information & Imperfect information
<br></br>
<br></br>Describe and Contrast them</p>

A

<p>Perfect information:
<br></br>Each player has complete information about the opponent's position and available choices (ex: chess, monopoly)
<br></br>
<br></br>Imperfect information:
<br></br>Each player does NOT have complete information about the opponent's position and available choices (ex: poker)</p>

33
Q

In Simulated Annealing, what is the cost function

A

The quantity which the algorithm is trying to minimize as much as possible. It is a function of the current state

34
Q

<p>What is the purpose of using memory structure in trajectory methods?</p>

A

<p>To escape local minima
<br></br>
<br></br>To improve explorative strategy by avoiding visited nodes</p>

35
Q

<p>Describe the "Problem Formulation" of Game Playing as Search</p>

A

<p>State Description: the configuration of the board
<br></br>
<br></br>Initial State: initial board position & whose move it is
<br></br>
<br></br>Operators/Action State: the legal moves that a player can make
<br></br>
<br></br>Goal: is the game over?
<br></br>
<br></br>Utility Function: Measures an outcome of the game and its desirability</p>

36
Q

What are the 2 categories of engineering problems in terms of the structure of the problem

A

<p>Well-Structured Problems & Ill-Structured Problems</p>

37
Q

<p>Does SA find an approximation or exact value? What size search space does it work well in?</p>

A

<p>SA works very well in large search spaces, and provides a good approximation</p>

38
Q

<p>Briefly describe BFS</p>

A

<p>Traverse layer-by-layer, using a FIFO structure (queue) as the fringe.
<br></br>
<br></br>Always finds the shortest path (fewest edges), but can take a long time.</p>
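A minimal sketch with a queue as the fringe (assumed dict-of-lists graph; the path-tracking detail is illustrative):

```python
from collections import deque

def bfs(graph, start, goal):
    """Return the path with the fewest edges from start to goal, or None."""
    fringe = deque([[start]])          # FIFO queue: expand layer by layer
    visited = {start}
    while fringe:
        path = fringe.popleft()
        node = path[-1]
        if node == goal:
            return path
        for child in graph.get(node, []):
            if child not in visited:
                visited.add(child)
                fringe.append(path + [child])
    return None
```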

39
Q

<p>What is cooperation</p>

A

<p>A group working to solve a joint problem or perform a common task, based on sharing the responsibility for reaching a goal.
<br></br>
<br></br>OR
<br></br>
<br></br>Cooperation is the practice of software/hardware entities working together in order to achieve a certain objective</p>

40
Q

<p>Alpha-Beta Pruning is an optimization of the minimax searching algorithm.
<br></br>
<br></br>What are the tradeoffs? does it affect the completeness of minimax?</p>

A

<p>Alpha-Beta is guaranteed to return exactly the same value as the min/max algorithm.
<br></br>
<br></br>It is a pure optimization of min/max without any tradeoffs. It does not affect the completeness, but reduces the number of computations</p>

41
Q

<p>What are the 4 properties of search algorithms?</p>

A

Completeness:
Whether or not the algorithm is guaranteed to find a goal node (if it exists).

Optimality:
Is the algorithm guaranteed to find the BEST goal node - with the cheapest cost.

Time Complexity:
How many nodes are expanded?

Space Complexity:
What is the maximum number of nodes stored in memory?

42
Q

<p>Are meta-heuristics exhaustive?</p>

A

<p>Meta-Heuristics are non-exhaustive, and they use their own internal heuristic</p>

43
Q

<p>What are strong vs weak methods?</p>

A

<p>Strong methods are those designed to address a specific type of problem.
<br></br>
<br></br>Weak methods are general approaches that may be applied to many types of problems. Weak methods are extremely general and not tailored to a specific situation. They are called "weak" because they do not take advantage of more powerful domain-specific heuristics.</p>

44
Q

Suppose "h" is a heuristic function.
What can you say about the current state n if:
1. h(n) = 0
2. h(n) = infinity

A

<p>h(n)=0 means that the state is a goal node. h(n) = infinity means that the state "n" is a dead-end that can never lead to a goal</p>

45
Q

<p>Is Greedy Best-first search Complete? is it Optimal?</p>

A

<p>No and No, it purely depends on the heuristic that it uses</p>

46
Q

<p>What happens to the expanded nodes in A* search if h(n) is too high or too low?</p>

A

<p>h(n) too high: Optimal solution will be overlooked (non admissible heuristic)
<br></br>
<br></br>h(n) too low: More nodes than needed are expanded</p>

47
Q

<p>Compare Hill Climbing Search to Beam Search</p>

A

<p>Hill Climbing search is equivalent to Beam search with ß=1. Beam is like the middle ground between Hill Climbing and Greedy Best First Search</p>

48
Q

<p>At what point do you prune in minimax alpha beta pruning</p>

A

<p>If a parent node carries an α or β bound, and ANY child value crosses that cutoff (a MIN node's value reaches α or below, or a MAX node's value reaches β or above), then you prune that node's remaining children</p>

49
Q

<p>Under what condition does A* perform like a perfect algorithm, only expanding the nodes which lead to the solution?</p>

A

<p>When h(n) = h*(n) for all n. This means that h(n) = actual cost to goal node. In this case, the heuristic gives a perfect distance to the solution for all nodes</p>

50
Q

<p>Under what circumstances is DFS incomplete? is DFS optimal?</p>

A

<p>spaces with loops, or infinite-depth spaces.
<br></br>
<br></br>DFS is not optimal for finding the shortest path</p>

51
Q

<p>What is the strategy for solving ill-structured problems?</p>

A

<p>Retrieve a known solution/answer; start from a guess and improve it; search among alternatives by exploring surrounding states.</p>

52
Q

<p>in SA, what is the acceptance probability?</p>

A

<p>if delta C <= 0: P = 1
<br></br>
<br></br>if delta C > 0: P = exp(-1*delta C / T)</p>
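The two cases above as a tiny helper (a sketch; `delta_c` and `t` follow the card's ΔC and T):

```python
import math

def acceptance_probability(delta_c, t):
    """P(accept) for a candidate move in simulated annealing."""
    if delta_c <= 0:
        return 1.0                   # improving (or equal-cost) moves: always accept
    return math.exp(-delta_c / t)    # worse moves: P = e^(-ΔC/T)
```

Note that for the same ΔC, a higher temperature T pushes P closer to 1.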

53
Q

<p>What is Greedy Best-First search?</p>

A

<p>A searching algorithm in which the next state is picked using the evaluation function f(n)=h(n). The fringe is a priority queue (heap or self-sorted list).</p>

54
Q

What does the min/max algorithm assume for the opposing player’s strategy?

A

<p>Assumes the opponent is rational and optimizes their behaviour & considers their best response/move</p>

55
Q

<p>Is the min/max search complete? is it optimal?
<br></br>
<br></br>If so, under what conditions</p>

A

<p>It is complete if the tree is finite,
<br></br>
<br></br>it is optimal assuming that the opponent plays rationally & optimally</p>

56
Q

<p>What do α and β parameters denote in alpha-beta pruning?
<br></br>
<br></br>Why do these matter?</p>

A

<p>α: the best value for MAX seen so far. This is the lower bound on the value that a MAX node can be assigned.
<br></br>
<br></br>β: the best value for MIN seen so far. This is the upper bound on the value that a MIN node can be assigned.
<br></br>
<br></br>We use these values to prune cousin nodes and assign cutoff values</p>

57
Q

<p>What is Beam Search</p>

A

<p>A greedy best-first search variant which minimizes memory use by storing only ß nodes in the fringe. The evaluation function is f(n) = h(n), and the algorithm will not add new nodes to the fringe when it is full.</p>

58
Q

<p>Briefly describe IDS</p>

A

<p>Iteratively increases the limit of the search. Algorithm performs DLS for d=0, if no solution is found, it increases d=1 and restarts the DLS. If no solution is found it increases again to d=2 and restarts. It keeps on iteratively increasing the maximum depth and restarting the DLS until a solution is found</p>
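The restart loop above can be sketched on top of a depth-limited search (assumed dict-of-lists graph; `max_depth` is an arbitrary safety cap, not part of IDS proper):

```python
def dls(graph, node, goal, limit):
    """Depth-limited DFS returning a path, or None if not found within `limit`."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        path = dls(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def ids(graph, start, goal, max_depth=50):
    """Run DLS with limit d = 0, 1, 2, ..., restarting until the goal is found."""
    for limit in range(max_depth + 1):
        path = dls(graph, start, goal, limit)
        if path is not None:
            return path
    return None
```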

59
Q

<p>Is Beam search Optimal? is it Complete?</p>

A

<p>No and No, it purely depends on the heuristic that it uses</p>

60
Q

<p>What are the 3 variations of Hill Climbing Search? Describe them</p>

A

Stochastic Hill Climbing:
Random Selection among the uphill moves

First-Choice Hill Climbing:
Keeps generating successors until a better one is found & takes it (good when there are many successors)

Random-Restart Hill Climbing:
Restarts from randomly generated initial states to avoid getting stuck in local maxima

61
Q

<p>Why do we limit the depth in expanding the game tree for min/max search?</p>

A

<p>Typically, the number of states (board configurations) is too large, leading to an unreasonable depth in the game tree. We optimize this by limiting the depth of the game tree in the traversal.</p>

62
Q

<p>Describe the step-by-step process of using the min/max algorithm</p>

A

<p>Generate the game tree (alternating max & min)<br></br>
<br></br>Apply the utility function to all leaf nodes to give them their utility values<br></br>
<br></br>From bottom to top, use the leaf nodes' utility values to determine the utility values of their parents. Repeat this until you get to the root node.<br></br>
<br></br>Pick the most optimal move from the root node</p>
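The steps above can be sketched over an explicit game tree (an assumed toy representation: leaves are utility values, internal nodes are lists of children):

```python
def minimax(node, maximizing=True):
    """Return the minimax value of `node`; layers alternate MAX and MIN."""
    if isinstance(node, (int, float)):   # leaf: apply the utility function
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A 2-ply tree: a MAX root over three MIN nodes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
```

Here the MIN layer yields 3, 2 and 2, so the MAX root picks the first move, with value 3.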

63
Q

<p>Describe Hill-Climbing Search</p>

A

<p>For all successors, it strictly only picks a state which has a lower heuristic value (not even equal), and lower than all the other neighbouring states' heuristic value. It picks this state and ignores all other options.
<br></br>
<br></br>Hill-Climbing search strictly looks just 1 step ahead to determine if a successor is better than the current state, and moves to the best successor. Very memory efficient.
<br></br>
<br></br>It is like climbing a mountain with a thick fog around you</p>
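A minimal sketch of the one-step-lookahead rule (assumed lower-is-better `heuristic` and a caller-supplied `neighbors` function returning a non-empty list):

```python
def hill_climb(state, neighbors, heuristic):
    """Move to the best successor only while it strictly improves the heuristic."""
    while True:
        best = min(neighbors(state), key=heuristic)
        if heuristic(best) >= heuristic(state):   # no strict improvement: stop
            return state
        state = best
```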

64
Q

<p>In SA, what happens when a neighbouring state has higher cost?</p>

A

<p>A random number is generated between 0 and 1. If P is greater than this random number, it is accepted. In this case P = exp(-1*delta C / T)
<br></br>
<br></br>Note that in this case, the probability of accepting depends on the magnitude of change in cost, and the temperature</p>

65
Q

<p>What are game playing problems?</p>

A

<p>Problems which have well-defined rules with moves that can be searched intelligently</p>

66
Q

<p>What is adaptation</p>

A

<p>The ability to improve behaviour, evolve, adjust to new or different situations</p>

67
Q

<p>What condition does UCS need to work?</p>

A

<p>No zero-cost edges or negative-cost edges</p>

68
Q

<p>Describe the Process of goal formation</p>

A

<p>We decide which aspects we are interested in, and which aspects can be ignored.
<br></br>
<br></br>We need goals to limit search and allow termination.</p>

69
Q

What are the 2 types of optimization Algorithms with respect to solving a non-discrete problem

A

<p>Exact Algorithms and Approximate Algorithms</p>

70
Q

<p>What is Artificial Intelligence</p>

A

<p>The Science and Engineering of making intelligent machines, especially intelligent computer programs. Examples: Visual perception, speech recognition, decision making, language translation</p>

71
Q

<p>In the minimax algorithm, how do min and max nodes behave in terms of inheriting their children's values?</p>

A

<p>MAX: a MAX-labeled node will take the maximum value of its children
<br></br>
<br></br>MIN: a MIN-labeled node will take the minimum value of its children</p>

72
Q

<p>What 5 aspects does Problem formulation consist of?
<br></br>
<br></br>Note - Problem formulation is the compact representation of the problem space. Search space is represented by states</p>

A
A State Space
Initial State
Goal State, and Goal Test
Set of Actions
Concept of Cost
73
Q

<p>under what condition is IDS complete?
<br></br>
<br></br>What is a drawback of IDS?</p>

A

IDS is complete if “b” (branching factor) is finite.

The drawback is that nodes on levels above “d” are generated multiple times

74
Q

<p>Briefly describe DFS</p>

A

<p>Expand nodes in pre-order fashion (root -> left -> right), use a LIFO data structure for the fringe (stack)</p>

75
Q

<p>Under what condition is UCS the same as BFS</p>

A

<p>When the costs of all edges in the search space are equal, i.e. every step cost is the same constant</p>

76
Q

What are Greedy/best-first search algorithms

A

They use g(n) (path cost from the start node), h(n) (heuristic value), or both in their evaluation function: f(n) = g(n) + h(n), f(n) = h(n), or f(n) = g(n)

Ex: UCS, Hill-Climbing, A*, Best-First

77
Q

<p>In SA, what happens when a neighbouring state has lower cost?</p>

A

<p>It is always accepted</p>

78
Q

<p>What are Heuristics?</p>

A

<p>Heuristics are solution strategies or rules, developed by trial and error, that produce acceptable (optimal or suboptimal) solutions to complex problems in a reasonable amount of time.</p>

79
Q

<p>What is Alpha-Beta pruning?</p>

A

<p>A way to reduce the number of nodes that need to be generated and evaluated by pruning.
<br></br>
<br></br>Avoid processing subtrees that have no effect on the result</p>
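The pruning rule can be sketched over an explicit game tree (an assumed toy representation, matching the plain minimax convention: leaves are utility numbers, internal nodes are lists of children):

```python
def alphabeta(node, maximizing=True, alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning; returns the same value as plain minimax."""
    if isinstance(node, (int, float)):   # leaf: utility value
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:            # cutoff: MIN will never allow this branch
                break
        return value
    value = float('inf')
    for child in node:
        value = min(value, alphabeta(child, True, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:                # cutoff: MAX will never allow this branch
            break
    return value
```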

80
Q

<p>What is Intelligence</p>

A

<p>The ability to acquire and apply knowledge and skills</p>

81
Q

<p>Do Meta-heuristic algorithms find exact solutions or approximates?
<br></br>
<br></br>Are they deterministic?</p>

A

<p>Meta-Heuristics find approximate solutions, and are non-deterministic
<br></br>
<br></br>The goal with Meta-Heuristics is to efficiently explore the search space in order to find near-optimal solutions</p>

82
Q

<p>Describe the difference between static and dynamic environments</p>

A

<p>If the environment can change while the agent is deliberating, then it is dynamic. Otherwise, it is static</p>

83
Q

<p>Why is the "A Search" Algorithm incomplete?</p>

A

<p>If h(n) is infinite, then the algorithm will be stuck in a dead end, and will not be able to find a goal node, making it an incomplete algorithm</p>

84
Q

<p>Describe the difference between Discrete and Continuous environments</p>

A

<p>In a discrete environment the state is describable in a discrete way; in a continuous environment it is described by continuously varying quantities</p>

85
Q

<p>What are the 2 classes of local search strategies? What is the difference between them</p>

A

<p>Uninformed search strategies have no heuristics, and no information about the likely direction of the goal node.
<br></br>
<br></br>Informed search strategies have a sense of direction, and use heuristics to guide them on their path.</p>

86
Q

<p>What defines a well-defined state-space formulation. Go into detail for each of the 5 aspects.</p>

A

<p>State Space: Either a complete configuration of a problem, or a partial configuration of a problem.
<br></br>Initial State: The search starts here
<br></br><br></br>Goal State, and Goal Test: the Search terminates here.
<br></br><br></br>Set of Actions: This allows movement between states (successor function)
<br></br><br></br>Concept of Cost: Allows the cost of a solution to be computed</p>

87
Q

<p>In game theory, what is a utility function?</p>

A

<p>Assigns a player/state a number denoting the outcome of the game and its desirability</p>

88
Q

<p>Compare Hill Climbing Search to Greedy Best-First Search</p>

A

<p>Hill Climbing Search does not allow for backtracking, whereas Greedy Best-First Search does, and it jumps to alternative paths & stores the children of previously visited nodes.</p>

89
Q

What is a good rule of thumb for choosing a tabu tenure in the non-cooperative version of tabu search

A

sqrt(n), where n is the number of states

90
Q

<p>What does Thinking rationally mean?</p>

A

<p>Using logic to achieve goals via logical inferencing</p>

91
Q

<p>What does the min/max value of a node in the min/max algorithm denote with respect to game theory?</p>

A

<p>The utility of being in the corresponding state, assuming that both players play optimally from that state till the end of the game</p>

92
Q

<p>Formal definition of a meta-heuristic</p>

A

<p>An iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space, learning strategies used to structure information in order to efficiently find near-optimal solutions</p>

93
Q

<p>In SA, describe the relationship between temperature, and probability of accepting a state. What happens to the algorithm at high temperatures vs low temperatures?</p>

A

<p>Probability of accepting a state is directly proportional to temperature.</p>

<p>At high temperatures, the algorithm explores the parameter space; at lower temperatures, exploration is restricted.</p>

94
Q

<p>Under what condition is A/A* Search admissible? What happens when we perform such a search without an admissible heuristic?</p>

A

<p>If h(n) <= h*(n) for all n, then the search is admissible. A* is admissible by definition.
<br></br>
<br></br>Using an admissible heuristic guarantees that the first solution found will be an optimal one; without one, the first solution found may not be optimal</p>

95
Q

<p>What is the analogy for simulated annealing</p>

A

<p>Physical annealing. If a molten material cools and anneals too quickly, it solidifies into a sub-optimal configuration. If the material cools slowly, the crystals within it solidify optimally.</p>

96
Q

<p>Describe Deterministic vs Stochastic Environments</p>

A

<p>If the next state of the environment is completely determined by the current state and the action executed by the agent, then it is deterministic. Otherwise, it is stochastic</p>

97
Q

<p>What are the 2 types of meta-heuristics?</p>

A

<p>Trajectory methods, and Population-based methods</p>

98
Q

<p>List the 5 types of uninformed search algorithms</p>

A

<p>Depth-First Search
<br></br>Breadth-First Search
<br></br>Uniform-Cost Search
<br></br>Depth-Limited Search
<br></br>Iterative-Deepening Search</p>

99
Q

<p>How are min and max nodes allocated in the min/max algorithm?</p>

A

<p></p>

<p></p>

<p>The root node is always a MAX node (maximizing the player's utility value), and nodes alternate between MIN and MAX layer by layer</p>

100
Q

<p></p>

<p></p>

<p>What type of systems are we interested in for engineering</p>

A

<p></p>

<p></p>

<p>Rational Systems</p>

101
Q

<p></p>

<p></p>

<p>Compare the performance of Depth-First and Breadth-first searching strategies</p>

A

<p></p>

<p></p>

<p>Depth-First:
<br></br>- Low memory requirements
<br></br>- Can get stuck exploring infinite paths
<br></br>- Better if there are many solutions and you only need one of them
<br></br>
<br></br>Breadth-First:
<br></br>- Uses high amount of memory
<br></br>- Doesn't get stuck
<br></br>- Always finds the shallowest solution (the shortest path when step costs are equal)</p>

102
Q

<p></p>

<p></p>

<p>If you have A1* with h1(n), A2* with h2(n) > h1(n), then how can you compare A2* with A1* in terms of the performance of the algorithm?
<br></br>
<br></br>Note that h2(n) will be closer to h*(n)</p>

A

<p></p>

<p></p>

<p>A2* will expand fewer nodes, and hence take less time to find the solution than A1*. The closer a heuristic is to h*(n), the better its performance.
<br></br>
<br></br>The fewer nodes expanded, the better the heuristic</p>

103
Q

<p></p>

<p></p>

<p>Give a template for a generic searching algorithm for a graph/tree</p>

A

<p></p>

<p></p>

<p>Use a data structure to store nodes in a fringe. The choice of structure depends on the algorithm/strategy.
<br></br>
<br></br>Repeatedly choose a node from the fringe, test it, then expand its children into the fringe.
<br></br>
<br></br>The particular search strategy determines how we choose the next node to consider in the search</p>
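A sketch of this template in Python. Swapping the fringe discipline (FIFO vs LIFO) switches between breadth-first and depth-first behaviour; the function and parameter names are illustrative, not from the course notes:

```python
from collections import deque

def generic_search(start, goal_test, expand, strategy="bfs"):
    """Generic graph-search skeleton.

    The fringe discipline determines the strategy: FIFO (popleft)
    gives breadth-first search, LIFO (pop) gives depth-first search.
    `expand` maps a node to its children.
    """
    fringe = deque([start])
    visited = {start}
    while fringe:
        # The strategy decides which node to consider next.
        node = fringe.popleft() if strategy == "bfs" else fringe.pop()
        if goal_test(node):
            return node
        for child in expand(node):
            if child not in visited:  # avoid revisiting nodes
                visited.add(child)
                fringe.append(child)
    return None  # terminated without finding a goal

# Illustrative 4-node graph:
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
print(generic_search("A", lambda n: n == "D", lambda n: graph[n]))  # D
```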

104
Q

<p></p>

<p></p>

<p>Describe Episodic vs Sequential Environments</p>

A

<p></p>

<p></p>

<p>In Episodic environments, the choice of action in each state depends only on the state itself.
<br></br>
<br></br>In Sequential environments, the current decision could affect all future decisions - the agent needs to think ahead. Ex: chess</p>

105
Q

<p></p>

<p></p>

<p>In Simulated Annealing, what does the acceptance probability depend on?</p>

A

<p></p>

<p></p>

<p>1. The current Temperature</p>

<br></br>
<br></br><p style="text-align:center;">2. The cost difference between the current and next state. This is next cost minus current cost</p>
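Taken together, these two factors give the standard Metropolis acceptance rule used in SA. A minimal sketch in Python (the function name is illustrative):

```python
import math

def acceptance_probability(delta_c, temperature):
    """Probability of accepting a candidate state in SA.

    delta_c: next cost minus current cost.
    temperature: current annealing temperature (> 0).
    """
    if delta_c < 0:
        return 1.0  # improving moves are always accepted
    # Worsening moves are accepted with probability e^(-dC/T):
    # high T accepts almost anything, low T almost nothing.
    return math.exp(-delta_c / temperature)

# The same worsening move is much more likely to pass when hot:
print(acceptance_probability(5.0, 100.0))  # ~0.95
print(acceptance_probability(5.0, 1.0))    # ~0.007
```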

106
Q

Define Tabu tenure.

What impact does it have on search performance?

What is the rule of thumb for choosing a value for this control variable?

A

Tabu Tenure is the length of time (number of iterations) for which a move is forbidden.

If tabu tenure is too small, it poses a risk of cycling in the algorithm. If tabu tenure is too large, it may restrict the search too much.

Rule of thumb: Tabu Tenure = sqrt(n)

107
Q

Why do we need Aspiration Criteria?

Under what condition is the aspiration criteria typically met?

A

Tabus may sometimes be too powerful: they may prohibit attractive moves, even when there is no danger of cycling, or they may lead to an overall stagnation of the searching process.

It may, therefore, become necessary to revoke tabus at times. Typically, if a move results in a solution with an objective value better than that of the current best-known solution, then it meets the aspiration criteria

108
Q

<p></p>

<p></p>

<p>In Simulated Annealing, does temperature matter if delta C is negative?</p>

A

<p></p>

<p></p>

<p>No, the temperature will not have an effect on the acceptance of the neighbouring state if delta C is negative. It will always be accepted in that case</p>

109
Q

<p></p>

<p></p>

<p>What 4 factors influence the annealing schedule in simulated annealing?</p>

A

<p></p>

<p></p>

<p>The initial temperature</p>

<br></br>
<br></br><p style="text-align:center;">The final temperature</p>
<br></br>
<br></br><p style="text-align:center;">The temperature decrement rule</p>
<br></br>
<br></br><p style="text-align:center;">The number of iterations at each temperature</p>

110
Q

<p></p>

<p></p>

<p>In Simulated Annealing, what determines a good initial temperature? What can be used as a guide to set the initial temperature?</p>

A

<p></p>

<p></p>

<p>It must be high enough to allow a move to possibly any state in the search space, but not too hot, since SA would then behave like a random search. A good initial temperature is one at which roughly 60% of worsening moves are accepted</p>

<br></br>
<br></br><p style="text-align:center;"></p>
<br></br>
<br></br><p style="text-align:center;">The Maximum change in the cost function could be used as a guide to set the value of initial temperature</p>
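One way to operationalize this guideline: choose T0 so that a worsening move of the maximum cost change is accepted with the target probability (the 0.6 ratio comes from the card above; the function name is illustrative):

```python
import math

def initial_temperature(max_delta_c, accept_ratio=0.6):
    """Solve accept_ratio = e^(-max_delta_c / T0) for T0."""
    return -max_delta_c / math.log(accept_ratio)

T0 = initial_temperature(10.0)  # maximum cost change of 10
print(math.exp(-10.0 / T0))     # ~0.6: the worst move is accepted 60% of the time
```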

111
Q

<p></p>

<p>In SA, does the final temperature have to reach zero?</p>

A

<p></p>

<p>No. It might take a very long time to reach zero when some decrement approaches are used</p>

112
Q

<p></p>

<p>When do you stop the Simulated Annealing algorithm?</p>

A

<p></p>

<p>- Whenever the temperature is reasonably low
<br></br>
<br></br>- The system is frozen, and there are no better moves generated, and worse moves are not getting accepted
<br></br>
<br></br>- Number of iterations reaches a threshold (maybe)</p>

113
Q

<p></p>

<p>What are the 3 temperature decrement rules?</p>

A

<p></p>

<p>Linear: T = T-a
<br></br>
<br></br>Geometric: T=T*a
<br></br>
<br></br>Slow Decrease:
<br></br>T = T/(1+bT)
<br></br>
<br></br>a, and b are constants</p>
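As small Python helpers, one cooling step each (`a` and `b` are the constants from the card):

```python
def linear(t, a):
    """Linear cooling: subtract a constant each step."""
    return t - a

def geometric(t, a):
    """Geometric cooling: multiply by a constant 0 < a < 1."""
    return t * a

def slow_decrease(t, b):
    """Slow decrease: T = T / (1 + b*T), for a small constant b."""
    return t / (1 + b * t)

print(linear(100.0, 5.0))          # 95.0
print(geometric(100.0, 0.9))       # 90.0
print(slow_decrease(100.0, 0.01))  # 50.0
```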

114
Q

<p></p>

<p>In SA, how many iterations for a given temperature?</p>

A

<p></p>

<p>Enough iterations should be allowed at every temperature for the system to be stable at that temperature.</p>

115
Q

<p></p>

<p>When does the SA algorithm converge? What other concept can it be likened to?</p>

A

<p></p>

<p>SA converges when the temperature is reduced to zero slowly enough. It can be likened to a non-homogeneous Markov chain</p>

116
Q

<p></p>

<p>What are the 2 pros and 2 cons of Simulated Annealing?</p>

A

<p></p>

<p>Pros:
<br></br>- Easy to use
<br></br>- Provides good solutions for a wide range of problems
<br></br>
<br></br>Cons:
<br></br>- Takes a long time to run
<br></br>- There are a lot of tuneable parameters</p>

117
Q

<p></p>

<p>What is the heuristic function in Simulated annealing?</p>

A

<p></p>

<p>The Cost function / fitness function (which is what the algorithm is trying to minimize)</p>

118
Q

<p></p>

<p>Suppose SA is running with T=0. Which algorithm would this be identical to?</p>

A

<p></p>

<p>Hill Climbing search</p>

119
Q

<p></p>

<p>In Simulated Annealing, what are drawbacks for when temperature is reduced too fast or too slow?</p>

A

<p></p>

<p>If temperature is reduced too fast, then optima are not found
<br></br>
<br></br>If temperature is reduced too slow, then time is wasted</p>

120
Q

How can SA be made adaptive?

A

An algorithm is adaptive if its parameters change as the search progresses, as a result of feedback from the environment. SA can be made adaptive by changing its parameters based on this feedback.

These parameters are initial temperature, the cooling schedule, and the number of iterations per temperature.

121
Q

<p></p>

<p>What type of cost functions should be avoided in Simulated Annealing?</p>

A

<p></p>

<p>Cost functions which return the same value for many different states</p>

122
Q

<p></p>

<p>How can constraints be added to Simulated Annealing?
<br></br>
<br></br>How can such constraints be used to make SA more adaptive?</p>

A

<p></p>

<p>They can be represented in the cost function as "penalty terms"
<br></br>
<br></br>SA can be made more adaptive by having dynamically changing weights of the penalty terms</p>
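As a sketch, assuming each constraint returns a non-negative violation amount (the weights can then be changed dynamically between iterations; names are illustrative):

```python
def penalized_cost(solution, base_cost, constraints, weights):
    """Cost = base cost + weighted constraint-violation penalty terms."""
    penalty = sum(w * c(solution) for c, w in zip(constraints, weights))
    return base_cost(solution) + penalty

# Illustrative: minimize x^2 subject to x >= 2, penalty weight 10.
base = lambda x: x * x
violation = lambda x: max(0, 2 - x)  # how far x falls short of 2
print(penalized_cost(0, base, [violation], [10]))  # 0 + 10*2 = 20
print(penalized_cost(3, base, [violation], [10]))  # 9 + 0  = 9
```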

123
Q

<p></p>

<p>What is the major difference between normal SA and Cooperative SA</p>

A

<p></p>

<p>Cooperative SA implements synchronous concurrent runs of numerous SA processes.
<br></br>
<br></br>Cooperative SA manipulates multiple solutions (a population of solutions) at once.
<br></br>
<br></br>Instead of singular states, Cooperative SA uses a population of states as its nodes.</p>

124
Q

<p></p>

<p>In Cooperative SA, how are new solutions produced using neighbours from numerous solutions within the population?</p>

A

<p></p>

<p>New solutions are iteratively produced from 2 randomly selected states in the population S1 and S2:
<br></br><br></br>The algorithm looks for neighbours of S1 that are closer to S2 than S1 itself.
<br></br><br></br>The CLOSER set is generated, and the next solution is randomly selected from it</p>

Note that sometimes it is 2 solutions, sometimes it is more, this depends on the algorithm. They can even pick the best out of all the solutions
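For the two-solution case, the neighbour-generation step can be sketched as follows (the distance metric and function names are problem-specific assumptions):

```python
import random

def closer_set(s1, s2, neighbors, distance):
    """All neighbours of s1 that are strictly closer to s2 than s1 is."""
    d_ref = distance(s1, s2)
    return [s for s in neighbors(s1) if distance(s, s2) < d_ref]

def next_solution(s1, s2, neighbors, distance):
    """Pick the next solution at random from the CLOSER set, if any."""
    closer = closer_set(s1, s2, neighbors, distance)
    return random.choice(closer) if closer else s1

# Illustrative on integers: neighbours are x-1 and x+1, distance is |a-b|.
dist = lambda a, b: abs(a - b)
print(closer_set(0, 5, lambda x: [x - 1, x + 1], dist))  # [1]
```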

125
Q

<p></p>

<p>In Cooperative SA, what is the CLOSER set defined with?
<br></br>
<br></br>What significance does the CLOSER set have to the algorithm?</p>

A

<p></p>

<p>The CLOSER set is the set of all members S_k of S1's neighbourhood such that the distance between S_k and S2 is less than the distance between S1 and S2.
<br></br>
<br></br>S1 and S2 are members of the current population in Cooperative SA
<br></br>
<br></br>If the CLOSER set is not empty, then the next solution is randomly selected from it</p>

126
Q

<p></p>

<p>How is temperature updated in Cooperative SA?</p>

A

<p></p>

<p>Temperature is updated based on the difference of mean cost value of the new and old populations</p>

If the mean in cost difference between current and previous populations is less than zero, then it doesn’t get updated. Otherwise, it is changed

127
Q

<p></p>

<p>Describe Cooperative SA algorithm step-by-step</p>

A

<p></p>

<p>Initialize the population POP_0 with a chosen population size</p>

<p>For a determined number of transitions K:</p>

<p> For i = 1 to |POP|:</p>

<p> Select a random cooperator S_j from the population POP_(k-1)</p>

<p> Generate S_new = cotrans(S_i, S_j)</p>

<p> Accept the new solution if it is better</p>

<p> Otherwise, probabilistically determine whether a move to the new solution should be taken, and update the solution accordingly</p>

<p> Update the temperature</p>

<p></p>

128
Q

<p>What is Tabu Search?</p>

A

<p>Tabu Search is a meta-heuristic, a general strategy for guiding and controlling inner heuristics.
<br></br>
<br></br>It is like a combination of local search strategy and memory structures</p>

129
Q

<p>What does Tabu Search use memory structures for?</p>

A

<p>- To escape local minima
<br></br>- To implement an explorative strategy & avoid revisiting nodes</p>

130
Q

<p>In Trajectory Methods such as Tabu Search, Why can't algorithms consider all the possible neighbours of current solutions?</p>

A

<p>It is very computationally demanding to process all possible neighbours.</p>

131
Q

<p>Why does Tabu search accept non-improving solutions</p>

A

<p>In order to escape from local optima where all the neighbouring solutions are non-improving (dead end)</p>

132
Q

<p>How does Tabu Search exploit memory?</p>

A

<p>Classifies a subset of moves in the neighbourhood which are not permitted (tabu)</p>

133
Q

<p>What are the 2 uses (types) of memory structures in Tabu Search?</p>

A

<p>- Short term memory which is based on recency of occurrence of a component. This is used to prevent the search from revisiting previously visited solutions
<br></br>
<br></br>- Long term memory which is based on frequency of occurrence. This is used to diversify the search and explore unvisited areas of the solution space by avoiding frequently visited components</p>

134
Q

<p>What is the name of the short-term memory structure used in Tabu Search.
<br></br>
<br></br>What is the purpose of this memory structure</p>

A

The Tabu List.

Recent moves which were used on current solutions are added to the Tabu list. It is used to prevent reverse moves, and promote exploration.

135
Q

<p>In Tabu Search, what is "tabu tenure", and where is it stored?</p>

A

<p>Tabu Tenure is the number of iterations in which a certain move and its attributes are stored in the tabu list.
<br></br>
<br></br>Recent states/components are "tabu-active" during their tabu-tenure</p>

136
Q

<p>In Trajectory methods, what is the purpose of the neighbourhood?</p>

A

<p>To identify adjacent solutions/states that can be reached from the current solution/states</p>

137
Q

<p>What are stopping conditions in Tabu Search?</p>

A

<p>- No feasible solution in the neighbourhood of the current solution
<br></br>- Reached the maximum number of iterations allowed
<br></br>- The number of iterations since the last improvement is larger than a threshold</p>

138
Q

<p>What is the idea behind the aspiration criteria in Tabu Search?</p>

A

<p>Sometimes it is useful to allow certain moves despite being Tabu - which prevents stagnation.</p>

139
Q

<p>In Tabu Search, what are approaches used to cancel the Tabus referred as?</p>

A

<p>Aspiration Criteria</p>

140
Q

<p>What is the Basic algorithm in Tabu Search?</p>

A

<p>Firstly, generate an initial solution.
<br></br>
<br></br>While the termination criteria are not met {
<br></br> - Choose the best neighbour "s_next" from N(s) = N(s) - T(s) + A(s)
<br></br> - Store s_next in A(s) if it improves the best known solution
<br></br> - Update T(s) and A(s)
<br></br>}</p>
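A minimal Python sketch of this loop (the solution representation, tenure value, and helper names are illustrative assumptions, not the course's notation):

```python
def tabu_search(initial, neighbors, cost, max_iters=100, tenure=7):
    """Minimal tabu-search loop.

    The tabu list maps a solution to its remaining tabu tenure;
    a tabu solution is still admissible if it beats the best-known
    solution (the usual aspiration criterion).
    """
    current = best = initial
    tabu = {initial: tenure}
    for _ in range(max_iters):
        candidates = [s for s in neighbors(current)
                      if s not in tabu or cost(s) < cost(best)]
        if not candidates:
            break  # no feasible neighbour in N(s): stop
        current = min(candidates, key=cost)  # best admissible neighbour
        if cost(current) < cost(best):
            best = current
        # Age the tabu list and make the new solution tabu.
        tabu = {s: t - 1 for s, t in tabu.items() if t > 1}
        tabu[current] = tenure
    return best

# Illustrative: minimize (x - 3)^2 over the integers.
print(tabu_search(0, lambda x: [x - 1, x + 1], lambda x: (x - 3) ** 2))  # 3
```

Here the tabu list stores whole solutions for simplicity; real implementations usually store move attributes instead.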

141
Q

<p>In Tabu Search, are the Tabu solutions included in the neighbourhood function?
<br></br>
<br></br>Under what 2 conditions would they be brought back?</p>

A

<p>Tabu Solutions are excluded from N(s) and not revisited during the tabu tenure.
<br></br>
<br></br>They are added back if:
<br></br>1. The aspiration conditions are met
<br></br>2. The tabu tenure is finished (number of iterations in tabu tenure is completed)</p>

142
Q

<p>In Tabu Search, what is the common Aspiration Criteria?</p>

A

<p>Improved-best or best solution: If a tabu solution encountered at the current iteration is better than the best solution found so far, then it can be added to the aspiration list</p>

143
Q

<p>In Tabu Search, what are 2 examples of Tabu Restrictions</p>

A

<p>- A move that involves the same exchange of positions of a tabu move
<br></br>
<br></br>- A move that involves any of the positions that were involved in a tabu move</p>

144
Q

<p>In Tabu Search, what is static vs dynamic Tabu Tenure, T</p>

A

<p>Static: choose T to be a constant of as a guideline of the problem size
<br></br>
<br></br>Dynamic: choose T to vary randomly between some T_min and T_max</p>

145
Q

In Tabu Search, does the tabu list always prevent cycling? What must happen to the tabu list to improve it?

A

A fixed-length tabu list cannot always prevent cycling. To improve it, the length of the tabu list should be varied dynamically during the search.

146
Q

<p>Under what condition does a Tabu move become admissible?</p>

A

<p>A tabu move becomes admissible if it yields a solution that is better than any obtained solution so far (or better than the aspiration value)</p>

147
Q

<p>In Tabu Search, what is Intensification?</p>

A

<p>The process of exploiting a small portion of the search space, and penalizing solutions that are far from the current solution</p>

148
Q

<p>In Tabu Search, what is Diversification?</p>

A

<p>Forcing the search into previously unexplored areas of the search space and penalizing solutions that are close to the current solution</p>

149
Q

<p>What type of memory is diversification based on?</p>

A

<p>Long-term memory, i.e. frequency-based memory</p>

150
Q

<p>In Tabu Search, what are the two major approaches for applying the diversification process? Describe them</p>

A

<p>Restart diversification: Finds components that rarely appear in the solution & restarts the search from these points
<br></br>
<br></br>Continuous Diversification: Bias the evaluation of possible moves by adding a term which depends on the neighbour's frequency to the evaluation function</p>

151
Q

<p>What are the pros and cons of Tabu Search?</p>

A

<p>Pros:
<br></br>- Accepts non-improving solutions in order to escape from local optima
<br></br>- Uses a Tabu List
<br></br>- Can be applied to both discrete and continuous problem spaces
<br></br>- For larger and more difficult problems (ex: scheduling), tabu search is very efficient
<br></br>
<br></br>Cons:
<br></br>- Too many parameters to be determined
<br></br>- Number of iterations is large
<br></br>- Global optimum might not be found, and it depends on the parameter settings</p>

152
Q

<p>What is the Linear Assignment problem?
<br></br>
<br></br>How many different distinct combinations of the assignment problem exist?</p>

A

<p>Involves the assignment of n objects to n sites. For each assignment, there is a related cost C(i,j) of assignment object "i" to site "j"
<br></br>
<br></br>There are n! distinct combinations of the solution space</p>

153
Q

<p>What are some applications to the linear assignment problem?</p>

A

<p>Assignment of offices or services in buildings
<br></br>
<br></br>Assignment of departure gates to an airport
<br></br>
<br></br>Distribution of files in a database</p>

154
Q

Describe the Tabu Search Flowchart in detail

A

155
Q

In Tabu Search, what is the consequence of the length of the tabu list being too small? or too large?

A

If the length is too small, the search can be trapped into cycles

If the length is too large, then too many moves could be prevented at each iteration

156
Q

What is a common approach for adding adaptation into Tabu Search?

A

Allow the length of the short term memory (tabu list) to vary dynamically

157
Q

In adaptive Tabu Search, what are different ways that the tabu list size can be changed adaptively?

A

Restricting the length of the tabu list to be between L_min and L_max.

In the event that the current solution is better than the best-so-far, then the tabu list length is set to 1

If the move is an improvement, then tabu list length is decreased by 1

If the move is not improving, then the tabu list length is increased by 1
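These rules can be sketched as a small update function (the bounds `l_min`/`l_max` and all names are illustrative):

```python
def update_tabu_length(length, current_cost, best_cost, prev_cost,
                       l_min=1, l_max=20):
    """Adaptive tabu-list length, kept within [l_min, l_max]."""
    if current_cost < best_cost:
        return l_min                   # new best-so-far: reset to minimum
    if current_cost < prev_cost:
        return max(l_min, length - 1)  # improving move: decrease by 1
    return min(l_max, length + 1)      # non-improving move: increase by 1

print(update_tabu_length(10, 4, 5, 8))  # 1  (beats best-so-far)
print(update_tabu_length(10, 7, 5, 8))  # 9  (improving)
print(update_tabu_length(10, 9, 5, 8))  # 11 (non-improving)
```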

158
Q

What is the aim behind adaptation methods in Tabu Search with respect to the tabu list size?

A

The idea is to increase and decrease the tabu list length dynamically in order to draw the solution away from bad regions and towards good regions

159
Q

Describe how Cooperation can be achieved in Tabu Search?

A

Set up P independent Tabu Searches working concurrently, and exchanging information as they go

160
Q

In Cooperative Tabu Search, how do the concurrent tabu searches communicate?

How does it assist the algorithm?

A

A central memory is used to handle the information exchange. Each search process sends its best-so-far solution to the central memory when it gets updated.

The central memory can also have its own limits on size & tenure

The concurrent processes can use the best solution in the central memory as aspiration condition

161
Q

In SA, What effect do cooling schedules that drop the temperature very quickly or very slowly have on the algorithm?

A

Quickly: the algorithm may result in a suboptimal solution

Slowly: the algorithm will search for much longer and might waste time on a temperature

162
Q

In Cooperative Tabu Search, how does increasing the number of iterations between synchronization steps affect the algorithm performance?

A

It increases the computational intensity because of the increased message-passing overhead

163
Q

When is BFS preferable over DFS?

A
  • Abundant memory space to store the search tree is available
  • A shallow solution is preferred
164
Q

Can Trajectory Optimization algorithms find the exact optimal point to a solution?

A

No. They find accurate approximations

165
Q

When is DFS preferable over BFS?

A

When we have a tree with a fixed depth, and the solution is at the leaf of the tree & we want to find ANY solution, not caring about the minimum depth

166
Q

In what case is it better to use an approximate algorithm rather than an exact algorithm?

A

Whenever the cost of finding an exact solution is extremely high.

167
Q

What are the elements that should be specified in a problem formulation?

A
State Space
Initial State
Goal State
Set of Actions
Cost
168
Q

Are the following uninformed search strategies complete?

  1. Breadth-first search
  2. Uniform-cost search
  3. Depth-first search
  4. Depth-limited search
  5. Iterative deepening search
A
  1. BFS: Yes, if tree is finite
  2. UCS: Yes, if tree is finite
  3. DFS: Yes, if tree is finite
  4. DLS: Yes, if l >= d
  5. IDS: Yes, if tree is finite
169
Q

Are the following uninformed search strategies optimal?

  1. Breadth-first search
  2. Uniform-cost search
  3. Depth-first search
  4. Depth-limited search
  5. Iterative deepening search
A
  1. BFS: Yes, if costs are all constant
  2. UCS: Yes
  3. DFS: No
  4. DLS: No
  5. IDS: Yes, if costs are all constant
170
Q

What is the time complexity of the following uninformed search strategies:

  1. Breadth-first search
  2. Uniform-cost search
  3. Depth-first search
  4. Depth-limited search
  5. Iterative deepening search

Assume branching factor “b”, depth “d”, maximum depth “m”, and depth limit “l” (for DLS)

A
  1. BFS: b^d
  2. UCS: b^d if costs are the same
  3. DFS: b^m
  4. DLS: b^l
  5. IDS: b^d
171
Q

What is the space complexity of the following uninformed search strategies:

  1. Breadth-first search
  2. Uniform-cost search
  3. Depth-first search
  4. Depth-limited search
  5. Iterative deepening search

Assume branching factor “b”, depth “d”, maximum depth “m”, and depth limit “l” (for DLS)

Note that “space complexity” is the scale at which memory usage changes in the search strategy

A
  1. BFS: b^d
  2. UCS: b^d if costs are the same
  3. DFS: b*m
  4. DLS: b*l
  5. IDS: b*d
172
Q

Are the following informed search strategies Complete?

  1. Greedy Best-first search
  2. Beam search
  3. Hill climbing search
A
  1. GBFS: No - it can get stuck in loops
  2. BS: No
  3. HCS: No
173
Q

Are the following informed search strategies Optimal?

  1. Greedy Best-first search
  2. Beam search
  3. Hill climbing search
A
  1. GBFS: No
  2. BS: No
  3. HCS: No

Optimality will purely depend on the quality of the heuristic

174
Q

What is the time complexity of the following informed search strategies:

  1. Greedy Best-first search
  2. Beam search
  3. Hill climbing search

Assume branching factor “b”, depth “d”, maximum depth “m”, beam width “ß”

A
  1. GBFS: b^m, but a good heuristic can give a dramatic improvement
  2. BS: ß*m
  3. HCS: d
175
Q

What is the space complexity of the following uninformed search strategies:

  1. Greedy Best-first search
  2. Beam search
  3. Hill climbing search

Assume branching factor “b”, maximum depth “m”, beam width “ß”

Note that “space complexity” is the scale at which memory usage changes in the search strategy

A
  1. GBFS: b^m, keeps all nodes in memory
  2. BS: ß*m
  3. HCS: 1, there is no memory structure involved