Midterm grind Flashcards
<p>In Game Playing Algorithms, what is the evaluation function f(n) (or utility value) of a board configuration n?</p>
<p>A measure of the "goodness" of the board configuration n. This is the number of possible wins for the player that are not blocked by the opponent, minus the number of possible wins for the opponent that are not blocked by the player:<br></br>
<br></br>f(n) = m(n) - o(n)<br></br>
<br></br>"m" is the player's number of possible winning lines, "o" is the opponent's number of possible winning lines</p>
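As an illustrative sketch only (the board representation and function names are our own, for tic-tac-toe), f(n) = m(n) - o(n) can be computed by counting the lines still open to each side:

```python
# Board: 3x3 list of 'X', 'O', or None. All 8 winning lines of tic-tac-toe.
LINES = [[(r, c) for c in range(3)] for r in range(3)] + \
        [[(r, c) for r in range(3)] for c in range(3)] + \
        [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]]

def evaluate(board, player='X', opponent='O'):
    # m(n): lines still winnable by the player (no opponent mark on them)
    m = sum(1 for line in LINES
            if all(board[r][c] != opponent for r, c in line))
    # o(n): lines still winnable by the opponent (no player mark on them)
    o = sum(1 for line in LINES
            if all(board[r][c] != player for r, c in line))
    return m - o
```

On an empty board f(n) = 8 - 8 = 0; with only an X in the centre, f(n) = 8 - 4 = 4, since the centre blocks both diagonals and the middle row and column for O.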
What is the open list at a particular state in a search algorithm
The set of nodes in the ready queue AFTER the node has been visited and expanded
<p></p>
<p></p>
<p>Briefly describe DLS</p>
<p></p>
<p></p>
<p>Just like DFS but limits the search to a maximum depth parameter "l". Once the algorithm discovers nodes below depth "l", it does not expand them.</p>
<p></p>
<p></p>
<p>What is the process of "searching"</p>
<p></p>
<p></p>
<p>Moving from state-to-state in the problem space to find a goal (or to terminate)</p>
<p></p>
<p>Describe the Simulated Annealing Algorithm Step-by-step</p>
<p></p>
<p>Set current solution S0, initial temperature T0, and select a temperature reduction function</p>
<p>Repeat:</p>
<p> Repeat:</p>
<p> Select a solution Si from the neighborhood N(S)</p>
<p> Calculate the change in cost for the new solution, ΔC</p>
<p> If ΔC < 0, then accept this solution (since it is improving): S = Si</p>
<p> Else, generate a random number x in the range (0, 1)</p>
<p> If x < e^(-ΔC/T), then accept this new solution: S = Si</p>
<p> Until inner termination criterion met</p>
<p> Decrease temperature T using the reduction function</p>
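The steps above can be sketched in Python. This is a minimal illustration: the cost function, neighborhood, and cooling parameters are all illustrative assumptions, not part of the algorithm's definition.

```python
import math
import random

def simulated_annealing(s0, cost, neighbors, t0=100.0, alpha=0.95,
                        t_min=1e-3, inner_iters=50):
    """Minimal SA sketch: geometric cooling, fixed inner-loop length."""
    s, t = s0, t0
    while t > t_min:                           # outer loop: cooling schedule
        for _ in range(inner_iters):           # inner loop at fixed T
            s_i = random.choice(neighbors(s))  # pick from neighborhood N(S)
            delta_c = cost(s_i) - cost(s)      # change in cost ΔC
            if delta_c < 0:                    # improving: always accept
                s = s_i
            elif random.random() < math.exp(-delta_c / t):
                s = s_i                        # probabilistically accept worse
        t *= alpha                             # geometric temperature reduction
    return s
```

For example, minimizing f(x) = x² over the integers with neighbors x ± 1 starting from x = 10 converges to a value near 0.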

<p></p>
<p></p>
<p>Give a reason to use 2 heuristics simultaneously in a searching algorithm</p>
<p></p>
<p></p>
<p>Use 1 heuristic as primary, and 1 as secondary to break ties</p>
<p></p>
<p></p>
<p>What does SA do to escape from local optimums?</p>
<p></p>
<p></p>
<p>Accepts non-improving solutions depending on what the temperature is</p>
<p></p>
<p></p>
<p>What is the motivation behind alpha-beta pruning?</p>
<p></p>
<p></p>
<p>In the game tree, some branches are never played since rational players don't pick sub-optimal moves. Therefore, there is no need to generate the subtree for such moves.</p>
<p></p>
<p></p>
<p>What is the difference between trajectory methods and population-based methods?</p>
<p></p>
<p></p>
<p>Trajectory methods handle one solution at a time, whereas Population methods handle multiple solutions at a time</p>
<p></p>
<p></p>
<p>Under what conditions does UCS become Dijkstra's single-source shortest path algorithm?</p>
<p></p>
<p></p>
<p>If g(n), the path cost from the source node, is the evaluation function and the search is run over all nodes from a single source, then UCS behaves exactly like Dijkstra's algorithm</p>
<p></p>
<p></p>
<p>What are Meta-Heuristic algorithms</p>
<p></p>
<p></p>
<p>Meta-Heuristics are algorithms that combine heuristics in a higher level framework aimed at efficiently and effectively exploring the search space.
<br></br>
<br></br>They are strategies that "guide" the search process</p>
<p></p>
<p></p>
<p>Are Meta-Heuristics problem specific? are they at risk of getting trapped in confined areas of the search space?</p>
<p></p>
<p></p>
<p>Meta-Heuristics are not problem-specific
<br></br>
<br></br>They utilize mechanisms (depending on the method) to avoid getting trapped in dead ends</p>
<p></p>
<p></p>
<p>Describe the difference between "A Search" and "A* Search"</p>
<p></p>
<p></p>
<p>"A*" is admissible, in which h(n) <= h*(n). Whereas "A" is not admissible, h(n) has no upper bound in the A algorithm.</p>
Is DLS complete?
Does DLS always terminate?
Is DLS optimal?
<p></p>
<p></p>
<p>As long as the solution is within the depth limit "l", DLS is guaranteed to be complete since there is a limit on the depth that it can search to.
<br></br>
<br></br>DLS always terminates, and it is not optimal</p>
<p></p>
<p></p>
<p>How does the min/max algorithm relate to game playing</p>
<p></p>
<p></p>
<p>The game tree alternates between the two opponents' moves, and the min/max algorithm can be used to optimize for the best outcome in this game tree.</p>
<p></p>
<p></p>
<p>What are search trees, and how are they abstracted from graphs.</p>
<p></p>
<p></p>
<p>Search trees are superimposed over the graph representation of a problem. The graph might be finite, but the search tree can be either finite or infinite: infinite if we allow for repeated states due to reversible actions or cycles.</p>
<p></p>
<p></p>
<p>What do Heuristics Aim to do?</p>
<p></p>
<p></p>
<p>Heuristics aim to efficiently generate good solutions, but do not guarantee optimality.
They guide the algorithm
</p>
<p></p>
<p></p>
<p>What are the 2 general optimization methods in searching algorithms?</p>
<p></p>
<p></p>
<p>Constructive methods: starting from scratch and getting to the solution one component at a time. Ex: Sudoku solver, N-Queens
<br></br><br></br>Local Search Methods: Starts from an initial solution and iteratively tries to replace the current solution with better solution Ex: TSP</p>
<p></p>
<p></p>
<p>What does a heuristic function do?</p>
<p></p>
<p></p>
<p>Heuristic functions give an estimate of the distance to the goal based on domain-specific information. It guides the algorithm to a better solution by estimating the "goodness" of a node/state.</p>
<p></p>
<p></p>
<p>Under what condition is A* complete?</p>
A* is complete if the branching factor is finite, and every operator has a fixed positive cost
<p></p>
<p></p>
<p>What is the difference between well-structured problems and ill-structured problems</p>
<p></p>
<p></p>
<p>Well Structured: The existing state and desired state are clearly identified, and the methods to reach the desired state are fairly obvious
<br></br>
<br></br>Ill Structured: The existing state and desired state are unclear, and hence the methods of reaching the desired state cannot readily be found.</p>
<p></p>
<p></p>
<p>What is the purpose of the minimax or min/max algorithm</p>
<p></p>
<p></p>
<p>To determine the best next move and the cost of such a move</p>
<p></p>
<p></p>
<p>What does "admissible" mean with respect to heuristics?</p>
<p></p>
<p></p>
<p>A heuristic is said to be "admissible" if it NEVER overestimates the distance to the goal node.
<br></br>
<br></br>It is also known as an "optimistic" heuristic</p>
<p></p>
<p></p>
<p>List the 5 types of informed search algorithms</p>
Hill Climbing Search
Greedy Best-first Search
Beam Search
A Search
A* Search
Under what condition does A* act like UCS?
When h(n) = 0 for all n. This is the null heuristic.
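A minimal sketch of A* (our own implementation, with a graph encoded as a neighbor function) shows why: with the null heuristic h(n) = 0, the fringe is ordered purely by g(n), which is exactly UCS.

```python
import heapq

def astar(start, goal, neighbors, h):
    """A* sketch: f(n) = g(n) + h(n). With h(n) = 0 for all n, this is UCS."""
    fringe = [(h(start), 0, start)]            # priority queue of (f, g, node)
    best_g = {start: 0}                        # cheapest known cost to each node
    while fringe:
        f, g, node = heapq.heappop(fringe)     # expand the lowest-f node
        if node == goal:
            return g                           # cost of the path found
        for nxt, step in neighbors(node):
            new_g = g + step
            if new_g < best_g.get(nxt, float('inf')):
                best_g[nxt] = new_g
                heapq.heappush(fringe, (new_g + h(nxt), new_g, nxt))
    return None
```

For example, on a graph where A→B costs 1, B→C costs 1, and A→C costs 4, `astar('A', 'C', ..., h=lambda n: 0)` returns the minimum cost 2, just as UCS would.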
What is a rational agent
An agent that, for each perceived sequence of events, does what is expected to maximize performance, on the basis of perceptual history and built-in knowledge
Describe Fully observable vs Partially observable environments
Fully Observable environment:
sensors detect all aspects that are relevant to the action
Partially Observable environment:
Sensors are noisy and inaccurate, or parts of the state are missing
What does Behaving rationally mean?
Perceiving the “world” and acting to achieve some goals given a set of beliefs.
Agents work within an environment with certain characteristics. What are these 6 characteristics/dimensions?
Fully observable vs Partially observable
Deterministic vs Stochastic
Episodic vs Sequential
Static vs Dynamic
Discrete vs Continuous
Competitive vs Co-operative
Briefly describe UCS
Expand the node with the lowest cost in the fringe, use a heap/sorted list, always take the min-cost node. If there are numerous nodes with the same cost, take the most recently added one.
Always finds the minimum-cost path to the goal node.
What is the branching factor in a search Tree?
The maximum number of children possible for a node. Denoted with "b"
There are 2 types of Games:
Perfect information & Imperfect information
Describe and Contrast them
Perfect information:
Each player has complete information about the opponent's position and available choices (ex: chess, monopoly)
Imperfect information:
Each player does NOT have complete information about the opponent's position and available choices (ex: poker)
What is the purpose of using memory structure in trajectory methods?
To escape local minima
To improve explorative strategy by avoiding visited nodes
Describe the "Problem Formulation" of Game Playing as Search
State Description: the configuration of the board
Initial State: initial board position & who's move it is
Operators/Action State: the legal moves that a player can make
Goal: is the game over?
Utility Function: Measures an outcome of the game and its desirability
Well-Structured Problems & Ill-Structured Problems
Does SA find an approximation or exact value? What size search space does it work well in?
SA works very well in large search spaces, and provides a good approximation
Briefly describe BFS
traverse layer-by-layer, use a FIFO structure (queue) as the fringe.
Always finds the shallowest path to the goal, but takes a long time and a lot of memory.
What is cooperation
A group works to solve a joint problem or perform a common task, based on sharing the responsibility for reaching a goal.
OR
Cooperation is the practice of software/hardware entities working together in order to achieve certain objective
Alpha-Beta Pruning is an optimization of the minimax searching algorithm.
What are the tradeoffs? does it affect the completeness of minimax?
Alpha-Beta is guaranteed to return exactly the same value as the min/max algorithm.
It is a pure optimization of min/max without any tradeoffs. It does not affect the completeness, but reduces the number of computations
What are the 4 properties of search algorithms?
Completeness, Optimality, Time complexity, and Space complexity
Are meta-heuristics exhaustive?
Meta-Heuristics are non-exhaustive, and they use their own internal heuristic
What are strong vs weak methods?
Strong methods are those designed to address a specific type of problem.
Weak methods are general approaches that may be applied to many types of problems. Weak methods are extremely general and not tailored to a specific situation. They are called "weak" because they do not take advantage of more powerful domain-specific heuristics.
What do h(n) = 0 and h(n) = infinity indicate about a state "n"?
h(n)=0 means that the state is a goal node. h(n) = infinity means that the state "n" is a dead-end that can never lead to a goal
Is Greedy Best-first search Complete? is it Optimal?
No and No, it purely depends on the heuristic that it uses
What happens to the expanded nodes in A* search if h(n) is too high or too low?
h(n) too high: Optimal solution will be overlooked (non admissible heuristic)
h(n) too low: More nodes than needed are expanded
Compare Hill Climbing Search to Beam Search
Hill Climbing search is equivalent to Beam search with β=1. Beam Search is like a middle ground between Hill Climbing and Greedy Best-First Search
At what point do you prune in minimax alpha beta pruning
If a parent node has an alpha/beta value assigned, and ANY grandchild node's value falls below/above that cutoff, then the remaining grandchildren of that subtree are pruned
Under what condition does A* perform like a perfect algorithm, only expanding the nodes which lead to the solution?
When h(n) = h*(n) for all n. This means that h(n) = actual cost to goal node. In this case, the heuristic gives a perfect distance to the solution for all nodes
Under what circumstances is DFS incomplete? is DFS optimal?
spaces with loops, or infinite-depth spaces.
DFS is not optimal for finding the shortest path
What is the strategy for solving ill-structured problems?
- Retrieve a known solution/answer
- Start from a guess, and improve it
- Search among alternatives, exploring surrounding states
in SA, what is the acceptance probability?
if delta C <= 0: P = 1
if delta C > 0: P = exp(-1*delta C / T)
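Written directly as a small helper (an illustrative sketch, names ours):

```python
import math

def acceptance_probability(delta_c, t):
    """P = 1 for improving (or equal) moves, exp(-ΔC/T) for worsening moves."""
    if delta_c <= 0:
        return 1.0
    return math.exp(-delta_c / t)
```

So a move that worsens the cost by 1 at temperature T = 1 is accepted with probability e⁻¹ ≈ 0.37.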
What is Greedy Best-First search?
A searching algorithm in which the next state is picked using the evaluation function f(n)=h(n). The fringe is a priority queue (heap or self-sorted list).
What assumption does the min/max algorithm make about the opponent?
Assumes the opponent is rational and optimizes their behaviour & considers their best response/move
Is the min/max search complete? is it optimal?
If so, under what conditions
It is complete if the tree is finite,
it is optimal assuming that the opponent plays rationally & optimally
What do α and β parameters denote in alpha-beta pruning?
Why do these matter?
α: the best value for MAX seen so far. This is the lower bound on the value that a MAX node can be assigned.
β: the best value for MIN seen so far. This is the upper bound on the value that a MIN node can be assigned.
We use these values to prune cousin nodes and assign cutoff values
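A hedged sketch of minimax with alpha-beta cutoffs (the tree encoding is our own: a leaf is a number, an internal node is a list of children). It returns the same value as plain minimax while skipping branches once α ≥ β.

```python
def alphabeta(node, children, utility, maximizing=True,
              alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning; returns the minimax value of `node`."""
    kids = children(node)
    if not kids:
        return utility(node)                  # leaf: apply utility function
    if maximizing:
        value = float('-inf')
        for c in kids:
            value = max(value, alphabeta(c, children, utility, False, alpha, beta))
            alpha = max(alpha, value)         # best value for MAX so far
            if alpha >= beta:
                break                         # cutoff: MIN will avoid this branch
        return value
    value = float('inf')
    for c in kids:
        value = min(value, alphabeta(c, children, utility, True, alpha, beta))
        beta = min(beta, value)               # best value for MIN so far
        if alpha >= beta:
            break                             # cutoff: MAX will avoid this branch
    return value
```

On the classic example tree `[[3, 12], [2, 15]]`, the value is max(min(3, 12), min(2, 15)) = 3, and the leaf 15 is never evaluated.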
What is Beam Search
A greedy best-first search variant which tries to minimize the amount of memory which greedy best-first search uses by only storing ß nodes in the fringe, and using a heuristic. Evaluation function is f(n) = h(n), and the algorithm will not add new nodes to the fringe if it is full.
Briefly describe IDS
Iteratively increases the limit of the search. Algorithm performs DLS for d=0, if no solution is found, it increases d=1 and restarts the DLS. If no solution is found it increases again to d=2 and restarts. It keeps on iteratively increasing the maximum depth and restarting the DLS until a solution is found
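A minimal sketch of IDS built on DLS (the graph encoding via a `children` function is our assumption):

```python
def dls(node, goal, children, limit):
    """Depth-limited DFS; True if the goal is reachable within `limit` moves."""
    if node == goal:
        return True
    if limit == 0:
        return False                     # do not expand below the depth limit
    return any(dls(c, goal, children, limit - 1) for c in children(node))

def ids(start, goal, children, max_depth=50):
    """Iterative deepening: restart DLS with limits 0, 1, 2, ..."""
    for limit in range(max_depth + 1):
        if dls(start, goal, children, limit):
            return limit                 # depth of the shallowest solution
    return None
```

For instance, in the infinite binary tree where node n has children 2n and 2n+1, the shallowest path from 1 to 5 is 1 → 2 → 5, so `ids` returns 2.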
Is Beam search Optimal? is it Complete?
No and No, it purely depends on the heuristic that it uses
What are the 3 variations of Hill Climbing Search? Describe them
Why do we limit the depth in expanding the game tree for min/max search?
Typically, the number of states (board configurations) is too large, leading to an unreasonable depth in the game tree. We optimize this by limiting the depth of the game tree in the traversal.
Describe the step-by-step process of using the min/max algorithm
Generate the game tree (alternating max & min)
Apply the utility function to all leaf nodes to give them their utility values
From bottom to top; use the leaf nodes' utility values to determine the utility values of their parents. Repeat this until you get to the root node.
Pick the most optimal move from the root node
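The bottom-up procedure above can be sketched as a recursive minimax (again using our own tree encoding: a leaf is a number, an internal node is a list of children):

```python
def minimax(node, children, utility, maximizing=True):
    """Return the minimax value of `node` in the game tree."""
    kids = children(node)
    if not kids:                        # leaf: apply the utility function
        return utility(node)
    # From bottom to top: parent values are the max/min of the children's values
    values = [minimax(c, children, utility, not maximizing) for c in kids]
    return max(values) if maximizing else min(values)
```

For the two-ply tree `[[3, 12], [2, 15]]`, MIN reduces the subtrees to 3 and 2, and the MAX root picks 3.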
Describe Hill-Climbing Search
For all successors, it strictly only picks a state which has a lower heuristic value (not even equal), and lower than all the other neighbouring states' heuristic value. It picks this state and ignores all other options.
Hill-Climbing search strictly looks just 1 step ahead to determine if a successor is better than the current state, and moves to the best successor. Very memory efficient.
It is like climbing a mountain with a thick fog around you
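A minimal hill-climbing sketch under these rules, where the heuristic `h` is being minimized (names and encoding are illustrative):

```python
def hill_climbing(s0, h, neighbors):
    """Move only to a strictly better neighbour; otherwise stop."""
    current = s0
    while True:
        best = min(neighbors(current), key=h)   # best of the 1-step successors
        if h(best) >= h(current):               # no strictly better neighbour
            return current                      # stop (possibly a local optimum)
        current = best
```

Minimizing h(x) = (x - 3)² from x = 10 with neighbours x ± 1 walks straight to x = 3 and stops, since only one state is kept in memory at a time.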
In SA, what happens when a neighbouring state has higher cost?
A random number is generated between 0 and 1. If P is greater than this random number, it is accepted. In this case P = exp(-1*delta C / T)
Note that in this case, the probability of accepting depends on the magnitude of change in cost, and the temperature
What are game playing problems?
Problems which have well-defined rules with moves that can be searched intelligently
What is adaptation
The ability to improve behaviour, evolve, adjust to new or different situations
What condition does UCS need to work?
No zero-cost edges or negative-cost edges
Describe the Process of goal formation
We decide which aspects we are interested in, and which aspects can be ignored.
We need goals to limit search and allow termination.
Exact Algorithms and Approximate Algorithms
What is Artificial Intelligence
The Science and Engineering of making intelligent machines, especially intelligent computer programs. Examples: Visual perception, speech recognition, decision making, language translation
In the minimax algorithm, how do min and max nodes behave in terms of inheriting their children's values?
MAX: label node will take the maximum value of its children
MIN: label node will take the minimum value of its children
What 5 aspects does Problem formulation consist of?
State space, Initial state, Goal state & goal test, Set of actions (successor function), and Concept of cost
Note - Problem formulation is the compact representation of the problem space. Search space is represented by states
under what condition is IDS complete?
What is a drawback of IDS?
IDS is complete when the branching factor is finite.
Drawback: nodes at shallow depths are re-expanded on every iteration of the deepening, so work is repeated.
Briefly describe DFS
Expand nodes in pre-order fashion (root -> left -> right), use a LIFO data structure for the fringe (stack)
Under what condition is UCS the same as BFS
when the costs of all edges in the search space are equal (each step adds the same constant cost)
In SA, what happens when a neighbouring state has lower cost?
It is always accepted
What are Heuristics?
Heuristics are solution strategies or rules by trial and error to produce acceptable (optimal or suboptimal) solutions to complex problems in a reasonable amount of time.
What is Alpha-Beta pruning?
A way to reduce the number of nodes that need to be generated and evaluated by pruning.
Avoid processing subtrees that have no effect on the result
What is Intelligence
The ability to acquire and apply knowledge and skills
Do Meta-heuristic algorithms find exact solutions or approximates?
Are they deterministic?
Meta-Heuristics find approximate solutions, and are non-deterministic
The goal with Meta-Heuristics is to efficiently explore the search space in order to find near-optimal solutions
Describe the difference between static and dynamic environments
If the environment can change while the agent is deliberating, then it is dynamic. Otherwise, it is static
Why is the "A Search" Algorithm incomplete?
If h(n) is infinite, then the algorithm will be stuck in a dead end, and will not be able to find a goal node, making it an incomplete algorithm
Describe the difference between Discrete and Continuous environments
State is describable in a discrete way, or in a continuous way
<p>What are the 2 classes of search strategies? What is the difference between them</p>
Uninformed search strategies have no heuristics, and no information about the likely direction of the goal node.
Informed search strategies have a sense of direction, and use heuristics to guide them on their path.
What defines a well-defined state-space formulation. Go into detail for each of the 5 aspects.
State Space: Either a complete configuration of a problem, or a partial configuration of a problem.
Initial State: The search starts here
Goal State, and Goal Test: the Search terminates here.
Set of Actions: This allows movement between states (successor function)
Concept of Cost: Allows the cost of a solution to be computed
In game theory, what is a utility function?
Assigns a player/state a number denoting the outcome of the game and its desirability
Compare Hill Climbing Search to Greedy Best-First Search
Hill Climbing Search does not allow for backtracking, whereas Greedy Best-First Search does, and it jumps to alternative paths & stores the children of previously visited nodes.
What does Thinking rationally mean?
Using logic to achieve goals via logical inferencing
What does the min/max value of a node in the min/max algorithm denote with respect to a game theory?
The utility of being in the corresponding state, assuming that both players play optimally from that state till the end of the game
Formal definition of a meta-heuristic
An iterative generation process which guides a subordinate heuristic by combining intelligently different concepts for exploring and exploiting the search space, learning strategies used to structure information in order to efficiently find near-optimal solutions
In SA, describe the relationship between temperature, and probability of accepting a state. What happens to the algorithm at high temperatures vs low temperatures?
Probability of accepting a state is directly proportional to temperature.
At high temperatures the search explores the parameter space freely; at lower temperatures exploration is restricted.
Under what condition is A/A* Search admissible? What happens when we perform such a search without an admissible heuristic?
if h(n) <= h*(n) for all n, then it is admissible. A* is admissible by definition.
Using an admissible heuristic guarantees that the first solution found will be an optimal one; without one, the first solution found may not be optimal
What is the analogy for simulated annealing
Physical annealing. If a molten material cools and anneals too quickly, it solidifies into a sub-optimal configuration. If it cools slowly, the crystals within the material solidify optimally.
Describe Deterministic vs Stochastic Environments
If the next state of the environment is completely determined by the current state, and the action is executed by the agent, then it is deterministic. Otherwise, it is Stochastic
What are the 2 types of meta-heuristics?
Trajectory methods, and Population-based methods
List the 5 types of uninformed search algorithms
Depth-First Search
Breadth-First Search
Uniform-Cost Search
Depth-Limiting Search
Iterative-Deepening Search
How are min and max nodes allocated in the min/max algorithm?
The root node is always a MAX node (maximizing the player's utility value), and nodes alternate between MIN and MAX layer by layer
What type of systems are we interested in for engineering
Rational Systems
Compare the performance of Depth-First and Breadth-first searching strategies
Depth-First:
- Low memory requirements
- Can get stuck exploring infinite paths
- Better if there are many solutions and you only need one of them
Breadth-First:
- Uses high amount of memory
- Doesn't get stuck
- Finds the shortest path always
If you have A1* with h1(n), A2* with h2(n) > h1(n), then how can you compare A2* with A1* in terms of the performance of the algorithm?
Note that h2(n) will be closer to h*(n)
A2* will expand fewer nodes, and hence take less time to find the solution than A1*. The closer a heuristic is to h*(n), the better its performance.
The fewer nodes expanded, the better the heuristic
Give a template for a generic searching algorithm for a graph/tree
Use a data structure to store nodes in a fringe. This will depend on the algorithm/strategy.
Repeatedly choose a node from the fringe, test it, then expand its children.
A particular search strategy will influence how we choose the next node to consider in the search (depending on what algorithm we're using)
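A sketch of this template, where swapping the fringe discipline is all it takes to switch strategies (FIFO queue gives BFS, LIFO stack gives DFS; the encoding is our own):

```python
from collections import deque

def generic_search(start, is_goal, expand, strategy='bfs'):
    """Generic search: the fringe data structure determines the strategy."""
    fringe = deque([start])
    visited = {start}                        # avoid re-expanding repeated states
    while fringe:
        # FIFO (popleft) -> breadth-first; LIFO (pop) -> depth-first
        node = fringe.popleft() if strategy == 'bfs' else fringe.pop()
        if is_goal(node):
            return node
        for child in expand(node):           # expand children into the fringe
            if child not in visited:
                visited.add(child)
                fringe.append(child)
    return None                              # search space exhausted
```

Both strategies find 'D' in the graph A→{B, C}, B→{D}; they differ only in the order nodes leave the fringe.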
Describe Episodic vs Sequential Environments
In Episodic environments, the choice of action in each state depends only on the state itself.
In Sequential environments, the current decision could affect all future decisions - the agent needs to think ahead. Ex: chess
In Simulated Annealing, what does the acceptance probability depend on?
1. The current Temperature
2. The cost difference between the current and next state. This is next cost minus current cost
In Simulated Annealing, does temperature matter if delta C is negative?
No, the temperature will not have an effect on the acceptance of the neighbouring state if delta C is negative. It will always be accepted in that case
What 4 factors influence the annealing schedule in simulated annealing?
The initial temperature
The final temperature
The temperature decrement rule
The number of iterations at each temperature
In Simulated Annealing, what determines a good initial temperature? What can be used as a guide to set the initial temperature?
It must be high enough to allow a move to possibly any state in the search space. Must not be too hot since SA would behave like a random number generator in that case. A good initial temperature is one that accepts 60% of worse moves
The Maximum change in the cost function could be used as a guide to set the value of initial temperature
In SA, does the final temperature have to reach zero?
No. It might take a very long time to reach zero when some decrement approaches are used
When do you stop the Simulated Annealing algorithm?
- Whenever the temperature is reasonably low
- The system is frozen, and there are no better moves generated, and worse moves are not getting accepted
- Number of iterations reaches a threshold (maybe)
What are the 3 temperature decrement rules?
Linear: T = T-a
Geometric: T=T*a
Slow Decrease:
T = T/(1+bT)
a and b are constants
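The three rules as tiny Python helpers (the default values of the constants `a` and `b` are illustrative assumptions):

```python
def linear_decrement(t, a=1.0):
    return t - a               # Linear: T <- T - a

def geometric_decrement(t, a=0.95):
    return t * a               # Geometric: T <- T * a

def slow_decrement(t, b=0.01):
    return t / (1 + b * t)     # Slow decrease: T <- T / (1 + b*T)
```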
In SA, how many iterations for a given temperature?
Enough iterations should be allowed at every temperature for the system to be stable at that temperature.
When does the SA algorithm converge? What other concept can it be likened to?
SA converges when the temperature is reduced to zero slowly enough. It can be likened to a non-homogeneous Markov chain
What are the 2 pros and 2 cons of Simulated Annealing?
Pros:
- Easy to use
- Provides good solutions for a wide range of problems
Cons:
- Takes a long time to run
- There are a lot of tuneable parameters
What is the heuristic function in Simulated annealing?
The Cost function / fitness function (which is what the algorithm is trying to minimize)
Suppose SA is running with T=0. Which algorithm would this be identical to?
Hill Climbing search
In Simulated Annealing, what are drawbacks for when temperature is reduced too fast or too slow?
If temperature is reduced too fast, then optima are not found
If temperature is reduced too slow, then time is wasted
What type of cost functions should be avoided in Simulated Annealing?
Cost functions which return the same value for many different states
How can constraints be added to Simulated Annealing?
How can such constraints be used to make SA more adaptive?
They can be represented in the cost function as "penalty terms"
SA can be made more adaptive by having dynamically changing weights of the penalty terms
What is the major difference between normal SA and Cooperative SA
Cooperative SA implements synchronous concurrent runs of numerous SA processes.
Cooperative SA manipulates multiple solutions (a population of solutions) at once.
Instead of singular states, Cooperative SA uses a population of states as its nodes.
In Cooperative SA, how are new solutions produced using neighbours from numerous solutions within the population?
New solutions are iteratively produced from 2 randomly selected states in the population S1 and S2:
The algorithm looks for neighbours of S1 that are closer to S2 than S1 itself.
the CLOSER set is generated, and the next solution is randomly selected from it
In Cooperative SA, what is the CLOSER set defined with?
What significance does the CLOSER set have to the algorithm?
The CLOSER set is all the members S_k of S1's neighbourhood such that the distance between S_k and S2 is less than the distance between S1 and S2.
S1 and S2 are members of the current population in Cooperative SA
If the CLOSER set is not empty, then the next solution is randomly selected from it
How is temperature updated in Cooperative SA?
Temperature is updated based on the difference of mean cost value of the new and old populations
If the mean cost difference between the current and previous populations is less than zero, the temperature is not updated. Otherwise, it is changed
Describe Cooperative SA algorithm step-by-step
Initialize the population to POP_0, and set the initial temperature
For a determined number of transitions k:
For i = 1 to POP:
Select a random cooperator S_j from the population POP_(k-1)
Generate S_new = cotrans(S_i, S_j)
Accept the new solution if it is better
Otherwise, probabilistically determine whether a move to the new solution is taken, and update the solution accordingly
Update Temperature

What is Tabu Search?
Tabu Search is a meta-heuristic, a general strategy for guiding and controlling inner heuristics.
It is like a combination of local search strategy and memory structures
What does Tabu Search use memory structures for?
- To escape local minima
- To implement an explorative strategy & avoid revisiting nodes
In Trajectory Methods such as Tabu Search, Why can't algorithms consider all the possible neighbours of current solutions?
It is very computationally demanding to process all possible neighbours.
Why does Tabu search accept non-improving solutions
In order to escape from local optima where all the neighbouring solutions are non-improving (dead end)
How does Tabu Search exploit memory?
Classifies a subset of moves in the neighbourhood which are not permitted (tabu)
What are the 2 uses (types) of memory structures in Tabu Search?
- Short term memory which is based on recency of occurrence of a component. This is used to prevent the search from revisiting previously visited solutions
- Long term memory which is based on frequency of occurrence. This is used to diversify the search and explore unvisited areas of the solution space by avoiding frequently visited components
What is the name of the short-term memory structure used in Tabu Search.
What is the purpose of this memory structure
The tabu list. It stores recently visited solutions (or move attributes) to prevent the search from revisiting them during their tabu tenure.
In Tabu Search, what is "tabu tenure", and where is it stored?
Tabu Tenure is the number of iterations in which a certain move and its attributes are stored in the tabu list.
Recent states/components are "tabu-active" during their tabu-tenure
In Trajectory methods, what is the purpose of the neighbourhood?
To identify adjacent solutions/states that can be reached from the current solution/states
What are stopping conditions in Tabu Search?
- No feasible solution in the neighbourhood of current solution
- Reached the maximum number of iterations allowed
- The number of iterations since the last improvement is larger than a threshold
What is the idea behind the aspiration criteria in Tabu Search?
Sometimes it is useful to allow certain moves despite being Tabu - which prevents stagnation.
In Tabu Search, what are approaches used to cancel the Tabus referred as?
Aspiration Criteria
What is the Basic algorithm in Tabu Search?
Firstly, generate an initial solution.
While the termination criterion are not met {
- Choose the best neighbour "s_next" from N(s) = N(s) - T(s) + A(s)
- Store s_next in A(s) if it improves the best known solution
- Update T(s) and A(s)
}
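An illustrative Python sketch of this loop (fixed tabu tenure, improved-best aspiration; all parameter choices here are our own assumptions):

```python
from collections import deque

def tabu_search(s0, cost, neighbors, tenure=5, max_iters=100):
    """Tabu search sketch: best non-tabu neighbour, fixed tabu tenure."""
    best = current = s0
    tabu = deque([s0], maxlen=tenure)          # short-term memory (tabu list)
    for _ in range(max_iters):
        # N(s) minus tabu moves, plus aspiration: allow a tabu solution
        # if it would improve on the best known solution
        candidates = [n for n in neighbors(current)
                      if n not in tabu or cost(n) < cost(best)]
        if not candidates:
            break                              # no feasible neighbour: stop
        current = min(candidates, key=cost)    # best neighbour, even if worse
        tabu.append(current)                   # remember it for `tenure` steps
        if cost(current) < cost(best):
            best = current                     # track the best-known solution
    return best
```

Minimizing (x - 3)² from x = 10 with neighbours x ± 1, the search descends to x = 3, and because the tabu list blocks immediate backtracking it then wanders without losing the stored best.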
In Tabu Search, are the Tabu solutions included in the neighbourhood function?
Under what 2 conditions would they be brought back?
Tabu Solutions are excluded from N(s) and not revisited during the tabu tenure.
They are added back if:
1. The aspiration conditions are met
2. The tabu tenure is finished (number of iterations in tabu tenure is completed)
In Tabu Search, what is the common Aspiration Criteria?
Improved-best or best solution: If a tabu solution encountered at the current iteration is better than the best solution found so far, then it can be added to the aspiration list
In Tabu Search, what are 2 examples of Tabu Restrictions
- A move that involves the same exchange of positions of a tabu move
- A move that involves any of the positions that were involved in a tabu move
In Tabu Search, what is static vs dynamic Tabu Tenure, T
Static: choose T to be a constant, or set it based on the problem size
Dynamic: choose T to vary randomly between some T_min and T_max
Under what condition does a Tabu move become admissible?
A tabu move becomes admissible if it yields a solution that is better than any obtained solution so far (or better than the aspiration value)
In Tabu Search, what is Intensification?
The process of exploiting a small portion of the search space, and penalizing solutions that are far from the current solution
In Tabu Search, what is Diversification?
Forcing the search into previously unexplored areas of the search space and penalizing solutions that are close to the current
What type of memory is diversification based on?
Long-term memory, namely frequency-based memory
In Tabu Search, what are the two major approaches for applying the diversification process? Describe them
Restart diversification: Finds components that rarely appear in the solution & restarts the search from these points
Continuous Diversification: Bias the evaluation of possible moves by adding a term which depends on the neighbour's frequency to the evaluation function
What are the pros and cons of Tabu Search?
Pros:
- Accepts non-improving solutions in order to escape from local optima
- Uses a Tabu List
- Can be applied to both discrete and continuous problem spaces
- For larger and more difficult problems (ex: scheduling), tabu search is very efficient
Cons:
- Too many parameters to be determined
- Number of iterations is large
- Global optimum might not be found, and it depends on the parameter settings
What is the Linear Assignment problem?
How many different distinct combinations of the assignment problem exist?
Involves the assignment of n objects to n sites. For each assignment, there is a related cost C(i,j) of assigning object "i" to site "j"
There are n! distinct combinations in the solution space
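Purely to illustrate the n!-sized solution space (not a practical method for large n), a brute-force solver can enumerate every permutation:

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Enumerate all n! assignments of objects to sites; cost[i][j] = C(i,j)."""
    n = len(cost)
    best_perm, best_cost = None, float('inf')
    for perm in permutations(range(n)):        # perm[i] = site of object i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_cost:
            best_perm, best_cost = perm, total
    return best_perm, best_cost
```

For the 2x2 cost matrix [[4, 1], [2, 3]] there are 2! = 2 assignments; the optimum sends object 0 to site 1 and object 1 to site 0, at total cost 3.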
What are some applications to the linear assignment problem?
Assignment of offices or services in buildings
Assignment of departure gates to an airport
Distribution of files in a database
