Flashcards in Search Deck (84):

1

## Describe a simple search agent.

### Deterministic, goal-driven agent.

2

## What is a solution to a problem given a set of start nodes and goal nodes?

### A path from a start node to a goal node.

3

## What is the forward branching factor of a node?

### It is the number of arcs going out of the node.

4

## What is the backward branching factor of a node?

### It is the number of arcs going into the node

5

## Describe generic search algorithm.

###
-Given a graph, start nodes, and goal nodes, incrementally explore paths from start node.

- Maintain a frontier of paths from the start node that have been explored.

- As search proceeds, the frontier is expanded into unexplored nodes until a goal node is encountered.
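The steps above can be sketched in Python (a minimal illustration over a hypothetical toy graph; node names and the adjacency-list representation are assumptions, and popping from the end of a list makes this particular sketch behave like DFS):

```python
# Hypothetical toy graph given as adjacency lists.
GRAPH = {"s": ["a", "b"], "a": ["g"], "b": ["a"], "g": []}

def generic_search(graph, start, is_goal):
    """Incrementally explore paths from the start node.

    The frontier holds paths that have been generated but not yet
    expanded.  How a path is selected from it defines the strategy.
    """
    frontier = [[start]]                  # paths, each beginning at `start`
    while frontier:
        path = frontier.pop()             # select a path from the frontier
        node = path[-1]
        if is_goal(node):
            return path                   # a start-to-goal path is a solution
        for neighbour in graph[node]:     # expand: follow each outgoing arc
            frontier.append(path + [neighbour])
    return None                           # frontier exhausted: no solution

print(generic_search(GRAPH, "s", lambda n: n == "g"))
```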

6

## What does it mean for a search algorithm to be complete?

### A search algorithm is complete if, whenever there is at least one solution, the algorithm is guaranteed to find it within a finite amount of time.

7

## What does it mean for a search algorithm to be optimal?

### A search algorithm is optimal if the first solution it finds is the best one.

8

## What is the time complexity of a search algorithm?

###
The time complexity of a search algorithm is the worst-case amount of time it will take to run, expressed in terms of:

- maximum path length m

- maximum branching factor b

9

## What is the space complexity of a search algorithm?

###
The space complexity of a search algorithm is the worst-case amount of memory that the algorithm will use, expressed in terms of:

- maximum path length m

- maximum branching factor b

10

## What defines the search strategy?

### The way in which the frontier is expanded defines the search strategy.

11

## What is the frontier?

### The frontier is the set of paths which could be explored next.

12

## Define depth-first search.

### Depth-first search is a search algorithm that explores each path on the frontier until its end (or until a goal is found) before considering any other path.

13

## What is the frontier in DFS?

### In DFS, the frontier is a last-in-first-out stack.
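A minimal sketch of DFS with the frontier as an explicit LIFO stack of paths (the toy graph and node names are illustrative assumptions):

```python
# Illustrative toy graph as adjacency lists.
GRAPH = {"s": ["a", "b"], "a": ["g"], "b": [], "g": []}

def dfs(graph, start, goals):
    frontier = [[start]]                  # LIFO stack of paths
    while frontier:
        path = frontier.pop()             # most recently added path first
        node = path[-1]
        if node in goals:
            return path
        # reversed() so the first-listed neighbour is explored first
        for nb in reversed(graph.get(node, [])):
            frontier.append(path + [nb])
    return None

print(dfs(GRAPH, "s", {"g"}))             # explores s -> a to its end first
```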

14

## Is DFS complete?

### No, it may get stuck in infinite loops.

15

## Is DFS optimal?

### No, it can "stumble" on longer solution paths before it gets to shorter ones.

16

## What is the time complexity of DFS?

### DFS time complexity is O(b^m). In the worst case it must examine every node in the tree.

17

## What is the space complexity of DFS?

### DFS space complexity is O(bm). The longest possible path is m, and for every node on that path the frontier must hold up to b neighbours.

18

## When is DFS complete?

### DFS is complete for finite acyclic graphs

19

## When is using DFS appropriate?

###
DFS is appropriate when:

- Space is restricted

- There are many solutions, with long path lengths

20

## When is using DFS inappropriate?

###
DFS is a poor method when:

- There are cycles in the graph

- There are sparse solutions at shallow depth

21

## Define breadth-first search.

### Breadth-first search is a search algorithm that explores all paths of length l on the frontier before looking at paths of length l+1.

22

## What is the frontier in BFS?

### In BFS, the frontier is a first-in-first-out queue.
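A minimal BFS sketch with the frontier as a FIFO queue (the toy graph is an illustrative assumption, chosen so the shallower goal path goes through "b"):

```python
from collections import deque

# Illustrative toy graph: the shortest goal path is s -> b -> g.
GRAPH = {"s": ["a", "b"], "a": ["c"], "b": ["g"], "c": ["g"], "g": []}

def bfs(graph, start, goals):
    frontier = deque([[start]])           # FIFO queue of paths
    while frontier:
        path = frontier.popleft()         # oldest (shortest) path first
        node = path[-1]
        if node in goals:
            return path
        for nb in graph.get(node, []):
            frontier.append(path + [nb])
    return None

print(bfs(GRAPH, "s", {"g"}))             # finds the length-2 path, not s -> a -> c -> g
```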

23

## Is BFS complete?

### BFS is complete. If a solution exists at level l, the path to it will be explored before any path of length l+1.

24

## Is BFS optimal?

### Yes. Any goal at level l will be reached before goals at deeper levels.

25

## What is the time complexity of BFS?

### Time complexity of BFS is O(b^m). Like DFS, in the worst case BFS must examine every node on the tree.

26

## What is the space complexity of BFS?

###
Space complexity of BFS is O(b^m).

27

## When is using BFS appropriate?

###
Using BFS is appropriate when:

- The search graph has cycles or is infinite

- We need the shortest path to a solution

- There are some solutions at shallow depth and others deeper

28

## When is using BFS inappropriate?

###
BFS is a poor method when:

- There are only solutions at great depth

- There is no way the search graph will fit into memory

29

## How are search strategies different?

### Search strategies are different with respect to how they add/remove paths from the frontier.

30

## Define Iterative Deepening DFS.

### Iterative deepening DFS is a search strategy that uses DFS to look for a solution at depth 1, then 2, then 3, etc.
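A sketch of iterative deepening with a recursive depth-limited DFS (toy graph and `max_depth` cutoff are illustrative assumptions):

```python
# Illustrative toy graph; the shallowest goal path is s -> b -> g.
GRAPH = {"s": ["a", "b"], "a": ["c"], "b": ["g"], "c": [], "g": []}

def depth_limited(graph, path, goals, limit):
    node = path[-1]
    if node in goals:
        return path
    if limit == 0:                         # bound reached: back up
        return None
    for nb in graph.get(node, []):
        found = depth_limited(graph, path + [nb], goals, limit - 1)
        if found:
            return found
    return None

def iddfs(graph, start, goals, max_depth=20):
    for limit in range(max_depth + 1):     # DFS to depth 0, 1, 2, ...
        result = depth_limited(graph, [start], goals, limit)
        if result:
            return result
    return None

print(iddfs(GRAPH, "s", {"g"}))            # finds the shallowest goal, like BFS
```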

31

## What is the time complexity of Iterative Deepening DFS?

### Time complexity of the iterative deepening DFS is O(b^m), but it has an overhead factor (b/(b-1)).

32

## What is the cost of a path?

### The cost of a path is the sum of the costs of its arcs.

33

## Define the Lowest-cost-first search and explain how it works.

### Lowest-cost-first search is a search strategy that finds the path with the lowest cost. At each stage, it selects the path with the lowest cost on the frontier.

34

## What is the frontier in Lowest-cost-first search?

### The frontier in Lowest-cost-first search is implemented as a priority queue.
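A minimal LCFS sketch using Python's `heapq` as the priority queue (the weighted toy graph is an illustrative assumption; the integer tie-breaker just keeps `heapq` from ever comparing two path lists):

```python
import heapq

# Illustrative weighted graph: node -> [(neighbour, arc_cost)].
GRAPH = {"s": [("a", 1), ("b", 4)], "a": [("b", 1)], "b": [("g", 1)], "g": []}

def lcfs(graph, start, goals):
    tie = 0                                # unique tie-breaker for equal costs
    frontier = [(0, tie, [start])]         # priority queue ordered by path cost
    while frontier:
        cost, _, path = heapq.heappop(frontier)   # cheapest path first
        node = path[-1]
        if node in goals:
            return cost, path
        for nb, arc in graph.get(node, []):
            tie += 1
            heapq.heappush(frontier, (cost + arc, tie, path + [nb]))
    return None

print(lcfs(GRAPH, "s", {"g"}))             # cheapest path s -> a -> b -> g, cost 3
```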

35

## When arc costs are equal, LCFS is equivalent to...

### BFS

36

## Is LCFS complete and optimal?

###
LCFS is not complete in general; for example, a cycle with zero or negative arc costs could be followed forever. It is complete if arc costs are positive.

LCFS is optimal if arc costs are >= 0; it is not optimal in general.

37

## What is the time complexity of LCFS?

### The time complexity of LCFS is O(b^m). In the worst case, it must examine every node in the tree, because it generates all paths from the start that cost less than the cost of the solution.

38

## What is the space complexity of LCFS?

### The space complexity of LCFS is O(b^m). Just like BFS, in the worst case the frontier has to store all nodes m-1 steps from the start node.

39

## What search strategies are called uninformed?

### Uninformed search strategies are those that do not consider any information about the states and the goals to decide which path to expand first on the frontier. They are blind to the goal and do not take into account the specific nature of the problem.

40

## What are blind search algorithms?

### Blind search algorithms do not take into account the goal until they are at the goal node.

41

## What is a search heuristic?

### A search heuristic h(n) is an estimate of the distance/cost of the optimal (cheapest) path from node n to a goal node. It can be used to guide the search.

42

## Define the Best First Search.

### Best First Search is a search strategy that always chooses the path on the frontier with the smallest h value. It takes a greedy approach: expand the path whose last node seems closest to the goal, i.e. choose the solution that is locally best.

43

## What is the frontier in BestFS?

### BestFS treats the frontier as a priority queue ordered by h.

44

## Is BestFS complete and optimal?

### No. BestFS is not complete: it may get trapped in a loop. BestFS is also not optimal: the path cost may be large even while h is low, since it does not take the path cost into account.

45

## What is the time complexity of the BestFS?

### The time complexity of BestFS is O(b^m). In the worst case it has to explore all the nodes.

46

## What is the space complexity of the BestFS?

### The space complexity of the BestFS is O(b^m)

47

## Why would we want to use BestFS?

### Because if the heuristic is good it can find the solution very fast.

48

## Define the A* algorithm.

### A* is a search strategy that takes into account both the cost of the path to a node, cost(p), and the heuristic value of that path, h(p). A* always chooses the path on the frontier with the lowest estimated distance from the start to a goal node constrained to go via that path.

49

## What is f(p) in the A* search?

### f(p) = cost(p) + h(p). It is an estimate of the cost of a path from the start to a goal via p.
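A minimal A* sketch ordering the frontier by f(p) = cost(p) + h(p) (the weighted graph and the heuristic table H are illustrative assumptions; H underestimates the true cost to the goal, so it is admissible):

```python
import heapq

# Illustrative weighted graph and an admissible heuristic table.
GRAPH = {"s": [("a", 1), ("b", 1)], "a": [("g", 1)], "b": [("g", 3)], "g": []}
H = {"s": 2, "a": 1, "b": 1, "g": 0}

def astar(graph, h, start, goals):
    tie = 0                                # keeps heapq from comparing paths
    frontier = [(h[start], 0, tie, [start])]   # ordered by f = cost + h
    while frontier:
        f, cost, _, path = heapq.heappop(frontier)
        node = path[-1]
        if node in goals:
            return cost, path
        for nb, arc in graph.get(node, []):
            tie += 1
            g = cost + arc
            heapq.heappush(frontier, (g + h[nb], g, tie, path + [nb]))
    return None

print(astar(GRAPH, H, "s", {"g"}))         # optimal path s -> a -> g, cost 2
```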

50

## Is A* complete and optimal?

###
A* is complete and optimal if:

1) The branching factor is finite.

2) Arc costs are > 0

3) h(n) is admissible.

51

## What heuristic is called admissible?

### Let cost(n) denote the cost of the optimal path from node n to any goal node. A search heuristic h(n) is called admissible if h(n) <= cost(n) for all nodes n, i.e. if for all nodes it is an underestimate of the cost to any goal.

52

## What is a relaxed version of the problem?

###
A relaxed version of the problem is:

- where one or more constraints have been dropped

- a problem with fewer restrictions on the actions

53

## Why does a relaxed version of the problem lead to an admissible heuristic?

###
- The relaxed problem is easier, so it is less costly to solve.

- Its optimal cost can therefore never exceed the optimal cost of the original problem, which makes it an underestimate.

54

## Why is it important to identify constraints which, when dropped make the problem easy to solve?

### Because it makes the problem easy to solve, and heuristics are not useful if they are as hard to compute as the original problem is to solve.

55

## Why is A* complete?

### If all the conditions hold, A* does not get caught in a cycle, because the f(n) values of sub-paths in the cycle eventually exceed the cost of the optimal solution.

56

## Why is A* optimal?

### A* is optimal because any sub-path of the optimal solution path will be expanded before any suboptimal solution path p' with cost(p') > cost(optimal).

57

## Why is A* optimally efficient among the algorithms that extend the search path from the initial state?

### Because A* finds the goal with the minimum number of expansions. No other optimal algorithm is guaranteed to expand fewer nodes than A*.

58

## What is the effect of a search heuristic that is a better approximation on the actual cost?

### A search heuristic that is a better approximation on the actual cost reduces the number of nodes expanded by A*.

59

## What is the difference between state space graph and search tree?

### The state space graph represents the states in a search problem and how they are connected by the available operators. The search tree shows how the search space is traversed by a given search algorithm: it explicitly "unfolds" the paths that are expanded.

60

## Describe size of state space vs size of search tree.

### A state space with d states and 2 actions from each state may lead to a search tree with 2^d possible paths, which makes the search tree exponentially larger than the state space, especially with cycles or multiple parents.

61

## What is cycle checking good for?

### Cycle checking is good when we want to avoid infinite loops, but also want to find more than one solution, if they exist.

62

## What is multiple-path pruning good for?

### Multiple path pruning is good when we only care about finding one solution.

63

## Describe how cycle checking works in general and specifically for DFS, BFS.

###
In general, prune a path that ends in a node already on the path.

DFS: Since DFS looks at one path at a time, when a node is encountered for the second time it is guaranteed to be part of a cycle.

BFS: Since BFS keeps multiple paths going, when a node is encountered for the second time, it could be part of expanding a different path, not necessarily a cycle.
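The general rule can be sketched as DFS with a cycle check: a neighbour that already appears on the current path is pruned (the cyclic toy graph is an illustrative assumption):

```python
# Illustrative graph containing the cycle s -> a -> s.
GRAPH = {"s": ["a"], "a": ["s", "g"], "g": []}

def dfs_cycle_check(graph, start, goals):
    frontier = [[start]]                  # stack of paths
    while frontier:
        path = frontier.pop()
        node = path[-1]
        if node in goals:
            return path
        for nb in graph.get(node, []):
            if nb not in path:            # prune paths ending in a repeated node
                frontier.append(path + [nb])
    return None

print(dfs_cycle_check(GRAPH, "s", {"g"}))  # terminates despite the cycle
```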

64

## What can you say about the cost of cycle checking?

### It depends on the algorithm.

65

## What is the computational cost of cycle checking for depth-first methods and other methods?

### Using depth-first methods, with the graph explicitly stored, cycle checking can be done in constant time, because only one path is being explored at a time. For other methods, the cost is linear in the path length.

66

## Describe how multiple-path pruning works.

###
It is used if we only want one path to the solution. We can prune a path to a node that has already been reached via a previous path.
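A sketch of multiple-path pruning on BFS using a set of already-reached nodes (the diamond-shaped toy graph, where two paths reach "c", is an illustrative assumption):

```python
from collections import deque

# Illustrative diamond graph: both s -> a -> c and s -> b -> c reach "c".
GRAPH = {"s": ["a", "b"], "a": ["c"], "b": ["c"], "c": ["g"], "g": []}

def bfs_mpp(graph, start, goals):
    frontier = deque([[start]])
    reached = set()                       # nodes already reached via some path
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node in reached:               # a previous path got here first: prune
            continue
        reached.add(node)
        if node in goals:
            return path
        for nb in graph.get(node, []):
            frontier.append(path + [nb])
    return None

print(bfs_mpp(GRAPH, "s", {"g"}))         # the second path to "c" is discarded
```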

67

## What is the problem multiple-path pruning may lead to? And what are 3 possible solutions?

###
Problem: What if a subsequent path p2 to n is better than the first path p1 to n, and we want an optimal solution?

Solutions:

1) Can remove all paths from the frontier that use the longer path: these can't be optimal.

2) Can change the initial segment of the paths on the frontier to use the shorter.

3) Can prove that this can't happen for an algorithm (LCFS, for example)

68

## What allows A* to do better than the other search algorithms we have seen?

### Arc cost and h(n) combined in f(n)

69

## What is the biggest problem with A*? And name the solution.

###
Problem: Space

Solution: Combine DFS with f (B&B search)

70

## Describe how Branch-and-Bound search works.

###
- Follow exactly the same search path as depth-first search, but to ensure optimality, do not stop at the first solution found.

- It continues after recording an upper bound on the solution cost: UB = cost of the best solution found so far. Initially UB equals infinity or any overestimate of the optimal solution cost.

- When a path p is selected for expansion, compute lower bound LB(p) = f(p) = cost(p) + h(p).

- If LB(p) >= UB, remove p from frontier without expanding it (pruning step), else expand p, adding all of its neighbors to the frontier.
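The steps above can be sketched as a DFS with an upper bound (the weighted toy graph is an illustrative assumption, and the all-zero heuristic is a trivially admissible placeholder):

```python
import math

# Illustrative weighted graph: node -> [(neighbour, arc_cost)].
GRAPH = {"s": [("a", 1), ("b", 1)], "a": [("g", 3)], "b": [("g", 1)], "g": []}
H = {n: 0 for n in GRAPH}                 # trivially admissible heuristic

def branch_and_bound(graph, h, start, goals):
    ub, best = math.inf, None             # UB starts at infinity
    frontier = [(0, [start])]             # DFS stack of (cost, path)
    while frontier:
        cost, path = frontier.pop()
        node = path[-1]
        if cost + h[node] >= ub:          # LB(p) >= UB: prune without expanding
            continue
        if node in goals:                 # record the new bound, keep searching
            ub, best = cost, path
            continue
        for nb, arc in reversed(graph.get(node, [])):
            frontier.append((cost + arc, path + [nb]))
    return ub, best

print(branch_and_bound(GRAPH, H, "s", {"g"}))   # first finds cost 4, then improves to 2
```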

71

## Is Branch-and-Bound search complete and optimal?

### Branch-and-Bound search is optimal only if h(n) is admissible. And it is complete only if UB is not initially set to infinity.

72

## What is the time and space complexity of B&B search?

### The space complexity is the same as DFS, O(bm); the time complexity is O(b^m).

73

## Describe the idea of using dynamic programming in search.

### Idea: for statically stored graphs, build a table of dist(n), the actual distance of the shortest path from node n to a goal g. This is a perfect heuristic. It requires explicit goal nodes.

74

## How can you implement a dynamic programming method?

### Run LCFS with multiple-path pruning in the backwards graph (arcs reversed), starting from the goal nodes. When it's time to act (forward): at each node n, always pick the neighbour m that minimizes cost(n,m) + dist(m).
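A sketch of building the dist table this way (the forward graph is an illustrative assumption; the backward graph is derived from it by reversing each arc):

```python
import heapq

# Illustrative forward graph; the backward graph reverses each arc.
FORWARD = {"s": [("a", 1), ("b", 4)], "a": [("b", 1)], "b": [("g", 1)], "g": []}
BACKWARD = {}
for n, arcs in FORWARD.items():
    for m, c in arcs:
        BACKWARD.setdefault(m, []).append((n, c))

def dist_table(back_graph, goal):
    """LCFS with multiple-path pruning from the goal in the backward graph.

    dist(n) is the cost of the shortest path from n to the goal:
    a perfect heuristic for that goal."""
    dist = {}
    frontier = [(0, goal)]                # priority queue ordered by distance
    while frontier:
        d, node = heapq.heappop(frontier)
        if node in dist:                  # multiple-path pruning
            continue
        dist[node] = d
        for pred, arc in back_graph.get(node, []):
            heapq.heappush(frontier, (d + arc, pred))
    return dist

print(dist_table(BACKWARD, "g"))          # acting forward then greedily follows dist
```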

75

## What are the problems with using dynamic programming in search?

###
- It needs space to explicitly store the full search graph.

- The dist function needs to be recomputed for each goal: the method is not suitable when goals vary often.

76

## Describe how Iterative Deepening A* works.

###
Like IDS, but the "depth" bound is measured in terms of f(n). It does not expand the path with the lowest f-value; within each bound it performs DFS.

- Start with f-value = f(start node)

- If you don't find a solution at a given f-value

- Increase the bound: to the minimum of the f-values that exceed the previous bound.

- It will explore all nodes n with f-value <= f_min (the optimal solution cost), same as A*.
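The bullets above can be sketched as repeated f-bounded DFS (the weighted graph and admissible heuristic table are illustrative assumptions, the same shape as in the A* sketch):

```python
import math

# Illustrative weighted graph and admissible heuristic table.
GRAPH = {"s": [("a", 1), ("b", 1)], "a": [("g", 1)], "b": [("g", 3)], "g": []}
H = {"s": 2, "a": 1, "b": 1, "g": 0}

def ida_star(graph, h, start, goals):
    bound = h[start]                       # initial f-bound = f(start node)
    while True:
        next_bound = math.inf
        stack = [(0, [start])]             # DFS limited by the current f-bound
        while stack:
            cost, path = stack.pop()
            node = path[-1]
            f = cost + h[node]
            if f > bound:
                next_bound = min(next_bound, f)   # smallest f above the bound
                continue
            if node in goals:
                return cost, path
            for nb, arc in reversed(graph.get(node, [])):
                stack.append((cost + arc, path + [nb]))
        if next_bound == math.inf:         # nothing left beyond the bound
            return None
        bound = next_bound                 # increase bound to the next f-value

print(ida_star(GRAPH, H, "s", {"g"}))      # optimal path, but only O(bm) memory
```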

77

## Is IDA* complete and optimal?

### Yes, it is complete and optimal under the same conditions as A*.

78

## What is the time and space complexity of IDA*?

###
Time complexity: O(b^m)

Space complexity: O(bm)

79

## Describe how Heuristic DFS works.

###
When we expand a node, we put all its neighbours on the frontier. The order depends on the heuristic guidance: h or f.

We simply choose promising branches first. The heuristic doesn't have to be admissible.

80

## How can we combine Heuristic DFS with IDA*?

### DFS with an f-value bound (using admissible heuristic h), putting neighbors onto frontier in a smart order (using some heuristic h')

81

## Describe how Memory-bounded A* works.

###
- Limit the amount of memory for A*.

- Keep as much of the frontier in memory as possible.

- When running out of memory, delete worst paths from the frontier, then backup the value of the deleted paths to a common ancestor.

- The corresponding subtrees get regenerated only when all other paths have been shown to be worse than the "forgotten" path.

82

## Is Memory-bounded A* complete and optimal?

### Memory-bounded A* is complete if there is any reachable solution, and it is optimal if the optimal solution is reachable.

83

## What is Memory-bounded A* used for?

### Memory-bounded A* is considered one of the best algorithms for finding optimal solutions under memory limitations.

84