Uninformed Search Flashcards

1
Q

What is a state-based model?

A

Model task as a graph of all possible states.

  • A state captures all the relevant information about the past in order to act (optimally) in the future
  • Actions correspond to transitions from one state to another
  • Solutions are defined as a sequence of steps/actions (path)
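
A minimal sketch of this model in Python, using a made-up toy domain (state names and actions are purely illustrative):

# Hypothetical state-space graph: each state maps its applicable
# actions to the resulting successor state.
STATE_GRAPH = {
    "A": {"go_B": "B", "go_C": "C"},
    "B": {"go_D": "D"},
    "C": {"go_D": "D"},
    "D": {},  # no applicable actions
}

# A solution is a path, e.g. the action sequence ["go_B", "go_D"]
# takes us from state "A" to state "D".
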
2
Q

What is the general goal of a search problem?

A

To find a sequence of actions that reaches a goal state

3
Q

Basic Search Task Assumptions (not including games)

A
Fully observable
Deterministic
Static
Discrete
Single Agent
Solution = sequence of actions
4
Q

What knowledge does the agent need?

A

info needs to be sufficient to describe all relevant aspects for reaching the goal
adequate to describe the world state/situation

5
Q

Fully observable (aka closed world assumption)

A

All necessary information about a problem domain is accessible so that each state is a complete description of the world; there is no missing info at any point

6
Q

problem state

A

representation of all necessary information about the environment

7
Q

state space

A

all possible valid configurations of the environment

8
Q

goal test

A

defines what it means to have achieved the goal (goal can be task to be accomplished, state to be reached, a set of properties to be satisfied)

9
Q

What do the actions of the agent need?

A

given a description of the current state, an action specifies whether it can be applied and what the state of the world will be after it is performed (no history is needed to compute the successor)

10
Q

Actions need to be

A

discrete, individual steps that can be treated as instantaneous, and sufficient to describe all necessary changes

11
Q

A state space

A

directed graph with set of nodes and arcs

12
Q

each node in state space graph

A

contains a state description, possibly a link to the parent node, and the name of the action that generated the node

13
Q

each arc in state space graph

A

has a fixed, positive cost; corresponds to an action: when the action is applied to the arc’s source node’s state, the destination node is the resulting state

14
Q

successor nodes

A

correspond to all of the legal actions that can be applied to the source node’s state

15
Q

expanding a node means:

A

generate all successor nodes

add them and associated arcs to the state-space search tree
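
A rough Python sketch of nodes and expansion (field names are illustrative; successors(state) is an assumed helper that yields (action, next_state, step_cost) for every legal action in that state):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    state: object             # state description
    parent: Optional["Node"]  # link back to the parent node (None for the root)
    action: Optional[str]     # name of the action that generated this node
    path_cost: float = 0.0    # cost of the path from the start node

def expand(node, successors):
    # Generate all successor nodes of `node`, one per legal action.
    return [Node(next_state, node, action, node.path_cost + step_cost)
            for action, next_state, step_cost in successors(node.state)]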

16
Q

state space graph needs

A

start node(s), goal test, solution (sequence of actions), total cost

state space, initial states, goal states, action function, cost function
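
A small skeleton of those pieces in Python (the class and parameter names are just one possible packaging, not a standard API):

class SearchProblem:
    # Bundles the components a state-space search needs.
    def __init__(self, initial_states, goal_states, actions, cost):
        self.initial_states = initial_states  # start node(s)
        self.goal_states = set(goal_states)   # used by the goal test
        self.actions = actions                # actions(state) -> legal (action, next_state) pairs
        self.cost = cost                      # cost(state, action, next_state) -> step cost

    def goal_test(self, state):
        return state in self.goal_states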

17
Q

Search summary

A

Model the task as a graph of all possible states and actions; a solution is an ordered sequence of primitive actions (a path); a state captures all the relevant info about the past

18
Q

frontier

A

generated but not yet expanded states

19
Q

tree search algorithm

A
loop do
if frontier is empty, return failure
pick a node n from the frontier and remove it
if n is a goal node, return the solution
generate n's successors and add them to the frontier
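
A runnable Python version of that loop, under the assumption that successors(state) yields (action, next_state) pairs; the pick function decides which frontier entry to expand next and is what distinguishes the different uninformed strategies:

def tree_search(start, goal_test, successors, pick=lambda frontier: 0):
    frontier = [(start, [])]                        # (state, actions taken so far)
    while frontier:
        state, path = frontier.pop(pick(frontier))  # pick a node n and remove it
        if goal_test(state):
            return path                             # solution: sequence of actions
        for action, next_state in successors(state):
            frontier.append((next_state, path + [action]))
    return None                                     # frontier empty: failure

With the default pick (always index 0) the frontier acts as a FIFO queue, giving BFS-like order; picking len(frontier) - 1 pops the most recently added node, giving DFS-like order.
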
20
Q

tree search does not:

A

detect goal when node is generated

does not detect loops (repeated states) in state space

21
Q

Uninformed Search: we know

A

the goal test, the successors() function, but we do not know which non-goal states are better

22
Q

In a state-space search tree, what is a leaf?

A

unexpanded node in frontier, dead end, goal node

23
Q

completeness

A

if a solution exists, will it be found

24
Q

optimality/admissibility

A

if a solution is found, is it guaranteed to be optimal; an admissible algorithm will find a solution with the minimum cost

25
Q

time complexity

A

measured in worst case, measured by counting number of nodes expanded

26
Q

space complexity

A

max size of frontier during search

27
Q

Is BFS complete?

A

Yes

28
Q

Is BFS optimal?

A

yes, if all arcs have the same cost, or costs are positive and non-decreasing with depth; otherwise not optimal, but it will find the shortest-length solution

29
Q

BFS time complexity

A

O(b^d) where d is depth of solution and b is branching factor. Usually very slow at finding solutions with a large number of steps because it must look at all shorter paths first
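
For a sense of scale (numbers purely illustrative): with branching factor b = 10 and solution depth d = 5, BFS may have to generate about 1 + 10 + 10^2 + 10^3 + 10^4 + 10^5 = 111,111 nodes before reaching the goal.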

30
Q

BFS space complexity

A

O(b^d) where b is branching factor and d is depth

31
Q

describe DFS

A

expand the deepest node first… select a direction and go deep to the end; slightly change the end; then change the end again, etc.

32
Q

Cycle checking

A

Don’t add node to frontier if its state already occurs on path back to root
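
A short recursive DFS sketch combining this card with the DFS description above; successors(state) is assumed to yield (action, next_state) pairs and depth_limit just keeps the example from descending forever:

def dfs(state, goal_test, successors, depth_limit, path_states=None):
    if path_states is None:
        path_states = [state]             # states on the path back to the root
    if goal_test(state):
        return []                         # goal reached: no further actions needed
    if depth_limit == 0:
        return None                       # cutoff
    for action, next_state in successors(state):
        if next_state in path_states:     # cycle check: skip states already on this path
            continue
        result = dfs(next_state, goal_test, successors,
                     depth_limit - 1, path_states + [next_state])
        if result is not None:
            return [action] + result      # build the action sequence on the way back up
    return None                           # dead end: backtrack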

33
Q

Is DFS complete?

A

No; it may not be complete, with or without cycle checking or a depth cutoff

34
Q

Is DFS optimal/admissible?

A

No

35
Q

DFS time complexity

A

O(b^d); b is branching factor; d is depth of solution

36
Q

DFS chronological backtracking

A

when the search hits a dead end, back up one level at a time; problematic if the mistake occurred because of a bad action choice near the top of the search tree

37
Q

What is Uniform Cost Search (UCS)

A

Use a priority queue to order nodes on the frontier, sorted by path cost; if g(n) is the cost of the path from the start node to node n, UCS expands nodes in order of increasing g(n)
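
A compact Python sketch using heapq as the priority queue (successors(state) is an assumed helper yielding (action, next_state, step_cost) triples; the counter only breaks ties so equal-cost entries never compare states):

import heapq, itertools

def uniform_cost_search(start, goal_test, successors):
    counter = itertools.count()
    frontier = [(0.0, next(counter), start, [])]     # (g(n), tie-breaker, state, actions)
    while frontier:
        g, _, state, path = heapq.heappop(frontier)  # cheapest path cost first
        if goal_test(state):                         # goal test when the node is removed
            return path, g
        for action, next_state, step_cost in successors(state):
            heapq.heappush(frontier,
                           (g + step_cost, next(counter), next_state, path + [action]))
    return None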

38
Q

Is UCS complete?

A

Yes

39
Q

Is UCS optimal/admissible?

A

Yes (assuming positive arc costs); it also requires that the goal test be done when a node is removed from the frontier instead of when it is generated by its parent

40
Q

UCS time complexity

A

O(b^d); more precisely O(b^(C*/ε)), where C* is the cost of the best goal path and ε is the minimum arc cost

41
Q

What is iterative deepening search?

A

do DFS to depth 1; if no solution is found, repeat with the “depth bound” increased by 1, until a solution is found
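
A minimal Python sketch of the idea (successors(state) is assumed to yield (action, next_state) pairs; max_depth only keeps the example loop bounded):

def depth_limited_dfs(state, goal_test, successors, limit):
    if goal_test(state):
        return []
    if limit == 0:
        return None                       # hit the depth bound
    for action, next_state in successors(state):
        result = depth_limited_dfs(next_state, goal_test, successors, limit - 1)
        if result is not None:
            return [action] + result
    return None

def iterative_deepening(start, goal_test, successors, max_depth=50):
    for bound in range(1, max_depth + 1): # depth bound 1, 2, 3, ...
        result = depth_limited_dfs(start, goal_test, successors, bound)
        if result is not None:
            return result                 # first solution found is a shallowest one
    return None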

42
Q

Advantages of iterative deepening search

A

complete, optimal if costs are the same, memory efficient, finds paths quickly in practice

43
Q

is IDS complete?

A

Yes

44
Q

is IDS optimal?

A

Yes, if costs are constant, or positive and non-decreasing with depth

45
Q

Time complexity of IDS?

A

slightly worse than BFS or DFS because nodes near the top of the tree are generated multiple times, but still O(b^d) since most nodes are near the bottom of the tree
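
For scale (same illustrative b = 10, d = 5 as above): the repeated work amounts to roughly 5·10 + 4·10^2 + 3·10^3 + 2·10^4 + 1·10^5 ≈ 123,000 node generations in total, only about 11% more than a single BFS pass, because the deepest level dominates the count.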

46
Q

space complexity of IDS

A

O(bd)

47
Q

What is IDS good for?

A

trades time for a large reduction in space; a good anytime algorithm because it can return a valid solution before it finishes (more time = better solution)

48
Q

Bidirectional Search -

A

search forward from the start node and backward from the goal node; stop when the frontiers meet; generates O(b^(d/2)) nodes, but it takes work to compare states to detect where the frontiers intersect
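
A rough bidirectional-BFS sketch; it assumes the problem supplies both successors(state) and predecessors(state) (yielding plain successor/predecessor states here, to keep it short) and it returns only the meeting state rather than the stitched-together path:

from collections import deque

def bidirectional_search(start, goal, successors, predecessors):
    if start == goal:
        return start
    seen_fwd, seen_bwd = {start}, {goal}
    frontier_fwd, frontier_bwd = deque([start]), deque([goal])
    while frontier_fwd and frontier_bwd:
        # Expand one full layer in each direction, checking for a meeting state.
        for frontier, seen, other_seen, neighbors in (
                (frontier_fwd, seen_fwd, seen_bwd, successors),
                (frontier_bwd, seen_bwd, seen_fwd, predecessors)):
            for _ in range(len(frontier)):
                state = frontier.popleft()
                for next_state in neighbors(state):
                    if next_state in other_seen:   # frontiers meet here
                        return next_state
                    if next_state not in seen:
                        seen.add(next_state)
                        frontier.append(next_state)
    return None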

49
Q

graph search algorithm

A

must remember already-expanded states (an explored set) so repeated states are not re-expanded
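
As a sketch of the difference, a BFS-style graph search only needs an explored set on top of the tree-search loop (successors(state) again assumed to yield (action, next_state) pairs):

from collections import deque

def graph_search_bfs(start, goal_test, successors):
    frontier = deque([(start, [])])
    explored = {start}                        # states already generated/expanded
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for action, next_state in successors(state):
            if next_state not in explored:    # never revisit a repeated state
                explored.add(next_state)
                frontier.append((next_state, path + [action]))
    return None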