Wk 2-3 (Ch 3/4): Solving Problems by Searching Flashcards

1
Q

Why does the contour map for uniform-cost search look like circles around the start, while A*'s contour map looks different?

A

[INSERT]

2
Q

What does the contour map for A* look like, and why?

A

[INSERT]

3
Q

What is the definition of completeness? What is the definition of admissible?

A

[INSERT]

4
Q

Describe the concept of pruning.

A

[INSERT]

5
Q

What are the strengths and drawbacks of A*?

A

[INSERT]

6
Q

What is relative error? What is absolute error?

A

[INSERT]

7
Q

What does it mean for A* to be optimally efficient?

A

[INSERT]

8
Q

Describe the A* algorithm (see notecard).

A

[INSERT]

9
Q

Which variants of A* are optimal, and under what conditions?

A

[INSERT]

10
Q

Describe the greedy best-first search algorithm

A

[INSERT]

11
Q

What is the basic improvement that informed searches add over uninformed searches?

A

[INSERT]

12
Q

What is the main difference between IDA* and the standard iterative deepening (ID) algorithm? What are its main drawbacks?

A

[INSERT]

13
Q

How does recursive best-first search work? What are its strengths? What are its drawbacks?

A

[INSERT]

14
Q

How do admissibility and monotonicity make algorithms optimal?

A

[INSERT]

15
Q

How does SMA* work? How does thrashing relate to it? How does it improve on A*? How does it use memory?

A

[INSERT]

16
Q

Explain the statement: “memory limitations can make a problem intractable from the point of view of computation time.”

A

[INSERT]

17
Q

What is metalevel learning? What is a metalevel state space? An object-level state space? How does this improve search?

A

[INSERT]

18
Q

When would I want to use tree search vs. graph search?

A

[INSERT]

19
Q

What is the effective branching factor? How does it relate to the effectiveness of a heuristic?

A

[INSERT]

20
Q

Explain the statement: “the cost of an optimal solution to a relaxed problem is an admissible heuristic for the original problem.”

A

[INSERT]

21
Q

Explain the triangle inequality and how it relates to a heuristic being consistent.

A

[INSERT]

22
Q

What is a composite heuristic, when and how would you use it?

A

[INSERT]

23
Q

How does a pattern database work?

A

[INSERT]

24
Q

How does a disjoint pattern database improve even further on pattern databases?

A

[INSERT]

25
Q

How do features help inductive learning methods? How are these constructed to create faster algorithms?

A

[INSERT]

26
Q

In what environments do we use “beyond-classical” search to solve problems?

A

Classical search works well in observable, deterministic, and discrete environments. [INSERT]

27
Q

Explain the hill-climbing search method.

A

.

28
Q

What is simulated annealing? How does it relate to the hill-climbing search method?

A

.

29
Q

What is a complete state formulation? How is it relevant to hill climbing?

A

.

30
Q

How is goal formulation related to problem formulation?

A

In goal formulation, we decide which parts of the world we're interested in, what we can abstract away, and what should be included. It must come before problem formulation, or we have no idea what we're including and whether we're even near or far from the goal. In practice, the two go back and forth in their definitions and implementations.

31
Q

What is a well-defined problem?

A

Assuming that reaching a goal is a matter of executing a sequence of actions, we can search through those actions to find the sequence that ends in the goal state.

So, we have an initial state,
the set of possible actions,
a transition model that takes states to other states based on those actions,
and a path: the sequence of states connected by a sequence of actions that ends in a goal, as verified by a goal test.

A cost function exists as well, which says how much each action penalizes us.
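
A minimal sketch of these components as a data structure, in Python, with hypothetical class and method names (an illustration, not the book's code):

class Problem:
    """A well-defined problem: initial state, actions, transition model, goal test, cost."""

    def __init__(self, initial, goal):
        self.initial = initial   # the initial state
        self.goal = goal         # used by the default goal test

    def actions(self, state):
        """Return the set of actions applicable in `state`."""
        raise NotImplementedError

    def result(self, state, action):
        """Transition model: the state that results from doing `action` in `state`."""
        raise NotImplementedError

    def goal_test(self, state):
        """Verify whether `state` is a goal state."""
        return state == self.goal

    def step_cost(self, state, action, next_state):
        """Cost function: how much this action penalizes us (unit cost by default)."""
        return 1

A path's cost is then the sum of step costs along its sequence of actions, and a search algorithm looks for the cheapest path from the initial state to a state that passes the goal test.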

32
Q

Define state space

A

The set of all states reachable from the initial state; it is implicitly defined by the initial state, the actions, and the transition model.

33
Q

What are the things we can legitimately abstract away from a problem?

A

Details that every possible state uniformly has (or uniformly lacks), so removing them from the representation doesn't distinguish any states and doesn't affect the solution.

34
Q

How can state space be different from physical space?

A

Even if the problem I'm solving plays out in physical space, the state space is the collection of all possible representations of the part of the problem that matters. The distance between jets in the intercept, for example, may not take into account the actual physical space of the merge (or consider the two-friends-meeting-in-a-city problem).

35
Q

What are the parts of problem formulation?

A

initial state
goal test
successor function
cost function
state space

36
Q

What's the book definition of state?

State space?

A

A state is a situation that an agent can find itself in. We distinguish two types of states: world states (the actual concrete situations in the real world) and representational states (the abstract descriptions of the real world that are used by the agent in deliberating about what to do).

A state space is a graph whose nodes are the set of all states, and whose links are actions that transform one state into another.
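
As a small illustration of “a graph whose nodes are states and whose links are actions”, here is a toy fragment in Python; the city names come from the book's Romania map, but the action names and the choice of edges shown are mine:

# Each state maps the actions available in it to the state each action leads to.
state_space = {
    "Arad":   {"go_sibiu": "Sibiu", "go_timisoara": "Timisoara", "go_zerind": "Zerind"},
    "Sibiu":  {"go_arad": "Arad", "go_fagaras": "Fagaras", "go_rimnicu": "Rimnicu Vilcea"},
    "Zerind": {"go_arad": "Arad", "go_oradea": "Oradea"},
}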

37
Q

What is the book definition of a transition model? Branching factor? Search tree?

A

A transition model describes what each action does: given a state and an action, it returns the state that results from taking that action.

The branching factor in a search tree is the number of actions available to the agent.

A search tree is a tree (a graph with no undirected loops) in which the root node is the start state and the set of children for each node consists of the states reachable by taking any action.

A search node is a node in the search tree.
A goal is a state that the agent is trying to reach.
An action is something that the agent can choose to do.

38
Q

Mathematically describe the successor function.

A

A successor function describes the agent's options: given a state, it returns a set of (action, state) pairs, where each state is the state reachable by taking the action.
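
A minimal sketch in Python of a successor function over the toy state_space above (my own illustration, not the book's code):

def successors(state_space, state):
    """Return the set of (action, resulting_state) pairs available from `state`."""
    return {(action, next_state)
            for action, next_state in state_space.get(state, {}).items()}

# e.g. successors(state_space, "Arad")
# -> {("go_sibiu", "Sibiu"), ("go_timisoara", "Timisoara"), ("go_zerind", "Zerind")}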

39
Q

What's the difference between a world state, a state description, and a search node? Why is this distinction important?

A

A world state is how reality is or could be. In one world state we're in Arad, in another we're in Bucharest. The world state also includes which street we're on, what's currently on the radio, and the price of tea in China. A state description is an agent's internal description of a world state. Examples are In(Arad) and In(Bucharest). These descriptions are necessarily approximate, recording only some aspects of the state.

We need to distinguish between world states and state descriptions because state descriptions are lossy abstractions of the world state, because the agent could be mistaken about how the world is, because the agent might want to imagine things that aren't true but that it could make true, and because the agent cares about the world, not its internal representation of it.

Search nodes are generated during search, representing a state the search process knows how to reach. They contain additional information aside from the state description, such as the sequence of actions used to reach this state. This distinction is useful because we may generate different search nodes that have the same state, and because search nodes contain more information than a state representation.
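
A minimal Python sketch of a search node as described above (field names are my assumptions, not the book's code):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A search node: a state description plus bookkeeping about how it was reached."""
    state: str                        # the state description, e.g. "Arad"
    parent: Optional["Node"] = None   # node this one was generated from
    action: Optional[str] = None      # action taken to reach this node from its parent
    path_cost: float = 0.0            # cost of the action sequence from the root

    def solution(self):
        """Recover the sequence of actions used to reach this node's state."""
        actions, node = [], self
        while node.parent is not None:
            actions.append(node.action)
            node = node.parent
        return list(reversed(actions))

Two nodes can share the same state (reached by different paths) while differing in parent, action, and path_cost, which is exactly the distinction the card makes.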