Midterm 1 Flashcards

1
Q

What is an AGENT?

A

Anything that can be viewed as perceiving its ENVIRONMENT through SENSORS and ACTING upon that environment through ACTUATORS.

2
Q

What is a PERCEPT SEQUENCE?

A

The complete history of everything the agent has perceived.

3
Q

What is the role of an AGENT FUNCTION?

A

To map any given percept sequence to an action.

4
Q

What is the difference between an agent FUNCTION and an agent PROGRAM?

A

FUNCTION: Mathematical abstraction. PROGRAM: Concrete implementation, running within some physical system.
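
A minimal Python sketch of the distinction (the table and names are illustrative assumptions, not from the course): the agent function is the abstract mapping from percept sequences to actions; the agent program is concrete code that realizes it on a machine.

```python
# The agent FUNCTION is the abstract mapping (here, a lookup table keyed
# by full percept sequences); the agent PROGRAM is the running code that
# implements it, keeping the percept history as internal state.

def make_table_driven_program(table):
    percepts = []  # everything perceived so far

    def program(percept):
        percepts.append(percept)
        # Apply the agent function: percept sequence -> action
        return table.get(tuple(percepts), "noop")

    return program

# Toy vacuum-world table for a few short percept sequences.
table = {
    (("A", "dirty"),): "suck",
    (("A", "clean"),): "right",
    (("A", "clean"), ("B", "dirty")): "suck",
}

agent = make_table_driven_program(table)
print(agent(("A", "clean")))  # -> right
print(agent(("B", "dirty")))  # -> suck
```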

5
Q

What are the components of the TASK ENVIRONMENT?

A

  • Performance measure: desirable qualities
  • Environment
  • Actuators: methods of control over the device/vehicle
  • Sensors: sensory input

6
Q

Describe a SIMPLE REFLEX AGENT.

A

An agent whose action depends only on the current percept.
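
A minimal sketch, assuming the standard two-square vacuum world as the example environment:

```python
# SIMPLE REFLEX AGENT: the action depends only on the current percept;
# no percept history or internal state is kept.

def reflex_vacuum_agent(percept):
    location, status = percept   # current percept only
    if status == "dirty":        # condition-action rules
        return "suck"
    elif location == "A":
        return "right"
    else:
        return "left"

print(reflex_vacuum_agent(("A", "dirty")))   # -> suck
print(reflex_vacuum_agent(("B", "clean")))   # -> left
```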

7
Q

Describe a MODEL-BASED AGENT.

A

An agent whose action is derived directly from an internal model of the current world state that is updated over time.

8
Q

Describe a GOAL-BASED AGENT.

A

An agent that selects actions that it believes will achieve explicitly represented goals.

9
Q

Describe a UTILITY-BASED AGENT.

A

An agent that selects actions that it believes will maximize the expected utility of the outcome state.

10
Q

Describe the components of a LEARNING AGENT.

A

An agent whose behavior improves over time based on its experience.

LEARNING ELEMENT: responsible for making improvements

PERFORMANCE ELEMENT: responsible for selecting external actions

CRITIC: provides feedback on how the agent is doing, which is used to determine how the performance element should be modified to do better in the future

PROBLEM GENERATOR: responsible for suggesting actions that will lead to new and informative experiences

11
Q

What are the FIVE components of a PROBLEM?

A
  1. Initial State
  2. Actions
  3. Transition Model
  4. Goal Test
  5. Path Cost
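
A sketch of how the five components might look in code, using an assumed toy route-finding graph (names and graph are illustrative, not from the card):

```python
ROADS = {  # state -> {neighboring state: step cost}
    "S": {"A": 1, "B": 4},
    "A": {"G": 5},
    "B": {"G": 1},
    "G": {},
}

class RouteProblem:
    def __init__(self, initial, goal):
        self.initial_state = initial     # 1. initial state
        self.goal = goal

    def actions(self, state):            # 2. actions available in a state
        return list(ROADS[state])

    def result(self, state, action):     # 3. transition model
        return action                    # "go to city X" lands in X

    def goal_test(self, state):          # 4. goal test
        return state == self.goal

    def step_cost(self, state, action):  # 5. summed to give the path cost
        return ROADS[state][action]

problem = RouteProblem("S", "G")
print(problem.actions("S"))  # -> ['A', 'B']
```
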
12
Q

Define STATE SPACE.

A

State space is defined by initial state, actions, and transition model. It is the set of all states reachable from the initial state by any sequence of actions.

13
Q

Define FRINGE/FRONTIER.

A

The set of nodes available for expansion.

14
Q

Properties of Breadth-First Search:

  • Complete?
  • Time?
  • Space?
  • Optimal?
A

Complete: Yes (if b is finite)
Time: O(b^d)
Space: O(b^d)
Optimal: Yes, if all step costs are equal; not optimal in general
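
A minimal sketch of BFS (the successor graph is an assumed toy example). Note the goal test is applied when a node is generated, which is why BFS can stop earlier than uniform-cost search (see card 18):

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    if start == goal:
        return [start]
    frontier = deque([start])        # FIFO queue
    parent = {start: None}           # doubles as the explored set
    while frontier:
        state = frontier.popleft()
        for child in successors(state):
            if child not in parent:
                parent[child] = state
                if child == goal:    # goal test at generation time
                    path = [child]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                frontier.append(child)
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
print(breadth_first_search("S", "G", lambda s: graph[s]))  # -> ['S', 'A', 'G']
```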

15
Q

Properties of Uniform-Cost Search:

  • Complete?
  • Time?
  • Space?
  • Optimal?
A

Expand least-cost unexpanded node.

  • Complete: Yes, if every step cost is at least ε, for some small positive constant ε
  • Time: O(b^(C*/ε)), where C* is the cost of the optimal solution
  • Space: O(b^(C*/ε)), which can be worse than O(b^d)
  • Optimal: Yes
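
A minimal sketch using a binary heap as the priority queue (the weighted graph is an assumed example); the goal test happens at expansion, not generation, which is what guarantees optimality:

```python
import heapq

def uniform_cost_search(start, goal, successors):
    frontier = [(0, start, [start])]   # (path cost g, state, path)
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)  # least-cost node
        if state == goal:              # goal test when expanded
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for child, step in successors(state):
            if child not in explored:
                heapq.heappush(frontier, (g + step, child, path + [child]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
print(uniform_cost_search("S", "G", lambda s: graph[s]))  # -> (5, ['S', 'B', 'G'])
```
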
16
Q

Properties of Depth-First Search:

  • Complete?
  • Time?
  • Space?
  • Optimal?
A
  • Complete: No; fails in infinite-depth spaces and spaces with loops
  • Time: O(b^m), where m = maximum depth of the state space
  • Space: O(bm), linear
  • Optimal: No
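
A sketch of DFS with an explicit LIFO stack (assumed toy graph). The visited set here guards against loops, at the price of the linear-space advantage; the pure tree-search version stores only O(bm) nodes:

```python
def depth_first_search(start, goal, successors):
    stack = [(start, [start])]       # LIFO frontier
    visited = {start}
    while stack:
        state, path = stack.pop()    # expand the deepest node first
        if state == goal:
            return path
        for child in successors(state):
            if child not in visited:
                visited.add(child)
                stack.append((child, path + [child]))
    return None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
print(depth_first_search("S", "G", lambda s: graph[s]))
```
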
17
Q

What is the difference between SEARCH and TOTAL cost?

A

Search cost depends on time complexity and can include a term for memory usage. Total cost combines the search cost and the path cost of the solution found.

18
Q

Why does Uniform Cost Search do more work than Breadth First Search even when all step costs are equal?

A

BFS stops as soon as it generates a goal node. Uniform-cost search examines all the nodes at the goal's depth to see if one has a lower cost.

19
Q

Properties of Depth-Limited Search:

  • Complete?
  • Time?
  • Space?
  • Optimal?
A
  • Complete: No
  • Time: O(b^L), where L = depth limit
  • Space: O(bL)
  • Optimal: No
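
A recursive sketch (assumed toy graph); the "cutoff" sentinel distinguishes hitting the depth limit from genuine failure:

```python
def depth_limited_search(state, goal, successors, limit, path=None):
    path = path or [state]
    if state == goal:
        return path
    if limit == 0:
        return "cutoff"              # limit reached, result unknown
    cutoff = False
    for child in successors(state):
        result = depth_limited_search(child, goal, successors,
                                      limit - 1, path + [child])
        if result == "cutoff":
            cutoff = True
        elif result is not None:
            return result
    return "cutoff" if cutoff else None

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
print(depth_limited_search("S", "G", lambda s: graph[s], limit=2))  # -> ['S', 'A', 'G']
```
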
20
Q

Properties of Iterative Deepening Search:

  • Complete?
  • Time?
  • Space?
  • Optimal?
A
  • Complete: Yes
  • Time: O(b^d)
  • Space: O(bd)
  • Optimal: Yes
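
A sketch building on the depth_limited_search function from the previous card: try limits 0, 1, 2, ... until the search no longer cuts off, combining DFS-like O(bd) space with completeness and (for unit step costs) optimality:

```python
import itertools

def iterative_deepening_search(start, goal, successors):
    for limit in itertools.count():              # limit = 0, 1, 2, ...
        result = depth_limited_search(start, goal, successors, limit)
        if result != "cutoff":
            return result   # a path, or None if a finite space has no solution

graph = {"S": ["A", "B"], "A": ["G"], "B": ["G"], "G": []}
print(iterative_deepening_search("S", "G", lambda s: graph[s]))  # -> ['S', 'A', 'G']
```
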
21
Q

Properties of Bidirectional Search:

  • Complete?
  • Time?
  • Space?
  • Optimal?
A
  • Complete: Yes
  • Time: O(b^(d/2))
  • Space: O(b^(d/2))
  • Optimal: Yes
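
A sketch on an assumed undirected toy graph: grow one breadth-first tree from the start and one from the goal, stopping when they meet, so each side only reaches depth about d/2:

```python
from collections import deque

def bidirectional_search(start, goal, neighbors):
    if start == goal:
        return [start]
    trees = {start: None}, {goal: None}     # parent maps for both trees
    frontiers = deque([start]), deque([goal])
    while all(frontiers):
        for i in (0, 1):                    # alternate forward/backward
            state = frontiers[i].popleft()
            for child in neighbors(state):
                if child not in trees[i]:
                    trees[i][child] = state
                    if child in trees[1 - i]:        # frontiers met
                        return _join(child, *trees)
                    frontiers[i].append(child)
    return None

def _join(meet, parents, parents_back):
    front, s = [], meet
    while s is not None:                    # walk back to the start
        front.append(s)
        s = parents[s]
    back, s = [], parents_back[meet]
    while s is not None:                    # walk forward to the goal
        back.append(s)
        s = parents_back[s]
    return front[::-1] + back

graph = {"S": ["A", "B"], "A": ["S", "G"], "B": ["S", "G"], "G": ["A", "B"]}
print(bidirectional_search("S", "G", lambda s: graph[s]))  # -> ['S', 'A', 'G']
```
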
22
Q

For A* Search, what is f(n), g(n), and h(n)? How do they relate to each other?

A

f(n) = g(n) + h(n)

  • f(n) : estimated cost of cheapest solution through n
  • g(n) : cost to reach node n
  • h(n) : estimated cost to get from node n to the goal
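
A sketch of A* on an assumed weighted graph with an assumed admissible h; the frontier is ordered by f(n) = g(n) + h(n):

```python
import heapq

def a_star(start, goal, successors, h):
    frontier = [(h(start), 0, start, [start])]   # (f, g, state, path)
    best_g = {start: 0}
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:    # first goal expanded is optimal (admissible h)
            return g, path
        for child, step in successors(state):
            g2 = g + step
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier,
                               (g2 + h(child), g2, child, path + [child]))
    return None

graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 4, "A": 5, "B": 1, "G": 0}   # never overestimates the true cost
print(a_star("S", "G", lambda s: graph[s], lambda s: h[s]))  # -> (5, ['S', 'B', 'G'])
```

Setting h(n) = 0 everywhere recovers uniform-cost search (card 25), and ordering by h alone gives greedy best-first search (card 26).
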
23
Q

What is an admissible heuristic?

A

A heuristic that never overestimates the cost to reach the goal.

24
Q

What is the Manhattan distance?

A

The sum of the horizontal and vertical distances from the goal.
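
As a quick sketch:

```python
# Manhattan distance: sum of horizontal and vertical offsets on a grid.
def manhattan(p, q):
    (x1, y1), (x2, y2) = p, q
    return abs(x1 - x2) + abs(y1 - y2)

print(manhattan((0, 0), (2, 3)))  # -> 2 + 3 = 5
```

It is admissible when moves are restricted to horizontal and vertical steps, since no such path can be shorter.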

25
Q

What is the evaluation function for uniform-cost search? How does it compare to A* search?

A

Uniform-cost search: f(n) = g(n)

Uniform-cost search is A* search with h(n) = 0

26
Q

What is the evaluation function for greedy best-first search?

A

f(n) = h(n) = estimated cost from n to the closest goal

g(n) = 0: the cost from the root to n is not considered

27
Q

Properties of Greedy Search:

  • Complete?
  • Time?
  • Space?
  • Optimal?
A
  • Complete: No; can get stuck in loops
  • Time: O(b^m)
  • Space: O(b^m)
  • Optimal: No
28
Q

What is the evaluation function for A* Search?

A

f(n) = g(n) + h(n)

29
Q

Describe Local Beam Search and its stochastic version.

A
  1. Begin with k randomly-generated states.
  2. Keep k states instead of 1, choose top k of all their successors.

Stochastic: choose k successors randomly, biased toward good ones
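
A sketch of both variants on an assumed toy problem (maximize the number of 1 bits in a bit-string); the names and objective are illustrative:

```python
import random

def flips(state):
    # successors: flip each bit in turn
    return [state[:i] + (1 - state[i],) + state[i + 1:]
            for i in range(len(state))]

def local_beam_search(f, k, n_bits, steps=50, stochastic=False):
    states = [tuple(random.randint(0, 1) for _ in range(n_bits))
              for _ in range(k)]                      # k random starts
    for _ in range(steps):
        pool = list({s for st in states for s in flips(st)})
        if stochastic:
            # sample k successors, biased toward good ones
            weights = [f(s) + 1e-9 for s in pool]
            states = random.choices(pool, weights=weights, k=k)
        else:
            # keep the top k of all successors
            states = sorted(pool, key=f, reverse=True)[:k]
    return max(states, key=f)

print(local_beam_search(f=sum, k=3, n_bits=8))   # -> all-ones tuple
print(local_beam_search(f=sum, k=3, n_bits=8, stochastic=True))
```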

30
Q

What are the main components of a game?

A
  1. Initial State
  2. Actions - set of legal moves
  3. Transition Function - returns the state that results from a move
  4. Terminal Test - determines when game is over
  5. Utility Function - value of terminal state
31
Q

Properties of Minimax Search:

  • Complete?
  • Time?
  • Space?
  • Optimal?
A
  • Complete: Yes, if the game tree is finite
  • Time: O(b^m)
  • Space: O(bm)
  • Optimal: Yes, against an optimal opponent
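
A sketch of minimax over an assumed tiny two-ply game tree: dicts are non-terminal states, numbers are terminal utilities from MAX's point of view:

```python
def minimax(state, maximizing=True):
    if not isinstance(state, dict):      # terminal test
        return state                     # utility of a terminal state
    values = [minimax(s, not maximizing) for s in state.values()]
    return max(values) if maximizing else min(values)

game = {                                 # MAX to move at the root
    "a1": {"b1": 3,  "b2": 12, "b3": 8},
    "a2": {"b1": 2,  "b2": 4,  "b3": 6},
    "a3": {"b1": 14, "b2": 5,  "b3": 2},
}
print(minimax(game))  # -> 3: MIN holds each reply to its minimum, MAX picks a1
```
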
32
Q

Define quiescent.

A

A position that is unlikely to exhibit wild swings in value in the near future.

33
Q

What is the horizon effect?

A

It arises when a program faces an opponent's move that causes serious damage and is ultimately unavoidable, but that can be temporarily avoided by delaying tactics.

34
Q

What is a belief state?

A

Set of all logically possible board states given the complete history of percepts to date.