AI 2012/13 Flashcards

1
Q

What is an agent program?

A

The implementation of the agent function.

2
Q

What is an agent function?

A

The mapping from percept sequences (sensor input) to actions (actuator output).

3
Q

What is an agent?

A

Architecture (hardware: sensors/actuators) + Program

4
Q

Are input and output part of the program?

A

No.

5
Q

What is the input to the agent program?

A

Only the next percept.

6
Q

Is the performance measure part of the skeleton?

A

No.

7
Q

Name three alternative architecture designs!

A
  • Touring Machines (Ferguson)
  • Subsumption Architecture (Brooks)
8
Q

As what kind of agent could a chess-computer be realized?

A

Table-Driven-Agent

9
Q

What are the two “persistent” parts of a table-driven-agent-function?

A
  • percepts (a sequence, initially empty)
  • table (a table of actions, indexed by percept sequences, initially fully specified)

10
Q

What could a simple agent-program for a table-driven-agent look like?

A

function TABLE-DRIVEN-AGENT(percept) returns an action
    persistent: percepts, table
    append percept to the end of percepts
    action <- LOOKUP(percepts, table)
    return action
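A minimal Python sketch of this lookup idea (an illustration, not part of the lecture material; the table contents and percept encoding are made up):

def make_table_driven_agent(table):
    # percepts: persistent percept sequence, initially empty
    percepts = []
    def agent(percept):
        percepts.append(percept)              # append percept to the end of percepts
        return table[tuple(percepts)]         # action <- LOOKUP(percepts, table)
    return agent

# Usage: the table is indexed by complete percept sequences.
table = {(("A", "Dirty"),): "Suck",
         (("A", "Clean"),): "Right"}
agent = make_table_driven_agent(table)
print(agent(("A", "Dirty")))                  # -> Suck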

11
Q

How does action selection work in a simple reflex agent?

A

The action selection is based on current percept only.

12
Q

Give an abstract example for a production rule of a simple reflex agent!

A

if condition then action

13
Q

Write down the shortest possible agent program for a reflex vacuum agent and specify a longer but more efficient possibility!

A
function REFLEX-VACUUM-AGENT([location, status]) returns action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
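The shortest version above as a small Python function (an illustrative sketch; percepts are encoded as (location, status) pairs):

def reflex_vacuum_agent(percept):
    # Condition-action rules of the reflex vacuum agent.
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))    # -> Suck
print(reflex_vacuum_agent(("B", "Clean")))    # -> Left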

more efficient production rules (because they can be checked and executed very efficiently):
if status = Dirty and location = A then return Suck
if status = Dirty and location = B then return Suck
if status = Clean and location = A then return Right
if status = Clean and location = B then return Left

14
Q

What does the agent program of a simple reflex agent generally look like?

A
function SIMPLE-REFLEX-AGENT(percept) returns an action
    persistent: rules, a set of condition-action rules
    state <- INTERPRET-INPUT(percept)
    rule <- RULE-MATCH(state, rules)
    action <- rule.ACTION
    return action
15
Q

Name one advantage of a simple reflex agent!

A
  • very efficient implementation by exploiting relevant properties of the environment (e.g. logical gates implementing Boolean circuits)
16
Q

What are the consequences of the perceptual aliasing problem for a simple reflex agent?

A

It is applicable only to fully observable environments.

17
Q

What is the perceptual aliasing problem?

A

States that are perceptually indistinguishable but semantically different.

18
Q

How to improve a simple reflex agent?

A
  • implementation of “internal state” -> requires a “world model”
19
Q

What is the difference between a simple reflex agent and a model-based reflex agent?

A

The model-based reflex agent acts on the basis of a world model.

20
Q

What is a function, generally speaking?

A

A directed relation between two sets, the domain and the range, that assigns to each element of the domain exactly one element of the range.

21
Q
What kind of function is illustrated here?
Set A        Set B
# -----------> #
               #
# -----------> #
               #
# -----------> #
A

An injective function.

22
Q
What kind of function is illustrated here?
Set A        Set B
# -----------> #
# -----------> #
# -----------> #
# -----------> #
# -----------> #
A

A bijective function.

23
Q
What kind of function is illustrated here?
Set A        Set B
# -----------> #
# -----------> #
#, # ---------> #
# -----------> #
# -----------> #
A

A surjective function.

24
Q

What does rationality NOT mean?

A

being perfect, omniscient, clairvoyant

25
Q

What does PEAS stand for?

A

P: Performance Measure
E: Environment
A: Actuators
S: Sensors

26
Q

What does it mean for an agent to be autonomous? (3 points)

A
  • Its behavior depends on its own perception.
  • It is internally motivated.
  • “Real” autonomy implies flexibility of behavior.
27
Q

8 Properties of Environments:

A
  • episodic vs. sequential
  • static vs. dynamic
  • discrete vs. continuous
  • fully observable vs. partially observable vs. unobservable
  • deterministic vs. stochastic vs. nondeterministic
  • uncertain
  • single vs. multi-agent environment
  • known vs. unknown environments
28
Q

Task environment specification can also be called…

A

…PEAS descriptions.

29
Q

Name one difference between a model-based reflex agent and a model-based goal-based agent!

A
  • The condition-action rules are exchanged for a desired goal. This means that in the latter the goal is represented explicitly rather than implicitly through production rules.
30
Q

Describe: static vs. dynamic

A
  • Can the environment change, while the agent is deliberating?
  • consequences of situatedness:
    Is continuous observation and timely acting required?
    (Not-acting as action)
  • semi-dynamic environments:
    The environment itself does not change, but the agent's performance measure does
31
Q

Describe: discrete vs. continuous

A

Discrete:
A finite number of distinct percepts and actions, as in chess: finitely many action choices and finitely many things to sense.
Continuous:
Infinitely many possible actions and percepts (e.g. darts: the dart can be thrown at infinitely many different angles).

32
Q

Describe: fully observable vs. partially observable vs. unobservable

A

Do the agent’s sensors give it access to the complete state of the environment at each point in time? -> fully observable
If not, then you need memory (no simple reflex agent)

33
Q

Describe: deterministic vs. stochastic vs. nondeterministic

A

Is everything predictable from the agent’s point of view?

Deterministic:
The agent's actions uniquely determine the outcome (e.g. chess)
(Strategic environments)
Stochastic:
The outcome cannot be predicted (e.g. the roll of the dice in backgammon)
-> outcomes with probabilities.
Non-deterministic:
Outcomes without probabilities.
34
Q

Describe: uncertain environment

A

All environments that are not deterministic or not fully observable.

35
Q

Describe: single vs. multi-agent environment

A

When does an agent have to treat something else as another agent?

  • (partly) competitive, or
  • (partly) cooperative multi-agent environments

Side remark: multi-agent environments can also be categorized as:

  • adversarial or
  • benign
36
Q

Describe: known vs. unknown environment

A

This refers to the designer’s knowledge of the environment (e.g. its laws of physics).
If it is unknown to the designer, the agent has to learn on its own.

37
Q

What is a coercing action and what is it good for?

A

Coercing actions reduce the uncertainty along some dimension: the successor problem-solving state space is smaller than the predecessor space. (Example: in the vacuum world, Suck leaves the current square clean regardless of whether it was dirty before.)

38
Q

What does a node n contain?

A

n.STATE
n.PARENT
n.ACTION
n.PATH-COST
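These four fields could be captured in a small Python data structure (an illustrative sketch; the field names mirror the card):

from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Node:
    state: Any                     # n.STATE
    parent: Optional["Node"]       # n.PARENT
    action: Optional[str]          # n.ACTION
    path_cost: float               # n.PATH-COST (= g(n), see card 54)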

39
Q

Name 4 dimensions along which search strategies are evaluated!

A
  • completeness
  • time complexity
  • space complexity
  • optimality
40
Q

d?

A

depth of the shallowest solution in the search tree

41
Q

b?

A

maximum branching factor of the search tree

42
Q

m?

A

maximum depth of the search tree (may be infinite)

43
Q

What’s included in a problem definition/formulation?

A
  • initial state
  • (available) actions
  • a transition model (results)
  • a goal test
  • (an additive) path cost
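A minimal Python sketch bundling these five components for a toy two-square vacuum world (the class name, state encoding and simplified transition model are illustrative assumptions, not lecture material):

class VacuumProblem:
    def __init__(self):
        # state = (location, status of square A, status of square B)
        self.initial_state = ("A", "Dirty", "Dirty")

    def actions(self, state):                      # available actions
        return ["Left", "Right", "Suck"]

    def result(self, state, action):               # transition model
        loc, a, b = state
        if action == "Suck":
            return (loc, "Clean", b) if loc == "A" else (loc, a, "Clean")
        if action == "Right":
            return ("B", a, b)
        return ("A", a, b)                         # action == "Left"

    def goal_test(self, state):                    # goal test: both squares clean
        return state[1] == "Clean" and state[2] == "Clean"

    def step_cost(self, state, action, result):    # additive path cost
        return 1
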
44
Q

What is a solution?

A

A solution to a problem is a sequence of actions from the initial to a goal state.

45
Q

What is an optimal solution?

A

An optimal solution has the lowest path cost of all solutions.

46
Q

What is a successor?

A

A successor is any state reachable from a given state by a single (!) action.

47
Q

What is a state space?

A

The state space of a problem …
… is the set of all states reachable from the initial state by any sequence of actions.
… is defined implicitly by initial state, actions and transition model.
… forms a directed graph (nodes = states, links = actions)

48
Q

Explain: episodic vs. sequential

A

Is the agent’s experience divided into atomic perception-action episodes?
Do short-term actions have long-term consequences (e.g. in chess, or the taxi driver)?

49
Q

When is it a strategic environment?

A

If the environment is deterministic except for the strategic actions of other agents.

50
Q

On what does it depend whether all action-relevant aspects of the “world” can be sensed completely and reliably?

A

The performance measure!

51
Q

h(n) has to be …

A

… an underestimate of the true remaining cost, i.e. it must never overestimate the cost to the nearest goal,

to be admissible!
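In symbols, with h*(n) denoting the true cost of the cheapest path from n to a goal (a standard formulation, not a quote from the slides):

0 \le h(n) \le h^{*}(n) \quad \text{for every node } n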

52
Q

What is a state?

A

A representation of a physical configuration.

53
Q

What is a node?

A

A data structure constituting part of a search tree.

54
Q

What does g(n) stand for?

A

n.PATH-COST

55
Q

What is an uninformed search strategy?

A

An uninformed search strategy uses only the information available in the problem definition.

56
Q

Name 5 uninformed search strategies!

A
  • breadth-first search
  • uniform-cost search
  • depth-first search
  • depth-limited search
  • iterative deepening search
57
Q

How can the frontier of a breadth-first search be classified?

A

FIFO

58
Q

Properties of breadth-first search:

A

Complete? Yes, if b is finite.
Time? O(b^d)
Space? O(b^d) (keeps every node in memory)
Optimal? Yes, if step costs are identical.
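A compact Python sketch of breadth-first graph search with a FIFO frontier (illustrative only; it assumes a problem object with initial_state, actions, result and goal_test as in the sketch under card 43):

from collections import deque

def breadth_first_search(problem):
    # FIFO frontier; an explored set avoids redundant paths (GRAPH-SEARCH).
    if problem.goal_test(problem.initial_state):
        return []
    frontier = deque([(problem.initial_state, [])])    # (state, actions so far)
    explored = {problem.initial_state}
    while frontier:
        state, path = frontier.popleft()               # FIFO: oldest node first
        for action in problem.actions(state):
            child = problem.result(state, action)
            if child not in explored:
                if problem.goal_test(child):           # goal test at generation
                    return path + [action]
                explored.add(child)
                frontier.append((child, path + [action]))
    return None                                        # failure

On the toy VacuumProblem above this returns, e.g., the action sequence ['Suck', 'Right', 'Suck'].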

59
Q

By what is the frontier of a uniform-cost search sorted?

A

By the path-cost g(n).

60
Q

How can the behavior of the frontier of a depth-first search be classified?

A

LIFO (behaving like a stack)

61
Q

Properties of depth-first search?

A

Complete? No. Fails in infinite-depth spaces and in spaces with loops.
Time? O(b^m)
Space? O(bm) -> linear space
Optimal? No.

62
Q

Name three informed search strategies!

A
  • best-first search
  • greedy search
  • A*
63
Q

What elements are important for a general model of a learning agent?

A
  • performance element
  • critic
  • performance standard
  • learning element
  • problem generator
64
Q

General model of learning agent:

Performance element:

A

Carries out action selection

65
Q

General model of learning agent:

Critic?

A

Provides feedback about performance with respect to performance standard

66
Q

General model of learning agent:

Performance standard?

A

The external and immutable reference to which the agent adapts its own behavior

67
Q

General model of learning agent:

Learning element?

A

Is responsible for making improvements based on feedback from critic (modification of performance element)

68
Q

General model of learning agent:

Problem generator?

A

Suggests exploratory actions (not random) leading to new and informative experiences

69
Q

Open issues in the model-based goal-based agent?

A

Choice among alternative plans or conflicting goals (-> utility function useful)

70
Q

Tell me three things about the concept of a utility function!

A
  • maps (sequence of) state(s) to the associated degree of happiness described by a real number
  • specification of trade-off between conflicting goals
  • highest expected value for uncertain outcome: average over all possible outcomes, weighted by probability of occurrence
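The third point is the usual expected-utility formula (a standard formulation, added for illustration): for an action a with uncertain outcomes s,

EU(a) = \sum_{s} P(s \mid a) \, U(s)
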
71
Q

What is the difference between:

  • a single-state problem
  • conformant problem
  • contingency problem
A

Single-state problem: single initial state
Conformant problem: set of possible initial states (actual initial state unknown)
Contingency problem: the solution must branch on percepts (a conditional plan), e.g. because actuators could be faulty:
[Right, if still in A then Right] or
[Right, if dirt then Suck]

72
Q

Three operations defined on a queue?

A

EMPTY?(queue)
POP(queue)
INSERT(element, queue)

73
Q

Three ways to order inserted nodes:

A

FIFO
LIFO
priority queue
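A small Python illustration of the three orderings (collections.deque, a plain list and heapq serve as stand-ins for the abstract queue):

from collections import deque
import heapq

fifo = deque()                  # FIFO: INSERT at the back, POP from the front
fifo.append("a"); fifo.append("b")
print(fifo.popleft())           # -> a (oldest element first)

lifo = []                       # LIFO: behaves like a stack
lifo.append("a"); lifo.append("b")
print(lifo.pop())               # -> b (newest element first)

pq = []                         # priority queue: POP returns the smallest key
heapq.heappush(pq, (3, "c")); heapq.heappush(pq, (1, "a"))
print(heapq.heappop(pq)[1])     # -> a (element with the smallest key, e.g. g(n))
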

74
Q

Properties of uniform-cost search?

A

Complete? Yes, if every step cost is >= some epsilon > 0.
Time? # of nodes with g <= cost of the optimal solution
Space? # of nodes with g <= cost of the optimal solution
Optimal? Yes, nodes are expanded in increasing order of g(n)

75
Q

Properties of depth-limited search:

A

Complete? No, if l < d.
Time? O(b^l), where l is the depth limit
Space? O(bl)
Optimal? No, if l > d.

76
Q

Properties of iterative deepening search:

A

Complete? Yes, if b is finite.
Time? O(b^d)
Space? O(bd)
Optimal? Yes, if step costs are identical.
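Using the same hypothetical problem interface as above, iterative deepening can be sketched as repeated depth-limited search (illustrative only):

def depth_limited_search(problem, state, limit, path=()):
    # Recursive depth-limited search; returns a list of actions or None.
    if problem.goal_test(state):
        return list(path)
    if limit == 0:
        return None
    for action in problem.actions(state):
        result = depth_limited_search(problem, problem.result(state, action),
                                      limit - 1, path + (action,))
        if result is not None:
            return result
    return None

def iterative_deepening_search(problem, max_depth=50):
    # Repeated depth-limited search with a growing limit (space O(bd), see above).
    for limit in range(max_depth + 1):
        result = depth_limited_search(problem, problem.initial_state, limit)
        if result is not None:
            return result
    return None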

77
Q

What does f(n) stand for?

A

An evaluation function.

78
Q

What does f(n) look like in greedy search?

A

f(n) = h(n)

79
Q

Tell me the evaluation function of A* search!

A

f(n) = g(n) + h(n)

80
Q

Properties of greedy search:

A

Complete? No, can get stuck in loops. Complete in finite state spaces with repeated-state checking.

Time? O(b^m)
Space? O(b^m)
Optimal? No.

81
Q

Properties of A* search:

A

Complete? Yes,
unless there are infinitely many nodes with f <= f(G)
Time? Exponential in [relative error in h * length of solution]
Space? Keeps all nodes in memory.
Optimal? Yes

82
Q

What kind of representation of states and actions do search algorithms use?

A

Atomic representations.

83
Q

Difference between TREE-SEARCH and GRAPH-SEARCH?

A

GRAPH-SEARCH avoids consideration of redundant paths by recording visited nodes in an “explored-set”.

84
Q

What does time and space complexity of search algorithms depend on generally?

A

b (branching factor) and d (depth of shallowest solution)

85
Q

Can good heuristics dramatically reduce search cost?

A

Yes!

86
Q

Briefly characterize greedy-search/best-first-search!

A

It expands the node with the lowest h(n).

Incomplete and not always optimal, but often efficient.

87
Q

Briefly characterize A* search!

A

It expands nodes with lowest g(n)+h(n).
Complete and optimal for TREE-SEARCH if h(n) is admissible.
Space complexity remains a problem.
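To close, a compact A* graph-search sketch in Python (an illustration under the same assumed problem interface as the earlier sketches, plus a heuristic function h; not the lecture's reference implementation):

import heapq, itertools

def a_star_search(problem, h):
    # Frontier ordered by f(n) = g(n) + h(n); a counter breaks ties between equal f values.
    start = problem.initial_state
    tie = itertools.count()
    frontier = [(h(start), next(tie), 0, start, [])]   # (f, tie-breaker, g, state, actions)
    best_g = {start: 0}
    while frontier:
        f, _, g, state, path = heapq.heappop(frontier)
        if g > best_g.get(state, float("inf")):
            continue                                   # stale entry: a cheaper path was found
        if problem.goal_test(state):                   # goal test at expansion keeps optimality
            return path
        for action in problem.actions(state):
            child = problem.result(state, action)
            g2 = g + problem.step_cost(state, action, child)
            if g2 < best_g.get(child, float("inf")):
                best_g[child] = g2
                heapq.heappush(frontier, (g2 + h(child), next(tie), g2, child, path + [action]))
    return None                                        # failure

With h(n) = 0 for every n this degenerates to uniform-cost search (the frontier is then sorted by g(n) alone).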