AI 2012/13 Flashcards

(87 cards)

1
Q

What is an agent program?

A

The implementation of the agent function.

2
Q

What is an agent function?

A

The mapping from percept sequences (sensor input) to actions (actuator output).

3
Q

What is an agent?

A

Architecture (hardware: sensors/actuators) + Program

4
Q

Are input and output part of the program?

A

No.

5
Q

What is the input to the agent program?

A

Only the next percept.

6
Q

Is the performance measure part of the skeleton?

A

No.

7
Q

Name three alternative architecture designs!

A
  • Touring Machines (Ferguson)
  • Subsumption Architecture (Brooks)
8
Q

As what kind of agent could a chess-computer be realized?

A

Table-Driven-Agent

9
Q

What are the two “persistent” parts of a table-driven-agent-function?

A
  • percepts (a sequence, initially empty)
  • table (a table of actions, indexed by percept sequences, initially fully specified)

10
Q

What could a simple agent-program for a table-driven-agent look like?

A

function TABLE-DRIVEN-AGENT(percept) returns an action
    persistent: percepts, table
    append percept to the end of percepts
    action <- LOOKUP(percepts, table)
    return action
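The pseudocode above can be rendered as a short Python sketch; the closure and the example table below are illustrative assumptions, not part of the original deck.

```python
# Minimal sketch of TABLE-DRIVEN-AGENT (illustrative names).
def make_table_driven_agent(table):
    percepts = []  # persistent: the percept sequence, initially empty

    def agent(percept):
        percepts.append(percept)           # append percept to the end of percepts
        return table.get(tuple(percepts))  # action <- LOOKUP(percepts, table)

    return agent

# Hypothetical table, indexed by percept sequences:
table = {("Dirty",): "Suck", ("Dirty", "Clean"): "Right"}
agent = make_table_driven_agent(table)
```

Note how impractical this design is: the table needs one entry for every possible percept sequence.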

11
Q

How does action selection work in a simple reflex agent?

A

The action selection is based on current percept only.

12
Q

Give an abstract example for a production rule of a simple reflex agent!

A

if condition then action

13
Q

Write down the shortest possible agent program for a reflex-vacuum-agent and specify a longer but more efficient possibility!

A
function REFLEX-VACUUM-AGENT([location, status]) returns action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left

more efficient production rules (because they can be checked and executed very efficiently):
if status = Dirty and location = A then return Suck
if status = Dirty and location = B then return Suck
if status = Clean and location = A then return Right
if status = Clean and location = B then return Left
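The rules above transcribe directly into runnable Python (the string values for locations and status are taken from the card):

```python
def reflex_vacuum_agent(location, status):
    # The production rules from the card, checked in order.
    if status == "Dirty":
        return "Suck"
    if location == "A":
        return "Right"
    if location == "B":
        return "Left"
```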

14
Q

What does the agent program of a simple reflex agent generally look like?

A
function SIMPLE-REFLEX-AGENT(percept) returns an action
    persistent: rules, a set of condition-action rules
    state <- INTERPRET-INPUT(percept)
    rule <- RULE-MATCH(state, rules)
    action <- rule.ACTION
    return action
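A minimal Python sketch of this general scheme; representing the rules as (condition, action) pairs and passing INTERPRET-INPUT in as a function are assumptions for illustration:

```python
def make_simple_reflex_agent(rules, interpret_input):
    # rules: iterable of (condition, action) pairs; interpret_input: percept -> state.
    def agent(percept):
        state = interpret_input(percept)  # state <- INTERPRET-INPUT(percept)
        for condition, action in rules:   # rule <- RULE-MATCH(state, rules)
            if condition(state):
                return action             # action <- rule.ACTION
    return agent

# Hypothetical vacuum-world rules; the state is a (location, status) pair.
vacuum_rules = [
    (lambda s: s[1] == "Dirty", "Suck"),
    (lambda s: s[0] == "A", "Right"),
    (lambda s: s[0] == "B", "Left"),
]
agent = make_simple_reflex_agent(vacuum_rules, lambda percept: percept)
```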
15
Q

Name one advantage of a simple-reflex-agent!

A
  • very efficient implementation by exploiting relevant properties of the environment (e.g. logical gates implementing Boolean circuits)
16
Q

What are the consequences of the perceptual aliasing problem for a simple reflex agent?

A

It is applicable only to fully observable environments.

17
Q

What is the perceptual aliasing problem?

A

States that are perceptually indistinguishable but semantically different.

18
Q

How can a simple reflex agent be improved?

A
  • implementation of “internal state” -> requires a “world model”
19
Q

What is the difference between a simple reflex agent and a model-based reflex agent?

A

The model-based reflex agent acts on the basis of a world model.

20
Q

What is a function, generally speaking?

A

A directed relation between two sets, the domain and the range.

21
Q
What kind of function is illustrated here?
Set A        Set B
# -----------> #
# -----------> #
               #
# -----------> #
A

An injective function.

22
Q
What kind of function is illustrated here?
Set A        Set B
# -----------> #
# -----------> #
# -----------> #
# -----------> #
# -----------> #
A

A bijective function.

23
Q
What kind of function is illustrated here?
Set A        Set B
# -----------> #
# -----------> #
#, # ---------> #
# -----------> #
A

A surjective function.
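For finite sets, the three properties illustrated in cards 21-23 can be checked mechanically. A small sketch, assuming the function is represented as a Python dict (an illustration, not from the deck):

```python
def is_injective(f):
    # Injective: no two domain elements share an image.
    images = list(f.values())
    return len(images) == len(set(images))

def is_surjective(f, codomain):
    # Surjective: every element of the codomain is hit.
    return set(f.values()) == set(codomain)

def is_bijective(f, codomain):
    # Bijective: injective and surjective at once.
    return is_injective(f) and is_surjective(f, codomain)
```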

24
Q

What does rationality NOT mean?

A

being perfect, omniscient, clairvoyant

25
What does PEAS stand for?
P: Performance Measure, E: Environment, A: Actuators, S: Sensors
26
What does it mean for an agent to be autonomous? (3 points)
- Its behavior is dependent on its own perception.
- It is internally motivated.
- "Real" autonomy implies flexibility of behavior.
27
8 Properties of Environments:
- episodic vs. sequential
- static vs. dynamic
- discrete vs. continuous
- fully observable vs. partially observable vs. unobservable
- deterministic vs. stochastic vs. nondeterministic
- uncertain
- single vs. multi-agent environment
- known vs. unknown environments
28
Task environment specification can also be called...
...PEAS descriptions.
29
Name one difference between a model-based reflex agent and a model-based goal-based agent!
The condition-action rules are exchanged for a desired goal. This means that in the latter the goal is represented explicitly rather than implicitly through production rules.
30
Describe: static vs. dynamic
- Can the environment change while the agent is deliberating?
- Consequences of situatedness: is continuous observation and timely acting required? (Not-acting counts as an action.)
- Semi-dynamic environments: the environment itself does not change, but the agent's performance measure does.
31
Describe: discrete vs. continuous
Discrete: a finite number of things to sense and a finite number of action choices (e.g. chess). Continuous: an infinite number of possible actions and things to perceive (e.g. darts: infinitely many angles in which to throw the dart).
32
Describe: fully observable vs. partially observable vs. unobservable
Do the agent's sensors give it access to the complete state of the environment at each point in time? If yes -> fully observable. If not, memory is required (ruling out a simple reflex agent).
33
Describe: deterministic vs. stochastic vs. nondeterministic
Is everything predictable from the agent's point of view?
Deterministic: the agent's actions uniquely determine the outcome (e.g. chess) (strategic environments).
Stochastic: the outcome cannot be predicted (e.g. the dice in backgammon) -> outcomes with probabilities.
Nondeterministic: outcomes without probabilities.
34
Describe: uncertain environment
All environments that are not deterministic or not fully observable.
35
Describe: single vs. multi-agent environment
When does an agent have to consider something else as another agent?
- (partly) competitive, or
- (partly) cooperative multi-agent environments
Side remark: multi-agent environments can also be categorized as adversarial or benign.
36
Describe: known vs. unknown environment
This refers to the designer's knowledge of the environment (e.g. its laws of physics). If it is unknown to the designer, the agent has to learn on its own.
37
What is a coercing action and what is it good for?
Coercing actions reduce the uncertainty along some dimension of the successor problem-solving state space relative to the predecessor space.
38
What does a node n contain?
n.STATE, n.PARENT, n.ACTION, n.PATH-COST
39
Name 4 dimensions along which search strategies are evaluated!
- completeness
- time complexity
- space complexity
- optimality
40
d?
depth of the shallowest solution in the search tree
41
b?
maximum branching factor of the search tree
42
m?
maximum depth of the search tree (may be infinite)
43
What's included in a problem definition/formulation?
- initial state
- (available) actions
- a transition model (results)
- a goal test
- (an additive) path cost
44
What is a solution?
A solution to a problem is a sequence of actions from the initial to a goal state.
45
What is an optimal solution?
An optimal solution has the lowest path cost of all solutions.
46
What is a successor?
A successor is any state reachable from a given state by a single (!) action.
47
What is a state space?
The state space of a problem...
...is the set of all states reachable from the initial state by any sequence of actions.
...is defined implicitly by the initial state, the actions, and the transition model.
...forms a directed graph (nodes = states, links = actions).
48
Explain: episodic vs. sequential
Is the agent's experience divided into atomic perception-action episodes? Do short-term actions have long-term consequences (e.g. in chess, or the taxi driver)?
49
When is it a strategic environment?
If the environment is deterministic except for the strategic actions of other agents.
50
On what does it depend whether all action-relevant aspects of the "world" can be gathered completely and reliably?
The performance measure!
51
h(n) has to be ...
... an under-estimate of the true cost to the goal, in order to be admissible!
52
What is a state?
A representation of a physical configuration.
53
What is a node?
A data structure constituting part of a search tree.
54
What does g(n) stand for?
n.PATH-COST
55
What is an uninformed search strategy?
An uninformed search strategy uses only the information available in the problem definition.
56
Name 5 uninformed search strategies!
- breadth-first search
- uniform-cost search
- depth-first search
- depth-limited search
- iterative deepening search
57
How can the frontier of a breadth-first search be classified?
FIFO
58
Properties of breadth-first search:
Complete? Yes, if b is finite.
Time? O(b^d)
Space? O(b^d) (keeps every node in memory)
Optimal? Yes, if step costs are identical.
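The FIFO frontier from cards 57-58 can be sketched in Python with a deque; the graph and names below are illustrative assumptions:

```python
from collections import deque

def breadth_first_search(start, goal, successors):
    # The frontier is a FIFO queue; an explored set avoids repeated states.
    frontier = deque([(start, [start])])
    explored = {start}
    while frontier:
        state, path = frontier.popleft()  # FIFO: expand the oldest node first
        if state == goal:
            return path
        for s in successors(state):
            if s not in explored:
                explored.add(s)
                frontier.append((s, path + [s]))
    return None

# Hypothetical graph for illustration.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
```

Because the oldest (shallowest) nodes are expanded first, the first path found is a shallowest one, which is why BFS is optimal when step costs are identical.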
59
By what is the frontier of a uniform-cost search sorted?
By the path-cost g(n).
60
How can the behavior of the frontier of a depth-first search be classified?
LIFO (behaving like a stack)
61
Properties of depth-first search?
Complete? No; it fails in infinite-depth spaces and in spaces with loops.
Time? O(b^m)
Space? O(bm) -> linear space
Optimal? No.
62
Name three informed search strategies!
- best-first search
- greedy search
- A*
63
What elements are important for a general model of a learning agent?
- performance element
- critic
- performance standard
- learning element
- problem generator
64
General model of learning agent: | Performance element:
Carries out action selection
65
General model of learning agent: | Critic?
Provides feedback about performance with respect to performance standard
66
General model of learning agent: | Performance standard?
The external and immutable reference to which the agent adapts its own behavior.
67
General model of learning agent: | Learning element?
Is responsible for making improvements based on feedback from critic (modification of performance element)
68
General model of learning agent: | Problem generator?
Suggests exploratory actions (not random) leading to new and informative experiences
69
Open issues in the model-based goal-based agent?
Choice among alternative plans or conflicting goals (-> utility function useful)
70
Tell me three thing about the concept of a utility function!
- maps a (sequence of) state(s) to an associated degree of happiness, described by a real number
- specifies the trade-off between conflicting goals
- for uncertain outcomes, the highest expected value: the average over all possible outcomes, weighted by probability of occurrence
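The "average over all possible outcomes, weighted by probability of occurrence" is easy to sketch; the two actions below are hypothetical:

```python
def expected_utility(outcomes):
    # outcomes: (probability, utility) pairs for one action;
    # the expected value is the probability-weighted average.
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    # Pick the action with the highest expected utility.
    return max(actions, key=lambda a: expected_utility(actions[a]))

# Hypothetical choice between a safe and a risky action.
actions = {
    "safe":  [(1.0, 5.0)],
    "risky": [(0.5, 20.0), (0.5, -4.0)],
}
```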
71
What is the difference between: - a single-state problem - conformant problem - contingency problem
Single-state problem: a single, known initial state.
Conformant problem: a set of possible initial states (the actual initial state is unknown).
Contingency problem: the solution uses conditionals, because e.g. actuators could be faulty (e.g. [Right, if (still) in A then Right] or [Right, if dirt then Suck]).
72
Three operations defined on a queue?
EMPTY?(queue), POP(queue), INSERT(element, queue)
73
Three ways to order inserted nodes:
FIFO, LIFO, priority queue
74
Properties of uniform-cost search?
Complete? Yes, if step cost is greater than 0.
Time? Number of nodes with g <= cost of the optimal solution.
Optimal? Yes; nodes are expanded in increasing order of g(n).
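A sketch of the priority-queue frontier ordered by g(n), using Python's heapq; the example graph is an illustrative assumption:

```python
import heapq

def uniform_cost_search(start, goal, successors):
    # The frontier is a priority queue ordered by path cost g(n).
    frontier = [(0, start, [start])]
    explored = set()
    while frontier:
        g, state, path = heapq.heappop(frontier)  # cheapest node first
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (g + cost, nxt, path + [nxt]))
    return None

# Hypothetical weighted graph: the direct edge A->C costs more than A->B->C.
graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
```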
75
Properties of depth-limited search:
Complete? No, if l < d.
Time? O(b^l), where l is the depth limit.
Space? O(bl)
Optimal? No, if l > d.
76
Properties of iterative deepening search:
Complete? Yes, if b is finite.
Time? O(b^d)
Space? O(bd)
Optimal? Yes, if step costs are identical.
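Iterative deepening combines the O(bd) space of depth-first search with the completeness of breadth-first search by repeating depth-limited search with growing limits. A minimal sketch under illustrative assumptions:

```python
def depth_limited_search(state, goal, successors, limit):
    # Returns a path to goal within the depth limit, or None on cutoff/failure.
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for s in successors(state):
        result = depth_limited_search(s, goal, successors, limit - 1)
        if result is not None:
            return [state] + result
    return None

def iterative_deepening_search(start, goal, successors, max_depth=50):
    # Repeat depth-limited search with limits 0, 1, 2, ...
    for limit in range(max_depth + 1):
        result = depth_limited_search(start, goal, successors, limit)
        if result is not None:
            return result
    return None

# Hypothetical graph for illustration.
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
```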
77
What does f(n) stand for?
An evaluation function.
78
How does f(n) look like in greedy search?
f(n) = h(n)
79
Tell me the evaluation function of A* search!
f(n) = g(n) + h(n)
80
Properties of greedy search:
Complete? No; it can get stuck in loops. Complete in finite spaces with repeated-state checking.
Time? O(b^m)
Space? O(b^m)
Optimal? No.
81
Properties of A* search:
Complete? Yes, unless there are infinitely many nodes with f <= f(G).
Time? Exponential in [relative error in h * length of solution].
Space? Keeps all nodes in memory.
Optimal? Yes.
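Cards 79-81 describe A* as uniform-cost search with the frontier ordered by f(n) = g(n) + h(n). A sketch under illustrative assumptions (the graph and heuristic table are hypothetical; h is admissible, i.e. it never over-estimates):

```python
import heapq

def a_star_search(start, goal, successors, h):
    # Frontier ordered by f(n) = g(n) + h(n); optimal if h is admissible.
    frontier = [(h(start), 0, start, [start])]
    explored = set()
    while frontier:
        f, g, state, path = heapq.heappop(frontier)  # lowest f(n) first
        if state == goal:
            return g, path
        if state in explored:
            continue
        explored.add(state)
        for nxt, cost in successors(state):
            if nxt not in explored:
                heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None

# Hypothetical graph and admissible heuristic.
graph = {"A": [("B", 1), ("C", 3)], "B": [("G", 5)], "C": [("G", 1)], "G": []}
h_table = {"A": 3, "B": 4, "C": 1, "G": 0}
```

Setting h(n) = 0 everywhere reduces this to uniform-cost search; dropping g(n) from the ordering gives greedy search.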
82
As what kind of representation do search algorithms treat states and actions?
As atomic representations.
83
Difference between TREE-SEARCH and GRAPH-SEARCH?
GRAPH-SEARCH avoids consideration of redundant paths by recording visited nodes in an "explored-set".
84
What does time and space complexity of search algorithms depend on generally?
b (branching factor) and d (depth of shallowest solution)
85
Can good heuristics dramatically reduce search cost?
Yes!
86
Briefly characterize greedy-search/best-first-search!
It expands the node with the lowest h(n). Incomplete and not always optimal, but often efficient.
87
Briefly characterize A* search!
It expands nodes with lowest g(n)+h(n). Complete and optimal for TREE-SEARCH if h(n) is admissible. Space complexity remains a problem.