AI 2012/13 Flashcards
What is an agent program?
The implementation of the agent function.
What is an agent function?
A mapping from percept sequences to actions (i.e. from what the sensors have delivered to what the actuators should do).
What is an agent?
Architecture (hardware: sensors/actuators) + Program
Are input and output part of the program?
No.
What is the input to the agent program?
Only the next percept.
Is the performance measure part of the skeleton?
No.
Name alternative architecture designs (and their authors)!
- TouringMachines (Ferguson)
- Subsumption Architecture (Brooks)
As what kind of agent could a chess-computer be realized?
Table-Driven-Agent
What are the two “persistent” parts of a table-driven-agent-function?
- percepts (a sequence, initially empty)
- table (a table of actions, indexed by percept sequences, initially fully specified)
What could a simple agent-program for a table-driven-agent look like?
function TABLE-DRIVEN-AGENT(percept) returns an action
    persistent: percepts, table
    append percept to the end of percepts
    action <- LOOKUP(percepts, table)
    return action
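The pseudocode above can be sketched in Python; a minimal, illustrative version (the percept names and the table entries are assumptions, not from the lecture):

```python
# Table-driven agent: looks up the action for the entire percept sequence so far.
def make_table_driven_agent(table):
    percepts = []  # persistent percept sequence, initially empty

    def agent(percept):
        percepts.append(percept)
        # The table is indexed by the full percept sequence, not just the last percept.
        return table.get(tuple(percepts))

    return agent

# Illustrative table for a two-square vacuum world (assumed entries).
table = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("A", "Clean"), ("B", "Dirty")): "Suck",
}

agent = make_table_driven_agent(table)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Dirty")))  # Suck
```

Note how the table must be fully specified in advance, which is why this design is only feasible for small percept spaces such as toy worlds.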
How does action selection work in a simple reflex agent?
The action selection is based on current percept only.
Give an abstract example for a production rule of a simple reflex agent!
if condition then action
Write down the shortest possible agent program for a reflex-vacuum-agent and specify a longer but more efficient possibility!
function REFLEX-VACUUM-AGENT([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left
more efficient production rules (because they can be checked and executed very efficiently):
if status = Dirty and location = A then return Suck
if status = Dirty and location = B then return Suck
if status = Clean and location = A then return Right
if status = Clean and location = B then return Left
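The short version above translates almost directly into Python; a minimal sketch (the percept encoding as a `(location, status)` tuple is an assumption):

```python
# Reflex vacuum agent: the action depends only on the current percept.
def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    elif location == "B":
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
```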
What does the agent program of a simple reflex agent generally look like?
function SIMPLE-REFLEX-AGENT(percept) returns an action
    persistent: rules, a set of condition-action rules
    state <- INTERPRET-INPUT(percept)
    rule <- RULE-MATCH(state, rules)
    action <- rule.ACTION
    return action
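A sketch of this general scheme in Python (the rule representation as condition/action pairs and the INTERPRET-INPUT stand-in are assumptions):

```python
# Simple reflex agent: condition-action rules matched against the current percept only.
def make_simple_reflex_agent(rules, interpret_input):
    def agent(percept):
        state = interpret_input(percept)   # state <- INTERPRET-INPUT(percept)
        for condition, action in rules:    # rule <- RULE-MATCH(state, rules)
            if condition(state):
                return action              # action <- rule.ACTION
    return agent

# Vacuum-world production rules as an illustration.
rules = [
    (lambda s: s["status"] == "Dirty", "Suck"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]
interpret = lambda p: {"location": p[0], "status": p[1]}

agent = make_simple_reflex_agent(rules, interpret)
print(agent(("B", "Dirty")))  # Suck
```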
Name one advantage of a simple-reflex-agent!
- very efficient implementation by exploiting relevant properties of the environment (e.g. logical gates implementing Boolean circuits)
What are the consequences of the perceptual aliasing problem for a simple reflex agent?
It is applicable only to fully observable environments.
What is the perceptual aliasing problem?
States that are perceptually indistinguishable but semantically different.
How to improve a simple reflex agent?
- implementation of “internal state” -> requires a “world model”
What is the difference between a simple reflex agent and a model-based reflex agent?
The model-based reflex agent acts on the basis of a world model.
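The difference can be sketched in Python: the agent below carries internal state that a world model updates before the rules fire. The state representation and the rules are illustrative assumptions:

```python
# Model-based reflex agent: internal state, updated via a world model, feeds the rules.
def make_model_based_reflex_agent(rules, update_state):
    state = {}

    def agent(percept):
        nonlocal state
        state = update_state(state, percept)  # world model updates internal state
        for condition, action in rules:
            if condition(state):
                return action
    return agent

# Illustrative model: remember the status of squares already visited.
def update_state(state, percept):
    location, status = percept
    new = dict(state)
    new[location] = status
    new["location"] = location
    return new

mb_rules = [
    (lambda s: s.get(s["location"]) == "Dirty", "Suck"),
    (lambda s: s.get("A") == "Clean" and s.get("B") == "Clean", "NoOp"),
    (lambda s: s["location"] == "A", "Right"),
    (lambda s: s["location"] == "B", "Left"),
]

agent = make_model_based_reflex_agent(mb_rules, update_state)
print(agent(("A", "Clean")))  # Right
print(agent(("B", "Clean")))  # NoOp  (it remembers that A was already clean)
```

Unlike the simple reflex agent, this one can stop acting once its model says both squares are clean, something no purely percept-driven rule could express.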
What is a function, generally speaking?
A directed relation between two sets, the domain and the range.
What kind of function is illustrated here?
Set A          Set B
#  ----------->  #
                 #
#  ----------->  #
                 #
                 #
#  ----------->  #
An injective function.
What kind of function is illustrated here?
Set A          Set B
#  ----------->  #
#  ----------->  #
#  ----------->  #
#  ----------->  #
#  ----------->  #
A bijective function.
What kind of function is illustrated here?
Set A          Set B
#  ----------->  #
#  ----------->  #
#, # ---------->  #
#  ----------->  #
#  ----------->  #
A surjective function.
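For finite sets these three properties can be checked mechanically; a small Python sketch (the example mappings are assumptions):

```python
# Check injectivity/surjectivity of a finite function given as a dict A -> B.
def is_injective(f):
    # No two domain elements map to the same value.
    return len(set(f.values())) == len(f)

def is_surjective(f, codomain):
    # Every element of the codomain has at least one preimage.
    return set(f.values()) == set(codomain)

def is_bijective(f, codomain):
    return is_injective(f) and is_surjective(f, codomain)

f = {1: "a", 2: "b", 3: "c"}
print(is_injective(f))                         # True
print(is_surjective(f, {"a", "b", "c", "d"}))  # False: "d" has no preimage
print(is_bijective(f, {"a", "b", "c"}))        # True
```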
What does rationality NOT mean?
being perfect, omniscient, clairvoyant
What does PEAS stand for?
P: Performance Measure
E: Environment
A: Actuators
S: Sensors
What does it mean for an agent to be autonomous? (3 points)
- Its behavior is dependent on its own perception.
- It is internally motivated.
- “Real” autonomy implies flexibility of behavior.
8 Properties of Environments:
- episodic vs. sequential
- static vs. dynamic
- discrete vs. continuous
- fully observable vs. partially observable vs. unobservable
- deterministic vs. stochastic vs. nondeterministic
- uncertain
- single vs. multi-agent environment
- known vs. unknown environments
Task environment specification can also be called…
…PEAS descriptions.
Name one difference between a model-based reflex agent and a model-based goal-based agent!
- The condition-action rules are exchanged for a desired goal, i.e. in the latter the goal is represented explicitly rather than implicitly through production rules.
Describe: static vs. dynamic
- Can the environment change while the agent is deliberating?
- Consequence of situatedness: continuous observation and timely acting may be required (not acting also counts as an action).
- Semi-dynamic environments: the environment itself does not change, but the agent's performance measure does (e.g. chess with a clock).
Describe: discrete vs. continuous
Discrete:
A finite number of things to sense and a finite number of action choices (e.g. chess)
Continuous:
An infinite number of possible actions and things to perceive (e.g. darts: infinitely many angles at which to throw the dart)
Describe: fully observable vs. partially observable vs. unobservable
Do the agent’s sensors give it access to the complete state of the environment at each point in time? -> fully observable
If not, then you need memory (no simple reflex agent)
Describe: deterministic vs. stochastic vs. nondeterministic
Is everything predictable from the agent’s point of view?
Deterministic: the agent's actions uniquely determine the outcome (e.g. chess); also called strategic environments.
Stochastic: the outcome cannot be predicted exactly (e.g. the dice in backgammon), but outcomes come with probabilities.
Nondeterministic: outcomes without associated probabilities.
Describe: uncertain environment
All environments that are not deterministic or not fully observable.