Week 1 (23) Flashcards

1
Q

agent

A

-Perceives environment through sensors
-acts on environment through actuators

2
Q

Percept

A

The agent's perceptual input at any given instant

3
Q

Percept sequence

A

Complete history of everything the agent has ever perceived

4
Q

Agent function

A

Describes an agent's behaviour

Maps any given percept sequence to an action

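The mapping above can be sketched in Python using the two-square vacuum world; the percept format `(location, status)` and the action names are illustrative assumptions.

```python
# Minimal sketch of an agent function, assuming a two-square vacuum world
# with percepts of the form (location, status). The function receives the
# complete percept sequence, even though this particular agent only needs
# the latest percept to choose an action.

def vacuum_agent_function(percept_sequence):
    location, status = percept_sequence[-1]  # latest percept in the history
    if status == 'Dirty':
        return 'Suck'
    return 'Right' if location == 'A' else 'Left'

print(vacuum_agent_function([('A', 'Dirty')]))                  # Suck
print(vacuum_agent_function([('A', 'Dirty'), ('A', 'Clean')]))  # Right
```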
5
Q

Agent program

A

Concrete implementation of an agent function running within some physical system

(Check whether this is exact, as the textbook says ‘it is important to keep these two ideas distinct’)

6
Q

Rational agent

A

For each possible percept sequence,

a rational agent should select an action that is expected to maximise its performance measure,

given the evidence provided by the percept sequence and whatever built-in knowledge the agent has

7
Q

Performance measure

A

Evaluates any given sequence of environment states (not agent states)

8
Q

4 factors that determine rationality

A

The performance measure defining the criterion of success

The agent's prior knowledge of the environment

The actions the agent can perform

The agent's percept sequence to date

9
Q

Omniscience

A

The agent knows the actual outcome of its actions and can act accordingly

10
Q

Information gathering

A

Taking actions in order to modify future percepts

11
Q

Exploration

A

A type of information gathering in which the search space is inspected

12
Q

Autonomy

A

Relying on its own percepts rather than the prior knowledge of its designer

13
Q

Task environment

A

PEAS

Performance

Environment

Actuators

Sensors

14
Q

Softbots

A

Software agents

Software robots

15
Q

Fully observable v partially observable

A

Fully observable: the agent's sensors give it access to the complete state of the environment at each point in time (otherwise, partially observable)

16
Q

Example of emergent behaviour in competitive environment

A

Randomised behaviour (to avoid being predictable)

17
Q

Example of emergent behaviour in cooperative environment

A

Communication

18
Q

Deterministic env

A

The next state of the environment is completely determined by the current state and the action executed by the agent

19
Q

When can an environment appear to be stochastic

A

When it is partially observable

20
Q

Uncertain environment

A

Not fully observable, or not deterministic

21
Q

Distinction between non deterministic environment and stochastic environment

A

Nondeterminism: actions are characterised by their possible outcomes

Stochastic: as above, plus probabilities attached to the possible outcomes
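The distinction can be sketched with a made-up action model; the outcome names and probabilities are illustrative assumptions.

```python
# Nondeterministic model: an action maps to a SET of possible outcomes.
nondet = {'move': {'arrived', 'slipped'}}

# Stochastic model: the same outcomes, each with an associated probability.
stochastic = {'move': {'arrived': 0.8, 'slipped': 0.2}}

# Same possible outcomes; the stochastic model adds a distribution over them.
assert set(stochastic['move']) == nondet['move']
assert abs(sum(stochastic['move'].values()) - 1.0) < 1e-9
```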

22
Q

Episodic env v sequential env

A

Episodic: actions taken in a previous episode do not affect subsequent episodes

Episode: the agent receives a percept, then performs a single action

23
Q

Discrete env v Continuous env

A

Applies to the state of the environment, to how time is handled, and to the agent's percepts and actions

Discrete: a finite set of distinct percepts and actions

24
Q

Known env v unknown env

A

Known: outcomes for all actions are given

Unknown: agent must learn how env works to make good decisions

25
Q

Example of distinction between (un)known and partially/fully observable

A

Solitaire:

Rules known

Exact cards not yet turned over

26
Q

Environment generator

A

Selects particular environments from an environment class, with given likelihoods, to evaluate the agent

27
Q

Agent =

A

Architecture + program

28
Q

Difference between agent function and agent program inputs

A

The program takes just the current percept as input

The function takes the entire percept history

29
Q

Simple reflex agent

A

Selects actions based on the current percept, ignoring the rest of the percept history

30
Q

How can a simple reflex agent escape a loop

A

If the agent can randomise actions
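The two cards above can be sketched together: condition-action rules matched against the current percept only, with random choice among applicable actions to avoid getting stuck. The rule table is an illustrative assumption.

```python
import random

# Sketch of a simple reflex agent: condition-action rules applied to the
# current percept, with no memory of earlier percepts. Randomising among
# the applicable actions lets the agent escape loops that a fixed
# deterministic rule would repeat forever.

RULES = {
    'Dirty': ['Suck'],           # only one sensible action
    'Clean': ['Left', 'Right'],  # several applicable actions: pick at random
}

def simple_reflex_agent(percept):
    _location, status = percept          # percept history is ignored entirely
    return random.choice(RULES[status])  # randomised choice breaks loops
```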

31
Q

Internal state

A

For a model-based reflex agent

Depends on the percept history and reflects the information the agent maintains internally about the environment

32
Q

2 pieces of info needed by model based agent

A

Information about how the world evolves independently of the agent

Information about how the agent's actions affect the world
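A sketch combining the internal-state idea with the two pieces of model knowledge above, again in the vacuum world; the `believed_clean` set and the stopping rule are illustrative assumptions.

```python
# Sketch of a model-based reflex agent for the two-square vacuum world. The
# internal state (believed_clean) is built from the percept history, using a
# model of how the world evolves on its own (dirt stays until sucked) and of
# how the agent's actions affect the world (Suck cleans the current square).

class ModelBasedVacuumAgent:
    def __init__(self):
        self.believed_clean = set()  # internal state, not directly perceived

    def act(self, percept):
        location, status = percept
        if status == 'Dirty':
            # Effect model: Suck will leave this square clean.
            self.believed_clean.add(location)
            return 'Suck'
        # World model: a clean square stays clean on its own.
        self.believed_clean.add(location)
        if self.believed_clean >= {'A', 'B'}:
            return 'NoOp'  # internal state says the whole world is clean
        return 'Right' if location == 'A' else 'Left'
```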

33
Q

Utility function

A

An agent's internalisation of the performance measure

34
Q

Difference between performance measure and utility function

A

The performance measure is an objective, external evaluation of environment states, not something computed by the agent itself

35
Q

Rational utility based agent chooses the action that maximises

A

EXPECTED utility
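The rule can be sketched numerically; the actions, outcomes, and utility values below are made up for illustration.

```python
# Sketch of choosing the action that maximises EXPECTED utility: weight the
# utility of each possible outcome by its probability and pick the best sum.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    'safe':  [(1.0, 40)],             # certain payoff, EU = 40
    'risky': [(0.5, 100), (0.5, 0)],  # 50/50 gamble,   EU = 50
}

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # risky: expected utility 50 beats 40
```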