Wk 1 (Ch1/2) : Introduction, Intelligent Agents Flashcards

1
Q

What is the definition of a rational agent?

A

Rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.

2
Q

Explain environments, rational actions, and performance measures.

A
In my words (sort of): rational actions are those that maximize the performance measure in an environment. Of course, the actions need to be available, you need to be able to perceive the environment, and you need to be doing it on purpose. Remembered as PEAS:
Performance measure
Environment
Actuators
Sensors
3
Q

What is the best way to design performance measures?

A

Based on what you actually want to achieve in the environment, not based on how you think the agent should behave.

4
Q

What 4 things does rationality depend on?

A
  1. performance measure
  2. percept sequence to date
  3. prior knowledge of the environment
  4. actions available
5
Q

How can time be part of an environment?

A

Time is just one more parameter to take into account. Including it means no two states are ever the same, since time is continually changing.

6
Q

How do you show an agent is rational?

A

Show that, for all possible environments (and all possible start states and external events?), this agent performs at least as well as any other agent. (This is a very strict interpretation.)

7
Q

How can an agent keep track of potential states it can’t see without internal memory?

A

By using the environment itself as external memory (think appointment calendars and knots in handkerchiefs): external triggers, writing things down, etc.

8
Q

How does action cost affect movement decisions?

A

Any expenditure of cost should be an investment toward increasing future payout and improving the performance measure.

9
Q

What is the biggest problem with reflex agents? What IS a reflex agent?

A

They keep doing the same thing in environments that look the same but are really quite different. A reflex agent essentially always acts based on the current percept alone, regardless of percept history.
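The idea can be sketched in a few lines. This is a minimal illustrative example (the vacuum-world rules below are my own assumptions, not quoted from the book): the agent is a pure function of the current percept, with no memory at all.

```python
# A simple reflex agent for a hypothetical two-square vacuum world.
# The percept is a (location, status) pair, e.g. ('A', 'Dirty').
# Note there is no stored state: identical percepts always
# produce identical actions, which is exactly the weakness above.

def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'
    if location == 'A':
        return 'Right'
    return 'Left'
```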

10
Q

Does partial information imply non rationality?

A

Absolutely not.

11
Q

What kind of task environment would prevent pure reflex agents from acting rationally?

A

A partially observable one. Think of correspondence chess and reacting to the move a4 the same way every time.

12
Q

In what environment is every agent rational?

A

One with reward invariance under permutations of actions. In other words, whatever I do has basically no effect on the reward I receive.

13
Q

what’s the difference between agent functions and programs?

A

The agent function is an abstract mapping from entire percept sequences to actions; the agent program is the concrete code that, running on the machine architecture, implements that function. In the book’s simple designs, the program is handed only the current percept on each invocation, so it must store any percept history it needs itself.
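One way to make the distinction concrete (the names and the toy table below are my own illustration, not from the book): the function is a table keyed by whole percept sequences, while the program is code called once per percept that has to remember the sequence on its own.

```python
# The agent FUNCTION: an abstract (here, tiny) mapping from
# complete percept sequences to actions.
AGENT_FUNCTION = {
    ('A',): 'Right',
    ('A', 'B'): 'Left',
}

def make_agent_program():
    """An agent PROGRAM implementing the function above: invoked
    with one percept at a time, it stores the history itself."""
    percepts = []
    def program(percept):
        percepts.append(percept)
        # Look up the action for the full sequence seen so far.
        return AGENT_FUNCTION.get(tuple(percepts), 'NoOp')
    return program
```

Two different programs (say, a lookup table vs. clever code) can implement the same agent function; the function is the specification, the program the implementation.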

14
Q

What are the 6 types of task environments, how do I remember them, and what are the options for each?

A
Observable (fully / partially)
Agents (single / multi)
Deterministic (vs. stochastic)
Episodic (vs. sequential)
Static (vs. dynamic)
Discrete (vs. continuous)

In other words, EASY =
it’s only me, I can see everything, and it’s not changing. I have black-and-white choices that directly determine the outcome. If I make a mistake, no later episodes are affected.

As opposed to HARD =
there are many agents here, I can’t see everything that’s happening, and it’s all changing anyway. My choices are grey, not black and white, and if I make a mistake, it will affect later states.

15
Q

Define agent in the book’s words.

A

Agent: an entity that perceives and acts; or, one that can be viewed as perceiving and acting. Essentially any object qualifies; the key point is the way the object implements an agent function. (Note: some authors restrict the term to programs that operate on behalf of a human, or to programs that can cause some or all of their code to run on other machines on a network, as in mobile agents.)

16
Q

Define agent function vs. agent program in the book’s words

A

• Agent function: a function that specifies the agent’s action in response to every possible percept sequence.

• Agent program: that program which, combined with a machine architecture, implements an agent function. In our simple designs, the program takes a new percept on each invocation and returns an action.

17
Q

Define rationality in the book’s words.

A

Rationality: a property of agents that choose actions that maximize their expected utility, given the percepts to date.

18
Q

Define autonomy in the book’s words.

A

Autonomy: a property of agents whose behavior is determined by their own experience rather than solely by their initial programming.

19
Q

Define reflex agents, model-based agents, goal-based agents, and utility based agents and learning based agents in the book’s words.

A

• Reflex agent: an agent whose action depends only on the current percept.

• Model-based agent: an agent whose action is derived directly from an internal model of the current world state that is updated over time.

• Goal-based agent: an agent that selects actions that it believes will achieve explicitly represented goals.

• Utility-based agent: an agent that selects actions that it believes will maximize the expected utility of the outcome state.

• Learning agent: an agent whose behavior improves over time based on its experience.
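The model-based type is the easiest to contrast with the pure reflex agent in code. A minimal sketch, assuming the same hypothetical two-square vacuum world with locations 'A' and 'B' (my own illustration, not the book's program): the agent maintains an internal model of squares it can no longer see, and can decide it is finished even though it only ever perceives one square at a time.

```python
def make_model_based_vacuum_agent():
    # Internal model: our belief about each square's status,
    # updated from percepts rather than re-perceived every step.
    model = {'A': 'Unknown', 'B': 'Unknown'}

    def agent(percept):
        location, status = percept
        model[location] = status           # fold the percept into the model
        if status == 'Dirty':
            model[location] = 'Clean'      # we are about to clean it
            return 'Suck'
        other = 'B' if location == 'A' else 'A'
        if model[other] == 'Clean':
            return 'NoOp'                  # model says everything is done
        return 'Right' if location == 'A' else 'Left'

    return agent
```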

20
Q

When does randomization help?

A

For a reflex agent that doesn’t know where it is but can somehow tell it’s at a boundary (a bump sensor?). A deterministic reflex rule produces the same action for the same percept forever, so it can get stuck in an infinite loop; choosing randomly lets the agent eventually break out.
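A toy demonstration of the idea (the corridor world, agent names, and rules below are my own assumptions for illustration): the agent perceives only 'Bump' or 'Clear'. The deterministic reflex rule pins it against the wall forever, while the randomized version escapes.

```python
import random

def run(agent, steps=50, seed=0):
    """Simulate a hypothetical corridor of positions 0..3; the agent
    starts pressed against the left wall and perceives only whether
    it is bumping into it."""
    random.seed(seed)
    pos = 0
    for _ in range(steps):
        percept = 'Bump' if pos == 0 else 'Clear'
        action = agent(percept)
        pos = max(0, min(3, pos + (1 if action == 'Right' else -1)))
    return pos

def deterministic_agent(percept):
    # Pure reflex rule: the same percept always yields 'Left',
    # so the agent bumps into the wall on every step, forever.
    return 'Left'

def randomized_agent(percept):
    # On a bump, flip a coin; the randomness breaks the loop,
    # so the agent eventually wanders away from the wall.
    if percept == 'Bump':
        return random.choice(['Left', 'Right'])
    return 'Right'
```

After 50 steps the deterministic agent is still at position 0, while the randomized one has (almost surely) moved off the wall.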