Module 2: Intelligent Agents Flashcards

(118 cards)

1
Q

An ___ is anything that perceives its environment through sensors and acts upon that environment through actuators.

A

agent

Example:
* A human is an agent
* A robot with cameras and motors is also an agent
* A thermostat detecting room temperature is an agent

2
Q

A ___ has eyes, ears, and other organs which work as sensors, and hands, legs, and a vocal tract which work as actuators.

A

Human agent

3
Q

A ___ can have cameras, infrared range finders, and NLP for sensors, and various motors for actuators.

A

Robotic agent

4
Q

A ___ can take keystrokes and file contents as sensory input, act on those inputs, and display output on the screen.

A

Software agent

5
Q

An agent is anything that perceives its environment through ___ and acts upon that environment through ___.

A

sensors, actuators

6
Q

A ___ is a device which detects changes in the environment and sends the information to other electronic devices.

A

Sensor

An agent observes its environment through sensors.

7
Q

___ are the components of machines that convert energy into motion.

A

Actuators

Actuators are responsible only for moving and controlling a system. An actuator can be an electric motor, gears, rails, etc.

8
Q

___ are the devices which affect the environment. These can be legs, wheels, arms, fingers, wings, fins, and display screens.

A

Effectors

10
Q

An ___ is a program that can make decisions or perform a service based on its environment, user input and experiences.

A

intelligent agent

These programs can be used to autonomously gather information on a regular, programmed schedule or when prompted by the user in real time.

11
Q

An intelligent agent is a program that can make ___ or perform a service based on its environment, user input and experiences.

A

decisions

12
Q

An intelligent agent may also be referred to as a ___, which is short for robot.

A

bot

13
Q

An ___ is an autonomous entity which acts upon an environment using sensors and actuators to achieve goals.

A

intelligent agent

14
Q

An intelligent agent may learn from the ___ to achieve its goals.

A

environment

15
Q

The main four rules for an AI agent

A
  • Rule 1: An AI agent must have the ability to perceive the environment.
  • Rule 2: The observation must be used to make decisions.
  • Rule 3: The decision should result in an action.
  • Rule 4: The action taken by an AI agent must be a rational action.
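A minimal sketch of these four rules as a perceive-decide-act loop, written in Python. The `Environment` class, the temperature percept, and the `heat_on`/`heat_off` actions are invented placeholders for illustration, not part of the module.

```python
# Hypothetical environment: the only percept is a temperature reading.
class Environment:
    def __init__(self):
        self.temperature = 15.0

    def percept(self):
        # Rule 1: the agent perceives the environment through a sensor.
        return self.temperature

    def apply(self, action):
        # The chosen action changes the environment through an actuator.
        if action == "heat_on":
            self.temperature += 1.0


def decide(percept):
    # Rule 2: the observation is used to make a decision.
    # Rule 3: the decision results in an action.
    # Rule 4: ideally this is the rational (best possible) action.
    return "heat_on" if percept < 20.0 else "heat_off"


env = Environment()
for _ in range(3):
    p = env.percept()
    a = decide(p)
    env.apply(a)
    print(p, "->", a)
```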
16
Q

Agent’s perceptual inputs at any given instant

A

Percept

17
Q

Complete history of everything that the agent has ever perceived.

A

Percept sequence

18
Q

Agent’s behavior is mathematically described by ___.

A

Agent function

19
Q

Agent’s behavior is ___ described by agent function.

A

mathematically

20
Q

A function mapping any given percept sequence to an action

A

Agent function

21
Q

Agent’s behavior is ___ described by agent program.

A

practically

22
Q

Agent’s behavior is practically described by ___.

A

Agent program

The real implementation

23
Q

A ___ is an agent which has clear preferences, models uncertainty, and acts in a way to maximize its performance measure with all possible actions.

A

rational agent

24
Q

A ___ is said to do the right thing.

A

rational agent

AI is about creating rational agents, which are used in game theory and decision theory for various real-world scenarios.

25
For an AI agent, rational action is most important because in the AI ___ algorithm, the agent gets a positive reward for each best possible action and a negative reward for each wrong action.
reinforcement learning
26
Rational agents in AI are very similar to ___.
intelligent agents
27
One that does the right thing
Rational agent
28
Every entry in the table for the agent function is correct
Rationality ## Footnote Rational agent
29
What is correct? The actions that cause the agent to be most successful. So we need ways to measure ___.
success
30
An objective function that determines how successfully the agent performs
Performance measure
31
The rationality of an agent is measured by its ___.
performance measure
32
Rationality can be judged on the basis of these points
* Performance measure, which defines the success criterion
* The agent's prior knowledge of its environment
* The best possible actions that an agent can perform
* The sequence of percepts
33
Rationality differs from ___ because an omniscient agent knows the actual outcome of its actions and acts accordingly, which is not possible in reality.
Omniscience
34
A rational agent should select an action expected to maximize its ___, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
performance measure
35
An ___ knows the actual outcome of its actions in advance, with no other possible outcomes. However, this is impossible in the real world.
omniscient agent
36
The task of AI is to design an agent program which implements the ___.
agent function
37
The structure of an intelligent agent is a combination of ___ and ___.
architecture, agent program
38
___ is the machinery that an AI agent executes on.
Architecture
39
___ is used to map a percept sequence to an action.
Agent function
40
___ is an implementation of the agent function.
Agent program
41
An ___ executes on the physical architecture to produce the function f: P* → A.
agent program
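One hedged way to see this distinction in code: the agent function is the abstract mapping f: P* → A from a whole percept sequence to an action, while the agent program is the concrete implementation that runs on the architecture and receives one percept per step. The type aliases below are illustrative assumptions only.

```python
from typing import Callable, Tuple

Percept = str   # placeholder percept type (assumption)
Action = str    # placeholder action type (assumption)

# Agent function f: P* -> A, the mathematical description of behavior:
# it maps the entire percept sequence to an action.
AgentFunction = Callable[[Tuple[Percept, ...]], Action]

# Agent program, the practical description: the code that runs on the
# architecture and sees only the current percept at each step.
AgentProgram = Callable[[Percept], Action]
```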
42
___ is when, after experiencing an episode, the agent adjusts its behavior to perform better at the same job next time.
Learning
43
Does a rational agent depend only on the current percept? No, the past ___ should also be used.
percept sequence ## Footnote This is called learning.
44
If an agent just relies on the prior knowledge of its designer rather than its own percepts, then the agent lacks ___.
autonomy
45
A rational agent should be ___ - it should learn what it can to compensate for partial or incorrect prior knowledge.
autonomous ## Footnote E.g., a clock:
* No input (percepts)
* Runs only on its own algorithm (prior knowledge)
* No learning, no experience, etc.
46
Sometimes the environment is not the real world but an artificial yet very complex environment. Agents working in such environments are called ___.
Software agent (softbots) ## Footnote Because all parts of the agent are software.
47
Task environments are the ___ while the rational agents are the ___.
problems, solutions
48
___ environments are the problems while the ___ agents are the solutions.
Task [environments], rational [agents]
49
In designing an agent, the first step must always be to specify the ___ as fully as possible.
task environment
50
___ is a type of model on which an AI agent works.
PEAS
51
When we define an AI agent or rational agent, we can group its properties under the PEAS representation model. It is made up of four words: P: ___, E: ___, A: ___, S: ___
Performance measure, Environment, Actuators, Sensors ## Footnote For a self-driving car, the PEAS representation would be:
* Performance: safety, time, legal drive, comfort
* Environment: roads, other vehicles, road signs, pedestrians
* Actuators: steering, accelerator, brake, signal, horn
* Sensors: camera, GPS, speedometer, odometer, accelerometer, sonar
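The footnote's self-driving-car example can be restated as a small data structure; nothing below adds new facts, and the `PEAS` dataclass is just one possible illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PEAS:
    performance: List[str]
    environment: List[str]
    actuators: List[str]
    sensors: List[str]

self_driving_car = PEAS(
    performance=["safety", "time", "legal drive", "comfort"],
    environment=["roads", "other vehicles", "road signs", "pedestrians"],
    actuators=["steering", "accelerator", "brake", "signal", "horn"],
    sensors=["camera", "GPS", "speedometer", "odometer",
             "accelerometer", "sonar"],
)
print(self_driving_car.sensors)
```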
52
An ___ is everything in the world which surrounds the agent but is not a part of the agent itself.
environment
53
An environment can be described as a situation in which an agent is ___.
present
54
The ___ is where the agent lives and operates; it provides the agent with something to sense and act upon.
environment
55
An environment is mostly said to be ___.
non-deterministic
56
As per ___ and ___, an environment can have various features from the point of view of an agent:
* Fully observable vs partially observable
* Static vs dynamic
* Discrete vs continuous
* Deterministic vs stochastic
* Single-agent vs multi-agent
* Episodic vs sequential
* Known vs unknown
* Accessible vs inaccessible
Russell, Norvig
57
If an agent's sensors can sense or access the complete state of the environment at each point in time, then it is a ___ environment; otherwise it is ___.
fully observable, partially observable
58
A ___ environment is easy, as there is no need to maintain an internal state to keep track of the history of the world.
fully observable
59
If an agent has no sensors in an environment, then such an environment is called ___.
unobservable
60
If an agent's current state and selected action can completely determine the next state of the environment, then such an environment is called a ___ environment.
deterministic
61
A ___ environment is random in nature and cannot be determined completely by an agent.
stochastic
62
In a ___, ___ environment, the agent does not need to worry about uncertainty.
deterministic, fully observable
63
If the next state of the environment is completely determined by the current state and the actions executed by the agent, then the environment is ___; otherwise, it is ___.
deterministic, stochastic
64
An environment that is deterministic except for the actions of other agents
Strategic environment
65
In an ___ environment, there is a series of one-shot actions, and only the current percept is required for the action.
episodic
66
In a ___ environment, an agent requires memory of past actions to determine the next best action.
Sequential
67
Agent’s single pair of perception and action
Episode
68
The quality of the agent's action does not depend on other episodes, making every episode independent of the others.
Episodic ## Footnote Episodic environment is simpler as the agent does not need to think ahead.
69
An environment where the current action may affect all future decisions
Sequential
70
A ___ environment is always changing over time
dynamic
71
An environment that does not change over time, but in which the agent's performance score does
Semidynamic
72
If the environment can change while an agent is deliberating, then such an environment is called a ___ environment; otherwise it is called a ___ environment.
dynamic, static
73
___ environments are easy to deal with because an agent does not need to keep looking at the world while deciding on an action.
Static
74
In a ___ environment, agents need to keep looking at the world at each action.
dynamic ## Footnote Taxi driving is an example of a dynamic environment whereas crossword puzzles are an example of a static environment.
75
If there are a finite number of percepts and actions that can be performed within an environment, then such an environment is called a ___ environment; otherwise it is called a ___ environment.
discrete, continuous
76
If there are a limited number of distinct states, clearly defined percepts and actions, the environment is ___.
discrete
77
A chess game comes under a ___ environment, as there is a finite number of moves that can be performed.
discrete
78
If only one agent is involved in an environment and it operates by itself, then such an environment is called a ___ environment.
single agent
79
If multiple agents are operating in an environment, then such an environment is called a ___ environment.
multi-agent
80
The agent ___ problems in a multi-agent environment are different from those in a single-agent environment.
design
81
___ and ___ are not actually features of an environment, but rather an agent's state of knowledge used to perform an action.
Known, unknown
82
Known and unknown are not actually features of an environment, but rather an agent's or designer's ___ used to perform an action.
state of knowledge
83
In a ___ environment, the outcomes for all actions are given.
known
84
If the environment is ___, the agent will have to learn how it works in order to perform an action and make good decisions.
unknown
85
It is quite possible for a known environment to be ___ and an unknown environment to be ___.
partially observable, fully observable
86
Some sort of computing device (sensors + actuators)
Architecture
87
Agent = ___ + ___
architecture, program
88
Some function that implements the agent mapping = “?”
(Agent) Program
89
Job of AI
Agent Program
90
If an agent can obtain complete and accurate information about the environment's state, then such an environment is called an ___ environment; otherwise it is called ___.
Accessible, inaccessible
91
An empty room whose state can be defined by its temperature is an example of an ___ environment.
accessible
92
Information about an event on earth is an example of an ___ environment.
Inaccessible
93
Input for Agent Program
Only the **current percept**
94
Input for Agent Function
The entire **percept sequence** ## Footnote The agent must remember all of them.
95
Implement the agent program as a ___
lookup table (agent function)
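A sketch of the lookup-table idea, assuming a toy vacuum-style set of percepts and actions that are not part of the cards: the program remembers every percept and indexes the table with the full percept sequence, so the table is a direct encoding of the agent function.

```python
# The table encodes the agent function: percept sequence -> action.
table = {
    ("clean",): "move",
    ("dirty",): "suck",
    ("clean", "dirty"): "suck",
}

percept_sequence = []  # the agent must remember all percepts seen so far

def table_driven_agent(percept):
    percept_sequence.append(percept)
    # Fall back to a no-op when the sequence is missing from the table.
    return table.get(tuple(percept_sequence), "no_op")

print(table_driven_agent("clean"))  # move
print(table_driven_agent("dirty"))  # suck
```

The obvious drawback, which motivates the agent-program designs on the following cards, is that such a table must grow with every possible percept sequence.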
96
Types of agent programs
* Simple reflex agents
* Model-based reflex agents
* Goal-based agents
* Utility-based agents
* Learning agents
97
It uses just condition-action rules
Simple reflex agents
98
Simple reflex agents use ___ rules
condition-action
99
Simple reflex agents work only if the environment is ___.
fully observable
100
Simple reflex agents are efficient but have a narrow range of ___ because knowledge sometimes cannot be stated explicitly.
applicability
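A minimal sketch of a simple reflex agent as a list of condition-action rules matched against the current percept only; the vacuum-world percepts (`location`, `status`) are an assumed example, not taken from these cards.

```python
# Condition-action rules: each condition looks only at the current percept.
rules = [
    (lambda p: p["status"] == "dirty", "suck"),
    (lambda p: p["location"] == "A",   "move_right"),
    (lambda p: p["location"] == "B",   "move_left"),
]

def simple_reflex_agent(percept):
    for condition, action in rules:
        if condition(percept):
            return action
    return "no_op"

print(simple_reflex_agent({"location": "A", "status": "dirty"}))  # suck
print(simple_reflex_agent({"location": "A", "status": "clean"}))  # move_right
```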
101
Model-based Reflex Agents are for a world that is ___.
partially observable
102
An agent that has to keep track of an internal state that depends on the percept history, reflecting some of the unobserved aspects.
Model-based Reflex Agents
103
Model-based Reflex Agents require two types of knowledge
* How the world evolves independently of the agent
* How the agent's actions affect the world
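A sketch of a model-based reflex agent that keeps an internal state and updates it using the two types of knowledge above; the vacuum-style world and the specific update rules are invented for illustration.

```python
class ModelBasedReflexAgent:
    def __init__(self):
        self.state = {}          # internal model of the partially observable world
        self.last_action = None

    def update_state(self, percept):
        # How the agent's own actions affect the world: a "suck" cleans
        # the square the agent was on.
        if self.last_action == "suck":
            self.state[self.state.get("location", "A")] = "clean"
        # How the world evolves (here: unseen squares keep their last
        # observed status), plus the new percept.
        self.state["location"] = percept["location"]
        self.state[percept["location"]] = percept["status"]

    def act(self, percept):
        self.update_state(percept)
        action = "suck" if self.state[percept["location"]] == "dirty" else "move"
        self.last_action = action
        return action

agent = ModelBasedReflexAgent()
print(agent.act({"location": "A", "status": "dirty"}))  # suck
print(agent.act({"location": "B", "status": "clean"}))  # move
```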
104
An agent for which the current state of the environment alone is not enough; it also has a goal to achieve.
Goal-based agents ## Footnote Judgment of rationality / correctness
105
Goal-based agents choose actions to achieve their goals based on the ___ and ___.
current state, current percept
106
Goal-based agents are less ___ but more ___.
efficient, flexible ## Footnote Agent <--- Different goals <--- different tasks
107
Two other sub-fields in AI
Search and planning
108
[Goal-based agents] To find the action sequences that achieve its goal
Search and planning
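A rough illustration of search inside a goal-based agent: given a toy transition model (an assumption, not from the cards), breadth-first search returns an action sequence that reaches the goal state.

```python
from collections import deque

# Toy transition model: state -> {action: next_state}  (illustrative only)
transitions = {
    "start":     {"left": "hall", "right": "kitchen"},
    "hall":      {"forward": "goal_room"},
    "kitchen":   {},
    "goal_room": {},
}

def plan_to_goal(state, goal):
    # Breadth-first search for an action sequence reaching the goal.
    frontier = deque([(state, [])])
    visited = {state}
    while frontier:
        current, plan = frontier.popleft()
        if current == goal:
            return plan
        for action, nxt in transitions[current].items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, plan + [action]))
    return None  # no plan found

print(plan_to_goal("start", "goal_room"))  # ['left', 'forward']
```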
109
An agent for which goals alone are not enough to generate high-quality behavior.
Utility-based agents
110
If goal means success, then ___ means the degree of success.
utility
111
[Utility-based agents] State A is said to have higher ___ if state A is preferred over the others.
utility
112
Utility is therefore a function that maps a state onto a real number; the degree of ___.
success
113
Utility has several advantages: when there are conflicting goals, only some of them (not all) can be achieved, and utility describes the appropriate ___.
trade-off
114
When there are several goals, none of which can be achieved with certainty, utility provides a way for ___.
decision-making
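A sketch of expected-utility decision-making: each action leads to several possible outcome states with some probability, utility maps each state to a real number (the degree of success), and the agent picks the action with the highest expected utility. The actions, states, probabilities, and utilities are made up for illustration.

```python
# Possible outcomes of each action: list of (probability, resulting state).
outcomes = {
    "fast_route": [(0.7, "on_time"), (0.3, "accident")],
    "safe_route": [(0.9, "slightly_late"), (0.1, "on_time")],
}

# Utility maps a state onto a real number: the degree of success.
utility = {"on_time": 1.0, "slightly_late": 0.6, "accident": -10.0}

def expected_utility(action):
    return sum(p * utility[state] for p, state in outcomes[action])

def utility_based_choice():
    # The trade-off between conflicting, uncertain goals is resolved by
    # maximizing expected utility.
    return max(outcomes, key=expected_utility)

print(utility_based_choice())  # safe_route
```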
115
After an agent is programmed, can it work immediately? No, it still needs teaching. In AI, once an agent is built, we teach it by giving it a set of examples and test it using another set of examples. We then say the agent ___.
learns (Learning Agent)
116
A learning agent has four conceptual components:
* Learning element (making improvements)
* Performance element (selecting external actions)
* Critic
* Problem generator
117
[Learning agents] Tells the learning element how well the agent is doing with respect to a fixed performance standard.
Critic ## Footnote Feedback from user or examples, good or not?
118
[Learning agents] Suggests actions that will lead to new and informative experiences.
Problem generator
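A skeleton showing how the four conceptual components of a learning agent fit together (performance element, critic, learning element, problem generator). The single adjustable threshold and the numeric performance standard are an invented toy behavior, not part of the module.

```python
import random

class LearningAgent:
    def __init__(self):
        self.threshold = 0.5  # knowledge the performance element relies on

    def performance_element(self, percept):
        # Selects external actions.
        return "act" if percept > self.threshold else "wait"

    def critic(self, percept, action):
        # Tells the learning element how well the agent is doing with
        # respect to a fixed performance standard (here: 0.7).
        correct = "act" if percept > 0.7 else "wait"
        return 1.0 if action == correct else -1.0

    def learning_element(self, percept, feedback):
        # Makes improvements: nudge the threshold after mistakes.
        if feedback < 0:
            self.threshold += 0.05 if percept <= 0.7 else -0.05

    def problem_generator(self):
        # Suggests actions/experiences that are new and informative.
        return random.random()

agent = LearningAgent()
for _ in range(50):
    p = agent.problem_generator()
    a = agent.performance_element(p)
    agent.learning_element(p, agent.critic(p, a))
print(round(agent.threshold, 2))  # drifts toward the 0.7 standard
```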