Midterm review Flashcards

(34 cards)

1
Q

What’s a rational agent?

A

An agent that, for each percept sequence, selects the action expected to be most successful (i.e., maximize its performance measure), given what it has perceived so far and the actions available to it

2
Q

What does PEAS stand for, and what are the parts?

A

Performance measure (the goal, e.g. safety), Environment (the location and what’s in it), Actuators (how the agent performs actions), Sensors (how the agent perceives its input)

3
Q

What do we mean by PEAS analysis?

A

Specifying the task environment for an agent by listing its Performance measure, Environment, Actuators, and Sensors

4
Q

What is an example of a software agent?

A

An AI program written to accomplish a task, e.g. a program that brakes when it sees a car too close: cameras as its sensors, the brake mechanism as its effector

5
Q

What is abstraction? Give an example

A

Removing details to focus on the bigger idea, e.g. reducing the road map around Sibiu to a graph of cities and the roads between them

6
Q

Simple reflex agent

A

Looks only at the latest percept (or the last one or two) and acts as a reflex, e.g. the square is dirty, so clean it
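As a sketch, the dirty-so-clean reflex can be written directly as condition-action rules (the two-square vacuum world and its names here are illustrative, not from the cards):

```python
# Minimal simple reflex agent for a two-square vacuum world (illustrative sketch).
# The agent looks only at the current percept (location, status): no history, no model.

def reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":    # reflex rule: dirty square -> clean it
        return "Suck"
    elif location == "A":    # clean square -> move to the other square
        return "Right"
    else:
        return "Left"

print(reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(reflex_vacuum_agent(("A", "Clean")))  # -> Right
```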

7
Q

Model-based agents

A

Keeps a data structure that models the environment (what else is going on, what its actions will do) and makes decisions based on that model

8
Q

Goal-based agents

A

A model-based agent with an explicit goal, e.g. a destination; more sophisticated, since it chooses actions that move it toward the goal

9
Q

Utility-based agents

A

Has a model, a goal, and a utility function; considers future states, i.e. how well an action will help it meet its goal and how desirable the resulting state is

10
Q

Percept sequence

A

Everything the agent has perceived so far

11
Q

Uninformed searches

A

Uninformed (blind) search has no information about the number of steps or the path cost from the current state to the goal (usable when no additional information is available, but less effective)

12
Q

Informed searches

A

Uses problem-specific knowledge, e.g. heuristics or evaluation functions that incorporate extra information; generally more effective

13
Q

Uninformed searches examples

A

Breadth-first, depth-first, iterative deepening depth-first, uniform-cost

14
Q

Informed searches examples

A

Greedy best-first, A*

15
Q

Graph vs tree based models

A

In a tree, every node has at most one parent (but may have many children); in a graph, a node may have many parents

16
Q

g(n) function

A

The cost of the path from the start node to node n

17
Q

h(n) function

A

The estimated (heuristic) cost of getting from node n to the goal

18
Q

fully observable vs partially observable env

A

fully observable: sensors give access to the complete state of the env at each point in time (chess with a clock); partially observable: they don’t (poker)

19
Q

deterministic vs stochastic env

A

deterministic: the next state of the env is completely determined by the current state and the action executed by the agent (chess with a clock); stochastic: it isn’t (poker)

20
Q

strategic env

A

env is deterministic except for the actions of other agents (chess with a clock)

21
Q

episodic vs sequential

A

episodic: the agent’s experience is divided into episodes (perceive, then one action), and the choice of action in each episode depends only on that episode (part-picking robot); sequential: current decisions can affect future ones (chess with a clock)

22
Q

static vs dynamic env

A

static: the env is unchanged while the agent is deliberating (poker); dynamic: it can change (taxi driving)

23
Q

semidynamic env

A

the env itself doesn’t change with the passing of time, but the agent’s performance score does (image analysis)

24
Q

discrete vs continuous env

A

discrete: a limited number of distinct, clearly defined percepts and actions (chess with a clock); continuous: percepts and actions range over continuous values (taxi driving)

25
Q

single agent vs multiagent env

A

single agent: an agent operating by itself in the env; multiagent: the env contains other agents

26
Q

real world env types?

A

partially observable, stochastic, sequential, dynamic, continuous, multi-agent

27
Q

greedy best-first search

A

expands the node with the lowest h(n), the estimated cost from n to the goal; usually reaches a goal but may not be optimal, can get stuck in loops, and keeps all generated nodes in memory

28
Q

A* search

A

uses f(n) = g(n) + h(n): the estimated total cost of a solution through n = cost so far to reach n + estimated cost from n to the goal; gives an optimal solution when the heuristic is admissible, and is fairly efficient; puts children on the fringe list ordered by f; keeps all generated nodes in memory

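A rough Python sketch of the f(n) = g(n) + h(n) ordering, with a priority queue as the fringe (the graph, costs, and heuristic values are made up for illustration):

```python
import heapq

# A* sketch: order the fringe by f(n) = g(n) + h(n). Illustrative graph.
graph = {                                 # edges with step costs
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 2), ("G", 5)],
    "B": [("G", 1)],
    "G": [],
}
h = {"S": 3, "A": 2, "B": 1, "G": 0}      # admissible heuristic (never overestimates)

def a_star(start, goal):
    fringe = [(h[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {}
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                            # already reached this node more cheaply
        best_g[node] = g
        for child, cost in graph[node]:
            heapq.heappush(fringe, (g + cost + h[child], g + cost, child, path + [child]))
    return None, float("inf")

print(a_star("S", "G"))   # -> (['S', 'A', 'B', 'G'], 4)
```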
29
Q

heuristic is admissible

A

h(n) never overestimates the true cost of reaching the goal from n, i.e. h(n) ≤ h*(n) for every node n

30
Q

breadth-first search

A

expands all children at one depth before going deeper; FIFO: children are put at the end of the fringe list; takes lots of time and space

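The FIFO fringe can be sketched like this (the small graph is illustrative only):

```python
from collections import deque

# Breadth-first search sketch: FIFO fringe, children appended at the end.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def bfs(start, goal):
    fringe = deque([[start]])          # the fringe holds whole paths
    visited = {start}
    while fringe:
        path = fringe.popleft()        # FIFO: take from the front
        node = path[-1]
        if node == goal:
            return path
        for child in graph[node]:
            if child not in visited:
                visited.add(child)
                fringe.append(path + [child])   # children go at the end
    return None

print(bfs("A", "D"))   # -> ['A', 'B', 'D']
```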
31
Q

depth-first

A

LIFO: children are put at the front of the fringe list; could go infinitely far down a path and never return

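The only change from breadth-first is the fringe discipline: a LIFO stack instead of a FIFO queue (illustrative graph):

```python
# Depth-first search sketch: LIFO fringe (a stack).
graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}

def dfs(start, goal):
    fringe = [[start]]                          # list used as a stack
    while fringe:
        path = fringe.pop()                     # LIFO: take from the top
        node = path[-1]
        if node == goal:
            return path
        for child in reversed(graph[node]):     # reversed so the leftmost child is explored first
            if child not in path:               # avoid looping back along this path
                fringe.append(path + [child])
    return None

print(dfs("A", "D"))   # -> ['A', 'B', 'D']
```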
32
Q

iterative deepening depth-first

A

depth-first search rerun with an increasing depth limit (1, 2, 3, ...): search to the limit, then restart deeper from the root; re-expands shallow nodes but forgets earlier iterations, so it is much more memory-efficient

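The restart-with-a-deeper-limit idea can be sketched with the standard depth-limited formulation (illustrative graph):

```python
# Iterative deepening sketch: depth-limited DFS restarted with limits 0, 1, 2, ...
# Re-expands shallow nodes each round but keeps only the current path in memory.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}

def depth_limited(node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:                     # hit the depth limit: give up on this branch
        return None
    for child in graph[node]:
        found = depth_limited(child, goal, limit - 1, path + [child])
        if found:
            return found
    return None

def iddfs(start, goal, max_depth=10):
    for limit in range(max_depth + 1):   # limit 0, then 1, then 2, ...
        found = depth_limited(start, goal, limit, [start])
        if found:
            return found
    return None

print(iddfs("A", "E"))   # -> ['A', 'C', 'E']
```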
33
Q

uniform cost search

A

uses g(n): children with low g go toward the front of the fringe list, children with high g toward the end; optimal, but takes time

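A sketch that orders the fringe by g(n) with a priority queue; this is the same loop as A* with h(n) = 0 (illustrative graph):

```python
import heapq

# Uniform-cost search sketch: fringe ordered by g(n), the path cost so far.
graph = {"S": [("A", 1), ("B", 5)], "A": [("B", 1)], "B": [("G", 1)], "G": []}

def uniform_cost(start, goal):
    fringe = [(0, start, [start])]     # (g, node, path); heapq keeps lowest g in front
    expanded = set()
    while fringe:
        g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in expanded:
            continue                   # already expanded via a cheaper path
        expanded.add(node)
        for child, cost in graph[node]:
            heapq.heappush(fringe, (g + cost, child, path + [child]))
    return None, float("inf")

print(uniform_cost("S", "G"))   # -> (['S', 'A', 'B', 'G'], 3)
```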
34
Q

steps of search (for all)

A

take a node off the fringe list; is it the goal? if yes, return it; if no, expand it by the successor rule and place its children on the fringe according to the strategy’s rule
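The steps above can be sketched as one generic loop, where the strategy only decides where children enter the fringe (illustrative; a deque stands in for the fringe list):

```python
from collections import deque

# Generic search loop: the only difference between strategies is where the
# children are placed on the fringe (end = BFS, front = DFS). Illustrative graph.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def generic_search(start, goal, strategy="bfs"):
    fringe = deque([[start]])
    while fringe:
        path = fringe.popleft()            # take a node (as a path) off the fringe
        node = path[-1]
        if node == goal:                   # goal test
            return path
        for child in graph[node]:          # expand by the successor rule
            if child not in path:
                if strategy == "bfs":
                    fringe.append(path + [child])      # children at the end
                else:
                    fringe.appendleft(path + [child])  # children at the front
    return None

print(generic_search("A", "D", "bfs"))   # -> ['A', 'B', 'D']
print(generic_search("A", "D", "dfs"))   # -> ['A', 'C', 'D']
```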