Midterm review Flashcards Preview

Flashcards in Midterm review Deck (34)
1
Q

What’s a rational agent

A

An agent that selects the action expected to be most successful, i.e., to maximize its performance measure, given its percept sequence and the actions available to it

2
Q

What does PEAS stand for, and what are the parts?

A

Performance measure (the goal, e.g., safety), Environment (the location and what is in it), Actuators (how the agent performs actions), Sensors (how the agent senses things for input)

3
Q

What do we mean by PEAS analysis?

A

Specifying the Performance measure, Environment, Actuators, and Sensors for a given agent and task

4
Q

What is an example of a software agent?

A

An AI program, e.g., a program that brakes when it sees a car too close: camera as sensor, brake-pushing mechanism as effector

5
Q

What is abstraction? Give an example.

A

Removing details to focus on the bigger idea, e.g., the road map reduced to a graph of cities such as Sibiu

6
Q

Simple reflex agent

A

Looks only at the latest percept (or two) and acts reflexively, e.g., the square is dirty, so clean it

7
Q

Model based agents

A

Has a data structure that models the environment (what else is going on, what its actions will do) and makes decisions based on that model

8
Q

Goal based agents

A

A model-based agent with a goal (e.g., a destination); more sophisticated

9
Q

Utility based agents

A

Has a model, a goal, and a utility function; considers future states, i.e., how well an action helps it meet the goal

10
Q

Percept sequence

A

Everything the agent has perceived so far

11
Q

Uninformed searches

A

Blind search: has no information about the number of steps or the path cost (useful when no additional information is available, but less effective)

12
Q

Informed searches

A

Uses problem-specific knowledge, e.g., heuristics or evaluation functions that incorporate extra information; more effective

13
Q

Uninformed searches examples

A

Breadth-first, depth-first, iterative deepening depth-first, uniform cost

14
Q

Informed searches examples

A

Greedy best-first, A*

15
Q

Graph vs tree based models

A

In a tree each node has at most one parent (and possibly many children); in a graph a node may have many parents

16
Q

g(n) function

A

Cost of the path from the start to node n

17
Q

h(n) function

A

Estimated cost of getting from node n to the goal

18
Q

fully observable vs partially observable env

A

fully observable: sensors give access to the complete state of the env at each point in time (chess with a clock); partially observable: they don't (poker)

19
Q

deterministic vs stochastic env

A

deterministic: the next state of the env is completely determined by the current state and the action executed by the agent (chess with a clock); stochastic: it is not (poker)

20
Q

strategic env

A

env is deterministic except for the actions of other agents (chess with a clock)

21
Q

episodic vs sequential

A

episodic: the agent's experience is divided into episodes (perceive, then one action) and the choice of action in each episode depends only on that episode (part-picking robot); sequential: current decisions affect future ones (chess with a clock)

22
Q

static vs dynamic env

A

static: the env is unchanged while the agent is deliberating (poker); dynamic: it can change (taxi driving)

23
Q

semidynamic env

A

the env itself doesn't change with the passage of time, but the agent's performance score does (image analysis)

24
Q

discrete vs cont env

A

discrete: a limited number of distinct, clearly defined percepts and actions (chess with a clock); continuous: percepts and actions range over continuous values (taxi driving)

25
Q

single agent vs multiagent env

A

single agent: an agent operating by itself in the env; multiagent: other agents are present

26
Q

real world env types?

A

partially observable, stochastic, sequential, dynamic, continuous, multi-agent

27
Q

greedy best-first search

A

uses h(n), the estimated cost from n to the goal; reaches a goal but may not be optimal, can get stuck in loops, keeps all nodes in memory

28
Q

A* search

A

uses f(n) = g(n) + h(n): estimated total cost through n = cost so far to reach n + estimated cost from n to the goal; finds the optimal solution when the heuristic is admissible; fairly efficient; puts children in the fringe list; keeps all nodes in memory
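The f(n) = g(n) + h(n) ordering in the card above can be sketched with a priority queue; the toy graph and the trivial heuristic here are made-up examples, not from the deck:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A* search: always expand the node with the lowest f(n) = g(n) + h(n)."""
    fringe = [(h(start), 0, start, [start])]  # entries: (f, g, node, path)
    best_g = {}                               # cheapest g seen per node
    while fringe:
        f, g, node, path = heapq.heappop(fringe)
        if node == goal:
            return path, g
        if node in best_g and best_g[node] <= g:
            continue                          # already reached node more cheaply
        best_g[node] = g
        for child, step_cost in neighbors(node):
            g2 = g + step_cost
            heapq.heappush(fringe, (g2 + h(child), g2, child, path + [child]))
    return None, float("inf")

# Hypothetical graph: edges with step costs. With h = 0 everywhere, A*
# degenerates into uniform cost search (card 33).
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)], "D": []}
path, cost = a_star("A", "D", lambda n: graph[n], h=lambda n: 0)
# path is ["A", "B", "C", "D"] with cost 3, cheaper than the direct A->C->D route
```

Note the only difference from greedy best-first (card 27) is the ordering key: greedy sorts the fringe by h(n) alone, which is why it can be non-optimal.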

29
Q

when is a heuristic admissible?

A

when h(n) never overestimates the true cost of reaching the goal from n, i.e., h(n) ≤ h*(n)

30
Q

breadth-first search

A

expands all children of the first node before going deeper; FIFO, children put at the end of the fringe list; takes lots of time and space

31
Q

depth-first

A

LIFO, children put at the front of the fringe list; could go down an infinite path
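The only difference between the two cards above is where children land in the fringe list. A minimal sketch, using a hypothetical toy tree, that records the expansion order to show the contrast:

```python
from collections import deque

def tree_search(start, goal, children, depth_first):
    """Generic tree search. BFS appends children at the end of the fringe
    (FIFO); DFS pushes them at the front (LIFO)."""
    fringe = deque([[start]])  # fringe holds whole paths
    order = []                 # expansion order, to show the difference
    while fringe:
        path = fringe.popleft()
        node = path[-1]
        order.append(node)
        if node == goal:
            return path, order
        kids = children.get(node, [])
        if depth_first:
            for child in reversed(kids):      # reversed so the leftmost
                fringe.appendleft(path + [child])  # child is expanded first
        else:
            for child in kids:
                fringe.append(path + [child])  # end of fringe: FIFO
    return None, order

tree = {"A": ["B", "C"], "B": ["D", "E"], "C": ["F"]}
_, bfs_order = tree_search("A", "F", tree, depth_first=False)  # level by level
_, dfs_order = tree_search("A", "F", tree, depth_first=True)   # deep first
```

Here BFS expands A, B, C, D, E, F (level by level), while DFS expands A, B, D, E, C, F (all the way down the left branch first).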

32
Q

iterative deepening depth-first

A

same as depth-first but with a depth limit that increases each iteration: search to the limit, then restart from the root one level deeper … forgets earlier iterations instead of keeping them in memory, so it is more memory-efficient

33
Q

uniform cost search

A

uses g(n): if g is low, puts the node at the front of the fringe list; if high, puts it at the end; finds the cheapest path but takes time

34
Q

steps of search (for all)

A

take a node off the fringe list, expand it by the rule, test whether it is the goal; if yes, return the result; if no, follow the strategy's rule for which node to expand next / where to put the children
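The steps in this last card can be sketched as one generic loop, where the search strategy is just the function that decides where children go in the fringe; the number-search problem below is a made-up example:

```python
def general_search(start, goal_test, expand, insert):
    """Generic search skeleton from the card: take a node off the fringe,
    test for the goal, otherwise expand it and insert the children
    according to the strategy."""
    fringe = [start]
    while fringe:
        node = fringe.pop(0)          # take node off the fringe list
        if goal_test(node):           # is it the goal? if yes, return it
            return node
        insert(fringe, expand(node))  # strategy decides where children go
    return None

# Hypothetical problem: reach 6 starting from 1, expanding n -> n+1 and n*2.
# Appending children at the end makes this breadth-first (card 30);
# swapping in fringe[:0] = kids would make it depth-first (card 31).
found = general_search(
    1,
    goal_test=lambda n: n == 6,
    expand=lambda n: [n + 1, n * 2],
    insert=lambda fringe, kids: fringe.extend(kids),
)
```

Plugging in a different `insert` (front of list, priority order by g(n) or f(n)) yields the other searches in the deck without touching the loop.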