Terms Flashcards

1
Q

What are agents?

A

Computer systems capable of autonomous action in some environment in order to meet their design objectives.

2
Q

What are the three types of agent behavior?

A

Reactive agents - Respond to specific environmental cues. Operate without the ability to form long-term plans or anticipate events.

Pro-active agents - Able to take initiative and exhibit goal-oriented behavior.

Social agents - Able to interact with other agents (human or artificial).

3
Q

List different kinds of environments (observability)

A
Accessible or fully observable environment: the agent can obtain complete, accurate, up-to-date information about the environment’s state

Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible or partially observable

The more accessible an environment is, the simpler it is to build agents to operate in it

4
Q

List different kinds of environments (determinism)

A

Deterministic environment: any action has a single guaranteed effect

There is no uncertainty about the state that will result from performing an action

The physical world can be regarded as non-deterministic

Non-deterministic environments present greater problems for the agent designer

5
Q

List different kinds of environments (episodic)

A

In an episodic environment, the performance of an agent depends on a number of discrete episodes, with no link between the agent’s performance in different scenarios

Episodic environments are simpler from the agent developer’s perspective

The agent can decide what to do based only on the current episode

No need to reason about interactions between episodes

6
Q

List different kinds of environments (static vs. dynamic)

A

A static environment can be assumed to remain unchanged except by the performance of actions by the agent

A dynamic environment has other processes operating on it, and hence changes in ways beyond the agent’s control

Other processes can interfere with the agent’s actions (as in concurrent systems theory)

The physical world is highly dynamic

7
Q

List different kinds of environments (number of actions)

A

An environment is discrete if there are a fixed, finite number of actions and percepts in it

Continuous environments have a certain level of mismatch with computer systems

Discrete environments could in principle be handled by a kind of “lookup table”

8
Q

Define simple reflex agents

A

Make decisions based solely on the current perceptual input, without considering the history of inputs or the current state of the environment.

9
Q

Define reflex agents with state

A

Incorporate memory or an internal state to make decisions, allowing them to consider past perceptions and react differently based on the current situation.

10
Q

Define goal-based agents

A

Operate by selecting actions that lead them closer to achieving predefined goals or objectives, often employing search and planning mechanisms to determine their actions.

11
Q

Define utility-based agents

A

Evaluate and select actions based on a measure of utility or desirability, considering the overall quality or satisfaction associated with different outcomes in the pursuit of their goals.

12
Q

Define the three kinds of state representation

A

Atomic: monolithic states, useful for high-level algorithms

Factored: states have discernible internal structures, which we can exploit

Structured: internal structure includes relationships between structural elements

13
Q

What are the four problem types?

A

Deterministic, fully observable ⇒ single-state problem. The agent knows exactly which state it will be in; the solution is a sequence.

Non-observable ⇒ conformant problem. The agent may have no idea where it is; the solution (if any) is a sequence.

Nondeterministic and/or partially observable ⇒ contingency problem. Percepts provide new information about the current state; the solution is a contingent plan or a policy; search and execution interleave.

Unknown state space ⇒ exploration problem (“online”).

14
Q

What is a state-space graph?

A

A graph in which nodes represent the possible states of a problem and edges represent the actions that transition between them.

15
Q

Define state, parent, path cost, and action

A

State - The corresponding state in the state-space

Parent - The node that generated n.

Action - The action applied to the parent that resulted in the node.

Path-cost - The cost of the path from the initial node to the current node.

16
Q

What are successor functions?

A

A successor function, denoted as succ(n), generates all the possible states that can be reached from a given state n by applying valid actions or operators.
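As a minimal sketch, a successor function for a hypothetical 4×4 grid world (the grid size and coordinate representation are illustrative assumptions):

```python
# Hypothetical grid world: states are (x, y) cells, actions are the
# four compass moves. succ(n) returns every state reachable in one move.
def succ(state, width=4, height=4):
    x, y = state
    moves = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(nx, ny) for nx, ny in moves
            if 0 <= nx < width and 0 <= ny < height]

succ((0, 0))  # corner cell: only two valid successors
```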

17
Q

What is a frontier?

A

The next set of nodes to be explored by a search algorithm.

18
Q

What are the four ways of evaluating a search algorithm?

A

Completeness: Does it always find a solution if one exists?

Optimality: Is the solution optimal (i.e., lowest cost)?

Time Complexity: How long does it take to find a solution?

Space Complexity: How much memory does it need to perform the search?

19
Q

How does BFS work?

A

Uninformed. All nodes at a given depth are explored before any new nodes are explored. Stores “fringe” (frontier) in a queue (FIFO).

Optimal when all step costs are equal, as it always expands the shallowest unexpanded nodes.
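A minimal sketch of BFS over an adjacency-list graph (the graph and node names are illustrative):

```python
from collections import deque

def bfs(graph, start, goal):
    frontier = deque([[start]])        # FIFO queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()      # shallowest path first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(path + [nbr])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
bfs(graph, "A", "D")  # → ["A", "B", "D"] (fewest edges)
```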

20
Q

How does DFS work?

A

Uninformed. Explores as far as possible along each branch before backtracking. It starts from the source node and explores as deeply as possible along each branch before backtracking to the previous node and continuing the process.

Fails at infinite depth. Can be solved by limiting depth by number of nodes (depth-limited search), or by iteratively deepening limit (Iterative deepening depth-first search)
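A minimal recursive DFS sketch on the same kind of adjacency-list graph (names illustrative; no depth limit, so it assumes a finite graph):

```python
def dfs(graph, node, goal, visited=None):
    if visited is None:
        visited = set()
    if node == goal:
        return [node]
    visited.add(node)
    for nbr in graph.get(node, []):    # go as deep as possible first
        if nbr not in visited:
            sub = dfs(graph, nbr, goal, visited)
            if sub is not None:
                return [node] + sub
    return None                        # backtrack

graph = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}
dfs(graph, "A", "D")  # → ["A", "B", "D"]
```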

21
Q

How does Uniform Cost Search work?

A

Uninformed. Expands the nodes in the search space in order of their cumulative cost from the start node. UCS uses a priority queue to maintain the frontier of the search, always selecting and expanding the node with the lowest cost.

22
Q

What’s the difference between uninformed and informed search?

A

Uninformed search - Also known as blind search; makes decisions about which node to explore next without using any additional information about the problem or the state of the search space.

Informed search - Incorporates additional information about the problem domain to guide the search process more effectively, often in the form of a heuristic function, which estimates the cost or distance from the current state to the goal state.

23
Q

What is Greedy Best-First Search?

A

Informed. Selects the next node to explore based on its heuristic evaluation, prioritizing nodes that are estimated to be closest to the goal. Makes locally optimal choices at each step without considering the overall path, potentially leading to suboptimal solutions in some cases.

24
Q

What is A* search?

A

Informed. Like Dijkstra’s algorithm, A* considers the cost to reach a node from the start, but it enhances this by also incorporating a heuristic estimate of the remaining cost to the goal, similar to Greedy Best-First Search.
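A minimal A* sketch on a grid with a Manhattan-distance heuristic, ordering nodes by f(n) = g(n) + h(n) (the grid size and wall set are illustrative assumptions):

```python
import heapq

def astar(start, goal, walls, width=5, height=5):
    def h(p):                          # admissible heuristic: Manhattan distance
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # ordered by f = g + h
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        x, y = node
        for nxt in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height and nxt not in walls:
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
    return None

astar((0, 0), (2, 0), walls={(1, 0)})  # routes around the blocked cell
```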

25
Q

What is weighted A* search?

A

Multiplies the heuristic term by a weight w > 1, evaluating nodes with f(n) = g(n) + w·h(n) instead of f(n) = g(n) + h(n). The weight adjusts the trade-off between solution quality and search effort: a larger w typically finds a solution faster but may return a path up to w times more costly than optimal.

26
Q

What is a search contour?

A

A diagram using concentric rings to show the expansion of a search algorithm; each contour connects nodes of equal evaluation cost (f-value). Imagine sonar waves.

27
Q

What is Euclidean distance?

A

Straight-line distance. Given by:

d = sqrt( (x2 − x1)^2 + (y2 − y1)^2 )

28
Q

What is Manhattan distance?

A

Sum of the absolute differences between corresponding coordinates.
Given by:

d = |x2 − x1| + |y2 − y1|

29
Q

What is Hamming distance?

A

Specifically used for comparing binary strings of equal length. It calculates the minimum number of substitutions needed to change one string into the other.
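The three distance measures on these cards can be sketched in a few lines (example points and strings are illustrative):

```python
from math import dist  # Euclidean (straight-line) distance, Python 3.8+

def manhattan(p, q):
    # sum of absolute coordinate differences
    return sum(abs(a - b) for a, b in zip(p, q))

def hamming(s, t):
    # number of positions where equal-length strings differ
    assert len(s) == len(t)
    return sum(a != b for a, b in zip(s, t))

dist((0, 0), (3, 4))        # → 5.0
manhattan((0, 0), (3, 4))   # → 7
hamming("10110", "10011")   # → 2
```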

30
Q

What does stochastic mean?

A

Refers to a system or process that involves a random element. In a stochastic system, outcomes are not completely predictable and can vary due to chance or randomness.

31
Q

Compare normal form and extensive form.

A

Normal form - A concise representation of a game in which players simultaneously choose their strategies, without considering the sequential nature of their decisions.

Extensive form - Captures the sequential nature of decision-making by representing a game as a tree, specifying the order of moves and the information available at each decision point. Very useful for modeling games with imperfect information and sequential decision-making (“turns”).

32
Q

Define alpha-beta pruning

A

Optimization technique for the minimax algorithm, commonly used in game tree search for two-player games. It reduces the number of nodes evaluated in the game tree by intelligently pruning branches that are guaranteed to be suboptimal, improving computational efficiency in decision-making processes.
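A sketch of minimax with alpha-beta pruning on a tiny game tree, represented (as an illustrative assumption) by nested lists whose leaves are payoff values:

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    if not isinstance(node, list):     # leaf: return its static value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:          # prune: MIN will never allow this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:          # prune: MAX will never allow this branch
                break
        return value

alphabeta([[3, 5], [2, 9]])  # → 3; the 9-leaf is pruned, same answer as minimax
```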

33
Q

Define adversarial search

A

An agent aims to find the best move in a competitive, two-player game by anticipating the actions of an opponent. The agent explores possible moves and outcomes, taking into account the adversary’s responses to optimize its decision-making strategy.

34
Q

Define Constraint Satisfaction Problems

A

Mathematical problems defined by a set of variables, each with a domain of possible values, and constraints limiting the combinations of values that the variables can take. The goal in CSPs is to find a consistent assignment of values to variables that satisfies all the given constraints.

35
Q

What do CSPs consist of?

A

A set of variables, domains, and constraints.
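These three components can be made concrete with a minimal backtracking sketch for a map-coloring CSP (regions, colors, and the neighbor relation are illustrative assumptions):

```python
# Variables = regions, domains = colors, constraints = neighbors must differ.
def backtrack(variables, domains, neighbors, assignment=None):
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):
        return assignment                      # every variable assigned
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        # constraint check: no assigned neighbor shares this value
        if all(assignment.get(n) != value for n in neighbors[var]):
            result = backtrack(variables, domains, neighbors,
                               {**assignment, var: value})
            if result is not None:
                return result
    return None                                # dead end: backtrack

variables = ["WA", "NT", "SA"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"WA": ["NT", "SA"], "NT": ["WA", "SA"], "SA": ["WA", "NT"]}
backtrack(variables, domains, neighbors)  # assigns three distinct colors
```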

36
Q

What is a payoff matrix?

A

A table listing each player’s payoff for every combination of strategies. Imagine a table of −1s, 0s, and 1s for a zero-sum game (loss, draw, win).

37
Q

What is a knowledge base?

A

A set of sentences in a formal language

38
Q

What is first-order logic?

A

System that extends propositional logic by introducing variables, quantifiers, and predicates, enabling the representation of more complex relationships and properties within a domain. It includes the use of quantifiers such as ∀ (for all) and ∃ (there exists), allowing statements to express universal or existential claims over variables.

39
Q

Define forward- and backward- chaining

A

Forward chaining - Begins with the known facts or initial conditions and applies rules or logic to derive new information, moving forward until the desired goal or conclusion is reached.

Backward chaining - Starts with the desired goal and works backward to find the supporting evidence or conditions.

40
Q

Define diachronic interpretation

A

The interpretation of probabilities as changing over time, as we see data.

41
Q

What is Moravec’s paradox?

A

High-level reasoning (logic, chess, arithmetic) requires relatively little computation, while low-level sensorimotor skills that humans perform effortlessly demand far more computational resources.

42
Q

What is a Bayesian Network?

A

Represents the dependencies among variables and encodes the full joint probability distribution concisely. A directed graph, where each node is annotated with probability information.

43
Q

What are the conditions for two actions to be “mutex”?

A

An effect of one negates an effect of the other
One deletes a precondition of the other
They have mutually exclusive preconditions
