Terms Flashcards
(44 cards)
What are agents?
Computer systems capable of autonomous action in some environment in order to meet their design objectives.
What are the three types of agent behavior?
Reactive agents - Respond to specific environmental cues. Operate without the ability to form long-term plans or anticipate events.
Pro-active agents - Able to take initiative and exhibit goal-oriented behavior
Social agents - Able to interact with other agents (human or artificial).
List different kinds of environments (observability)
- Accessible or fully observable environment: agent can obtain complete, accurate, up-to-date information about the environment’s state
Most moderately complex environments (including, for example, the everyday physical world and the Internet) are inaccessible or partially observable
The more accessible an environment is, the simpler it is to build agents to operate in it
List different kinds of environments (determinism)
Deterministic environment: any action has a single guaranteed effect
There is no uncertainty about the state that will result from performing an action
The physical world can be regarded as non-deterministic
Non-deterministic environments present greater problems for the agent designer
List different kinds of environments (episodic)
In an episodic environment, the performance of an agent depends on a number of discrete episodes, with no link between the agent’s performance in different scenarios
Episodic environments are simpler from the agent developer’s perspective
The agent can decide what to do based only on the current episode
No need to reason about interactions between episodes
List different kinds of environments (static vs. dynamic)
A static environment can be assumed to remain unchanged except by the performance of actions by the agent
A dynamic environment has other processes operating on it, and hence changes in ways beyond the agent’s control
Other processes can interfere with the agent’s actions (as in concurrent systems theory)
The physical world is highly dynamic
List different kinds of environments (discrete vs. continuous)
An environment is discrete if there are a fixed, finite number of actions and percepts in it
Continuous environments have a certain level of mismatch with computer systems, which are themselves discrete-state machines
Discrete environments could in principle be handled by a kind of “lookup table”
Define simple reflex agents
Make decisions based solely on the current perceptual input, without considering the history of inputs or the current state of the environment.
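Such an agent can be sketched as a plain percept→action lookup table; this also illustrates how a discrete environment admits a "lookup table" approach. The vacuum-world percepts and actions below are illustrative assumptions, not part of the cards.

```python
# A simple reflex agent as a percept -> action lookup table.
# Percepts are (location, status) pairs; the rule set is a hypothetical
# two-room vacuum world used only for illustration.
RULES = {
    ("A", "dirty"): "suck",
    ("A", "clean"): "move_right",
    ("B", "dirty"): "suck",
    ("B", "clean"): "move_left",
}

def simple_reflex_agent(percept):
    """Choose an action from the current percept alone: no memory, no state."""
    return RULES[percept]

print(simple_reflex_agent(("A", "dirty")))  # suck
```

Because the table is keyed only on the current percept, the agent cannot react differently based on past inputs, which is exactly what the reflex-agent-with-state design adds.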
Define reflex agents with state
Incorporate memory or an internal state to make decisions, allowing them to consider past perceptions and react differently based on the current situation.
Define goal-based agents
Operate by selecting actions that lead them closer to achieving predefined goals or objectives, often employing search and planning mechanisms to determine their actions.
Define utility-based agents
Evaluate and select actions based on a measure of utility or desirability, considering the overall quality or satisfaction associated with different outcomes in the pursuit of their goals.
Define the three kinds of state representation
Atomic: monolithic states, useful for high-level algorithms
Factored: states have discernible internal structures, which we can exploit
Structured: internal structure includes relationships between structural elements
What are the four problem types?
Deterministic, fully observable ⇒ single-state problem. Agent knows exactly which state it will be in; solution is a sequence
Non-observable ⇒ conformant problem. Agent may have no idea where it is; solution (if any) is a sequence
Nondeterministic and/or partially observable ⇒ contingency problem. Percepts provide new information about the current state; solution is a contingent plan or a policy; interleave search and execution
Unknown state space ⇒ exploration problem (“online”)
What is a state-space graph?
A graph whose nodes are all possible states and whose edges are the actions that transform one state into another; search algorithms unfold it into a tree of paths.
Define state, parent, path cost, and action
State - The corresponding state in the state-space
Parent - The node that generated n.
Action - The action applied to the parent that resulted in the node.
Path-cost - The cost of the path from the initial node to the current node.
What are successor functions?
A successor function, denoted as succ(n), generates all the possible states that can be reached from a given state n by applying valid actions or operators.
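A minimal sketch of a successor function for a grid world, where states are (x, y) cells and the actions move one step; the grid size and action set are illustrative assumptions.

```python
# succ(n): generate all (action, state) pairs reachable from a state
# in one step. States are (x, y) cells on a hypothetical 3x3 grid.
def succ(state, width=3, height=3):
    """Return all (action, next_state) pairs reachable from `state`."""
    x, y = state
    moves = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}
    successors = []
    for action, (dx, dy) in moves.items():
        nx, ny = x + dx, y + dy
        if 0 <= nx < width and 0 <= ny < height:  # only keep in-bounds states
            successors.append((action, (nx, ny)))
    return successors

print(succ((0, 0)))  # corner cell: only "down" and "right" are valid
```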
What is a frontier?
The set of nodes that have been generated but not yet expanded by a search algorithm.
What are the four ways of evaluating a search algorithm?
Completeness: Does it always find a solution if one exists?
Optimality: Is the solution optimal (i.e., lowest cost)?
Time Complexity: How long does it take to find a solution?
Space Complexity: How much memory does it need to perform the search?
How does BFS work?
Uninformed. All nodes at a given depth are expanded before any nodes at the next depth. Stores the “fringe” (frontier) in a FIFO queue.
Optimal when all step costs are equal, as it always expands the shallowest unexpanded node.
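A minimal BFS sketch over an adjacency-list graph; the graph and goal test are illustrative assumptions.

```python
from collections import deque

def bfs(graph, start, goal):
    """Return a shortest path (fewest edges) from start to goal, or None."""
    frontier = deque([[start]])          # FIFO queue of paths (the "fringe")
    explored = {start}
    while frontier:
        path = frontier.popleft()        # shallowest node is expanded first
        node = path[-1]
        if node == goal:
            return path
        for nbr in graph.get(node, []):
            if nbr not in explored:
                explored.add(nbr)
                frontier.append(path + [nbr])
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A", "D"))  # ['A', 'B', 'D']
```

The FIFO queue is what makes this breadth-first: a node enqueued at depth d is always dequeued before any node at depth d+1.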
How does DFS work?
Uninformed. Explores as far as possible along each branch before backtracking. It starts from the source node and explores as deeply as possible along each branch before backtracking to the previous node and continuing the process.
Fails in infinite-depth spaces. Can be addressed by imposing a depth limit (depth-limited search), or by iteratively increasing that limit (iterative deepening depth-first search).
How does Uniform Cost Search work?
Uninformed. Expands the nodes in the search space in order of their cumulative cost from the start node. UCS uses a priority queue to maintain the frontier of the search, always selecting and expanding the node with the lowest cost.
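A minimal UCS sketch over a weighted graph; the edge costs and the graph itself are illustrative assumptions.

```python
import heapq

def ucs(graph, start, goal):
    """Return (cost, path) of a cheapest path from start to goal, or None."""
    frontier = [(0, start, [start])]     # priority queue ordered by path cost
    explored = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)  # lowest cumulative cost
        if node == goal:
            return cost, path
        if node in explored:
            continue
        explored.add(node)
        for nbr, step in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (cost + step, nbr, path + [nbr]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
print(ucs(graph, "A", "C"))  # (2, ['A', 'B', 'C'])
```

Note how the direct A→C edge (cost 4) loses to the A→B→C route (cost 2): the priority queue always expands the cheapest known path first.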
What’s the difference between uninformed and informed search?
Uninformed search (also known as blind search) makes decisions about which node to explore next without using any additional information about the problem or the state of the search space.
Informed search incorporates additional information about the problem domain to guide the search process more effectively, often in the form of a heuristic function that estimates the cost or distance from the current state to the goal state.
What is Greedy Best-First Search?
Informed. Selects the next node to explore based on its heuristic evaluation, prioritizing nodes that are estimated to be closest to the goal. Makes locally optimal choices at each step without considering the overall path, potentially leading to suboptimal solutions in some cases.
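A sketch of greedy best-first search: nodes are expanded by heuristic value h alone, ignoring the path cost so far. The graph and heuristic values are illustrative assumptions.

```python
import heapq

def greedy_bfs(graph, h, start, goal):
    """Expand the node whose heuristic estimate h is lowest, ignoring path cost."""
    frontier = [(h[start], start, [start])]  # priority queue ordered by h only
    explored = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for nbr in graph.get(node, []):
            if nbr not in explored:
                heapq.heappush(frontier, (h[nbr], nbr, path + [nbr]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
print(greedy_bfs(graph, h, "A", "D"))  # ['A', 'C', 'D']
```

Changing the ordering key from the true path cost (as in UCS) to the heuristic estimate is the only difference, and it is what makes greedy search fast but potentially suboptimal.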
What is A* search?
Informed. Like Dijkstra’s algorithm, A* considers the cost to reach a node from the start, but it enhances this by also incorporating a heuristic estimate of the remaining cost to the goal, similar to Greedy Best-First Search.
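A minimal A* sketch on an empty grid, ordering the frontier by f(n) = g(n) + h(n) with a Manhattan-distance heuristic; the grid, unit step costs, and heuristic are illustrative assumptions.

```python
import heapq

def astar(start, goal, width=4, height=4):
    """Find a shortest 4-connected path on an empty width x height grid."""
    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]  # ordered by f = g + h
    best_g = {start: 0}                         # cheapest known cost per cell
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (node[0] + dx, node[1] + dy)
            if 0 <= nxt[0] < width and 0 <= nxt[1] < height:
                ng = g + 1  # each grid step costs 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None

path = astar((0, 0), (2, 2))
print(len(path) - 1)  # 4 steps: the Manhattan distance from (0,0) to (2,2)
```

With g alone this is uniform-cost (Dijkstra-style) search, and with h alone it is greedy best-first; A* combines the two, and stays optimal as long as h never overestimates.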