lv. 3 - CS37 Flashcards

(125 cards)

1
Q

An entity that perceives its environment and acts upon that environment.

A

Agent

2
Q

A configuration of an agent in its environment.

A

State

3
Q

The state from which the search algorithm starts.

A

Initial State

4
Q

Choices that can be made in a state.

A

Actions

5
Q

A description of what state results from performing any applicable action in any state.

A

Transition Model

6
Q

The set of all states reachable from the initial state by any sequence of actions.

A

State Space

7
Q

The condition that determines whether a given state is a goal state.

A

Goal Test

8
Q

A numerical cost associated with a given path.

A

Path Cost

9
Q

A sequence of actions that leads from the initial state to the goal state.

A

Solution

10
Q

A solution that has the lowest path cost among all solutions.

A

Optimal Solution

11
Q

contains the following data:
* A state
* Its parent node, through which the current node was generated
* The action that was applied to the state of the parent to get to the current node
* The path cost from the initial state to this node

A

node

12
Q

the mechanism that “manages” the nodes

A

frontier

13
Q

search algorithm that exhausts one direction before trying another direction

A

Depth-First Search

14
Q

search algorithm where the frontier is managed as a stack data structure

A

Depth-First Search

15
Q

search algorithm that follows multiple directions at the same time, taking one step in each possible direction before taking a second step in any direction.

A

Breadth-First Search

16
Q

search algorithm where the frontier is managed as a queue data structure

A

Breadth-First Search

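The only difference between the two frontier-based cards above is the data structure: managing the frontier as a stack gives Depth-First Search, as a queue gives Breadth-First Search. A minimal sketch, with an invented example graph for illustration:

```python
from collections import deque

def search(graph, start, goal, frontier_is_stack):
    """Generic graph search; a stack frontier gives DFS, a queue gives BFS."""
    frontier = deque([[start]])  # the frontier holds whole paths, not just states
    explored = set()
    while frontier:
        # stack: pop the most recently added path; queue: pop the oldest
        path = frontier.pop() if frontier_is_stack else frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        if state in explored:
            continue
        explored.add(state)
        for neighbor in graph.get(state, []):
            frontier.append(path + [neighbor])
    return None

# hypothetical example graph
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
bfs_path = search(graph, "A", "D", frontier_is_stack=False)  # ["A", "B", "D"]
dfs_path = search(graph, "A", "D", frontier_is_stack=True)   # ["A", "C", "D"]
```

The two searches reach the same goal here but explore in opposite orders: BFS expands all of A's neighbors before going deeper, while DFS commits to the most recently discovered branch first.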
17
Q

A type of algorithm that considers additional knowledge to try to improve its performance

A

Informed Search Algorithm

18
Q

search algorithm that expands the node that is the closest to the goal, as determined by a heuristic function h(n).

A

Greedy Best-First Search

19
Q

ignores walls and counts how many steps up, down, or to the sides it would take to get from one location to the goal location

A

Manhattan Distance

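Manhattan distance reduces to a one-line function over grid coordinates; the coordinates below are invented for illustration:

```python
def manhattan_distance(cell, goal):
    """Steps up/down plus steps left/right, ignoring walls."""
    return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

# e.g. from (1, 1) to (4, 5): 3 vertical steps + 4 horizontal steps = 7
d = manhattan_distance((1, 1), (4, 5))
```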
20
Q

function that estimates how close to the goal the next node is, but it can be mistaken.

A

heuristic function

21
Q

The efficiency of the greedy best-first algorithm depends on

A

how good a heuristic function is

22
Q

considers not only h(n), the estimated cost from the current location to the goal, but also g(n), the cost that was accrued until the current location.

A

A* Search

23
Q

For A* search to be optimal, the heuristic function has to be:

A

Admissible & Consistent

24
Q

In the heuristic function h(n) of an A* search algorithm, it is consistent if

A

for every node n and successor node n’ with step cost c, h(n) ≤ h(n’) + c
25
In the heuristic function h(n) of an A* search algorithm, what does it mean to be admissible?
Never overestimates the actual cost to reach a goal from any node
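A minimal A* sketch over a weighted graph, ranking frontier nodes by g(n) + h(n). The graph and heuristic values are invented for illustration; h here is admissible (it never overestimates the remaining cost) and consistent:

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: state -> list of (neighbor, step_cost); h: heuristic estimates."""
    # priority queue of (g + h, g, state, path)
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, g
        for neighbor, cost in graph.get(state, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float("inf")):
                best_g[neighbor] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + h[neighbor], new_g, neighbor, path + [neighbor]),
                )
    return None, float("inf")

# hypothetical graph and admissible heuristic
graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
path, cost = a_star(graph, h, "A", "D")  # ["A", "B", "C", "D"], cost 3
```

Note how the direct B→D edge (cost 5) is skipped: g + h for the detour through C is lower, which is exactly the g(n) + h(n) trade-off the A* card describes.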
26
algorithm that faces an opponent that tries to achieve the opposite goal.
Adversarial Search
27
represents winning conditions as (-1) for one side and (+1) for the other side.
Minimax
28
Recursively, the algorithm simulates all possible games that can take place beginning at the current state and until a terminal state is reached. Each terminal state is valued as either (-1), 0, or (+1).
Minimax
29
an optimization technique for minimax that skips some of the recursive computations that are decidedly unfavorable
Alpha-Beta Pruning
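The minimax and alpha-beta cards can be sketched together; the game tree below is a made-up nested list whose leaves are terminal values (-1, 0, or +1):

```python
def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    """Return the value of a game tree, pruning branches that cannot matter."""
    if isinstance(node, int):        # terminal state: -1, 0, or +1
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:        # alpha-beta pruning: MIN will avoid this branch
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, minimax(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# hypothetical tree: MAX chooses between two MIN nodes
tree = [[1, 0], [-1, 1]]
best = minimax(tree, maximizing=True)  # MAX picks the left branch, value 0
```

In the right branch the search cuts off after seeing -1: MAX already has a guaranteed 0, so the remaining leaf is never evaluated.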
30
Minimax that considers only a pre-defined number of moves before it stops, without ever getting to a terminal state.
Depth-Limited Minimax
31
estimates the expected utility of the game from a given state, or, in other words, assigns values to states.
Evaluation function
32
agents that reason by operating on internal representations of knowledge.
Knowledge-Based Agents
33
an assertion about the world in a knowledge representation language
Sentence
34
based on propositions, statements about the world that can be either true or false
Propositional Logic
35
letters that are used to represent a proposition.
Propositional Symbols
36
logical symbols that connect propositional symbols in order to reason in a more complex way about the world.
Logical Connectives
37
List all logical connectives:
Not (¬), And (∧), Or (∨), Implication (→), Biconditional (↔)
38
inverts the truth value of the proposition.
Not
39
connects two different propositions
And
40
is true as long as either of its arguments is true.
Or
41
represents a structure of “if P then Q.”
Implication
42
In the case of P implies Q (P → Q), P is the ____
Antecedent
43
In the case of P implies Q (P → Q), Q is the ____
Consequent
44
an implication that goes both directions
Biconditional
45
an assignment of a truth value to every proposition.
Model
46
set of sentences known by a knowledge-based agent.
Knowledge Base (KB)
47
a relation that means that if all the information in α is true, then all the information in β is true.
Entailment (⊨)
48
the process of deriving new sentences from old ones.
Inference
49
Define the Model Checking algorithm
To determine if KB ⊨ α, enumerate all possible models and check that in every model where the KB is true, α is true as well; if so, KB entails α, otherwise there is no entailment.
50
the process of figuring out how to represent propositions and logic in AI
Knowledge Engineering
51
What makes the Model Checking algorithm inefficient?
It has to consider every possible model before giving the answer
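The model-checking cards above can be sketched by representing sentences as Python functions over a model; the specific KB below is invented for illustration. The nested loop over every truth assignment is exactly what makes the algorithm inefficient (2^n models for n symbols):

```python
from itertools import product

def model_check(symbols, kb, query):
    """KB entails query iff query holds in every model where KB holds."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))   # one possible world
        if kb(model) and not query(model):
            return False                     # counterexample found
    return True

# hypothetical KB: P, and P implies Q  (written as ¬P ∨ Q)
kb = lambda m: m["P"] and ((not m["P"]) or m["Q"])
entails_q = model_check(["P", "Q"], kb, lambda m: m["Q"])          # True
entails_not_p = model_check(["P", "Q"], kb, lambda m: not m["P"])  # False
```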
52
allows the generation of new information based on existing knowledge without considering every possible model.
Inference Rules
53
if we know an implication and its antecedent to be true, then the consequent is true as well.
Modus Ponens
54
If an And proposition is true, then any one atomic proposition within it is true as well
And Elimination
55
A proposition that is negated twice is true
Double Negation Elimination
56
An implication is equivalent to an Or relation between the negated antecedent and the consequent
Implication Elimination
57
A biconditional proposition is equivalent to an implication and its inverse with an And connective.
Biconditional Elimination
58
It is possible to turn an And connective into an Or connective by negating: ¬(α ∧ β) is equivalent to ¬α ∨ ¬β (and, symmetrically, ¬(α ∨ β) is equivalent to ¬α ∧ ¬β)
De Morgan’s Law
59
A proposition with two elements that are grouped with And or Or connectives can be distributed, or broken down into smaller units consisting of And and Or
Distributive Property
60
inference rule that states that if one of two atomic propositions in an Or proposition is false, the other has to be true
Resolution
61
two of the same atomic propositions where one is negated and the other is not
Complementary Literals
62
disjunction of literals
Clause
63
consists of propositions that are connected with an Or logical connective
disjunction
64
consists of propositions that are connected with an And logical connective
conjunction
65
conjunction of clauses
Conjunctive Normal Form (CNF)
66
Steps in Conversion of Propositions to Conjunctive Normal Form
* Eliminate biconditionals: turn (α ↔ β) into (α → β) ∧ (β → α).
* Eliminate implications: turn (α → β) into ¬α ∨ β.
* Move negation inwards until only literals are being negated (and not clauses), using De Morgan’s Laws: turn ¬(α ∧ β) into ¬α ∨ ¬β.
* Use the distributive law to distribute Or over And until the sentence is a conjunction of clauses: turn (α ∧ β) ∨ γ into (α ∨ γ) ∧ (β ∨ γ).
67
Process used when a case where a clause contains the same literal twice is encountered
Factoring
68
process to remove a duplicate literal
Factoring
69
Result after resolving a literal and its negation
empty clause ()
70
Why is an empty clause always false?
it is impossible that both P and ¬P are true
71
Define the resolution algorithm
To determine if KB ⊨ α:
* Check: is (KB ∧ ¬α) a contradiction?
* If so, then KB ⊨ α.
* Otherwise, there is no entailment.
72
If our knowledge base is true, and it contradicts ¬α, it means that ¬α is false, and, therefore, α must be true.
Proof by Contradiction
73
Define the proof by contradiction algorithm
To determine if KB ⊨ α:
* Convert (KB ∧ ¬α) to Conjunctive Normal Form.
* Keep checking to see if we can use resolution to produce a new clause.
* If we ever produce the empty clause (equivalent to False), we have arrived at a contradiction, thus proving that KB ⊨ α.
* However, if contradiction is not achieved and no more clauses can be inferred, there is no entailment.
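A single resolution step from the cards above can be sketched with clauses as sets of literals; representing negation with a "¬" string prefix is an invented convention for illustration:

```python
def negate(literal):
    """¬P <-> P."""
    return literal[1:] if literal.startswith("¬") else "¬" + literal

def resolve(clause_a, clause_b):
    """Resolve two clauses on the first pair of complementary literals found."""
    for literal in clause_a:
        if negate(literal) in clause_b:
            # drop the complementary pair and merge what remains
            return (clause_a - {literal}) | (clause_b - {negate(literal)})
    return None  # no complementary literals: resolution does not apply

# (P ∨ Q) and (¬P ∨ R) resolve to (Q ∨ R)
resolvent = resolve({"P", "Q"}, {"¬P", "R"})
# resolving P with ¬P yields the empty clause, i.e. a contradiction
empty = resolve({"P"}, {"¬P"})
```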
74
logic that allows us to express more complex ideas more succinctly than propositional logic
First Order Logic
75
Types of symbols used by first order logic:
Constant Symbols & Predicate Symbols
76
these symbols represent objects
Constant Symbols
77
these symbols are like relations or functions that take an argument and return a true or false value
Predicate Symbols
78
tool that can be used in first order logic to represent sentences without using a specific constant symbol
Universal Quantification
79
used to create sentences that are true for at least one x
Existential Quantification
80
Uncertainty can be represented as a number of events and the likelihood, or probability, of each of them happening.
Probability
81
Axioms in Probability
0 ≤ P(ω) ≤ 1 for every possible world ω, and the probabilities of all possible worlds sum to 1: Σ P(ω) = 1
82
the degree of belief in a proposition in the absence of any other evidence.
Unconditional Probability
83
the degree of belief in a proposition given some evidence that has already been revealed.
Conditional Probability
84
variable in probability theory with a domain of possible values that it can take on
Random Variable
85
the knowledge that the occurrence of one event does not affect the probability of the other event
Independence
86
commonly used in probability theory to compute conditional probability.
Bayes' Rule
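Bayes' Rule states P(a | b) = P(b | a) P(a) / P(b). A quick numeric sketch with made-up probabilities:

```python
def bayes(p_b_given_a, p_a, p_b):
    """P(a | b) = P(b | a) * P(a) / P(b)."""
    return p_b_given_a * p_a / p_b

# hypothetical values: P(rain) = 0.1, P(clouds) = 0.4, P(clouds | rain) = 0.8
p_rain_given_clouds = bayes(0.8, 0.1, 0.4)  # 0.8 * 0.1 / 0.4 = 0.2
```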
87
the likelihood of multiple events all occurring.
Joint Probability
88
data structure that represents the dependencies among random variables.
Bayesian Networks
89
Properties of Inference
* Query X: the variable for which we want to compute the probability distribution.
* Evidence variables E: one or more variables that have been observed for event e.
* Hidden variables Y: variables that aren’t the query and also haven’t been observed.
* The goal: calculate P(X | e).
90
a scalable method of calculating probabilities, but with a loss in precision.
approximate inference
91
technique of approximate inference.
Sampling
92
Sampling is inefficient because it discards samples that are inconsistent with the evidence. Likelihood weighting addresses this by incorporating the evidence into the sampling process.
Likelihood Weighting vs Sampling
93
* Start by fixing the values for evidence variables.
* Sample the non-evidence variables using conditional probabilities in the Bayesian network.
* Weight each sample by its likelihood: the probability of all the evidence occurring.
Likelihood Weighting Steps
94
an assumption that the current state depends on only a finite fixed number of previous states.
Markov Assumption
95
a sequence of random variables where the distribution of each variable follows the Markov assumption.
Markov Chain
96
a type of a Markov model for a system with hidden states that generate some observed event
Hidden Markov Model
97
choosing the best option from a set of possible options.
Optimization
98
search algorithm that maintains a single node and searches by moving to a neighboring node
Local Search
99
a function that we use to maximize the value of the solution.
Objective Function
100
a function that we use to minimize the cost of the solution
Cost Function
101
the state that is currently being considered by the function.
Current State
102
a state that the current state can transition to.
Neighbor State
103
In this algorithm, the neighbor states are compared to the current state, and if any of them is better, we set the current state to that neighbor state.
Hill Climbing
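The hill-climbing card can be sketched as a loop that keeps moving to the best neighbor; the 1-D objective function and neighbor definition below are invented for illustration:

```python
def hill_climb(start, objective, neighbors):
    """Move to the best neighbor until no neighbor improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=objective)
        if objective(best) <= objective(current):
            return current           # local maximum reached
        current = best

# hypothetical 1-D landscape with a single peak at x = 3
objective = lambda x: -(x - 3) ** 2
neighbors = lambda x: [x - 1, x + 1]
peak = hill_climb(0, objective, neighbors)  # climbs 0 -> 1 -> 2 -> 3
```

On a landscape with several peaks this same loop would stop at whichever local maximum it reaches first, which is what motivates the random-restart and simulated-annealing cards below.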
104
a state that has a higher value than its neighboring states
A local maximum
105
a state that has the highest value of all states in the state-space.
global maximum
106
Variants of Hill Climbing
Steepest-ascent, Stochastic, First-choice, Random-restart, Local Beam Search
107
Variant that chooses the highest-valued neighbor.
Steepest-ascent
108
Variant that chooses randomly from higher-valued neighbors.
Stochastic
109
Variant that chooses the first higher-valued neighbor
First-choice
110
Variant that conducts hill climbing multiple times
Random-restart
111
Variant that chooses the k highest-valued neighbors.
Local Beam Search
112
allows the algorithm to “dislodge” itself if it gets stuck in a local maximum.
Simulated Annealing
113
the task is to connect all points while traveling the shortest possible distance.
Traveling Salesman Problem
114
a family of problems that optimize a linear equation
Linear Programming
115
Components of Linear Programming
* Cost function that we want to minimize
* Constraints represented as a sum of variables that is either less than or equal to a value or precisely equal to that value
* Individual bounds on variables
116
a class of problems where variables need to be assigned values while satisfying some conditions.
Constraint Satisfaction
117
Constraint Satisfaction properties
* Set of variables
* Set of domains for each variable
* Set of constraints C
118
a constraint that must be satisfied in a correct solution.
Hard Constraint
119
a constraint that expresses which solution is preferred over others.
Soft Constraint
120
a constraint that involves only one variable.
Unary Constraint
121
a constraint that involves two variables.
Binary Constraint
122
when all the values in a variable’s domain satisfy the variable’s unary constraints.
Node consistency
123
when all the values in a variable’s domain satisfy the variable’s binary constraints
Arc consistency
124
a type of a search algorithm that takes into account the structure of a constraint satisfaction search problem.
Backtracking search
125
This algorithm will enforce arc-consistency after every new assignment of the backtracking search.
Maintaining Arc-Consistency algorithm
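The backtracking-search card can be sketched as a tiny map-coloring CSP; the variables, domains, and binary constraints below are invented for illustration:

```python
def backtrack(assignment, variables, domains, constraints):
    """Assign variables one at a time; undo and retry when a constraint fails.

    constraints: pairs of variables that must receive different values.
    """
    if len(assignment) == len(variables):
        return dict(assignment)      # every variable assigned: solution found
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        consistent = all(
            assignment[a] != assignment[b]
            for a, b in constraints
            if a in assignment and b in assignment
        )
        if consistent:
            result = backtrack(assignment, variables, domains, constraints)
            if result is not None:
                return result
        del assignment[var]          # backtrack: undo the assignment
    return None

# hypothetical map coloring: adjacent regions must get different colors
variables = ["A", "B", "C"]
domains = {v: ["red", "green"] for v in variables}
constraints = [("A", "B"), ("B", "C")]   # A-B and B-C are adjacent
solution = backtrack({}, variables, domains, constraints)
```

Maintaining arc-consistency (the last card) would additionally prune neighboring domains after each assignment, so fewer dead ends are explored; this sketch shows only the plain backtracking skeleton.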