Ch 6 Logical Agents Flashcards

1
Q

What is the definition of entailment?

A

Entailment means that one thing follows from another.

KB |= alpha iff
alpha is true in all worlds where KB is true

2
Q

What is the definition of a model?

A

A formally structured world with respect to which truth can be evaluated

m is a model of sentence alpha if alpha is true in m

M(alpha) is the set of all models of alpha

KB |= alpha iff
M(KB) is a subset of M(alpha)
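
A minimal illustrative sketch in Python of checking M(KB) is a subset of M(alpha) by enumerating truth assignments; the entails helper, the symbols P and Q, and the example KB are assumptions for illustration, not part of the card.

  # Sketch: decide KB |= alpha by enumerating every truth assignment (model).
  # Sentences are represented as Python predicates over an assignment dict.
  from itertools import product

  def entails(kb, alpha, symbols):
      """True iff alpha holds in every model where kb holds, i.e. M(KB) is a subset of M(alpha)."""
      for values in product([True, False], repeat=len(symbols)):
          model = dict(zip(symbols, values))
          if kb(model) and not alpha(model):
              return False      # a model of KB where alpha is false: no entailment
      return True

  # Example: KB = P AND (P => Q) entails Q
  kb    = lambda m: m["P"] and ((not m["P"]) or m["Q"])
  alpha = lambda m: m["Q"]
  print(entails(kb, alpha, ["P", "Q"]))   # True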

3
Q

What is inference?

What is the definition of soundness and completeness? Officially and in your own words?

A

Soundness means the inference procedure doesn't make any mistakes: it finds truths and only truths.

Whenever KB |-i alpha, it is also true that KB |= alpha.

Completeness means that if there is an answer there, the procedure will find it.

Whenever KB |= alpha, it is also true that KB |-i alpha.

4
Q

What is the definition of syntax and semantics?

A

Syntax is the structure of sentences: it specifies whether a string of symbols is even a well-formed sentence of the language.

Semantics is the meaning of sentences: it defines the truth of each sentence in each particular world (model).

5
Q

When is a sentence valid?

A

It is true in ALL models.

Examples:
True, A OR !A
A => A
(A AND (A => B)) => B
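
A quick way to confirm these are valid is to enumerate their truth tables; a small illustrative Python sketch (the is_valid helper is an assumption, not a library function):

  # Sketch: a sentence is valid iff it is true under every truth assignment.
  from itertools import product

  def is_valid(sentence, symbols):
      return all(sentence(dict(zip(symbols, vals)))
                 for vals in product([True, False], repeat=len(symbols)))

  print(is_valid(lambda m: m["A"] or not m["A"], ["A"]))            # A OR !A
  print(is_valid(lambda m: (not m["A"]) or m["A"], ["A"]))          # A => A
  print(is_valid(lambda m: (not (m["A"] and ((not m["A"]) or m["B"]))) or m["B"],
                 ["A", "B"]))                                       # (A AND (A => B)) => B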

6
Q

What is the deduction theorem?

A

KB |= alpha IFF

KB => alpha is valid

7
Q

When is a sentence satisfiable?

When is a sentence unsatisfiable?

A

It is true in some model.

It is true in no models.

8
Q

How is satisfiability connected to inference?

A

KB |= alpha

IFF

KB AND !alpha is unsatisfiable.

There is no world where my knowledge base is true and alpha is not true.

(prove alpha by reductio ad absurdum)
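
A minimal sketch of this refutation check in Python, reusing truth-table enumeration; the helper names are illustrative assumptions, and a real system would hand KB AND !alpha to a SAT solver instead.

  # Sketch: KB |= alpha iff KB AND (NOT alpha) is unsatisfiable (proof by refutation).
  from itertools import product

  def satisfiable(sentence, symbols):
      return any(sentence(dict(zip(symbols, vals)))
                 for vals in product([True, False], repeat=len(symbols)))

  def entails_by_refutation(kb, alpha, symbols):
      return not satisfiable(lambda m: kb(m) and not alpha(m), symbols)

  kb    = lambda m: m["P"] and ((not m["P"]) or m["Q"])   # P AND (P => Q)
  alpha = lambda m: m["Q"]
  print(entails_by_refutation(kb, alpha, ["P", "Q"]))     # True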

9
Q

What is the definition of logically equivalent?

A

Two sentences are logically equivalent iff they are true in the same models.

alpha ≡ beta IFF

alpha |= beta
AND
beta |= alpha

10
Q

When is an inference false?

A

When there are models of my knowledge base that fall outside the models of what I'm trying to infer, i.e., some world makes the KB true but alpha false (M(KB) is not a subset of M(alpha)).

11
Q

What is the analogy with multiplication and zero that relates to entailment?

A

alpha |= beta

IFF

M(alpha) is a subset of M(beta).

alpha is a stronger assertion than beta; it rules out more possible worlds.

Think of x = 0 versus xy = 0: x = 0 is the stronger statement, since any model where x is zero makes xy = 0 regardless of what y is, so x = 0 entails xy = 0.

12
Q

What is Horn form?

A

KB = a conjunction of Horn clauses.

A Horn clause is either a single proposition symbol, or (a conjunction of symbols) => a single symbol.
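
One common way to hold a Horn-form KB in code is as (premises, conclusion) rules, with a bare symbol being a rule with no premises; this encoding and the example clauses are illustrative assumptions.

  # Sketch: the Horn-form KB  C AND (B => A) AND (C AND D => B)  as Python data.
  kb_rules = [
      ([], "C"),           # fact: C
      (["B"], "A"),        # B => A
      (["C", "D"], "B"),   # C AND D => B
  ]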

13
Q

What’s the definition of a proof?

A

A chain of conclusions that leads to the desired goal.

14
Q

When would you want to use a forward chaining versus a backward chaining algorithm to do proofs in propositional logic?

A

FC is data-driven (I know a lot of propositions), e.g., unconscious processing, object recognition, routine decisions.

I may do lots of work that is irrelevant to the goal.

BC is goal-driven, appropriate for problem solving.

Complexity of BC is often MUCH LESS than linear in the size of the KB.
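
A minimal forward-chaining sketch over definite clauses (in the spirit of the textbook's PL-FC-Entails); the (premises, conclusion) encoding and the example rule set are illustrative assumptions.

  # Sketch: repeatedly fire any rule whose premises are all known,
  # until the query is derived or nothing new can be inferred.
  def forward_chain(rules, facts, query):
      inferred = set(facts)
      changed = True
      while changed:
          if query in inferred:
              return True
          changed = False
          for premises, conclusion in rules:
              if conclusion not in inferred and all(p in inferred for p in premises):
                  inferred.add(conclusion)
                  changed = True
      return query in inferred

  rules = [(["P"], "Q"), (["L", "M"], "P"), (["B", "L"], "M"),
           (["A", "P"], "L"), (["A", "B"], "L")]
  print(forward_chain(rules, {"A", "B"}, "Q"))   # True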

15
Q

What’s the biggest issue to avoid with backward chaining? Why is it a factor? How do you avoid it?

A

Avoid looping. It’s a factor because BC is recursive. You can avoid it by storing subgoals and checking whether the goal you’re about to work on is already on the goal stack.

It’s also key to avoid repeated work: check if a new subgoal

  1. has already been proved true or
  2. has already failed.
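
A minimal backward-chaining sketch showing both ideas from this card: a stack of open goals to detect loops, and caches of goals that have already been proved or have already failed; the encoding matches the FC sketch above and is an illustrative assumption.

  # Sketch: prove a goal by finding a rule that concludes it and recursively
  # proving the premises, with loop detection and proved/failed caches.
  def backward_chain(rules, facts, goal, stack=None, proved=None, failed=None):
      stack  = set() if stack  is None else stack
      proved = set() if proved is None else proved
      failed = set() if failed is None else failed
      if goal in facts or goal in proved:
          return True
      if goal in failed or goal in stack:   # already failed, or loop detected
          return False
      stack.add(goal)
      for premises, conclusion in rules:
          if conclusion == goal and all(
                  backward_chain(rules, facts, p, stack, proved, failed)
                  for p in premises):
              stack.discard(goal)
              proved.add(goal)
              return True
      stack.discard(goal)
      failed.add(goal)     # caching failures this simply is a sketch-level simplification
      return False

  rules = [(["P"], "Q"), (["L", "M"], "P"), (["B", "L"], "M"),
           (["A", "P"], "L"), (["A", "B"], "L")]
  print(backward_chain(rules, {"A", "B"}, "Q"))   # True
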
16
Q

Why is backward chaining fast? Why is forward chaining potentially wasteful?

A

Because I only look at rules that are potentially useful in BC.

In FC, …all for the want of a nail: everything gets rechecked for one little change. But if things are data-driven and there are a lot of quick changes, FC is the way to go.

17
Q

What are 3 reasons why KBs containing only definite clauses are interesting?

A
  1. every definite clause can be written as an implication whose premise is a conjunction of positive literals and whose conclusion is a single positive literal.
  2. Inference with Horn clauses can be done through forward and backward chaining.
  3. Deciding entailment with Horn clauses can be done in time that is linear in the size of the KB.
18
Q

What is a definite clause?
What is a Horn clause?
What is a goal clause?

What is the relationship between them?

A

Definite clause = a disjunction of literals of which EXACTLY ONE is positive.

Horn clause = a disjunction of literals of which AT MOST ONE is positive.

Goal clause = a clause with no positive literals.

All definite clauses are Horn clauses, and so are all goal clauses.
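
A tiny illustrative classifier makes the relationship concrete, assuming a clause is given as a list of literals where a leading "-" marks negation:

  # Sketch: classify a clause (a disjunction of literals) by counting positives.
  def classify(clause):
      positives = sum(1 for lit in clause if not lit.startswith("-"))
      if positives == 1:
          return "definite clause (hence Horn)"
      if positives == 0:
          return "goal clause (hence Horn)"
      return "not a Horn clause"

  print(classify(["-A", "-B", "C"]))   # definite clause (hence Horn)
  print(classify(["-A", "-B"]))        # goal clause (hence Horn)
  print(classify(["-A", "B", "C"]))    # not a Horn clause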

19
Q

What is the definition of monotonicity?

A

the set of entailed sentences can only INCREASE as information is added to the knowledge base.

If KB |= alpha, then (KB AND beta) |= alpha.

New knowledge can bring new conclusions, but it can’t invalidate what I already know; the conclusions of rules follow regardless of what else is in the knowledge base.

20
Q

How does the resolution rule lead to a complete inference procedure for all propositional logic?

A

Because every propositional logic sentence can be converted into CNF, which is the form resolution needs; applying resolution by refutation to the CNF of KB AND !alpha then derives the empty clause whenever KB |= alpha.
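
A sketch of one resolution step on CNF clauses, represented here as frozensets of literals with "-" marking negation (an illustrative encoding): resolving on a complementary pair yields the resolvent, and deriving the empty clause from the CNF of KB AND !alpha proves KB |= alpha.

  # Sketch: resolve two clauses on every complementary pair of literals.
  def resolve(c1, c2):
      resolvents = []
      for lit in c1:
          comp = lit[1:] if lit.startswith("-") else "-" + lit
          if comp in c2:
              resolvents.append(frozenset((c1 - {lit}) | (c2 - {comp})))
      return resolvents

  print(resolve(frozenset({"-P", "Q"}), frozenset({"P"})))   # [frozenset({'Q'})]
  print(resolve(frozenset({"Q"}), frozenset({"-Q"})))        # [frozenset()]  <- empty clause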