Week 3 Flashcards

1
Q

Key assumption of naive Bayes

A

Each effect depends only on the cause

<=> the effects don't affect each other (they are conditionally independent given the cause)
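
In symbols (the standard naive Bayes factorisation):

P(Cause, E1, ..., En) = P(Cause) × Π_i P(Ei | Cause)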

2
Q

Why is conditional independence assumed for naive Bayes?

A

To keep the probability table linear in the number of effects

Without this assumption, the P table grows exponentially as new effects are introduced
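
A quick sanity check of the table sizes in Python (Boolean variables assumed; the function names are just for illustration):

# Independent parameters needed for one Boolean cause and n Boolean effects.
def full_joint_params(n):
    # One probability per joint assignment, minus 1 because they must sum to 1.
    return 2 ** (n + 1) - 1

def naive_bayes_params(n):
    # P(Cause) needs 1 number; each P(E_i | Cause) needs 2 (one per cause value).
    return 1 + 2 * n

for n in (5, 10, 20):
    print(n, full_joint_params(n), naive_bayes_params(n))
    # 5: 63 vs 11, 10: 2047 vs 21, 20: 2097151 vs 41 -> exponential vs linear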

3
Q

A Bayesian network can't

A

Have any cycles

4
Q

Graph of Bayesian Network is

A

Directed Acyclic Graph (DAG)

5
Q

Probability of a particular assignment of values (states) to the variables
On a Bayesian network

A

The probability of a full assignment of values is the product of each variable's conditional probability given its parents:

P(x1, ..., xn) = Π_i P(xi | parents(Xi))
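
A minimal Python sketch of that product, using a made-up two-node Rain -> WetGrass network (the numbers are illustrative only, not from the lecture):

# P(x1, ..., xn) = product over i of P(xi | parents(Xi))
P_rain = 0.2                                # P(Rain = true)
P_wet_given_rain = {True: 0.9, False: 0.2}  # P(WetGrass = true | Rain)

def joint(rain, wet):
    p_r = P_rain if rain else 1 - P_rain
    p_w = P_wet_given_rain[rain] if wet else 1 - P_wet_given_rain[rain]
    return p_r * p_w

print(joint(True, True))   # 0.2 * 0.9 = 0.18
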
6
Q

Local semantics of a node in a Bayesian network

A

A node X is independent of its non-descendants given its parents

7
Q

Markov Blanket

A

A node X is conditionally independent of all others given its Markov Blanket (parents, children, children’s parents)

8
Q

How to compress Markov blankets further

A

Boolean functions, e.g. NorthAmerican <=> Canadian v US v Mexican (prior knowledge)

Numerical relationships, e.g. (image)

9
Q

Simple queries

A

Queries that ask for the posterior marginal of a single variable, P(Xi | e)

10
Q

Conjunctive Queries

A

Queries over a conjunction of variables, e.g. P(Xi, Xj | e) = P(Xi | e) P(Xj | Xi, e), so they can be answered as a sequence of simple queries

11
Q

Sensitivity Analysis

A

Identifying which probability values are most critical, i.e. which CPT entries the query result is most sensitive to

12
Q

4 ways to compute posterior marginal

A

Enumeration

Rejection sampling

Likelihood weighting

Gibbs Sampling

13
Q

Inference by enumeration: pro and con

A

Pro: deterministic (gives the exact answer)

Con: inefficient (time grows exponentially with the number of variables, and sub-expressions are recomputed repeatedly)

14
Q

Variable elimination for enumeration

A

Evaluate the enumeration tree bottom-up, storing intermediate results (factors) so repeated sub-computations are avoided

15
Q

Time and space cost of variable elimination

A

For singly connected networks (polytrees): linear in the size of the network (number of CPT entries)

For multiply connected networks: exponential in the worst case

16
Q

Exact inference is

A

#P-hard

(and therefore NP-hard)

17
Q

NP Hard

A

Nondeterministic polynomial time hard

At least as hard as the hardest problems in NP (this is what defines the class of NP-hard problems)

18
Q

What is "#P"

A

#P is the complexity class concerned with counting the number of solutions to a problem

Related to NP:

NP (and NP-hard) is about the time needed to find a solution, whereas #P is about counting how many solutions there are

19
Q

LLN

A

Law of Large Numbers: as the number of samples grows, the sample average converges to the true expected value, so sampling-based estimates converge to the true probabilities

20
Q

Why Rejection sampling over prior sampling

A

Prior sampling has no notion of conditioning on evidence

21
Q

How does rejection sampling work

A

We do prior sampling and then reject the samples in which the evidence e does not hold

22
Q

Likelihood weighting

A

Fix the evidence variables to their observed values and sample only the non-evidence variables, weighting each sample by the likelihood it assigns to the evidence (so no samples are wasted, unlike rejection sampling)

23
Q

Summarise Gibbs sampling

A

The algorithm wanders randomly around the state space, flipping (resampling) one non-evidence variable at a time while keeping the evidence variables fixed

24
Q

Steps for Gibbs sampling

A

Begin with the query, with the evidence variables fixed to their observed values

Randomly initialise the non-evidence variables

With the entire state now set, sample the first non-evidence variable; if this causes it to change value, update the state and save the sample

Then move to the next non-evidence variable

Repeat until the desired sample size is reached

25
Q

Gibbs sampling pseudo code

A
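
(No pseudocode was preserved on this card.) A minimal Python sketch, assuming the classic Cloudy/Sprinkler/Rain/WetGrass network with made-up CPT values (structure and numbers are illustrative, not from the lecture):

import random

# Structure: Cloudy -> Sprinkler, Cloudy -> Rain, (Sprinkler, Rain) -> WetGrass.
parents = {"Cloudy": [], "Sprinkler": ["Cloudy"], "Rain": ["Cloudy"],
           "WetGrass": ["Sprinkler", "Rain"]}
# CPTs: key = tuple of parent values, value = P(var = True | parents).
cpt = {"Cloudy":    {(): 0.5},
       "Sprinkler": {(True,): 0.1, (False,): 0.5},
       "Rain":      {(True,): 0.8, (False,): 0.2},
       "WetGrass":  {(True, True): 0.99, (True, False): 0.90,
                     (False, True): 0.90, (False, False): 0.01}}

def prob(var, value, state):
    p_true = cpt[var][tuple(state[p] for p in parents[var])]
    return p_true if value else 1.0 - p_true

def joint(state):
    result = 1.0
    for var in parents:
        result *= prob(var, state[var], state)
    return result

def gibbs_ask(query, evidence, n_samples=10000):
    non_evidence = [v for v in parents if v not in evidence]
    state = dict(evidence)
    for v in non_evidence:                 # randomly initialise non-evidence vars
        state[v] = random.random() < 0.5
    count_true = 0
    for _ in range(n_samples):
        for v in non_evidence:
            # Sample v from P(v | everything else) via the joint; only the
            # Markov-blanket factors differ between the two joints, so this is
            # equivalent to sampling v given its Markov blanket.
            state[v] = True
            p_t = joint(state)
            state[v] = False
            p_f = joint(state)
            state[v] = random.random() < p_t / (p_t + p_f)
        count_true += state[query]
    return count_true / n_samples

# Estimate P(Rain = true | Sprinkler = true, WetGrass = true)
print(gibbs_ask("Rain", {"Sprinkler": True, "WetGrass": True}))
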
26
Q

Chain rule

A

P(x1, ..., xn) = P(xn | xn-1, ..., x1) P(xn-1 | xn-2, ..., x1) ... P(x1) = Π_i P(xi | xi-1, ..., x1)

27
Q

Locally structured system

A

Each subcomponent interacts directly with only a bounded number of other components

28
Q

Leak node

A

If some causes of an effect may not be modelled explicitly, a leak node can be added to represent the 'miscellaneous causes'

29
Q

Nonparametric representation

A

(For continuous variables) Define the conditional distribution implicitly with a collection of instances, each containing specific values of (or ranges over) all the variables

30
Q

Hybrid Bayesian network

A

Network with both discrete and continuous RVs

31
Q

Difference between Probit and logit

A

The link function!
For Probit, it is the CDF of the standard normal distribution, Φ(x).
For Logit, it is the logistic (sigmoid) function: 1 / (1 + e^(-x))
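
A small Python check of the two link functions (standard definitions, for illustration only):

from math import erf, exp, sqrt

def probit(x):
    # CDF of the standard normal distribution, Phi(x)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def logit_link(x):
    # Logistic (sigmoid) function
    return 1.0 / (1.0 + exp(-x))

print(probit(0.0), logit_link(0.0))  # both 0.5 at x = 0; the curves have similar shapes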

32
Q

Total set of variables in a Bayesian network, and what are they?

A

X is the query variable
E is the set of evidence variables
Y is the set of hidden (non-evidence, non-query) variables

Together they make up the complete set of variables of the network

33
Q

Enumeration algorithm pseudocode

A
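
(No pseudocode was preserved on this card.) A minimal Python sketch of ENUMERATION-ASK, assuming a made-up two-node Rain -> WetGrass network (illustrative values only):

parents = {"Rain": [], "WetGrass": ["Rain"]}
order = ["Rain", "WetGrass"]                       # topological order
cpt = {"Rain": {(): 0.2},
       "WetGrass": {(True,): 0.9, (False,): 0.2}}  # P(var = True | parents)

def prob(var, value, assignment):
    p_true = cpt[var][tuple(assignment[p] for p in parents[var])]
    return p_true if value else 1.0 - p_true

def enumerate_all(variables, assignment):
    if not variables:
        return 1.0
    first, rest = variables[0], variables[1:]
    if first in assignment:                        # query or evidence value already set
        return prob(first, assignment[first], assignment) * enumerate_all(rest, assignment)
    total = 0.0
    for value in (True, False):                    # sum out a hidden variable
        extended = dict(assignment)
        extended[first] = value
        total += prob(first, value, extended) * enumerate_all(rest, extended)
    return total

def enumeration_ask(query, evidence):
    unnormalised = {}
    for value in (True, False):
        assignment = dict(evidence)
        assignment[query] = value
        unnormalised[value] = enumerate_all(order, assignment)
    alpha = 1.0 / sum(unnormalised.values())       # normalisation constant
    return {v: alpha * p for v, p in unnormalised.items()}

print(enumeration_ask("Rain", {"WetGrass": True}))  # ~{True: 0.53, False: 0.47}
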
34
Q

Calculating α for the posterior marginal?

A

α is the normalisation constant: when you have the final (unnormalised) vector, divide each entry by the sum of the entries so the two probabilities sum to 1, e.g. <0.2, 0.6> normalises to <0.25, 0.75>

35
Q

How to interpret CPT with random numbers

A

E.g. 0.1 for A = True

Any random number less than 0.1 maps to True, as this gives a 10% chance of True

36
Q

How to do prior sampling

A

With a random number generator
Starting from the top, moving from left to right on each row
If the random number < P, then the variable is set to True
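
A minimal Python sketch of prior sampling, on a made-up Rain -> WetGrass network (values are illustrative only):

import random

parents = {"Rain": [], "WetGrass": ["Rain"]}
order = ["Rain", "WetGrass"]                       # topological order, top of network first
cpt = {"Rain": {(): 0.2},
       "WetGrass": {(True,): 0.9, (False,): 0.2}}  # P(var = True | parents)

def prior_sample():
    sample = {}
    for var in order:
        p_true = cpt[var][tuple(sample[p] for p in parents[var])]
        sample[var] = random.random() < p_true     # random number < P  =>  True
    return sample

samples = [prior_sample() for _ in range(10000)]
print(sum(s["WetGrass"] for s in samples) / len(samples))  # ≈ P(WetGrass = true) = 0.34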

37
Q

How to do rejection sampling

A

Same as prior sampling, BUT
If the evidence values are not as in the query, discard the sample. E.g.:
For P(a | b, c), if a sample has not-b and/or not-c, reject it
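
A minimal Python sketch, reusing the same made-up Rain -> WetGrass network (illustrative values), to estimate P(Rain | WetGrass = true):

import random

parents = {"Rain": [], "WetGrass": ["Rain"]}
order = ["Rain", "WetGrass"]
cpt = {"Rain": {(): 0.2},
       "WetGrass": {(True,): 0.9, (False,): 0.2}}

def prior_sample():
    sample = {}
    for var in order:
        p_true = cpt[var][tuple(sample[p] for p in parents[var])]
        sample[var] = random.random() < p_true
    return sample

def rejection_sampling(query, evidence, n=10000):
    # Keep only the samples in which the evidence holds; reject the rest.
    kept = [s for s in (prior_sample() for _ in range(n))
            if all(s[var] == val for var, val in evidence.items())]
    return sum(s[query] for s in kept) / len(kept)

print(rejection_sampling("Rain", {"WetGrass": True}))  # ≈ 0.53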

38
Q

How to do likelihood weighting

A

For P(T | h, s):

h and s are fixed evidence variables.
Therefore, while sampling, whenever an evidence variable is reached, multiply the sample's weight by the probability of the value we are fixing (given the parents sampled so far)

At the end, each sample has an assignment of values and a weight associated with it
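
A minimal Python sketch, again on a made-up Rain -> WetGrass network (illustrative values), estimating P(Rain | WetGrass = true):

import random

parents = {"Rain": [], "WetGrass": ["Rain"]}
order = ["Rain", "WetGrass"]
cpt = {"Rain": {(): 0.2},
       "WetGrass": {(True,): 0.9, (False,): 0.2}}

def weighted_sample(evidence):
    sample, weight = dict(evidence), 1.0
    for var in order:
        p_true = cpt[var][tuple(sample[p] for p in parents[var])]
        if var in evidence:
            # Evidence variables stay fixed; multiply the weight by the
            # probability of the observed value given the parents sampled so far.
            weight *= p_true if evidence[var] else 1.0 - p_true
        else:
            sample[var] = random.random() < p_true
    return sample, weight

def likelihood_weighting(query, evidence, n=10000):
    totals = {True: 0.0, False: 0.0}
    for _ in range(n):
        sample, weight = weighted_sample(evidence)
        totals[sample[query]] += weight
    return totals[True] / (totals[True] + totals[False])

print(likelihood_weighting("Rain", {"WetGrass": True}))  # ≈ 0.53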