Bayesian inference Flashcards

1
Q

Define the prior and posterior distributions, and give a formula for the posterior distribution π(θ|x)

A

The prior distribution π(θ) of θ is the probability distribution of θ before observing the data. It represents our beliefs or uncertainty about the parameter before collecting any data. After observing data X = x, we update the distribution of θ to obtain the posterior distribution π(θ|x), representing our updated beliefs in light of seeing x. By Bayes' theorem, π(θ|x) = f(x|θ) π(θ) / ∫ f(x|θ′) π(θ′) dθ′ ∝ f(x|θ) π(θ), where f(x|θ) is the likelihood.
pg 31
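
A minimal sketch of a conjugate example (not from the card; the Beta(2, 2) prior and the data are hypothetical, and scipy is assumed): with a Beta(a, b) prior on θ and x successes in n Bernoulli trials, the posterior π(θ|x) is Beta(a + x, b + n − x).

```python
from scipy import stats

# Hypothetical numbers: Beta(2, 2) prior, x = 7 successes in n = 10 trials
a, b = 2, 2
n, x = 10, 7

# Conjugate update: pi(theta | x) is Beta(a + x, b + n - x)
posterior = stats.beta(a + x, b + n - x)
print(posterior.mean())          # posterior mean of theta
print(posterior.interval(0.95))  # central 95% credible interval
```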

2
Q

What is an improper prior?

A

A non-negative prior function whose integral over the parameter space is not finite, so it is not a probability density.

3
Q

Define a Jeffreys prior. Is it always proper?

A

A prior proportional to sqrt[ det( I(θ) ) ], where I(θ) is the Fisher information matrix.
No, it is not always proper.
pg 34
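
A worked example (sympy assumed; the Bernoulli model is chosen just for illustration): for Bernoulli(θ) the Fisher information is 1/(θ(1−θ)), so the Jeffreys prior is proportional to θ^(−1/2)(1−θ)^(−1/2), i.e. Beta(1/2, 1/2), which happens to be proper; for a normal location parameter the Jeffreys prior is flat on ℝ and hence improper.

```python
import sympy as sp

theta, x = sp.symbols('theta x', positive=True)

# Bernoulli(theta) log-density
log_f = x * sp.log(theta) + (1 - x) * sp.log(1 - theta)

# Fisher information I(theta) = -E[ d^2/dtheta^2 log f ], using E[x] = theta
info = sp.simplify(-sp.diff(log_f, theta, 2).subs(x, theta))
jeffreys = sp.sqrt(info)

print(info)      # equals 1/(theta*(1 - theta))
print(jeffreys)  # proportional to theta**(-1/2) * (1 - theta)**(-1/2), i.e. Beta(1/2, 1/2)
```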

4
Q

Define a loss function

A

A non-negative function L(a, θ) giving the cost of taking action a when the true parameter is θ.

5
Q

Define the risk function for loss function L and decision rule δ

A

R(δ, θ) = E_θ[ L(δ(X), θ) ] = ∫ L(δ(x), θ) f(x|θ) dx

pg 37
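
A quick numerical sketch (numpy assumed; the model and numbers are purely illustrative): for X1, …, Xn i.i.d. N(θ, σ²) and δ(X) the sample mean, the risk under squared-error loss is σ²/n, which a Monte Carlo average of the loss reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, n, reps = 1.5, 2.0, 20, 200_000  # hypothetical values

# Estimate R(delta, theta) = E_theta[ (mean(X) - theta)^2 ] by simulation
X = rng.normal(theta, sigma, size=(reps, n))
losses = (X.mean(axis=1) - theta) ** 2
print(losses.mean())   # Monte Carlo estimate of the risk
print(sigma**2 / n)    # analytic value: 0.2
```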

6
Q

When is a decision rule δ inadmissible?

A

A decision rule δ is inadmissible if there exists another rule δ′ with R(δ′, θ) ≤ R(δ, θ) for all θ ∈ Θ and R(δ′, θ) < R(δ, θ) for at least one θ.
pg 38

7
Q

Define the π-Bayes risk for decision rule δ

A

r(π, δ) = ∫ R(δ, θ) π(θ) dθ = E_π[ R(δ, θ) ], the risk averaged over the prior π.
pg 38

The estimator that minimizes this risk is called the Bayes estimator

8
Q

What is the posterior risk?

A

The average loss under the posterior distribution given the observation X = x: ∫ L(δ(x), θ) π(θ|x) dθ.
pg 39

9
Q

Does minimizing the posterior risk also minimize the π-Bayes risk?

A

Yes: a rule that minimizes the posterior risk for every observation x also minimizes the π-Bayes risk (proof on pg 39).
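
A small numerical sketch (numpy and scipy assumed; the Beta(9, 5) posterior comes from the hypothetical Beta(2, 2)/Binomial example above): under squared-error loss the posterior risk ∫ (a − θ)² π(θ|x) dθ is minimized at a = E[θ|x], so the posterior mean is the Bayes estimator in that case.

```python
import numpy as np
from scipy import stats

posterior = stats.beta(9, 5)  # hypothetical posterior pi(theta | x)

# Posterior risk of action a under squared-error loss, by a Riemann sum
theta_grid = np.linspace(1e-6, 1 - 1e-6, 20_000)
pdf = posterior.pdf(theta_grid)
dtheta = theta_grid[1] - theta_grid[0]

def posterior_risk(a):
    return np.sum((a - theta_grid) ** 2 * pdf) * dtheta

actions = np.linspace(0.01, 0.99, 99)
best = actions[np.argmin([posterior_risk(a) for a in actions])]
print(best)              # ~ 0.64, the grid point closest to the posterior mean
print(posterior.mean())  # 9/14 ~ 0.643
```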

10
Q

What is the minimax risk?

A

The minimax risk is defined as the infimum (‘min’) over all decision rules δ of the maximal (‘max’) risk over the whole parameter space Θ: inf_δ sup_{θ ∈ Θ} R(δ, θ).
pg 40

11
Q

What happens if a Bayes rule δ has constant risk in θ?

A

If a (unique) Bayes rule δ has constant risk in θ then it is (unique) minimax.
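
A classical illustration of this fact (numpy and scipy assumed; n = 16 is a hypothetical choice): for X ~ Binomial(n, θ) under squared-error loss, the Bayes estimator for the Beta(√n/2, √n/2) prior is δ(X) = (X + √n/2)/(n + √n), and its risk equals 1/(4(√n + 1)²) for every θ, so it is minimax.

```python
import numpy as np
from scipy import stats

n = 16
a = np.sqrt(n) / 2  # Beta(a, a) prior with a = sqrt(n)/2

def risk(theta):
    # Exact risk E_theta[ (delta(X) - theta)^2 ] of delta(X) = (X + a)/(n + 2a)
    x = np.arange(n + 1)
    delta = (x + a) / (n + 2 * a)
    return np.sum(stats.binom.pmf(x, n, theta) * (delta - theta) ** 2)

print([round(risk(t), 6) for t in (0.1, 0.3, 0.5, 0.9)])  # the same value at every theta
print(1 / (4 * (np.sqrt(n) + 1) ** 2))                    # = 0.01 for n = 16
```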

12
Q

What is a uniformly minimum variance unbiased estimator?

A

An unbiased estimator ĝ(X) of g(θ) such that var_θ(ĝ) ≤ var_θ(g̃) for every other unbiased estimator g̃(X) of g(θ) and for all θ.

13
Q

What does it mean to say a statistic is complete for θ?

A

T is complete for θ if, for every (measurable) function g, E_θ[ g(T) ] = 0 for all θ implies P_θ( g(T) = 0 ) = 1 for all θ.

pg 43

14
Q

Give an example of a complete statistic for a k-parameter exponential family

A

T = ( T1(X), …, Tk(X) ), the natural statistic of the family (assuming the natural parameter space contains an open set).

15
Q

Can a sufficient, complete statistic be minimal?

A

If a sufficient statistic T is complete, then it is minimal, but not all minimal sufficient statistics are complete

16
Q

Define ancillary statistic

A

A statistic is an ancillary statistic if its distribution does not depend on the parameter θ.

17
Q

State Basu’s theorem

A

If T is a complete sufficient statistic for θ, then any ancillary statistic V is independent of T.

18
Q

State the Lehmann–Scheffé theorem.

A

Let T be a sufficient and complete statistic for θ and g̃ be an unbiased estimator of g(θ) with var_θ(g̃) < ∞ for all θ ∈ Θ. If ĝ(T(X)) = E[ g̃(X) | T(X) ],
then ĝ is the unique uniformly minimum variance unbiased estimator (UMVUE) of g(θ).
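
A standard worked example of the theorem (numpy assumed; θ = 2 and n = 5 are hypothetical): for X1, …, Xn i.i.d. Poisson(θ), T = ΣXi is complete and sufficient, 1{X1 = 0} is unbiased for g(θ) = e^(−θ), and ĝ(T) = E[ 1{X1 = 0} | T ] = ((n − 1)/n)^T, so by Lehmann–Scheffé ĝ is the UMVUE of e^(−θ).

```python
import numpy as np

rng = np.random.default_rng(1)
theta, n, reps = 2.0, 5, 200_000  # hypothetical values

X = rng.poisson(theta, size=(reps, n))
T = X.sum(axis=1)

naive = (X[:, 0] == 0).astype(float)  # unbiased but crude: 1{X1 = 0}
umvue = ((n - 1) / n) ** T            # E[ 1{X1 = 0} | T ] = ((n-1)/n)^T

print(np.exp(-theta))               # target g(theta) = e^(-theta) ~ 0.135
print(naive.mean(), umvue.mean())   # both are approximately unbiased
print(naive.var(), umvue.var())     # the Rao-Blackwellized estimator has much smaller variance
```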

19
Q

What is the likelihood principle?

A

The likelihood principle says that all inference about θ should be based on the likelihood function only, i.e. if two observations (possibly from different experiments) give rise to proportional likelihood functions for θ, then they should lead to the same inferences about θ (the same posterior, the same MLE, …).
Bayesian inference respects this principle, since the posterior depends on the data only through the likelihood.

20
Q

How can we prove T is not complete?

A

Find a function g, not identically zero, such that E_θ[ g(T) ] = 0 for all θ but P_θ( g(T) = 0 ) < 1, i.e. g(T) is not almost surely 0.
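
For instance (a standard example; the numerical check below assumes numpy and uses illustrative values of θ): if X1, X2 are i.i.d. N(θ, 1) and T = (X1, X2), then g(T) = X1 − X2 satisfies E_θ[ g(T) ] = 0 for every θ but is not almost surely 0, so T is not complete.

```python
import numpy as np

rng = np.random.default_rng(2)
for theta in (-3.0, 0.0, 5.0):  # illustrative parameter values
    X = rng.normal(theta, 1.0, size=(100_000, 2))
    g = X[:, 0] - X[:, 1]       # g(T) = X1 - X2
    print(round(g.mean(), 3), (g == 0).mean())  # mean ~ 0 for every theta, yet g is not a.s. 0
```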