Bayesian Flashcards

1
Q

Marginal likelihood

A

Likelihood of the observed data averaged over all possible values of theta, weighted by the prior. It tells us how well the model as a whole predicted the data. Values of theta whose own likelihood is higher than the marginal likelihood get a boost in plausibility; lower ones get a penalty.
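
In symbols (standard notation, assumed here rather than quoted from the card), with data y and parameter theta:

```latex
p(y) = \int p(y \mid \theta)\, p(\theta)\, d\theta
```

Each theta whose likelihood p(y | theta) exceeds this prior-weighted average gains posterior plausibility.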

2
Q

Bayesian cycle

A

Start with prior beliefs and update them with the data: values of theta that predicted the data better than average receive a boost in plausibility, and values that predicted worse than average receive a penalty. The result is the posterior beliefs, which serve as the prior for the next cycle of data.
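
A minimal sketch of one such cycle via grid approximation (the data and prior are assumed for illustration; the card itself contains no code):

```python
import numpy as np
from scipy.stats import binom

theta = np.linspace(0, 1, 1001)              # candidate values of theta
prior = np.full_like(theta, 1 / len(theta))  # uniform prior beliefs

k, n = 8, 10                                 # data: 8 successes in 10 trials
likelihood = binom.pmf(k, n, theta)          # how well each theta predicted the data

marginal = np.sum(likelihood * prior)        # average predictive success
posterior = prior * likelihood / marginal    # updated (posterior) beliefs

# theta values that predicted better than average (likelihood > marginal)
# end up with more posterior than prior plausibility.
print(f"boosted: {np.mean(posterior > prior):.0%} of the grid")
```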

3
Q

95% credible interval

A

Under this model, there is a 95% probability that the parameter value falls in that range, given the data and the prior; equivalently, 95% of the posterior mass lies inside the interval.
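
A sketch of reading off such an interval from a Beta posterior (Beta(9, 3) corresponds to the assumed 8-successes-in-10-trials example above with a flat prior; the numbers are not from the card):

```python
from scipy.stats import beta

posterior = beta(8 + 1, 2 + 1)                # Beta(9, 3) posterior
lower, upper = posterior.ppf([0.025, 0.975])  # central 95% interval
print(f"95% credible interval: [{lower:.3f}, {upper:.3f}]")
```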

4
Q

Hypothesis testing

A

Model both H0 and H1: H0 says there is no effect, H1 says there is an effect. Parameter estimation implicitly operates under H1 (it assumes an effect exists); testing makes the comparison between the two hypotheses explicit.

5
Q

Prior odds in hypothesis testing

A

How plausible one hypothesis is relative to the other, before seeing the data. Usually the hypotheses are treated as equally likely (prior probability 0.5 each), which gives prior odds of 1.
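
As a worked line (standard notation, not spelled out on the card):

```latex
\text{prior odds} = \frac{P(H_1)}{P(H_0)} = \frac{0.5}{0.5} = 1
```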

6
Q

Predictive updating factor in hypothesis testing

A

Under H0, theta is fixed at the test value, e.g. the chance level of 50%. Under H1, you specify how likely the different values of theta are (a prior distribution). The marginal likelihood of H1 then tells us how well H1 predicted the data, averaged over that prior.
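
A sketch of both quantities for a binomial test (H0: theta = 0.5 against an assumed uniform prior under H1; the data are invented for illustration):

```python
import numpy as np
from scipy.stats import binom

k, n = 8, 10                           # data: 8 successes in 10 trials
theta = np.linspace(0, 1, 10001)

ml_h0 = binom.pmf(k, n, 0.5)           # likelihood at the test value
ml_h1 = binom.pmf(k, n, theta).mean()  # likelihood averaged over H1's uniform prior

print(f"p(data | H0) = {ml_h0:.4f}")   # ~0.0439
print(f"p(data | H1) = {ml_h1:.4f}")   # ~0.0909
```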

7
Q

Bayes Factor

A

BF10 says how many times more likely the data are under H1 than under H0; its reciprocal, BF01, says how many times more likely the data are under H0 than under H1.
Evidence against H1 is therefore evidence in favor of H0, and vice versa. BF10 = 1 means the data are equally likely under both hypotheses.
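
The standard definitions behind the card (using the marginal likelihoods from the previous cards):

```latex
\mathrm{BF}_{10} = \frac{p(\text{data} \mid H_1)}{p(\text{data} \mid H_0)},
\qquad
\mathrm{BF}_{01} = \frac{1}{\mathrm{BF}_{10}},
\qquad
\text{posterior odds} = \mathrm{BF}_{10} \times \text{prior odds}
```

With the assumed numbers from the previous card, BF10 ≈ 0.0909 / 0.0439 ≈ 2.1: anecdotal evidence for H1.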

8
Q

BF interpretation

A
1–3: anecdotal
3–10: moderate
10–30: strong
30–100: very strong
above 100: extreme
(These cut-offs are only verbal guidelines; the BF itself is a continuous metric of predictive quality.)
9
Q

Pie chart BF

A

Red area = alternative hypothesis (H1)
White area = null hypothesis (H0)
BF = 1 means the two areas are equal

10
Q

Parsimony

A

The concept of rewarding a more specific model when it predicts the data equally well. For example, a one-sided model makes a riskier, more specific prediction than a two-sided model; when the data go in the predicted direction, the simpler one-sided model is preferred because it earns a higher BF than the two-sided one.
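
A sketch of parsimony at work, reusing the assumed 8-in-10 data (uniform priors; the two-sided model spreads its prior over all of [0, 1], the one-sided model bets everything on theta > 0.5):

```python
import numpy as np
from scipy.stats import binom

k, n = 8, 10
theta = np.linspace(0, 1, 10001)
lik = binom.pmf(k, n, theta)

ml_two_sided = lik.mean()                # uniform prior on [0, 1]
ml_one_sided = lik[theta >= 0.5].mean()  # uniform prior on [0.5, 1]

# The one-sided model staked all its prior mass on the direction the data
# actually took, so its marginal likelihood (and hence its BF against H0)
# is roughly twice that of the two-sided model.
print(ml_two_sided, ml_one_sided)        # ~0.091 vs ~0.18
```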
