Bayesian ML Flashcards

(28 cards)

1
Q

What is the main goal of Bayesian machine learning?

A

To model uncertainty in parameters, data, and models consistently.

2
Q

What does a Bayesian model treat as uncertain?

A

Parameters, predictions, hypotheses, and even model structure.

3
Q

What key idea underlies the Bayesian approach to knowledge?

A

Beliefs are updated through evidence using probability.

4
Q

What separates inference from decision in Bayesian ML?

A

Inference models belief given data; decision uses that belief to act.

5
Q

What is the purpose of the prior in Bayes’ theorem?

A

It represents our belief before seeing data.

6
Q

What is the purpose of the likelihood in Bayes’ theorem?

A

It measures how well parameters explain the data.

7
Q

What is the posterior in Bayesian inference?

A

The updated belief about parameters after seeing data.

8
Q

What is the marginal likelihood (evidence) in Bayes’ rule?

A

The probability of the data averaged over the prior; it normalises the posterior and enables model comparison.

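A minimal numeric sketch of all four quantities, assuming an illustrative Beta-Binomial coin model; the Beta(2, 2) prior and the 7-heads-in-10-flips data are assumptions chosen only for the example:

```python
import numpy as np
from scipy import stats

a, b = 2.0, 2.0                 # assumed Beta(2, 2) prior on the heads probability
heads, tails = 7, 3             # assumed data: 7 heads in 10 flips

theta = np.linspace(0.001, 0.999, 999)           # grid over the parameter

prior = stats.beta.pdf(theta, a, b)              # belief before seeing data
likelihood = theta**heads * (1 - theta)**tails   # how well each theta explains the data
unnormalised = prior * likelihood                # (binomial coefficient cancels below)

# Evidence: prior-weighted average of the likelihood, the normalising constant.
evidence = unnormalised.sum() * (theta[1] - theta[0])
posterior = unnormalised / evidence              # updated belief after seeing data

# Conjugacy gives the exact answer, Beta(a + heads, b + tails), for comparison.
exact = stats.beta.pdf(theta, a + heads, b + tails)
print("max abs difference from exact posterior:", np.abs(posterior - exact).max())
```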
9
Q

What principle does Bayesian inference naturally implement?

A

Occam’s Razor—preferring simpler models that still explain data.

10
Q

What does marginalisation allow in Bayesian inference?

A

Averaging over uncertainty and integrating out nuisance variables.

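Continuing the same illustrative coin example, a sketch of marginalisation: the parameter is integrated out of the prediction by averaging over the posterior rather than plugging in a single point estimate:

```python
import numpy as np
from scipy import stats

a_post, b_post = 9.0, 5.0      # posterior Beta(2 + 7, 2 + 3) from the sketch above

# Posterior predictive p(next flip = heads | data): theta is marginalised out
# by averaging the prediction over posterior samples.
theta_samples = stats.beta.rvs(a_post, b_post, size=200_000, random_state=0)
p_heads = theta_samples.mean()

# Closed form for comparison: E[theta | data] = a / (a + b).
print(p_heads, a_post / (a_post + b_post))
```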
11
Q

What does it mean to say Bayesian inference is probabilistic?

A

All predictions are distributions, not point estimates.

12
Q

Why is Bayesian ML good for small datasets?

A

It leverages prior knowledge and is less prone to overfitting.

13
Q

What is one advantage of Bayesian methods in streaming data?

A

They allow incremental updating as new data arrives.

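A sketch of incremental updating in the same illustrative conjugate setting, where each batch's posterior becomes the next batch's prior:

```python
# Streaming Beta-Binomial updates: yesterday's posterior is today's prior.
a, b = 2.0, 2.0                          # assumed initial Beta(2, 2) prior

batches = [(3, 1), (2, 1), (2, 1)]       # (heads, tails) counts arriving over time
for heads, tails in batches:
    a, b = a + heads, b + tails          # conjugate update after each batch

# Identical to processing all 7 heads and 3 tails at once: Beta(9, 5).
print(a, b)
```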
14
Q

What is Occam’s Razor?

A

A principle that favours simpler explanations when multiple are possible.

15
Q

How does Bayesian ML use Occam’s Razor?

A

By penalising overly complex models via the marginal likelihood.

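A sketch of that penalty, comparing the marginal likelihood of a fixed fair-coin model with that of a flexible unknown-bias model (both assumed here purely for illustration); the flexible model spreads its probability over many possible datasets, so unsurprising data favour the simpler one:

```python
import numpy as np
from math import comb
from scipy.special import betaln

def evidence_simple(heads, tails, theta0=0.5):
    # Marginal likelihood of a model with no free parameter: a fixed fair coin.
    n = heads + tails
    return comb(n, heads) * theta0**heads * (1 - theta0)**tails

def evidence_flexible(heads, tails):
    # Marginal likelihood under a uniform Beta(1, 1) prior on the bias:
    # integral of C(n, h) * theta^h * (1 - theta)^t dtheta = C(n, h) * B(h + 1, t + 1).
    return comb(heads + tails, heads) * np.exp(betaln(heads + 1, tails + 1))

for heads, tails in [(7, 3), (9, 1)]:
    print((heads, tails),
          "simple:", round(evidence_simple(heads, tails), 4),
          "flexible:", round(evidence_flexible(heads, tails), 4))
```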
16
Q

What real-world problem was used to show Bayesian sparsity?

A

Classifying leukemia with gene expression data.

17
Q

What is a key benefit of Bayesian models in high dimensions?

A

Given suitable sparse priors, they can effectively ignore irrelevant features instead of overfitting to them.

18
Q

What does a sparse prior encourage in Bayesian inference?

A

Solutions that use only the most relevant components.
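One concrete realisation of such a prior (an assumption here, not necessarily the model from the slides) is automatic relevance determination, e.g. scikit-learn's ARDRegression, which gives each weight its own prior scale and shrinks the weights of irrelevant features towards zero:

```python
import numpy as np
from sklearn.linear_model import ARDRegression, LinearRegression

rng = np.random.default_rng(0)

# Illustrative setup: 100 samples, 50 features, only the first 3 are relevant.
n, d = 100, 50
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [2.0, -3.0, 1.5]
y = X @ true_w + rng.normal(scale=0.5, size=n)

ard = ARDRegression().fit(X, y)          # per-weight Gaussian priors with learned scales
ols = LinearRegression().fit(X, y)       # no prior, for contrast

print("ARD weights above 0.1 in magnitude:", int(np.sum(np.abs(ard.coef_) > 0.1)))
print("OLS weights above 0.1 in magnitude:", int(np.sum(np.abs(ols.coef_) > 0.1)))
```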

19
Q

What is an inverse problem in ML?

A

Inferring causes from observed effects, often under uncertainty.

20
Q

Why is Bayesian inference useful for inverse problems?

A

It models uncertainty and finds sparse, plausible explanations.

21
Q

What method was used to infer radioactive sources in the slides?

A

Bayesian inversion using sparse gamma ray data.

22
Q

What is a key downside of Bayesian inference?

A

Computations are often intractable and require approximation.

23
Q

What is an example of a Bayesian approximation method?

A

Markov Chain Monte Carlo (MCMC) or variational inference.
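A minimal random-walk Metropolis sketch (one simple MCMC variant), targeting the unnormalised posterior from the illustrative coin example, Beta(9, 5); note that the evidence is never computed:

```python
import numpy as np

def log_unnormalised_posterior(theta, heads=7, tails=3, a=2.0, b=2.0):
    # Beta-Binomial posterior up to a constant; the normalising evidence is not needed.
    if not (0.0 < theta < 1.0):
        return -np.inf
    return (a + heads - 1) * np.log(theta) + (b + tails - 1) * np.log(1 - theta)

rng = np.random.default_rng(0)
theta, samples = 0.5, []
for _ in range(20_000):
    proposal = theta + rng.normal(scale=0.1)               # symmetric random-walk proposal
    log_accept = log_unnormalised_posterior(proposal) - log_unnormalised_posterior(theta)
    if np.log(rng.random()) < log_accept:                  # Metropolis accept/reject step
        theta = proposal
    samples.append(theta)

samples = np.array(samples[5_000:])                        # drop burn-in
print("MCMC posterior mean:", samples.mean(), "exact:", 9 / (9 + 5))
```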

24
Q

Who first proposed Bayesian inference?

A

Thomas Bayes; his essay was published posthumously in 1763.

25
Q

How did Laplace view probability?

A

As common sense reduced to calculation.

26
Q

What is the difference between frequentist and Bayesian probability?

A

Frequentists treat probability as the long-run frequency of a random process; Bayesians treat it as a degree of belief.

27
Q

What happens if a model fits the data but ignores uncertainty?

A

It risks overconfidence and incorrect conclusions.

28
Q

What lesson was drawn from the Challenger disaster?

A

Neglecting uncertainty in models can lead to catastrophic failure.