Topic 4: Fitting Probability Distributions Flashcards

1
Q

What is statistical inference?

A

Inferring the nature of a population distribution based on a sample.

2
Q

Fitting a probability model to data is known as _______.

A

statistical inference

3
Q

Match summary stats ↔ ______
Maximise likelihood ↔ ______

A

Method of Moments (MoM)
Maximum Likelihood Method (MLM)

4
Q

What is the main principle of the Method of Moments?

A

Match population moments (mean, variance, etc.) to sample moments.

5
Q

METHOD OF MOMENTS (MoM)
E(X) = __
Var(X) = __

A
  • x̄
  • s²ₓ
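A minimal sketch of computing these sample moments in Python (numpy assumed; the data values are illustrative, and ddof=1 gives the unbiased sample variance s²ₓ):

```python
import numpy as np

# Illustrative data; in practice these are the observed sample values
x = np.array([2.3, 1.7, 4.1, 3.3, 2.9, 5.0, 1.2, 3.8])

x_bar = x.mean()        # sample mean x̄, matched to E(X)
s2_x = x.var(ddof=1)    # unbiased sample variance s²ₓ, matched to Var(X)

print(f"E(X) is estimated by x̄ = {x_bar:.3f}")
print(f"Var(X) is estimated by s²ₓ = {s2_x:.3f}")
```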

6
Q

Minimise weighted difference:
Min_θ ∑ₖ wₖ (Mₖ − mₖ)²
Where:

Mₖ = __
mₖ = __

A
  • Population moment
  • Sample moment
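A minimal numerical sketch of this weighted moment-matching, assuming a gamma model (population mean ωε, variance ωε²) and equal weights wₖ = 1; scipy is assumed and the data are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=500)   # illustrative sample

m1, m2 = x.mean(), x.var(ddof=1)                # sample moments mₖ

def objective(theta):
    """Weighted squared distance between population and sample moments."""
    shape, scale = theta
    M1 = shape * scale          # population mean of the gamma model
    M2 = shape * scale**2       # population variance of the gamma model
    w1 = w2 = 1.0               # equal weights wₖ
    return w1 * (M1 - m1)**2 + w2 * (M2 - m2)**2

res = minimize(objective, x0=[1.0, 1.0], bounds=[(1e-6, None), (1e-6, None)])
print("MoM-style estimates (shape, scale):", res.x)
```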
7
Q

Estimated parameters are denoted with a ___.

A

hat (e.g. ω̂, ε̂)

8
Q

Binomial MoM Estimators
If x̄ and s²ₓ are the sample mean and variance:
p̂ = ____
n̂ = ____

A
  • 1 − s²ₓ / x̄
  • x̄² / (x̄ − s²ₓ)
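A small sketch of these two estimators, assuming the sample really does come from a Binomial(n, p) model (numpy assumed; data illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.binomial(n=20, p=0.3, size=1000)   # illustrative Binomial(20, 0.3) sample

x_bar = x.mean()
s2_x = x.var(ddof=1)

p_hat = 1 - s2_x / x_bar                   # p̂ = 1 − s²ₓ / x̄
n_hat = x_bar**2 / (x_bar - s2_x)          # n̂ = x̄² / (x̄ − s²ₓ)

print(f"p_hat = {p_hat:.3f}, n_hat = {n_hat:.1f}")
```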
9
Q

Geometric MoM Estimator
p̂ = ____

A

x̄ / s²ₓ
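A sketch of this estimator, assuming the geometric distribution counts failures before the first success (so that E(X)/Var(X) = p); numpy's geometric counts trials, hence the −1 below:

```python
import numpy as np

rng = np.random.default_rng(2)
# Generator.geometric returns the number of trials up to the first success
# (support 1, 2, ...); subtracting 1 gives the number-of-failures version
# (support 0, 1, 2, ...), for which E(X) = (1 − p)/p and Var(X) = (1 − p)/p².
x = rng.geometric(p=0.4, size=1000) - 1

p_hat = x.mean() / x.var(ddof=1)   # p̂ = x̄ / s²ₓ
print(f"p_hat = {p_hat:.3f}")
```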

10
Q

Gamma MoM Estimators
ε̂ = _____
ω̂ = _____

A
  • s²ₓ / x̄
  • x̄² / s²ₓ
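A sketch of these estimators, assuming ω is the gamma shape and ε the scale (so E(X) = ωε and Var(X) = ωε²); data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.gamma(shape=2.0, scale=1.5, size=1000)   # illustrative gamma sample

x_bar = x.mean()
s2_x = x.var(ddof=1)

eps_hat = s2_x / x_bar        # ε̂ = s²ₓ / x̄  (scale)
omega_hat = x_bar**2 / s2_x   # ω̂ = x̄² / s²ₓ (shape)

print(f"scale estimate = {eps_hat:.3f}, shape estimate = {omega_hat:.3f}")
```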
11
Q

What does MLM aim to do?

A

Find the parameter values that maximise the probability of observing the sample

12
Q

Formula
Likelihood function:

A

L(θ) = ∏ᵢ fₓ(xᵢ | θ)

13
Q

Condition for Maximum

First derivative =
Second derivative <

A

0 (first derivative = 0; second derivative < 0)
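A worked illustration of both conditions, assuming an exponential model fₓ(x | λ) = λe^(−λx) so that ln L = n ln λ − λ∑xᵢ; sympy is used here to check the derivatives symbolically:

```python
import sympy as sp

# S stands for the sum of the observations ∑xᵢ
n, lam, S = sp.symbols('n lambda S', positive=True)

log_L = n * sp.log(lam) - lam * S            # log-likelihood of an exponential(λ) sample

first = sp.diff(log_L, lam)                  # first derivative: n/λ − S
second = sp.diff(log_L, lam, 2)              # second derivative: −n/λ²

lam_hat = sp.solve(sp.Eq(first, 0), lam)[0]  # set first derivative to 0
print("lambda_hat =", lam_hat)               # n/S, i.e. 1/x̄
print("second derivative at lambda_hat:",
      sp.simplify(second.subs(lam, lam_hat)))  # −S²/n, which is negative
```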

14
Q

Why use log-likelihood?

A
  • Easier to differentiate
  • Converts product to sum
15
Q

Log-Likelihood
Formula

A

ln L(θ) = ∑ᵢ ln fₓ(xᵢ | θ)
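A minimal sketch of maximising this log-likelihood numerically, assuming a gamma model and using scipy to minimise the negative log-likelihood (equivalent to maximising ln L); data and starting values are illustrative:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(4)
x = rng.gamma(shape=2.0, scale=1.5, size=500)   # illustrative sample

def neg_log_likelihood(theta):
    """−ln L(θ) = −∑ ln fₓ(xᵢ | θ) for a gamma(shape, scale) model."""
    shape, scale = theta
    return -np.sum(stats.gamma.logpdf(x, a=shape, scale=scale))

res = minimize(neg_log_likelihood, x0=[1.0, 1.0],
               bounds=[(1e-6, None), (1e-6, None)])
print("MLM estimates (shape, scale):", res.x)
```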

16
Q

List 3 graphical methods to assess fit:

A
  • Empirical histogram vs PMF/PDF
  • Empirical vs theoretical CDF
  • QQ plot
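As an illustration of the second method in this list, an empirical CDF can be overlaid on the CDF of the fitted model (matplotlib/scipy assumed; the gamma parameters below stand in for values that would come from MoM or MLM):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(5)
x = np.sort(rng.gamma(shape=2.0, scale=1.5, size=300))   # illustrative sample, sorted

ecdf = np.arange(1, len(x) + 1) / len(x)                 # empirical CDF at each sorted point
fitted = stats.gamma.cdf(x, a=2.0, scale=1.5)            # CDF of the fitted model

plt.step(x, ecdf, where='post', label='Empirical CDF')
plt.plot(x, fitted, label='Fitted (theoretical) CDF')
plt.xlabel('x')
plt.ylabel('F(x)')
plt.legend()
plt.show()
```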
17
Q

In QQ plots, the theoretical quantile is plotted against the ______ quantile.

A

empirical (sample)

18
Q

QQ Plot Formula
p = (j − 0.5)/n → used to find ______ quantiles

A

theoretical
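A sketch of building a QQ plot with this plotting position, assuming a fitted gamma model (parameters and data are illustrative):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
x = np.sort(rng.gamma(shape=2.0, scale=1.5, size=200))   # empirical (sample) quantiles

n = len(x)
j = np.arange(1, n + 1)
p = (j - 0.5) / n                                        # plotting positions p = (j − 0.5)/n
theoretical = stats.gamma.ppf(p, a=2.0, scale=1.5)       # theoretical quantiles of fitted model

plt.scatter(theoretical, x, s=10)
plt.plot(theoretical, theoretical, 'r--')                # 45° reference line
plt.xlabel('Theoretical quantile')
plt.ylabel('Empirical quantile')
plt.show()
```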

19
Q

x̄ → E(X) as n → ∞ is known as the ______

A

Law of Large Numbers

20
Q

What makes an estimator efficient?

A

It has the lowest possible variance (Cramér-Rao lower bound)

21
Q

Use s²ₓ instead of σ²ₓ and ĝ₁(x) instead of g₁(x) to ensure _______ sample moments.

A

unbiased

22
Q

What are the properties of the parameter estimators obtained by MoM and MLM?

A
  • Consistent
  • Biased
  • Inefficient
23
Q

When is an estimator consistent?

A

An estimator is consistent if the sample moments converge to the
population moments as the sample size tends to infinity, e.g. s²ₓ → Var(X)

24
Q

What is a biased estimator?

A

A biased estimator has an expected value (mean sample moment) that is
not equal to the population moment it estimates

25
Q

Do MoM and MLM produce efficient or inefficient estimators?

A

Inefficient

26
Q

What does it mean for the MLM approach to be asymptotically normal?

A

For large sample sizes, the estimators are normally distributed random variables