Chapter 5 Statistical Estimation Flashcards

(24 cards)

1
Q

Q: How to determine if an estimator is good?

A

A: Check for unbiasedness (E[Tn] = θ). Evaluate mean square error (MSE), where MSE = Var(Tn) + Bias(Tn)^2. The lower the MSE, the better the estimator.
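The MSE comparison can be sketched numerically. A minimal Monte Carlo check in Python (the N(5, 1) population, the sample size, and the shrunken competitor estimator are made-up illustration choices, not part of the card):

```python
import random
import statistics

random.seed(0)
theta, n, trials = 5.0, 30, 2000  # hypothetical true mean and sample size

def mse(estimator):
    """Empirical MSE of an estimator of theta over repeated samples."""
    errors = []
    for _ in range(trials):
        sample = [random.gauss(theta, 1) for _ in range(n)]
        errors.append((estimator(sample) - theta) ** 2)
    return statistics.mean(errors)

mse_mean = mse(statistics.mean)                       # unbiased: MSE = Var = 1/n
mse_shrunk = mse(lambda s: 0.5 * statistics.mean(s))  # biased toward 0: large Bias^2
print(mse_mean, mse_shrunk)
```

The sample mean's MSE is close to its variance 1/n, while the shrunken estimator pays a large squared-bias penalty, illustrating MSE = Var + Bias².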

2
Q

Q: What is the method of moments for estimating parameters?

A

A: Calculate the sample moments and equate them to the theoretical moments to form a system of equations. Solve these equations for the unknown parameters.
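As a sketch of the recipe (an Exponential example with a made-up rate, not from the card): the first theoretical moment of Exponential(λ) is E[X] = 1/λ, so equating it to the sample mean and solving gives λ̂ = 1 / mean.

```python
import random
import statistics

random.seed(1)
lam = 2.0  # hypothetical true rate
sample = [random.expovariate(lam) for _ in range(10_000)]

# Method of moments: set sample mean = theoretical mean 1/lam, solve for lam.
lam_hat = 1 / statistics.mean(sample)
print(lam_hat)  # close to 2
```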

3
Q

Q: What is the least squares method for estimating parameters?

A

A: Minimize the sum of squared differences between observed and fitted values. For regression models, write the residual sum of squares R(θ) and solve ∂R/∂θ = 0 for the parameters.
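For a simple linear regression y = a + b·x, solving ∂R/∂a = ∂R/∂b = 0 gives the usual closed-form slope and intercept. A sketch with synthetic data (the true coefficients and noise level are arbitrary illustration choices):

```python
import random

random.seed(2)
a_true, b_true = 1.0, 3.0  # hypothetical true intercept and slope
xs = [i / 10 for i in range(100)]
ys = [a_true + b_true * x + random.gauss(0, 0.1) for x in xs]

# Normal-equation solution of dR/da = dR/db = 0 for R = sum (y - a - b*x)^2
n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n
b_hat = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))
a_hat = y_bar - b_hat * x_bar
print(a_hat, b_hat)
```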

4
Q

Q: How to perform maximum likelihood estimation (MLE)?

A

A: Write the likelihood function L(θ) based on the sample data, then maximize it with respect to θ. Use the log-likelihood for easier computation.
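A sketch of the procedure for a normal sample, where maximizing the log-likelihood has a closed form: μ̂ = sample mean and σ̂² = (1/n)·Σ(x − μ̂)² (divisor n, not n − 1). The true parameters below are made-up illustration values:

```python
import math
import random

random.seed(3)
mu, sigma = 4.0, 2.0  # hypothetical true parameters
xs = [random.gauss(mu, sigma) for _ in range(5000)]
n = len(xs)

# Closed-form maximizers of the normal log-likelihood
mu_hat = sum(xs) / n
sigma2_hat = sum((x - mu_hat) ** 2 for x in xs) / n  # MLE uses divisor n

def log_lik(m, s2):
    """Normal log-likelihood l(m, s2) of the sample."""
    return (-0.5 * n * math.log(2 * math.pi * s2)
            - sum((x - m) ** 2 for x in xs) / (2 * s2))

print(mu_hat, sigma2_hat)
```

Evaluating `log_lik` at nearby parameter values confirms the closed-form estimates sit at the maximum.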

5
Q

Q: How do you estimate parameters in cases where the likelihood function is difficult to maximize?

A

A: For complex cases, maximize the log-likelihood numerically (e.g., with Newton–Raphson or another gradient-based method). Check for local maxima to ensure the global maximum is found, and examine the boundary of the parameter space, where the maximum may occur.

6
Q

Q: What is the Fisher information and its role in MLE?

A

A: Fisher information is the expected negative second derivative of the log-likelihood (equivalently, the variance of the score). It determines the precision of the MLE: for large samples the variance of the estimator is approximately 1/I(θ).
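The definition can be checked numerically. For one Bernoulli(p) observation, l(p) = x·log p + (1 − x)·log(1 − p), and E[−l″(p)] should equal the analytic Fisher information 1/(p(1 − p)); p = 0.3 below is an arbitrary illustration choice:

```python
import math

p = 0.3  # hypothetical parameter value

def expected_neg_l2(p0, h=1e-4):
    """E over x in {0, 1} of -d^2/dp^2 l(p0), via central finite differences."""
    total = 0.0
    for x, prob in [(1, p0), (0, 1 - p0)]:
        def l(q):
            return x * math.log(q) + (1 - x) * math.log(1 - q)
        second = (l(p0 + h) - 2 * l(p0) + l(p0 - h)) / h ** 2
        total += prob * (-second)
    return total

fisher_numeric = expected_neg_l2(p)
fisher_analytic = 1 / (p * (1 - p))
print(fisher_numeric, fisher_analytic)
```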

7
Q

Q: What is the asymptotic behavior of MLE?

A

A: For large n, the MLE is consistent and asymptotically normal with mean θ and variance 1/I(θ). It converges to the true parameter as n → ∞.
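A simulation sketch of this claim for the Exponential MLE λ̂ = 1/mean (true rate, sample size, and replication count are arbitrary choices): the total Fisher information is n/λ², so the sampling variance of λ̂ should be close to λ²/n.

```python
import random
import statistics

random.seed(5)
lam, n, reps = 2.0, 400, 3000  # hypothetical rate, sample size, replications

estimates = []
for _ in range(reps):
    sample = [random.expovariate(lam) for _ in range(n)]
    estimates.append(1 / statistics.mean(sample))  # MLE for each replication

emp_mean = statistics.mean(estimates)
emp_var = statistics.variance(estimates)
print(emp_mean, emp_var, lam ** 2 / n)  # empirical mean/var vs theory
```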

8
Q

Q: How do you construct a confidence interval using MLE?

A

A: Use the asymptotic normality of the MLE, where θ̂ML ∼ N(θ, 1/I(θ)). For large n, construct the interval as θ̂ML ± z · sqrt(1/I(θ̂ML)), where z is the standard normal quantile (1.96 for 95%) and the Fisher information is evaluated at the estimate.
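The construction can be sketched for a Bernoulli sample, where the total plug-in Fisher information is n/(p̂(1 − p̂)), so sqrt(1/I) is the familiar standard error sqrt(p̂(1 − p̂)/n). The true p and sample size below are made-up illustration choices:

```python
import math
import random

random.seed(4)
p_true, n = 0.4, 1000  # hypothetical truth and sample size
xs = [1 if random.random() < p_true else 0 for _ in range(n)]

p_hat = sum(xs) / n
se = math.sqrt(p_hat * (1 - p_hat) / n)  # = sqrt(1 / I_n(p_hat))
z = 1.96                                  # standard normal quantile for 95%
lo, hi = p_hat - z * se, p_hat + z * se
print(lo, hi)
```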

9
Q

Q: How to determine the variance of the MLE?

A

A: The variance of the MLE for large n is approximately 1/I(θ), where I(θ) is the Fisher information, estimated using the MLE itself.

10
Q

Q: How to find the MLE for a Bernoulli random variable?

A

A: For a sample of Bernoulli variables, the MLE for p is p̂ML = (sum of X_i) / n, where X_i are the observed values.

11
Q

Q: How to find the MLE for an Exponential distribution?

A

A: For a sample of Exponential variables, the MLE for λ is λ̂ML = 1 / (mean of X), where X are the observed values.
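The derivation behind this card: l(λ) = n·log λ − λ·Σx, and setting l′(λ) = 0 gives λ̂ = n/Σx = 1/mean (l is concave, so this is the global maximum). A quick numerical check with a made-up rate:

```python
import math
import random

random.seed(6)
xs = [random.expovariate(1.5) for _ in range(2000)]  # hypothetical rate 1.5
n, total = len(xs), sum(xs)

lam_hat = n / total  # = 1 / sample mean

def log_lik(lam):
    """Exponential log-likelihood: n*log(lam) - lam * sum(x)."""
    return n * math.log(lam) - lam * total

print(lam_hat)
```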

12
Q

Q: What is an unbiased estimator?

A

A: An estimator is unbiased if its expected value equals the true parameter value, E[Tn] = θ. It gives correct estimates on average.

13
Q

Q: What is the method of moments?

A

A: The method of moments estimates parameters by equating the sample moments (mean, variance, etc.) to their theoretical counterparts. No distributional assumption is required.

14
Q

Q: What is the least squares method?

A

A: The least squares method estimates parameters by minimizing the sum of squared differences between observed and estimated values. It is commonly used in regression.

15
Q

Q: What is the maximum likelihood estimation (MLE)?

A

A: MLE estimates parameters by maximizing the likelihood function, which represents the probability of the observed data. The log-likelihood function is often used for convenience.

16
Q

Q: What is Fisher information in the context of MLE?

A

A: Fisher information measures the amount of information that the sample data provides about the parameter θ. It is used to determine the precision of MLE estimators.

17
Q

Q: What is the asymptotic behavior of MLE?

A

A: As the sample size increases, the MLE becomes consistent (converges to the true parameter) and follows a normal distribution with variance 1/I(θ), where I(θ) is the Fisher information.

18
Q

Q: How to use MLE for constructing a confidence interval?

A

A: To construct a confidence interval, use the asymptotic normality of the MLE. For large n, the interval is θ̂ML ± z · sqrt(1/I(θ̂ML)), where θ̂ML is the MLE, z is the standard normal quantile, and the Fisher information is evaluated at the estimate.

19
Q

Q: What is the variance of the MLE?

A

A: The variance of the MLE is approximately 1/I(θ), where I(θ) is the Fisher information. This measures the uncertainty in the estimation.

20
Q

Q: What is the MLE for a Bernoulli distribution?

A

A: For a Bernoulli distribution, the MLE for p is the sample mean p̂ML = (sum of X_i) / n, where X_i are the observed outcomes.

21
Q

Q: What is the MLE for an Exponential distribution?

A

A: For an Exponential distribution, the MLE for λ is λ̂ML = 1 / (mean of X), where X are the observed values.

22
Q

Q: What is the least squares estimator of a mean?

A

A: The least squares estimator of the mean is the sample mean, obtained by minimizing the sum of squared deviations from the sample mean.
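A tiny check of this fact on made-up data: the constant c minimizing Σ(xᵢ − c)² is exactly the sample mean, so the RSS at the mean is no larger than at any nearby candidate.

```python
xs = [2.0, 3.0, 7.0, 8.0]  # arbitrary illustration data
c_hat = sum(xs) / len(xs)  # sample mean, 5.0

def rss(c):
    """Sum of squared deviations from a candidate constant c."""
    return sum((x - c) ** 2 for x in xs)

print(c_hat, rss(c_hat))
```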

23
Q

Q: What is the bias of an estimator?

A

A: The bias of an estimator is the difference between its expected value and the true parameter value. A bias of zero means the estimator is unbiased.

24
Q

Q: What is mean square error (MSE)?

A

A: MSE is a criterion for evaluating an estimator. It combines both variance and squared bias. MSE = Var(Tn) + Bias(Tn)^2, aiming to minimize both components.