Chapter 5 Statistical Estimation Flashcards
(24 cards)
Q: How to determine if an estimator is good?
A: Check for unbiasedness (E[Tn] = θ) and evaluate the mean square error (MSE), where MSE = Var(Tn) + Bias(Tn)^2. The lower the MSE, the better the estimator.
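A minimal numerical sketch of this card, assuming Python with NumPy. It compares two variance estimators for N(0, 1) data (the unbiased one dividing by n−1 and the MLE dividing by n) by estimating MSE = Var(Tn) + Bias(Tn)^2 empirically; the specific estimators and sample sizes are illustrative choices, not from the card.

```python
import numpy as np

# Sketch: compare the MSE of two variance estimators for N(0, 1)
# samples, where the true variance theta = 1 (illustrative setup).
rng = np.random.default_rng(0)
theta = 1.0
n = 10
samples = rng.normal(0.0, 1.0, size=(100_000, n))

# Unbiased estimator (divides by n-1) vs. the biased MLE (divides by n).
s2_unbiased = samples.var(axis=1, ddof=1)
s2_mle = samples.var(axis=1, ddof=0)

def mse(estimates, true_value):
    # MSE = Var(Tn) + Bias(Tn)^2, estimated from the simulated estimates.
    bias = estimates.mean() - true_value
    return estimates.var() + bias**2

mse_unbiased = mse(s2_unbiased, theta)
mse_mle = mse(s2_mle, theta)
```

Note that the biased MLE comes out with the lower MSE here: a small bias can be worth a larger reduction in variance, which is exactly why MSE, not bias alone, is the comparison criterion.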
Q: What is the method of moments for estimating parameters?
A: Calculate the sample moments and equate them to the theoretical moments to form a system of equations. Solve these equations for the unknown parameters.
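A short sketch of the method of moments, assuming Python with NumPy and a Gamma(k, θ) sample as the worked example (the distribution and parameter values are illustrative). The theoretical moments E[X] = kθ and Var(X) = kθ² are equated to the sample mean m and sample variance v, and the resulting system is solved for k and θ.

```python
import numpy as np

# Sketch: method of moments for a Gamma(k, theta) sample.
# Theoretical moments: E[X] = k*theta, Var(X) = k*theta^2.
# Equating to sample moments and solving: k_hat = m^2/v, theta_hat = v/m.
rng = np.random.default_rng(1)
k_true, theta_true = 3.0, 2.0
x = rng.gamma(shape=k_true, scale=theta_true, size=200_000)

m = x.mean()       # first sample moment
v = x.var(ddof=0)  # second central sample moment

k_hat = m**2 / v
theta_hat = v / m
```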
Q: What is the least squares method for estimating parameters?
A: Minimize the sum of squared differences between observed values and the estimated values. For regression models, solve ∂R/∂θ = 0 for the parameters.
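A sketch of least squares for simple linear regression, assuming Python with NumPy (the model y = a + b·x and the noise level are illustrative). Setting ∂R/∂a = ∂R/∂b = 0 yields the normal equations XᵀXβ = Xᵀy, which the code solves directly.

```python
import numpy as np

# Sketch: least squares fit of y = a + b*x + noise (illustrative data).
rng = np.random.default_rng(2)
a_true, b_true = 1.0, 2.5
x = rng.uniform(0, 10, size=500)
y = a_true + b_true * x + rng.normal(0, 0.5, size=500)

X = np.column_stack([np.ones_like(x), x])  # design matrix [1, x]
beta = np.linalg.solve(X.T @ X, X.T @ y)   # solves the normal equations
a_hat, b_hat = beta
```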
Q: How to perform maximum likelihood estimation (MLE)?
A: Write the likelihood function L(θ) based on the sample data, then maximize it with respect to θ. Use the log-likelihood for easier computation.
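A numerical sketch of MLE via the log-likelihood, assuming Python with NumPy and a Poisson(λ) sample (the grid-search maximization is an illustrative stand-in for calculus or a proper optimizer; the closed-form MLE here is the sample mean).

```python
import numpy as np

# Sketch: MLE for a Poisson rate lam by maximizing the log-likelihood
# numerically over a grid (illustrative; the closed form is x.mean()).
rng = np.random.default_rng(3)
x = rng.poisson(lam=4.0, size=1_000)

def log_likelihood(lam, data):
    # log L(lam) = sum_i [x_i*log(lam) - lam - log(x_i!)]; the
    # log-factorial term is constant in lam, so it is dropped.
    return np.sum(data * np.log(lam) - lam)

grid = np.linspace(0.1, 10.0, 10_000)
ll = np.array([log_likelihood(l, x) for l in grid])
lam_hat = grid[np.argmax(ll)]
```

The grid maximizer lands on the sample mean, matching the analytic solution of d log L/dλ = Σxᵢ/λ − n = 0.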
Q: How do you estimate parameters in cases where the likelihood function is difficult to maximize?
A: When the likelihood equation ∂ log L/∂θ = 0 has no closed-form solution, maximize numerically. Check all candidate critical points (there may be several local maxima) and compare their likelihood values to ensure the global maximum is found. Also examine the boundary of the parameter space, where the maximum may occur without the derivative vanishing.
Q: What is the Fisher information and its role in MLE?
A: Fisher information is the expected negative second derivative of the log-likelihood, I(θ) = −E[∂² ln L(θ)/∂θ²]. It determines the precision of the MLE: for large n, the variance of the estimator is approximately 1/I(θ).
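A small sketch for the Bernoulli case, assuming Python (the helper names are hypothetical). For n Bernoulli(p) trials, log L(p) = S·log p + (n−S)·log(1−p) with S the number of successes, the observed information is −d² log L/dp² = S/p² + (n−S)/(1−p)², and taking E[S] = np gives the Fisher information n/(p(1−p)).

```python
# Sketch: Fisher vs. observed information for n Bernoulli(p) trials
# (function names are illustrative, not standard library calls).
def fisher_information_bernoulli(n, p):
    # I(p) = n / (p*(1-p)), the expectation of the observed information.
    return n / (p * (1 - p))

def observed_information(successes, n, p):
    # -d2/dp2 log L(p) = S/p^2 + (n-S)/(1-p)^2
    return successes / p**2 + (n - successes) / (1 - p) ** 2

n, p = 100, 0.3
# At S = E[S] = n*p, the observed information equals the Fisher information.
obs = observed_information(n * p, n, p)
fisher = fisher_information_bernoulli(n, p)
```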
Q: What is the asymptotic behavior of MLE?
A: For large n, the MLE is consistent, asymptotically normal with mean θ, and variance 1/I(θ). The MLE estimator approaches the true parameter as n → ∞.
Q: How do you construct a confidence interval using MLE?
A: Use the asymptotic normality of the MLE, where θ̂ML ∼ N(θ, 1/I(θ)). Construct a confidence interval as θ̂ML ± Z * sqrt(1/I(θ)) for large n, where Z is the standard-normal quantile for the chosen confidence level; in practice I(θ) is evaluated at θ̂ML.
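A sketch of this interval for the Bernoulli case, assuming Python with NumPy (sample size and true p are illustrative). Here 1/I(p̂) = p̂(1−p̂)/n, so the 95% interval is p̂ ± 1.96·sqrt(p̂(1−p̂)/n).

```python
import numpy as np

# Sketch: 95% confidence interval for Bernoulli p via MLE asymptotics,
# using 1/I(p_hat) = p_hat*(1-p_hat)/n as the variance estimate.
rng = np.random.default_rng(4)
p_true, n = 0.4, 2_000
x = rng.binomial(1, p_true, size=n)

p_hat = x.mean()                    # MLE of p
z = 1.96                            # approx. 97.5% standard-normal quantile
half_width = z * np.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - half_width, p_hat + half_width)
```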
Q: How to determine the variance of the MLE?
A: The variance of the MLE for large n is approximately 1/I(θ), where I(θ) is the Fisher information, estimated using the MLE itself.
Q: How to find the MLE for a Bernoulli random variable?
A: For a sample of Bernoulli variables, the MLE for p is p̂ML = (sum of X_i) / n, where X_i are the observed values.
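The formula in one line, assuming Python with NumPy and a small illustrative sample of 0/1 outcomes:

```python
import numpy as np

# Sketch: the Bernoulli MLE for p is the sample proportion of successes.
x = np.array([1, 0, 1, 1, 0, 1, 0, 1])  # illustrative observed outcomes
p_hat = x.sum() / len(x)                # equivalently x.mean()
```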
Q: How to find the MLE for an Exponential distribution?
A: For a sample of Exponential variables, the MLE for λ is λ̂ML = 1 / X̄ = n / (sum of X_i), where X̄ is the sample mean of the observed values.
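A quick simulation check, assuming Python with NumPy (note that NumPy's exponential generator is parameterized by the scale 1/λ, not the rate):

```python
import numpy as np

# Sketch: the Exponential MLE for the rate lambda is 1 / sample mean.
rng = np.random.default_rng(5)
lam_true = 2.0
x = rng.exponential(scale=1.0 / lam_true, size=100_000)  # scale = 1/lambda

lam_hat = 1.0 / x.mean()
```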
Q: What is an unbiased estimator?
A: An estimator is unbiased if its expected value equals the true parameter value, E[Tn] = θ. It gives correct estimates on average.
Q: What is the method of moments?
A: The method of moments estimates parameters by equating the sample moments (mean, variance, etc.) to their theoretical counterparts. No distributional assumption is required.
Q: What is the least squares method?
A: The least squares method estimates parameters by minimizing the sum of squared differences between observed and estimated values. It is commonly used in regression.
Q: What is the maximum likelihood estimation (MLE)?
A: MLE estimates parameters by maximizing the likelihood function, which represents the probability of the observed data. The log-likelihood function is often used for convenience.
Q: What is Fisher information in the context of MLE?
A: Fisher information measures the amount of information that the sample data provides about the parameter θ. It is used to determine the precision of MLE estimators.
Q: What is the asymptotic behavior of MLE?
A: As the sample size increases, the MLE becomes consistent (converges to the true parameter) and follows a normal distribution with variance 1/I(θ), where I(θ) is the Fisher information.
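A simulation sketch of this asymptotic normality, assuming Python with NumPy and the Bernoulli case, where I(p) = n/(p(1−p)) (sample sizes and parameters are illustrative). Standardizing the repeated MLEs by sqrt(1/I(p)) should give roughly standard-normal values.

```python
import numpy as np

# Sketch: the Bernoulli MLE p_hat is approximately N(p, 1/I(p)) for
# large n, with I(p) = n/(p*(1-p)).
rng = np.random.default_rng(6)
p, n, reps = 0.3, 500, 50_000
p_hats = rng.binomial(n, p, size=reps) / n  # one MLE per replication

se = np.sqrt(p * (1 - p) / n)  # sqrt(1/I(p))
z = (p_hats - p) / se          # should be roughly standard normal
```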
Q: How to use MLE for constructing a confidence interval?
A: To construct a confidence interval, use the asymptotic normality of the MLE. The interval is θ̂ML ± Z * sqrt(1/I(θ)) for large n, where θ̂ML is the MLE.
Q: What is the variance of the MLE?
A: The variance of the MLE is approximately 1/I(θ), where I(θ) is the Fisher information. This measures the uncertainty in the estimation.
Q: What is the MLE for a Bernoulli distribution?
A: For a Bernoulli distribution, the MLE for p is the sample mean p̂ML = (sum of X_i) / n, where X_i are the observed outcomes.
Q: What is the MLE for an Exponential distribution?
A: For an Exponential distribution, the MLE for λ is λ̂ML = 1 / (mean of X), where X are the observed values.
Q: What is the least squares estimator of a mean?
A: The least squares estimator of the mean is the sample mean: minimizing the sum of squared deviations Σ(X_i − c)^2 over c gives c = X̄.
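A brute-force sketch of this fact, assuming Python with NumPy: minimizing R(c) = Σ(X_i − c)² over a grid of candidate values lands on the sample mean, matching the calculus solution dR/dc = −2Σ(X_i − c) = 0.

```python
import numpy as np

# Sketch: the minimizer of sum_i (x_i - c)^2 over c is the sample mean.
x = np.array([2.0, 4.0, 6.0, 8.0])  # illustrative sample

candidates = np.linspace(0, 10, 10_001)
# Residual sum of squares for every candidate value of c.
rss = ((x[:, None] - candidates[None, :]) ** 2).sum(axis=0)
c_star = candidates[np.argmin(rss)]
```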
Q: What is the bias of an estimator?
A: The bias of an estimator is the difference between its expected value and the true parameter value. A bias of zero means the estimator is unbiased.
Q: What is mean square error (MSE)?
A: MSE is a criterion for evaluating an estimator. It combines both variance and squared bias. MSE = Var(Tn) + Bias(Tn)^2, aiming to minimize both components.