Chapter 3 Flashcards

Statistics 1 > Chapter 3 > Flashcards

Flashcards in Chapter 3 Deck (14):

define the likelihood function L(θ) for a single observation

For a given observation x, the likelihood function is the map θ ↦ L(θ; x) = p(x; θ) for x discrete, or f(x; θ) for x continuous, regarded as a function of θ


define the maximum likelihood estimate of theta

the mle is the value of θ that maximises the likelihood function L(θ; x)


for a single observation how do we find theta hat mle

for x = x1 we have L(θ; x1) = p(x1; θ) (discrete) or f(x1; θ) (continuous). We then maximise L(θ; x1) by differentiating with respect to θ and setting the derivative equal to 0. Remember the constraint θ ∈ (0, 1) (more generally, θ must lie in the parameter space). Then show it is a maximum, e.g. via the second derivative
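As a worked sketch of this recipe (using a single Geometric(θ) observation as my choice of example, with θ ∈ (0, 1)):

```latex
\begin{align*}
L(\theta; x) &= (1-\theta)^{x-1}\,\theta, \qquad \theta \in (0,1),\\
\log L(\theta; x) &= (x-1)\log(1-\theta) + \log\theta,\\
\frac{\partial}{\partial\theta}\log L &= -\frac{x-1}{1-\theta} + \frac{1}{\theta} = 0
  \quad\Longrightarrow\quad \hat{\theta}_{\mathrm{mle}} = \frac{1}{x},\\
\frac{\partial^2}{\partial\theta^2}\log L &= -\frac{x-1}{(1-\theta)^2} - \frac{1}{\theta^2} < 0,
\end{align*}
```

so the stationary point is indeed a maximum.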


for multiple independent observations how do we find theta hat mle

set L(θ) = L(θ; x1, x2, . . . , xn) = p_{X1,X2,...,Xn}(x1, x2, . . . , xn; θ) = p_{X1}(x1; θ) p_{X2}(x2; θ) · · · p_{Xn}(xn; θ) by independence,
then differentiate, set equal to 0 and solve for θ
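The product-of-marginals step can be checked numerically; a minimal sketch, assuming i.i.d. Bernoulli(θ) toy data (so p(x; θ) = θ^x (1−θ)^(1−x)) and a grid search standing in for the calculus:

```python
# Minimal sketch: i.i.d. Bernoulli(theta) toy data (an assumption), with a
# grid search standing in for differentiating the product likelihood.
x = [1, 0, 1, 1, 0, 1, 1, 0]            # toy observations

def likelihood(theta):
    # product of the marginal pmfs p(xi; theta) = theta^xi * (1-theta)^(1-xi)
    L = 1.0
    for xi in x:
        L *= theta ** xi * (1 - theta) ** (1 - xi)
    return L

grid = [k / 10000 for k in range(1, 10000)]  # candidate theta values in (0, 1)
theta_hat = max(grid, key=likelihood)
print(theta_hat, sum(x) / len(x))            # grid argmax vs analytic x-bar
```

The grid argmax agrees with the analytic answer θhat = x̄.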


define likelihood function for multiple observations

Assume the data x1, . . . , xn are the observed values of random variables X1, . . . , Xn whose joint distribution depends on one or more
unknown parameters θ. The likelihood function L(θ) ≡ L(θ; x1, x2, . . . , xn) is
the joint probability mass function (discrete case) or joint probability density
function (continuous case) regarded as a function of the unknown parameter θ
for these fixed numerical values of x1, x2, . . . , xn.


define mle for multiple observations

For observed values {x1, . . . , xn}, the maximum likelihood estimate
(mle) θhat mle(x1, . . . , xn) is the value of θ which maximises the likelihood
function L(θ; x1, . . . , xn).


define log-likelihood function

For observed values {x1, . . . , xn} and associated likelihood
function L(θ) ≡ L(θ; x1, x2, . . . , xn), the log-likelihood function is defined as
l(θ) := log L(θ), where log is the natural logarithm (and we take log 0 = −∞)
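Because log is strictly increasing, l(θ) and L(θ) are maximised at the same θ, which is what makes the log-likelihood convenient. A minimal numerical sketch with toy Bernoulli data (an assumption):

```python
import math

# Minimal sketch: log is strictly increasing, so l(theta) = log L(theta) and
# L(theta) share the same maximiser. Toy Bernoulli data (an assumption).
x = [1, 1, 0, 1, 0]

def L(theta):
    out = 1.0
    for xi in x:
        out *= theta ** xi * (1 - theta) ** (1 - xi)
    return out

def l(theta):
    # the product above becomes a sum of logs
    return sum(xi * math.log(theta) + (1 - xi) * math.log(1 - theta) for xi in x)

grid = [k / 1000 for k in range(1, 1000)]
theta_L = max(grid, key=L)
theta_l = max(grid, key=l)
print(theta_L, theta_l)                      # same argmax for both
```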


likelihood function for simple random sample

If X1, X2, . . . , Xn, is a random sample of size n from a distribution
with probability mass function p (x; θ) (or probability density function
f(x; θ)) then the Xi are i.i.d. and their joint distribution factorises into the
product of marginals. Thus for a random sample
L(θ) ≡ L(θ; x1, x2, . . . , xn) =
p(x1; θ) p(x2; θ) · · · p(xn; θ) (discrete scenario)
f(x1; θ) f(x2; θ) · · · f(xn; θ) (continuous scenario)


for observations taken from a simple random sample what does the log-likelihood function equal

l(θ) = sum_{i=1}^{n} log p(xi; θ) (discrete scenario)

l(θ) = sum_{i=1}^{n} log f(xi; θ) (continuous scenario)


what is the likelihood equation

∂/∂θ l(θ) = sum_{i=1}^{n} ∂/∂θ log f(xi; θ) = 0


procedure of calculating theta hat mle in the random sample case

1. Calculate ∂/∂θ log f(x; θ)
2. Compute the sum sum_{i=1}^{n} ∂/∂θ log f(xi; θ)
3. Set the sum equal to 0; θhat mle is the value satisfying the likelihood equation
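A minimal sketch of the three steps, taking f(x; θ) = θ e^(−θx) (an Exponential(θ) model, my choice of example):

```python
# Minimal sketch of the three-step procedure for f(x; theta) = theta*exp(-theta*x)
# (an Exponential(theta) model, chosen as an example):
#   1. d/dtheta log f(x; theta) = 1/theta - x
#   2. summing over i gives     n/theta - sum(x_i)
#   3. setting the sum to 0:    theta_hat = n / sum(x_i) = 1 / x_bar
x = [0.8, 2.1, 0.5, 1.6, 1.0]           # toy data (an assumption)
n = len(x)
theta_hat = n / sum(x)                   # solves the likelihood equation

def score(theta):
    # left-hand side of the likelihood equation
    return sum(1 / theta - xi for xi in x)

print(theta_hat, score(theta_hat))       # score is (numerically) zero at theta_hat
```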


the invariance property of the mle

if the quantity of interest is a function t(θ) of θ, the mle of t(θ) is the plug-in estimate t(θ)hat = t(θhat mle)
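A minimal sketch of the plug-in idea, taking t(θ) = θ(1 − θ) (the Bernoulli variance) as an assumed example:

```python
# Minimal sketch of the invariance/plug-in property: with Bernoulli data and
# t(theta) = theta * (1 - theta) (the variance; an assumed example), the mle
# of t(theta) is t evaluated at theta_hat.
x = [1, 0, 1, 1, 0, 1]                  # toy data (an assumption)
theta_hat = sum(x) / len(x)             # mle of theta is the sample mean
t_hat = theta_hat * (1 - theta_hat)     # plug-in mle of t(theta)
print(theta_hat, t_hat)
```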


procedure of calculating mle for multiple parameters alpha and beta

for two parameters α and β, αhat mle and βhat mle are the simultaneous solutions of the two likelihood equations
0 = sum_{i=1}^{n} ∂/∂α log f(xi; α, β)
0 = sum_{i=1}^{n} ∂/∂β log f(xi; α, β).
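A minimal sketch with an assumed Normal(μ, σ²) sample, whose two likelihood equations solve simultaneously to μhat = x̄ and σ²hat = (1/n) Σ (xi − x̄)²:

```python
# Minimal sketch: for an assumed Normal(mu, sigma^2) random sample the two
# likelihood equations solve simultaneously to
#   mu_hat     = x_bar
#   sigma2_hat = (1/n) * sum((xi - x_bar)^2)   (denominator n, not n-1)
x = [2.0, 3.5, 1.0, 4.5, 3.0]           # toy data (an assumption)
n = len(x)
mu_hat = sum(x) / n
sigma2_hat = sum((xi - mu_hat) ** 2 for xi in x) / n

# both likelihood equations are (numerically) zero at the joint solution
eq_mu = sum((xi - mu_hat) / sigma2_hat for xi in x)
eq_sigma2 = sum(-0.5 / sigma2_hat + (xi - mu_hat) ** 2 / (2 * sigma2_hat ** 2)
                for xi in x)
print(mu_hat, sigma2_hat, eq_mu, eq_sigma2)
```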


if the density is not regular how to work out mle

the likelihood may be maximised at an endpoint of the parameter interval, so find L(θ) and, rather than setting the derivative to zero, check whether L(θ) is an increasing or decreasing function of θ.
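A minimal sketch of the endpoint case, taking Uniform(0, θ) (a standard non-regular example): L(θ) = θ^(−n) for θ ≥ max xi and 0 otherwise, which is decreasing where positive, so the maximum sits at the endpoint:

```python
# Minimal sketch of a non-regular case: Uniform(0, theta), a standard example.
# L(theta) = theta**(-n) for theta >= max(x), else 0, so L is strictly
# decreasing where it is positive and is maximised at the endpoint max(x).
x = [0.9, 2.3, 1.7, 0.4]                # toy data (an assumption)
n = len(x)

def L(theta):
    return theta ** (-n) if theta >= max(x) else 0.0

grid = [max(x) + k / 1000 for k in range(0, 2000)]   # scan theta >= max(x)
theta_hat = max(grid, key=L)
print(theta_hat, max(x))                # endpoint maximiser equals max(x)
```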