Clark Flashcards

1
Q

Loglogistic G(x), where G(x) = 1/LDF_x

A

G(x | ω, θ) = x^ω / (x^ω + θ^ω)

2
Q

Loglogistic LDF_x

A

1 + θ^ω * x^(-ω)

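A minimal Python sketch of these two loglogistic formulas; the function names and parameter values are illustrative only, not from the source:

```python
def loglogistic_G(x, omega, theta):
    """Loglogistic growth function: G(x) = x^omega / (x^omega + theta^omega)."""
    return x**omega / (x**omega + theta**omega)

def loglogistic_LDF(x, omega, theta):
    """Age-to-ultimate factor implied by G(x): LDF_x = 1/G(x) = 1 + theta^omega * x^(-omega)."""
    return 1.0 + theta**omega * x**(-omega)

# Illustrative parameters (hypothetical, not fitted to any data)
omega, theta = 1.5, 20.0
for age in (12, 24, 36, 48):
    print(age, round(loglogistic_G(age, omega, theta), 4),
          round(loglogistic_LDF(age, omega, theta), 4))
```
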
3
Q

Weibull G(x)

A

G(x | ω, θ) = 1 - exp(-(x/θ)^ω)

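A matching Python sketch of the Weibull curve, again with illustrative parameters only:

```python
import math

def weibull_G(x, omega, theta):
    """Weibull growth function: G(x) = 1 - exp(-(x/theta)^omega)."""
    return 1.0 - math.exp(-(x / theta)**omega)

# With the same illustrative omega and theta as the loglogistic sketch above,
# the Weibull reaches 100% reported faster, i.e. it implies a thinner tail.
print(round(weibull_G(60, omega=1.5, theta=20.0), 4))
```
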
4
Q

Advantages of using parameterized curves to describe the emergence pattern

A
  • Only have to estimate two parameters
  • Can use data that is not from a triangle with evenly spaced evaluation dates
  • The final pattern is smooth and does not follow random movements in the historical age-to-age factors
5
Q

μ_{AY;x,y} (expected incremental loss dollars in accident year AY between ages x and y)

LDF Method

A

= ULT_AY * [G(y | ω, θ) - G(x | ω, θ)]

6
Q

μ_{AY;x,y} Cape Cod Method

A

= Premium_AY * ELR * [G(y | ω, θ) - G(x | ω, θ)]

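A small Python sketch of both expected-emergence formulas (LDF and Cape Cod), assuming a fitted growth curve G; the curve, premiums, ultimates, and ELR below are hypothetical:

```python
def mu_ldf(ult_ay, G, x, y):
    """LDF method: expected incremental loss between ages x and y
    is ULT_AY * [G(y) - G(x)]."""
    return ult_ay * (G(y) - G(x))

def mu_cape_cod(premium_ay, elr, G, x, y):
    """Cape Cod method: expected incremental loss between ages x and y
    is Premium_AY * ELR * [G(y) - G(x)]."""
    return premium_ay * elr * (G(y) - G(x))

# Hypothetical fitted loglogistic curve and inputs, for illustration only
G = lambda age: age**1.5 / (age**1.5 + 20.0**1.5)
print(mu_ldf(1_000_000, G, 12, 24))             # LDF method, one accident year
print(mu_cape_cod(2_000_000, 0.60, G, 12, 24))  # Cape Cod method, one accident year
```
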
7
Q

Reasons Cape Cod Method is preferred

A
  • Data is summarized into a loss triangle with relatively few data points. Because the LDF method requires estimating a large number of parameters (one for each AY ultimate loss, as well as θ and ω), it tends to be over-parameterized when few data points exist
  • The Cape Cod method has a smaller parameter variance. Its process variance can be higher or lower than the LDF method's, but in general it produces a lower total variance than the LDF method.
8
Q

Variance/Mean scale parameter (σ²)

A

σ² = 1/(n - p) * Σ_{AY,t} [(c_{AY,t} - μ_{AY,t})² / μ_{AY,t}]

where n = # of data points

p = # of parameters

c_{AY,t} = actual incremental loss emergence

μ_{AY,t} = expected incremental loss emergence

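A Python sketch of the σ² calculation over all incremental cells; the cell values and parameter count below are made up for illustration:

```python
def variance_to_mean(actual, expected, p):
    """sigma^2 = 1/(n - p) * sum[(c - mu)^2 / mu] over all incremental cells,
    where n is the number of cells and p the number of fitted parameters."""
    n = len(actual)
    return sum((c - mu)**2 / mu for c, mu in zip(actual, expected)) / (n - p)

# Illustrative: 6 incremental cells, 3 parameters (e.g. ELR, omega, theta under Cape Cod)
actual   = [100.0, 80.0, 55.0, 120.0, 90.0, 130.0]
expected = [ 95.0, 85.0, 50.0, 110.0, 95.0, 125.0]
print(variance_to_mean(actual, expected, p=3))
```
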
9
Q

over-dispersed Poisson mean and variance

A

E[c] = λσ² = μ

Var(c) = λσ⁴ = μσ²

where the incremental loss is c = σ² * x, with x ~ Poisson(λ)

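A quick simulation sketch of this relationship (uses numpy); μ and σ² below are chosen purely for illustration:

```python
import numpy as np

# Over-dispersed Poisson: c = sigma^2 * x with x ~ Poisson(lambda), so
# E[c] = lambda * sigma^2 = mu and Var(c) = lambda * sigma^4 = mu * sigma^2.
mu, sigma2 = 500.0, 40.0
rng = np.random.default_rng(0)
c = sigma2 * rng.poisson(mu / sigma2, size=200_000)
print(c.mean())  # close to mu = 500
print(c.var())   # close to mu * sigma2 = 20,000
```
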
10
Q

Key advantages of over-dispersed Poisson distribution

A
  1. Inclusion of scaling factors allows us to match the first and second moments of any distribution, allowing high flexibility
  2. MLE estimation produces the LDF and Cape Cod estimates of ultimate losses, so the results can be presented in a familiar format
11
Q

log-likelihood, ℓ, of the over-dispersed Poisson

A

= Σ_i [c_i * ln(μ_i) - μ_i]

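A Python sketch of the log-likelihood sum (constants dropped), with made-up actual and expected cells; the fitted parameters (ULT_i or ELR, plus ω and θ) are the values that maximize this quantity:

```python
import math

def odp_loglikelihood(actual, expected):
    """Over-dispersed Poisson log-likelihood, up to constants:
    l = sum_i [c_i * ln(mu_i) - mu_i]."""
    return sum(c * math.log(mu) - mu for c, mu in zip(actual, expected))

# Illustrative cells only; the expected values mu_i come from the growth curve
# and either the LDF or Cape Cod emergence formula.
print(odp_loglikelihood([100.0, 80.0, 55.0], [95.0, 85.0, 50.0]))
```
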
12
Q

MLE estimate for ULT_i

A

Σ_t c_{i,t} / Σ_t [G(x_t) - G(x_{t-1})]

The estimate for each ULT_i is equivalent to the LDF Ultimate

13
Q

MLE estimate for ELR

A

Σ_{i,t} c_{i,t} / Σ_{i,t} P_i * [G(x_t) - G(x_{t-1})]

equivalent to the Cape Cod Ultimate

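A Python sketch of both closed-form MLE estimates given a fitted growth curve G; the function names, triangles, ages, and premiums below are hypothetical:

```python
def mle_ult(incrementals, ages, G):
    """LDF-method ultimate for one accident year:
    ULT_i = sum_t c_{i,t} / sum_t [G(x_t) - G(x_{t-1})],
    where incrementals[t] emerged between ages[t] and ages[t+1]."""
    pct_emerged = sum(G(ages[t + 1]) - G(ages[t]) for t in range(len(incrementals)))
    return sum(incrementals) / pct_emerged

def mle_elr(incrementals_by_ay, ages_by_ay, premiums, G):
    """Cape Cod expected loss ratio:
    ELR = sum_{i,t} c_{i,t} / sum_{i,t} P_i * [G(x_t) - G(x_{t-1})]."""
    num = sum(sum(cs) for cs in incrementals_by_ay)
    den = sum(p * sum(G(ages[t + 1]) - G(ages[t]) for t in range(len(cs)))
              for cs, ages, p in zip(incrementals_by_ay, ages_by_ay, premiums))
    return num / den

# Hypothetical two-accident-year example with an illustrative loglogistic curve
G = lambda x: x**1.5 / (x**1.5 + 20.0**1.5)
print(mle_ult([400.0, 250.0], ages=[0, 12, 24], G=G))          # LDF ultimate, one AY
print(mle_elr([[400.0, 250.0], [450.0]], [[0, 12, 24], [0, 12]],
              [2000.0, 2100.0], G))                            # Cape Cod ELR
```
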
14
Q

advantage of MLE function

A

works in the presence of negative or zero incremental losses

15
Q

Process Variance of R

A

σ² * Σ μ_{AY;x,y}

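A one-line Python sketch: σ² scales the total expected future emergence; the σ² and emergence values below are hypothetical:

```python
def process_variance(sigma2, future_mus):
    """Process variance of the reserve R: sigma^2 * sum of expected
    future incremental emergence mu_{AY;x,y} across accident years."""
    return sigma2 * sum(future_mus)

# Illustrative sigma^2 and expected future emergence by accident year
print(process_variance(40.0, [300.0, 550.0, 900.0]))  # process std dev = square root of this
```
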
16
Q

Key Assumptions of Clark Model

A
  1. Incremental losses are independent and identically distributed
  2. The variance/mean scale parameter σ² is fixed and known
  3. Variance estimates are based on an approximation to the Rao-Cramer lower bound