Taylor and McGuire Flashcards

1
Q

Mean and variance of exponential dispersion family (EDF) of distributions

A

mean = mu

variance = dispersion parameter * variance function ( a function of the mean )

2
Q

Variance function for the Tweedie sub-family of EDFs

A

variance function: V( mu ) = mu ^ p, with p >= 1

in words: variance is proportional to a power of the mean
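As a quick sanity check (my own illustration, numpy assumed), two familiar Tweedie members show the power relationship: Poisson has p = 1 and dispersion 1, gamma has p = 2 and dispersion 1/shape.

```python
# Simulation check of variance = dispersion * mu^p for two Tweedie members.
import numpy as np

rng = np.random.default_rng(0)
mu = 5.0

# Poisson: p = 1, dispersion = 1, so the variance should be ~ mu
pois = rng.poisson(mu, size=200_000)
print(pois.var(), mu)

# gamma with shape k: p = 2, dispersion = 1/k, so the variance should be ~ mu^2 / k
k = 4.0
gam = rng.gamma(k, scale=mu / k, size=200_000)
print(gam.var(), mu ** 2 / k)
```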

3
Q

Relationship between p for the Tweedie distribution and tail heaviness

A

tail heaviness increases as p increases

4
Q

Mean and variance of a Tweedie distribution

A

mean = mu = [ ( 1 - p ) * theta ] ^ ( 1 / ( 1 - p ) )

where theta = location parameter

variance = dispersion parameter * mu ^ p
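A quick numeric check of the formula (illustrative parameter values of my own choosing): for p = 2, the gamma case, it reduces to mu = -1/theta.

```python
# Evaluate the Tweedie mean and variance formulas for sample parameter values.
p, theta, phi = 2.0, -0.25, 0.5            # index, location (canonical) parameter, dispersion
mu = ((1 - p) * theta) ** (1 / (1 - p))    # = (-theta)^(-1) = 4.0 when p = 2
var = phi * mu ** p                        # = 0.5 * 16 = 8.0
print(mu, var)
```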

5
Q

General GLM format (in matrix notation)

Taylor and McGuire

A

link function applied to the mean = transposed covariate vector * beta vector, i.e. h( mu-sub i ) = x-sub i-transpose * beta

where the betas are the unknown parameters, x-sub i-transpose * beta is the linear response, and the link function transforms the mean of each observation into a linear function of the parameters (betas)
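A minimal numpy sketch of the matrix form (my own toy numbers, with a log link assumed): the design matrix times the parameter vector gives the linear response, and inverting the link recovers the means.

```python
# h(mu) = X * beta in matrix form, here with a log link.
import numpy as np

X = np.array([[1.0, 0.0],          # design matrix: row i holds the covariates x_i^T
              [1.0, 1.0],
              [1.0, 2.0]])
beta = np.array([0.5, 0.3])        # parameter vector (estimated in practice, fixed here)

eta = X @ beta                     # linear response x_i^T * beta for each observation
mu = np.exp(eta)                   # invert the log link: mu_i = exp(x_i^T * beta)
print(mu)
```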

6
Q

Conditions for the structure of a GLM (3)

Taylor and McGuire

A
  1. each observation is a member of the EDF
  2. h(mu-sub i) = x-sub i-transpose * beta
  3. observations are stochastically independent
7
Q

GLM version of a standard linear regression

A

identity link: mean mu-sub i = x-sub i-transpose * beta ( = sumproduct ( x, beta ) ), with normally distributed errors
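A toy check (simulated data of my own; numpy and statsmodels assumed) that ordinary least squares and a normal-error, identity-link GLM return the same coefficients.

```python
# OLS vs Gaussian GLM with identity link on the same data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 30)
y = 1.0 + 2.0 * x + rng.normal(scale=0.1, size=x.size)
X = sm.add_constant(x)                                     # intercept + covariate

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]            # standard linear regression
beta_glm = sm.GLM(y, X, family=sm.families.Gaussian()).fit().params
print(beta_ols, beta_glm)                                  # essentially identical
```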

8
Q

Underlying assumptions of a standard linear regression (3)

A
  1. errors are normally distributed
  2. errors have constant variance
  3. linear relationship
9
Q

Difference b/w weighted linear regression and standard linear regression

A

weighted linear regression recognizes errors might have unequal variances

10
Q

Model generalizations to get from a linear regression to a GLM (2)

A
  1. non-linear relationship
  2. non-normal errors

11
Q

Common estimation method for GLM parameters

A

MLE

12
Q

Requirements for selection of a GLM and purpose of each (4)

A

selection of:

  1. cumulant function (controls the shape of the distribution)
  2. index, p (controls relationship b/w mean and variance in an EDF)
  3. covariates (x’s = explanatory variables)
  4. link function (controls relationship b/w mean and covariates)
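For concreteness, a hedged sketch of how the four choices show up in a statsmodels fit (the data are simulated positive responses of my own; the Tweedie family and the Log link class are assumed to be available in the installed statsmodels version).

```python
# The four GLM selections made explicit in one fit (estimated by MLE / IRLS).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = rng.gamma(shape=2.0, scale=np.exp(1.0 + 0.5 * x) / 2.0)   # invented positive responses

exog = sm.add_constant(x)                   # 3. covariates (explanatory variables)
family = sm.families.Tweedie(               # 1. cumulant function (distribution family)
    var_power=1.5,                          # 2. index p (mean-variance relationship)
    link=sm.families.links.Log(),           # 4. link function (mean vs covariates)
)
print(sm.GLM(y, exog, family=family).fit().params)
```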
13
Q

Measure of model goodness of fit

Taylor and McGuire

A

deviance

> > smaller = better

14
Q

Deviance formula (unscaled)

A

deviance = 2 * sum ( log-likelihood (perfect model ) - log-likelihood ( actual model ) )

15
Q

Scale parameter calculated from deviance and corresponding distribution
(Taylor and McGuire)

A

scale parameter = deviance / ( n - p )

where n = number of observations and p = number of parameters

> > Chi-square distribution w/ ( n - p ) df
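A toy Poisson illustration (numbers of my own) of the previous card's deviance formula and of the scale parameter estimate above; for the Poisson case each observation contributes 2 * [ y ln( y / mu ) - ( y - mu ) ] to the deviance.

```python
# Unscaled deviance and estimated scale parameter for a toy Poisson fit.
import numpy as np

y = np.array([3.0, 7.0, 4.0, 9.0, 5.0])     # observations
mu = np.array([4.0, 6.0, 5.0, 8.0, 5.5])    # fitted means from some model (assumed)
n_params = 2                                # number of parameters in that model (assumed)

deviance = 2.0 * np.sum(y * np.log(y / mu) - (y - mu))   # all y > 0 here
scale = deviance / (len(y) - n_params)                   # deviance / ( n - p )
print(deviance, scale)
```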

16
Q

Standardized Pearson Residuals (Taylor and McGuire)

A

= raw residual / std. dev. ( observation )

  • unbiased and homoscedastic
17
Q

Problem with standardized Pearson residuals

Taylor and McGuire

A

reproduces any non-normality from the observations

18
Q

Best residual to use for model assessment and why

Taylor and McGuire

A

deviance residuals

why: they correct for non-normality in the data, so they should look approximately normal when the model fits well

19
Q

Deviance residual

A

= sgn ( actual - fitted ) * ( d-sub i / scale parameter ) ^ .5

where sgn function = -1 if negative, 0 if 0, and 1 if positive
and d-sub i is the contribution of the i-th observation to the unscaled deviance
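Continuing the toy Poisson example (my own numbers), the standardized Pearson and deviance residuals of the preceding cards can be computed directly; here d-sub i = 2 * [ y ln( y / mu ) - ( y - mu ) ] and the variance of an observation is scale parameter * mu.

```python
# Standardized Pearson and deviance residuals for a toy Poisson fit.
import numpy as np

y = np.array([3.0, 7.0, 4.0, 9.0, 5.0])
mu = np.array([4.0, 6.0, 5.0, 8.0, 5.5])
phi = 1.1                                            # scale parameter (assumed known)

pearson = (y - mu) / np.sqrt(phi * mu)               # raw residual / std. dev. of observation
d_i = 2.0 * (y * np.log(y / mu) - (y - mu))          # contribution to the unscaled deviance
deviance_resid = np.sign(y - mu) * np.sqrt(d_i / phi)
print(pearson)
print(deviance_resid)
```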

20
Q

Types of stochastic models (4)

Taylor and McGuire

A
  1. non-parametric Mack model
  2. parametric Mack models
  3. cross-classified models
  4. GLM representations of CL models
21
Q

Results of non-parametric Mack model (2)

A
  1. estimators of CL age-to-age factors are MVUEs among estimators that are unbiased linear combinations of the observed age-to-age factors
  2. unbiased CL reserve estimates
22
Q

Special cases of parametric Mack models (2)

A
  1. ODP Mack model
  2. Tweedie Mack model

23
Q

Assumption required to turn a non-parametric Mack model into a parametric one

A

require that incremental observations (given claims to date) come from the EDF distributions

24
Q

Theorem 3.1 from Taylor and McGuire (3 MVUE results for parametric models)

A

under EDF and general Mack assumptions:

  1. MLEs are the usual CL estimators (and are unbiased)
  2. for special case of ODP Mack model and if dispersion parameters are just column dependent, then CL estimators are MVUEs
  3. cumulative loss and reserve estimates are also MVUEs

> > CL estimators = age-to-age factors

25
Q

Reason Taylor and McGuire’s parametric MVUE results are stronger than regular Mack results

A

minimum variance among all unbiased estimators, not just among unbiased linear combinations (as in Mack's non-parametric result)

26
Q

EDF cross-classified model assumptions (2)

A
  1. the response variables (incremental losses) are stochastically independent
  2. each observation is EDF-distributed with mean = row parameter * column parameter, where the column parameters sum to 1

27
Q

Theorem 3.2 from Taylor and McGuire (EDF and ODP cross-classified results)

A

under the EDF cross-classified assumptions, restricted to an ODP distribution with constant dispersion parameter, the MLE fitted values and forecasts are the same as the usual CL method

28
Q

Theorem 3.3 from Taylor and McGuire (MVUE for cross-classified models)

A

if theorem 3.2 applies and the fitted and forecasted values are corrected for bias, then they are MVUEs

29
Q

Alpha and beta parameter calculations for non-GLM version of ODP cross-classified model, order of calculations, and forecasted incremental losses

A

order: alternate alpha (increasing row order) and beta (decreasing column order)

alpha ( 1 ) = cumulative losses to date for the first (oldest) accident row

all other alphas = that row's cumulative losses to date / ( 1 - sum of already-calculated betas )

each beta = sum ( incremental losses in the column ) / sum ( already-calculated alphas for the rows with data in that column )

forecasted incremental losses = alpha * beta for the given row and column
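A minimal sketch of this recursion (my own 3x3 incremental triangle, numpy assumed, NaN marking the unobserved cells).

```python
# Recursive (non-GLM) parameter estimation for the ODP cross-classified model.
import numpy as np

X = np.array([                      # incremental losses; NaN = future cell
    [100.0,  60.0, 20.0],
    [110.0,  70.0, np.nan],
    [120.0, np.nan, np.nan],
])
K = X.shape[0]                      # square triangle: K rows, K development columns

alpha = np.zeros(K)
beta = np.zeros(K)
for m in range(K):                  # step m: alpha for row m, beta for column K-1-m
    row, col = m, K - 1 - m
    cum_to_date = np.nansum(X[row, :])                         # that row's cumulative losses to date
    alpha[row] = cum_to_date / (1.0 - beta[col + 1:].sum())    # alpha(1) = cum. loss, since no betas yet
    beta[col] = np.nansum(X[:m + 1, col]) / alpha[:m + 1].sum()

forecast = np.outer(alpha, beta)                    # alpha_k * beta_j for every cell
print(alpha, beta, beta.sum())                      # betas sum to 1
print(np.where(np.isnan(X), forecast, np.nan))      # forecasts of the future cells only
```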

30
Q

Difference b/w ODP Mack model and ODP cross-classified model under GLM representations of CL models

A

the ODP Mack model models the observed link ratios (age-to-age factors)

the ODP cross-classified model models the incremental losses

31
Q

Matrix notation for GLM representation of ODP Mack model

A

identity link: E[ Y ] = X * beta

where Y = vector of all observed age-to-age factors
X = design matrix of 0s and 1s (each row is a unit vector with a 1 in the position of that observation's development period)
beta = vector of development-period parameters, whose MLEs are the volume-weighted age-to-age factors
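A small numpy check (the cumulative version of the toy incremental triangle from the card-29 sketch above): the fitted development-period parameters are the volume-weighted averages of the observed age-to-age factors, weighted by the prior cumulative losses.

```python
# Volume-weighted age-to-age factors from a toy cumulative triangle.
import numpy as np

C = np.array([                     # cumulative losses; NaN = not yet observed
    [100.0, 160.0, 180.0],
    [110.0, 180.0, np.nan],
    [120.0, np.nan, np.nan],
])

for j in range(C.shape[1] - 1):
    obs = ~np.isnan(C[:, j + 1])                  # rows with both development periods observed
    F = C[obs, j + 1] / C[obs, j]                 # observed age-to-age factors (the responses)
    f_hat = np.average(F, weights=C[obs, j])      # volume-weighted factor for period j -> j+1
    print(j + 1, f_hat)                           # 1.619..., 1.125 for this triangle
```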

32
Q

Matrix notation for GLM representation of ODP cross-classified model

A

log link: ln( E[ Y ] ) = X * beta

where Y = vector of all observed incremental losses
X = design matrix of 0s and 1s (each row has a 1 for the observation's accident-row parameter and a 1 for its development-column parameter)
beta = ln of all alpha and beta parameters (single column, in order)

33
Q

Problem with GLM representation of ODP cross-classified model and consequence

A

can lead to parameter redundancy (i.e., the model is over-specified and some parameters are aliased)

> > if parameters are aliased, the individual parameter estimates will not match the non-GLM version unless the alpha and beta parameters are rescaled (the fitted values are unaffected)

34
Q

Forecast design matrix (GLM ODP cross-classified model)

A

same structure as the design matrix used to fit the GLM, but with rows for the future (unobserved) cells only, so that applying it to the fitted parameters produces the forecasts of future incremental losses
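Tying cards 31-34 together, a hedged sketch (numpy and statsmodels assumed; same toy incremental triangle as in the card-29 sketch): build a full-rank design matrix for the cross-classified model by absorbing the first accident row into the column parameters (one way around the aliasing in card 33), fit a Poisson/log-link GLM, and apply a forecast design matrix for the future cells. The forecasts should reproduce the alpha * beta values from the recursive calculation.

```python
# GLM representation of the ODP cross-classified model plus forecast design matrix.
import numpy as np
import statsmodels.api as sm

tri = np.array([                              # incremental losses; NaN = future cell
    [100.0,  60.0, 20.0],
    [110.0,  70.0, np.nan],
    [120.0, np.nan, np.nan],
])
K = tri.shape[0]

def design_row(k, j):
    """0/1 design row: accident-row dummy (row 0 absorbed) plus development-column dummy."""
    r = np.zeros((K - 1) + K)
    if k > 0:
        r[k - 1] = 1.0
    r[(K - 1) + j] = 1.0
    return r

obs = [(k, j) for k in range(K) for j in range(K) if not np.isnan(tri[k, j])]
y = np.array([tri[k, j] for k, j in obs])
D = np.array([design_row(k, j) for k, j in obs])

fit = sm.GLM(y, D, family=sm.families.Poisson()).fit()   # ODP fit (constant dispersion)

future = [(k, j) for k in range(K) for j in range(K) if np.isnan(tri[k, j])]
D_fc = np.array([design_row(k, j) for k, j in future])   # forecast design matrix
print(dict(zip(future, np.exp(D_fc @ fit.params))))      # forecast incremental losses
```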