Flashcards in Chapter 18 Generalised Linear Modelling Deck (35):
1
What is an explanatory variable?
An input to a model that is expected to influence the response variable,
i.e. a rating factor.
It is important that explanatory variables make intuitive sense.
2
What is a response variable?
The output variable from a model, which is likely to be influenced by the explanatory variables,
e.g. price
3
What is a categorical variable?
Explanatory variables whose values are discrete and distinct, and which often cannot be given any natural ordering or score.
e.g. gender
4
What is a non-categorical variable?
One that can take numerical values, e.g. age.
5
What is an interaction term?
Used where the pattern in the response variable is better modelled by including an extra parameter for each combination of two or more factors.
6
One-way analysis drawbacks
Prior to the use of GLMs, the effect of each rating factor on frequency and severity was considered separately.
This one-way analysis ignores correlations and interaction effects between variables, and so may underestimate or double count the effects of variables.
7
Uses of GLMs
A GLM can be used to model the behaviour of a random variable that is believed to depend on the values of several other characteristics, e.g. age, sex and chronic conditions.
It is a generalisation of the normal model for multiple linear regression.
8
What are the drawbacks for the normal model for multiple linear regression?
it assumes the response variable has a normal distribution
the normal distribution has a constant variance, which may not be appropriate
it adds together the effects of different explanatory variables, which is often not what is observed
it becomes long-winded with more than two explanatory variables.
9
Assumptions of classical linear models
the error terms are independent and come from a normal distribution
the mean is a linear combination of the explanatory variables
the error terms have constant variance (or homoscedasticity)
10
What are the two properties of any member of the exponential family?
the distribution is completely specified in terms of its mean and variance.
the variance is a function of its mean
11
What is the link function?
The link function acts to remove the assumption that the effects of different explanatory variables must simply be added together.
It must be both differentiable and monotonic.
Examples include the log, logit and identity functions.
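As a minimal numpy sketch of the three links named above, each function g maps the mean to the linear-predictor scale and its inverse maps back (the example mean values are invented for illustration):

```python
import numpy as np

# Common GLM link functions g(mu) and their inverses g^{-1}(eta).
# Each is differentiable and monotonic on its domain.
links = {
    "identity": (lambda mu: mu, lambda eta: eta),
    "log":      (np.log, np.exp),
    "logit":    (lambda mu: np.log(mu / (1 - mu)),
                 lambda eta: 1 / (1 + np.exp(-eta))),
}

mu = np.array([0.1, 0.25, 0.5, 0.9])   # example mean values in (0, 1)
for name, (g, g_inv) in links.items():
    eta = g(mu)                         # linear predictor scale
    assert np.allclose(g_inv(eta), mu)  # inverse recovers the mean
```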
12
Steps for obtaining predicted values from a single GLM
Specify the design matrix X and the vector of parameters beta.
Choose a distribution for the response variable and the link function.
Identify the likelihood function.
Take logarithms to convert the product of many terms into a sum.
Maximise the log-likelihood by taking partial derivatives with respect to each parameter and setting them to zero.
Compute the predicted values.
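The steps above can be sketched for a Poisson GLM with a log link, maximising the log-likelihood by Newton-Raphson (equivalently, iteratively reweighted least squares). The simulated data, seed and true coefficients here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: Poisson response with log link, eta = 0.5 + 0.8*x.
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # design matrix
beta_true = np.array([0.5, 0.8])
y = rng.poisson(np.exp(X @ beta_true))

# Maximise the Poisson log-likelihood by Newton-Raphson (IRLS form):
beta = np.zeros(2)
for _ in range(25):
    mu = np.exp(X @ beta)                 # inverse of the log link
    W = mu                                # working weights
    z = X @ beta + (y - mu) / mu          # working response
    beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))

fitted = np.exp(X @ beta)                 # predicted values
```

With the canonical log link and an intercept, the fitted values sum to the observed total, a useful sanity check on convergence.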
13
What techniques are used to analyse significance of explanatory variables?
chi-squared test
the F-statistic (models need to be nested for this to work)
Akaike Information Criterion (AIC), appropriate where models are not nested
other methods
14
Define degrees of freedom
number of observations minus number of parameters
15
AIC formula
AIC = -2 * log-likelihood + 2 * number of parameters
the lower the AIC, the better the model.
for a similar fit, fewer parameters are better (a more parsimonious model).
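A short sketch of the formula in use (the log-likelihoods and parameter counts below are invented): a richer model with a slightly higher log-likelihood does not necessarily win once the parameter penalty is applied.

```python
import numpy as np

def aic(log_likelihood, n_params):
    """AIC = -2 * log-likelihood + 2 * number of parameters."""
    return -2.0 * log_likelihood + 2.0 * n_params

# Hypothetical fits: the larger model fits slightly better but pays
# a bigger parameter penalty, so the smaller model has the lower AIC.
aic_small = aic(log_likelihood=-250.0, n_params=3)   # 506.0
aic_large = aic(log_likelihood=-249.5, n_params=6)   # 511.0
best = min(("small", aic_small), ("large", aic_large), key=lambda t: t[1])
```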
16
Measuring uncertainty in the estimators of the model parameters
The Cramer-Rao lower bound (CRLB) is used.
The maximum likelihood estimator theta-hat is asymptotically distributed N(theta, CRLB).
Standard errors in a GLM are found using the Hessian matrix,
which is the matrix of second derivatives of the log-likelihood.
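A minimal sketch of the Hessian route, with an invented toy design matrix and fitted means: for a Poisson GLM with log link, the Hessian of the log-likelihood at the MLE is -X' diag(mu) X, so the estimated covariance of beta-hat is its negative inverse, and standard errors are the square roots of the diagonal.

```python
import numpy as np

# Toy design matrix (intercept plus one covariate) and fitted means.
X = np.column_stack([np.ones(4), np.array([0.0, 1.0, 2.0, 3.0])])
mu = np.array([1.0, 2.0, 4.0, 8.0])

info = X.T @ (mu[:, None] * X)   # Fisher information = negative Hessian
cov = np.linalg.inv(info)        # CRLB-style covariance estimate
se = np.sqrt(np.diag(cov))       # standard errors of the estimators
```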
17
What other ways can be used to test significance?
Comparisons with time
Consistency checks with other factors
18
Comparisons with time
Analysis of claim frequency by factor and by year will indicate whether claim frequencies have been stable over time.
19
Consistency checks with other factors
time is not the only factor that can be used as a consistency check.
e.g. an explanatory variable like age would be expected to show the same pattern regardless of geographical region.
20
Testing the appropriateness of models
The hat matrix is one of the outputs of the modelfitting process.
It is the matrix H such that yHat = Hy for the normal multiple linear regression model.
The diagonal entries h(i,i) of the matrix are called leverages; each h(i,i) lies in the interval (0,1).
Leverages measure the influence that each observed value has on the fitted value for that observation.
Data points with high leverages or residuals may distort the outcome and accuracy of a model.
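For the normal linear model the hat matrix has a closed form, H = X (X'X)^(-1) X', and its trace equals the number of parameters. A small numpy sketch with invented data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(10), rng.normal(size=10)])  # toy design
y = rng.normal(size=10)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix: y_hat = H y
y_hat = H @ y
leverages = np.diag(H)                 # h(i,i), each in (0, 1)
# trace(H) equals the number of parameters (here 2), and H is a
# projection matrix, so H @ H == H.
```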
21
Deviance residuals
This is a measure of the distance between the actual observation and the fitted value.
The deviance transformation corrects for the skewness of the distribution.
22
Standardised Pearson residuals
A standardised residual is the difference between the observed response and the predicted value, adjusted for the standard deviation of the predicted value and the leverage of the observed response.
These adjustments make it possible to compare Standardised Pearson residuals even where observations have different means.
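A sketch of the computation for the normal model, where the variance function is constant: the raw residual is scaled by its estimated standard deviation, which accounts for the leverage of each observation. The data and coefficients are invented.

```python
import numpy as np

# Standardised residuals for a normal linear model:
# r_i = (y_i - y_hat_i) / (sigma_hat * sqrt(1 - h_ii))
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(20), rng.normal(size=20)])
y = X @ np.array([1.0, 0.5]) + rng.normal(scale=0.3, size=20)

H = X @ np.linalg.inv(X.T @ X) @ X.T
y_hat = H @ y
h = np.diag(H)                                   # leverages
resid = y - y_hat
sigma2 = resid @ resid / (len(y) - X.shape[1])   # residual variance estimate
r_std = resid / np.sqrt(sigma2 * (1 - h))        # comparable across observations
```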
23
Residual Plots
For a particular model, if the distribution chosen for the response variable is appropriate, then the residual chart should show residuals that:
are symmetrical about the x-axis
have an average residual of zero
are fairly constant across the width of the fitted values
24
Cook's distance and leverage
Cook's distance is used to estimate the influence of a data point on the model results.
Data points with a Cook's distance of 1 or more are considered to merit closer examination in the analysis.
As a result of the investigation into any data points with a high Cook's distance, the decision might be made to remove those observations altogether.
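For the normal linear model, Cook's distance can be computed from the leverages and standardised residuals as D_i = (r_i^2 / p) * h_ii / (1 - h_ii), where p is the number of parameters. A sketch with invented data and one deliberately planted outlier at the last position:

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(15), np.arange(15.0)])
y = X @ np.array([2.0, 0.5]) + rng.normal(scale=0.2, size=15)
y[-1] += 8.0                       # planted outlier

p = X.shape[1]
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)
resid = y - H @ y
sigma2 = resid @ resid / (len(y) - p)
r = resid / np.sqrt(sigma2 * (1 - h))     # standardised residuals
cooks_d = r**2 / p * h / (1 - h)          # Cook's distance per observation
# The planted outlier should produce the largest Cook's distance.
```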
25
Models may be refined using (4)?
Interactions
Aliasing
Restrictions
Smoothing
26
Model refinement: Interactions
After choosing a structure for the model and checking that it is appropriate for the factors chosen, the model can be refined further.
Interactions may be complete or marginal.
27
Model refinement: Aliasing
Aliasing occurs when there is a linear dependency among the observed covariates X1, ..., Xp.
That is, one covariate can be expressed as a linear combination of the others,
e.g. X3 = 5 + 2*X1 + 3*X2
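Aliasing shows up numerically as a rank-deficient design matrix, so X'X is singular and the normal equations have no unique solution. A sketch reproducing the dependency above with invented data:

```python
import numpy as np

rng = np.random.default_rng(4)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
x3 = 5 + 2 * x1 + 3 * x2          # exactly the dependency in the example

# With an intercept column, x3 is a linear combination of the other
# columns, so the 4-column design matrix only has rank 3.
X = np.column_stack([np.ones(50), x1, x2, x3])
rank = np.linalg.matrix_rank(X)   # 3, not 4: one column is redundant
```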
28
There are two types of aliasing
Intrinsic
Extrinsic
29
Intrinsic aliasing
Occurs because of dependencies inherent in the definition of the covariates.
Arises mostly when categorical variables are included.
This is dealt with by the modelling software.
30
Extrinsic aliasing
Occurs when two or more factors contain levels that are perfectly correlated.
31
Near aliasing
When modelling in practice, a common problem occurs when two or more factors contain levels that are almost, but not quite, perfectly correlated.
In order to understand problems where the model suggests very large negative/positive parameters, two-way tables of exposure and claim counts can be examined.
From these it should be possible to identify the combinations that cause near aliasing.
The issue can then be resolved by either deleting or excluding those rogue records, or reclassifying the rogue records into another, more appropriate, factor.
32
Parameter smoothing
A GLM can be improved by smoothing the parameter values. This can be achieved by grouping the levels of factors.
The granularity of the data can be kept in modelling, since software can use this granularity and the observed patterns to better group the different levels of variables.
33
Factors can be simplified by?
Grouping and summarising data prior to loading. This requires knowledge of expected patterns.
Grouping in the modelling package. eg grouping age into two age bands.
34
Restrictions when pricing: GLM restrictions
Legal or commercial considerations may impose rigid restrictions on the way particular factors are used in practice.
e.g. legal: restricting the use of age and gender in pricing for a medical scheme.
When the use of certain factors is restricted, the model will be able to compensate to an extent for this artificial restriction by adjusting the fitted relativities for correlated factors.
This is achieved using an offset term.