Flashcards in Chapter 18 Generalised Linear Modelling Deck (35):
What is an explanatory variable?
-input into a model that is expected to influence the response variable.
-e.g. a rating factor
-it is important that explanatory variables make intuitive sense.
What is a response variable?
-the output variable from a model, which is likely to be influenced by the explanatory variables.
What is a categorical variable?
-These are explanatory variables that take discrete, distinct values (categories), often with no natural ordering.
What is a non-categorical variable?
-one that can take numerical values, e.g. age.
What is an interaction term?
-Used where the pattern in the response variable is better modelled by including an extra parameter for each combination of two or more factors.
One-way analysis merits
-Prior to the use of GLMs, the effect of each rating factor on frequency and severity was considered separately.
-This one-way analysis ignores correlations and interaction effects between variables, and so may underestimate or double count the effects of variables.
Uses of GLMs
-A GLM can be used to model the behaviour of a random variable that is believed to depend on the values of several other characteristics eg age, sex, chronic condition.
-It is a generalisation of the normal model for multiple linear regression.
What are the drawbacks for the normal model for multiple linear regression?
-it assumes the response variable has a normal distribution
-the normal distribution has a constant variance which may not be appropriate
-it adds together the effects of different explanatory variables, which is often not what is observed
-it becomes long-winded with more than two explanatory variables.
Assumptions of classical linear models
-the error terms are independent and come from a normal distribution
-the mean is a linear combination of the explanatory variables
-the error terms have constant variance (or homoscedasticity)
What are the two properties of any member of the exponential family?
-the distribution is completely specified in terms of its mean and variance.
-the variance is a function of its mean
What is the link function?
-the link function acts to remove the assumption that the effects of different variables must simply be added together.
-it must be both differentiable and monotonic.
-examples include the log, logit and identity functions.
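A minimal sketch of the three link functions named above and their inverses (the data values are illustrative only). Note how the log link turns additive effects on the linear predictor into multiplicative effects on the mean:

```python
import math

# Common GLM link functions g(mu) and their inverses g^{-1}(eta).
# All three are differentiable and monotonic on their domains.

def log_link(mu):        # log link: eta = ln(mu), for mu > 0
    return math.log(mu)

def logit_link(mu):      # logit link: eta = ln(mu / (1 - mu)), for 0 < mu < 1
    return math.log(mu / (1 - mu))

def identity_link(mu):   # identity link: eta = mu
    return mu

def inv_log(eta):        # inverse of the log link: mu = exp(eta)
    return math.exp(eta)

def inv_logit(eta):      # inverse of the logit link: mu = 1 / (1 + exp(-eta))
    return 1 / (1 + math.exp(-eta))

# With a log link, adding two factor effects on the linear predictor
# multiplies their effects on the mean:
eta = 0.5 + 0.3          # sum of two (made-up) factor effects
mu = inv_log(eta)        # equals exp(0.5) * exp(0.3)
```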
Steps for obtaining predicted values from a single GLM
-Specify design matrix X and the vector of parameters Beta
-Choose a distribution for the response variable and the link function.
-Identify the likelihood function
-Take its logarithm to convert the product of many terms into a sum
-Maximise the logarithm of the likelihood function by taking partial derivatives with respect to each parameter.
-Compute predicted values.
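The steps above can be sketched for the simplest case, an intercept-only Poisson GLM with a log link, where the design matrix X is a single column of ones and the vector of parameters Beta has one entry. The claim counts below are made up; the log-likelihood is maximised by Newton-Raphson:

```python
import math

# Hypothetical observed claim counts.
y = [2, 0, 3, 1, 4, 2]
n = len(y)

# With mu_i = exp(beta) for every observation, the log-likelihood is
#   l(beta) = sum_i (y_i * beta - exp(beta)) + const.
beta = 0.0
for _ in range(50):
    score = sum(y) - n * math.exp(beta)   # first derivative of l(beta)
    hessian = -n * math.exp(beta)         # second derivative of l(beta)
    step = score / hessian
    beta -= step                          # Newton-Raphson update
    if abs(step) < 1e-12:
        break

mu = math.exp(beta)  # predicted value for every observation
# For this model the MLE has the closed form beta = ln(mean(y)),
# so mu should equal the sample mean.
```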
What techniques are used to analyse significance of explanatory variables?
-the F-statistic - models need to be nested for this to work.
-the Akaike Information Criterion (AIC) - appropriate where models are not nested.
Define degrees of freedom
-number of observations - number of parameters
AIC = -2 x log-likelihood + 2 x number of parameters
-the lower the AIC the better the model.
-fewer parameters is better/parsimonious model.
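A quick sketch of the AIC formula above, with hypothetical fitted log-likelihoods. A richer model must improve the log-likelihood by more than 1 per extra parameter to lower its AIC, which is how the criterion rewards parsimony:

```python
def aic(log_likelihood, num_params):
    # AIC = -2 x log-likelihood + 2 x number of parameters
    return -2 * log_likelihood + 2 * num_params

# Made-up results for two candidate (non-nested) models:
aic_small = aic(-120.0, 3)   # simpler model
aic_large = aic(-119.5, 5)   # better fit, but two extra parameters
# The lower AIC wins: here the extra parameters are not worth it.
```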
Measuring uncertainty in the estimators of the model parameters
-The Cramér-Rao lower bound (CRLB) is used.
-the maximum likelihood estimator theta-hat is asymptotically distributed N(theta, CRLB).
-standard errors in a GLM will be found using the Hessian matrix.
-this is a matrix of 2nd derivatives.
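A sketch of the idea using the intercept-only Poisson model (made-up data): with one parameter the Hessian is a single second derivative, and minus its value at the MLE (the observed information) gives the CRLB, whose square root is the standard error:

```python
import math

# Hypothetical claim counts; mu = exp(beta) for every observation.
y = [2, 0, 3, 1, 4, 2]
n, ybar = len(y), sum(y) / len(y)

beta_hat = math.log(ybar)                 # MLE of the parameter
# Second derivative of the log-likelihood is -n * exp(beta), so the
# observed information (the negative Hessian, here 1x1) at the MLE is:
information = n * math.exp(beta_hat)
crlb = 1 / information                    # Cramer-Rao lower bound
se = math.sqrt(crlb)                      # standard error of beta-hat
# Approximately, beta-hat ~ N(beta, CRLB).
```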
What other ways can be used to test significance?
-Comparisons with time
-Consistency checks with other factors
Comparisons with time
-analysis of claims frequency by factor by year will indicate whether claims frequencies have been stable over time.
Consistency checks with other factors
-time is not the only factor that can be used as a consistency check.
-eg an explanatory variable like age would be expected to show the same pattern regardless of geographical region.
Testing the appropriateness of models
-The hat matrix is one of the outputs of the model-fitting process.
-It is the matrix H such that y-hat = Hy.
-For the normal multiple linear regression model, H = X(X'X)^(-1)X'.
-The diagonal entries, h(i,i) of the matrix are called leverages. h(i,i) in interval (0,1).
-Leverages measure the influence that each observed value has on the fitted value for that observation.
-Data points with high leverages or residuals may distort the outcome and accuracy of a model.
-A deviance residual is a measure of the distance between the actual observation and the fitted value.
-The deviance residual corrects for the skewness of the distribution.
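For a simple linear regression y = a + b*x, the diagonal of the hat matrix has a well-known closed form, so the leverages can be sketched without any matrix algebra (the x values below are made up, with one extreme point):

```python
# Leverages for simple linear regression:
#   h_ii = 1/n + (x_i - xbar)^2 / sum_j (x_j - xbar)^2
# Points far from xbar get leverages close to 1.

x = [1.0, 2.0, 3.0, 4.0, 10.0]   # last point is far from the others
n = len(x)
xbar = sum(x) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
leverages = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]
# Each h_ii lies in (0, 1), and the leverages sum to the number of
# parameters (here 2: intercept and slope), since trace(H) = p.
```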
Standardised Pearson residuals
-A standardised residual is the difference between the observed response and the predicted value, adjusted for the standard deviation of the predicted value and the leverage of the observed response.
-These adjustments make it possible to compare Standardised Pearson residuals even where observations have different means.
-For a particular model, if the distribution chosen for the response variable is appropriate, then the residuals chart should produce residuals that:
-are symmetrical about the x-axis
-have an average residual of zero
-are fairly constant across the width of the fitted values
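A sketch of the definition above for a Poisson model, where Var(Y_i) = mu_i; the fitted values and leverages are hypothetical. Dividing by sqrt(mu * (1 - h)) adjusts for both the standard deviation of the predicted value and the leverage, which is what makes the residuals comparable across observations with different means:

```python
import math

obs      = [3, 0, 5, 2]          # observed responses (made up)
fitted   = [2.5, 1.0, 4.0, 2.5]  # predicted values from the model
leverage = [0.3, 0.2, 0.4, 0.1]  # diagonal entries of the hat matrix

std_pearson = [
    (y - mu) / math.sqrt(mu * (1 - h))
    for y, mu, h in zip(obs, fitted, leverage)
]
# A well-fitting model gives residuals scattered symmetrically about
# zero with fairly constant spread across the fitted values.
```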
Cook's distance and leverage
-Cook's distance is used to estimate the influence of a data point on the model results.
-Data points with a Cook's distance of 1 or more are considered to merit closer examination in the analysis.
-As a result of the investigation into any data points with a high Cook's distance, a decision might be made to remove those observations altogether.
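A sketch of the standard Cook's distance formula for the normal linear model, with made-up inputs; note how a large residual combined with a high leverage pushes D_i past the rule-of-thumb threshold of 1:

```python
def cooks_distance(resid, leverage, p, s2):
    # Cook's distance for one observation in a normal linear model:
    #   D_i = e_i^2 * h_ii / (p * s^2 * (1 - h_ii)^2)
    # where e_i is the raw residual, h_ii the leverage, p the number
    # of parameters and s^2 the estimated error variance.
    return resid ** 2 * leverage / (p * s2 * (1 - leverage) ** 2)

# Hypothetical observations:
d_ok  = cooks_distance(resid=1.0, leverage=0.1, p=2, s2=1.0)  # unremarkable
d_bad = cooks_distance(resid=3.0, leverage=0.6, p=2, s2=1.0)  # merits review
```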
Models may be refined using (4)?
-interactions
-aliasing (including near aliasing)
-smoothing of parameter values
-simplification of factors
Model refinement: Interactions
-After choosing the structure of the model and checking that it is appropriate for the factors chosen, the model can be refined further.
-Interactions may be complete or marginal.
Model refinement: Aliasing
-Aliasing occurs when there is a linear dependency among the observed covariates X1,....,Xp.
-That is, 1 covariate can be expressed as a linear combination of other covariates
-eg X3= 5 +2X1+ 3X2
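The linear dependency in the example above can be demonstrated numerically: when X3 = 5 + 2*X1 + 3*X2, the matrix X'X is singular, so no unique least-squares solution exists. The covariate values below are made up:

```python
def det(m):
    # Determinant by Laplace expansion along the first row
    # (fine for the tiny matrices used here).
    if len(m) == 1:
        return m[0][0]
    total = 0.0
    for j in range(len(m)):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

x1 = [1.0, 2.0, 3.0, 4.0]
x2 = [0.0, 1.0, 1.0, 2.0]
# Rows of the design matrix: intercept, X1, X2 and the aliased X3.
rows = [[1.0, a, b, 5 + 2 * a + 3 * b] for a, b in zip(x1, x2)]
xtx = [[sum(r[i] * r[j] for r in rows) for j in range(4)]
       for i in range(4)]
# X'X is singular, so det(xtx) is (numerically) zero.
```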
There are two types of aliasing: intrinsic and extrinsic.
-Intrinsic aliasing occurs because of dependencies inherent within the definition of the covariates.
-It arises mostly when categorical variables are included.
-It is dealt with by the modelling software.
-Extrinsic aliasing occurs when two or more factors contain levels that are perfectly correlated in the data.
-When modelling in practice, a common problem (near aliasing) occurs when two or more factors contain levels that are almost, but not quite, perfectly correlated.
-To understand problems where the model suggests very large negative/positive parameters, two-way tables of exposure and claim counts can be produced.
-From these it should be possible to identify the combinations that cause near aliasing.
-The issue can then be resolved by either deleting or excluding those rogue records, or by reclassifying the rogue records into another, more appropriate, factor.
-A GLM can be improved by smoothing the parameter values. This can be achieved by grouping the levels of factors.
-The granularity of the data can be kept in modelling, since software can use this granularity and its patterns to better group the different levels of variables.
Factors can be simplified by?
-Grouping and summarising data prior to loading. Requires knowledge of expected patterns.
-Grouping in the modelling package. eg grouping age into two age bands.
Restrictions when pricing: GLM restrictions
-Legal or commercial considerations may impose rigid restrictions on the way particular factors are used in practice.
-eg legal: restricting use of age and gender in pricing for medical scheme.
-When the use of certain factors is restricted, the model will be able to compensate to an extent for this artificial restriction by adjusting the fitted relativities of correlated factors.
-this is achieved by the offset term.
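A sketch of the offset idea with hypothetical relativities: the restricted factor's effect is fixed in advance and entered into the linear predictor as an offset with coefficient 1, so refitting only adjusts the remaining (free) parameters around it:

```python
import math

# Made-up log-link relativities for a claim-frequency model.
fixed_age_effect = 0.10   # restricted relativity, held as an offset
region_effect = 0.25      # free parameter, fitted by the model
intercept = -2.0

# The offset enters the linear predictor unscaled (coefficient of 1):
eta = intercept + region_effect + fixed_age_effect
mu = math.exp(eta)        # predicted claim frequency under the log link
```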