Brehm Chapter 3 Flashcards


organizational details that need to be addressed when developing an internal model

- Reporting relationship: modeling team reporting line; solid-line vs. dotted-line reporting
- Functions represented: reserving, pricing, finance, planning, underwriting
- Resource commitment: mix of skill sets (actuarial, UW, communication, etc.); full-time vs. part-time
- Inputs and outputs: control of input parameters, control of output data, analyses and uses of output
- Purpose: is the goal of the model to quantify variation around the plan?
- Initial scope: prospective UW year only, or including reserves, assets, and operational risks? Low detail (on the whole company) or high detail (on a specific segment)?


For each organizational detail, provide a recommended course of action:
- reporting relationship
- resource commitment
- inputs and outputs
- scope

- reporting relationship: the reporting line for the internal model team is less important than ensuring the team reports to a leader who is fair
- resource commitment: since an internal model implementation is considered a new competency, it is best to transfer internal employees or hire external employees for full-time positions
- inputs and outputs: control them in a manner similar to that used for general ledger or reserving systems
- scope: prospective UW period, variation around plan


Four parameter development details that need to be addressed when developing an internal model.

- Modeling software: capabilities, scalability, learning curve, integration with other systems.
- developing input parameters: process is heavily data driven, requires expert opinions (especially when data quality is low), many functional areas should be involved
- correlations: line of business representatives cannot set cross-line parameters, corporate level ownership of these parameters is required
- validation and testing: no existing internal model with which to compare; multi-metric testing is required


For each parameter development detail provide a recommended course of action.

- modeling software: compare existing vendor software with user-built options, ensure final software choice aligns with capabilities of the internal model team
- developing input parameters: include product expertise from UW, claims, planning and actuarial; develop a systematic way to capture expert opinion
- correlations: have the internal model team recommend correlation assumptions, which are ultimately owned at the corporate level
- validation and testing: validate and test over an extended period, provide basic education to interested parties on probabilities and statistics


Four model implementation details that need to be addressed when developing an internal model

- priority setting: importance of priority (company may not immediately make the necessary improvements to support implementation), approach and style (ask vs. mandate), priority and timeline must be driven from the top
- interest and impact: implement communication and education plans across the enterprise
- pilot test: assign multidisciplinary team (actuarial, UW, finance, etc.) to provide real data and real analysis on the company as a whole or on one specific segment, important to remember that piloting means model indications receive no weight (just a learning exercise)
- education process: run in parallel with pilot test, bring leadership to same point of understanding regarding probability and statistics


For each model implementation detail, provide a recommended course of action.

- priority setting: have top management set the priority for implementation
- interest and impact: plan for regular communication to broad audiences
- pilot test: assign a multidisciplinary team to analyze real company data; prepare the company for the magnitude of change resulting from using an internal model
- education process: target training to bring leadership to a similar base level of understanding


Three integration and maintenance details that need to be addressed when developing an internal model

- cycle: integrate model runs into the major corporate calendar (planning, reinsurance purchasing, capacity allocation); ensure that internal model output supports major company decisions
- updating: determine frequency and magnitude of updates
- controls: ensure that there is centralized storage and control of input sets and output sets (date stamping is vital); ensure there is an endorsed set of analytical templates used to manipulate internal model outputs for various purposes (such as decision making and reporting)


For each integration and maintenance detail, provide a recommended course of action

- cycle: integrate into planning calendar at a minimum
- updating: perform a major input review no more frequently than twice a year; minor updates can be handled by modifying the scale of the impacted portfolio segments
- controls: maintain centralized control of inputs, outputs and application templates


Three ways in which parameter risk manifests itself

- estimation risk: arises from using only a sample of the universe of possible claims to estimate the parameters of distributions
- projection risk: arises from projecting past trends into the future
- model risk: arises from having the wrong models to begin with


Compare the overall uncertainty for a small and large firm. For each firm, explain how the overall uncertainty changes as projection risk increases

- When the frequency and severity distributions are known, the CV of total losses is CV(S) = sqrt[(Var(N)/E[N] + CV(X)^2) / E[N]], i.e., the square root of the frequency variance-to-mean ratio plus the square of the severity CV, all divided by the frequency mean. Thus the small firm should have more uncertainty, since we are dividing by a smaller number (i.e., the expected number of claims)
- As projection risk increases, the overall uncertainty for the large firm is more significantly impacted. This is because the small firm is already volatile to begin with
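The compound-distribution CV formula above can be sketched numerically. The firm sizes and the Poisson frequency / severity CV values below are hypothetical illustrations, not figures from the text:

```python
from math import sqrt

def aggregate_cv(freq_mean, freq_var, sev_cv):
    """CV of total losses for a compound distribution:
    CV(S) = sqrt((Var(N)/E[N] + CV(X)^2) / E[N])."""
    vm_ratio = freq_var / freq_mean          # frequency variance-to-mean ratio
    return sqrt((vm_ratio + sev_cv ** 2) / freq_mean)

# Hypothetical firms: Poisson frequency (variance = mean), severity CV of 3
small = aggregate_cv(freq_mean=100, freq_var=100, sev_cv=3.0)
large = aggregate_cv(freq_mean=10_000, freq_var=10_000, sev_cv=3.0)
print(small, large)  # small firm CV ≈ 0.316, large firm CV ≈ 0.032
```

With the same severity CV, the only difference is the expected claim count in the denominator, which is why the small firm carries more process uncertainty.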


to project future losses, an actuary fit a trend line to historical data. Using standard statistical procedures, the actuary placed prediction intervals around the projected losses. Explain why these prediction intervals may be too narrow

- Historical data is based on estimates of past claims which have not yet settled. In the projection period, the projection uncertainty is a combination of the uncertainty in each historical point and the uncertainty in the fitted trend line. The prediction intervals may be too narrow because they miss the uncertainty associated with the historical data points themselves


Two approaches for modeling claim severity trend.

- model severity trend from insurance data, with no regard to general inflation
- correct payment data using general inflation indices, then model the residual superimposed inflation. Any subsequent projection is a projection of superimposed inflation only; a separate projection of general inflation is required
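The second approach can be sketched as below. All severities and index values are made-up illustrations, assuming a general inflation index with base year 2019:

```python
# Deflate observed payments by a general inflation index, then measure
# the residual (superimposed) severity trend. Figures are hypothetical.
years = [2019, 2020, 2021, 2022]
avg_severity = [10_000, 10_815, 11_700, 12_650]   # observed average claim severity
cpi = [1.000, 1.020, 1.041, 1.062]                # general inflation index (2019 = 1)

real_severity = [s / i for s, i in zip(avg_severity, cpi)]

# Year-over-year growth in real severity = superimposed inflation
superimposed = [real_severity[t] / real_severity[t - 1] - 1
                for t in range(1, len(real_severity))]
print([round(r, 4) for r in superimposed])
```

With these made-up figures, superimposed inflation comes out to roughly 6% per year; only that residual trend is projected forward, with general inflation projected separately (e.g., by a macroeconomic model).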


Why is projecting superimposed inflation and general inflation separately advantageous?

- it reflects the dependency between claim severity trend and general inflation. Most enterprise risk models include a macroeconomic model, which includes future inflation rates. It is essential that the claim severity trend model reflects appropriate dependencies between claim severity trend and inflation. In doing so, inflation uncertainty is incorporated into projection risk


Primary difference between modeling projection risk using a simple trend model and modeling projection risk using a time series

- A simple trend model assumes there is a single underlying trend rate that has been constant throughout the historical period and will remain constant in the future. A time series assumes that the future trend rate is a mean-reverting process with an autocorrelation coefficient and an annual disturbance distribution
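The time-series alternative can be sketched as a simple AR(1) process. The long-run mean, autocorrelation, and disturbance volatility below are hypothetical parameters for illustration:

```python
import random

def simulate_trend_path(n_years, start_rate, long_run_mean, autocorr, sigma, seed=0):
    """Mean-reverting annual trend rate:
    r_t = mean + autocorr * (r_{t-1} - mean) + Normal(0, sigma) disturbance."""
    rng = random.Random(seed)
    rates, r = [], start_rate
    for _ in range(n_years):
        r = long_run_mean + autocorr * (r - long_run_mean) + rng.gauss(0, sigma)
        rates.append(r)
    return rates

# Hypothetical parameters: 5% long-run trend, 0.6 autocorrelation, 1.5% disturbance
path = simulate_trend_path(10, start_rate=0.08, long_run_mean=0.05,
                           autocorr=0.6, sigma=0.015)
```

Simulating many such paths and compounding the rates produces the wider prediction intervals discussed in the next card: the future trend rate itself wanders, rather than sitting at a single fitted constant.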


Compare the prediction intervals constructed using a simple trend model with those constructed using a time series

Simple trend model: prediction intervals widen with time due to the uncertainty in the estimated trend rate. Time series model: prediction intervals also widen with time, but the effect is more pronounced and the intervals are wider due to the additional uncertainty of the auto-regressive process


consequence of parameterizing a time series with limited data

- if the time period of the data is too limited to exhibit a range of behaviours, the resulting model will be limited as well and will understate the projection risk


How is estimation risk assessed when using maximum likelihood estimation (MLE)

use the covariance matrix that results from the standard MLE procedure (based on second partial derivatives of the parameters), but we assume the parameters follow a joint log-normal distribution with that covariance matrix
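A minimal sketch of that sampling step, using numpy. The fitted parameter values and covariance matrix are hypothetical placeholders (in practice they come from the inverse information matrix of the MLE fit), and applying the covariance on the log scale is a simplifying shortcut for the joint lognormal assumption:

```python
import numpy as np

# Hypothetical MLE results for a two-parameter severity distribution
mle_params = np.array([2.0, 1.5])           # e.g. fitted shape and scale
cov = np.array([[0.010, 0.004],             # covariance from second partial
                [0.004, 0.020]])            # derivatives of the log-likelihood

# Joint lognormal parameter uncertainty: simulate on the log scale,
# then exponentiate so every sampled parameter set stays positive.
mu_log = np.log(mle_params)
rng = np.random.default_rng(0)
samples = np.exp(rng.multivariate_normal(mu_log, cov, size=1000))
```

Each row of `samples` is one plausible parameter set; running the loss model across these sets, rather than the single MLE point, reflects estimation risk in the output distribution.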


describe a situation where estimating parameters using MLE is difficult

the best-fitting parameters can be difficult to determine if the likelihood "surface" is very flat near the maximum. This implies a wide range of parameter sets have almost the same likelihood, which means the set that maximizes the likelihood might not be any better than one that has a slightly smaller likelihood


For large datasets, the parameter distributions in the MLE procedure are multivariate normal. Describe two problems that may arise when this normality assumption is used for a small dataset

- the std dev of the parameters can be high enough to produce negative parameter values with significant probability
- the distribution of the parameters may be heavy-tailed (the bi-variate normal is not heavy-tailed)


When building a model, various rules and metrics are used to select the best model form. However, the selected form may still be wrong. Describe a process to overcome this problem

- assign probabilities of being right to all of the better-fitting distributions. These probabilities can be based on the Hannan-Quinn Information Criterion (HQIC) or a Bayesian analysis
- use a simulation model to select a distribution from the better-fitting distributions
- select the parameters from the joint log-normal distribution of parameters for the selected distribution
- simulate a loss scenario using the parameterized distribution
- start the process over again with the next scenario
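The loop above can be sketched as follows. The three candidate forms, their probability weights, and their parameters are all made-up illustrations (and the parameter-uncertainty draw of step 3 is collapsed to fixed parameters here to keep the sketch short):

```python
import random

# Hypothetical model-form probabilities from an HQIC or Bayesian comparison
model_probs = {"lognormal": 0.5, "gamma": 0.3, "pareto": 0.2}

def simulate_scenario(rng):
    # Step 1-2: select a distribution form in proportion to its probability
    form = rng.choices(list(model_probs), weights=list(model_probs.values()))[0]
    # Step 3 (omitted): draw this form's parameters from their joint lognormal
    # Step 4: simulate a loss from the chosen, parameterized form
    if form == "lognormal":
        return rng.lognormvariate(8.0, 1.5)
    if form == "gamma":
        return rng.gammavariate(2.0, 2000.0)
    # Pareto(alpha=2, min=1000) by inverse CDF
    return 1000.0 * (1.0 - rng.random()) ** (-1.0 / 2.0)

rng = random.Random(42)
losses = [simulate_scenario(rng) for _ in range(10_000)]
```

Because each scenario may come from a different model form, the resulting loss distribution carries model risk as well as process and parameter risk.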


List and describe the potential copulas

- Frank copula: produces weak correlation in the tails; can be inverted
- Gumbel copula: more tail concentration than the Frank copula; asymmetric, with more weight in the right tail; is not invertible, so it cannot be easily simulated
- Heavy right tail (HRT) copula and joint Burr: produces less correlation in the left tail and more correlation in the right tail; is invertible. If X and Y are Burr distributions, then a joint Burr distribution is produced when the a parameter of both Burr distributions is the same as that of the HRT copula
- Normal copula: advantages - easy simulation method, generalizes to multiple dimensions (i.e., more than two); right tail is lighter than the Gumbel and HRT copulas but heavier than the Frank copula
- when comparing copulas, make sure they have the same tau (a measure of the correlation of a copula), or else they cannot be easily compared
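The normal copula's "easy simulation method" can be sketched in two dimensions: draw correlated standard normals, then map each margin through the normal CDF to get uniforms with the copula's dependence. The correlation parameter below is a hypothetical choice:

```python
from math import erf, sqrt

import numpy as np

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rho = 0.7                                    # hypothetical correlation parameter
cov = np.array([[1.0, rho], [rho, 1.0]])
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0.0, 0.0], cov, size=5000)
u = np.vectorize(norm_cdf)(z)                # uniforms with normal-copula dependence
# u[:, 0] and u[:, 1] can now be pushed through any marginal inverse CDFs,
# e.g. two lines of business with different severity distributions.
```

This two-step recipe is what makes the normal copula convenient in practice, and it extends directly to more than two dimensions by enlarging the correlation matrix.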


discuss how to compare copulas and how to evaluate which copula is more appropriate

- get historical data from both LOBs as a joint distribution and fit each of the copulas to the data with MLE. Calculate and graph the left and right tail concentration functions for the data and the copulas to see which copula fits best, especially at (and near) the limit of R(z) as z -> 1
- calculate and graph the J and chi functions for the data and the fitted copulas to see which fits the empirical data best. Select the copula that fits best based on the graphs


Why is the Pearson correlation coefficient insufficient to aggregate the losses in an enterprise risk model

The Pearson correlation coefficient is just one number describing the overall correlation, but the two lines may have a more complex dependency, with different levels of correlation at different points in the distribution (e.g., there may be higher correlation for extreme scenarios like catastrophes)


Formula for L(z) and R(z)

L(z) = C(z,z) / z
R(z) = [1 - 2z + C(z,z)] / (1 - z)
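As a sanity check on these definitions, the tail concentration functions can be evaluated for the independence copula C(u, v) = uv, where they reduce to L(z) = z and R(z) = 1 - z (no dependence concentrates in either tail):

```python
def C_indep(u, v):
    # Independence copula: C(u, v) = u * v
    return u * v

def L(z, C=C_indep):
    # Left tail concentration: L(z) = C(z, z) / z
    return C(z, z) / z

def R(z, C=C_indep):
    # Right tail concentration: R(z) = (1 - 2z + C(z, z)) / (1 - z)
    return (1.0 - 2.0 * z + C(z, z)) / (1.0 - z)

print(L(0.1), R(0.9))  # both approximately 0.1 under independence
```

Plotting L(z) for z in (0, 0.5] and R(z) for z in [0.5, 1) for the fitted copulas against the empirical values is the graphical comparison described two cards above.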