A5 - A10 HARD Flashcards

1
Q

Fully describe the bootstrapping process. Assume the data does not require any modifications

A

Shapland

  1. Calculate the fitted incremental claims using the GLM framework or the chain-ladder age-to-age factors
  2. Calculate the residuals between the fitted and actual incremental claims
  3. Create a triangle of random residuals by sampling with replacement from the set of non-zero residuals
  4. Create a sample incremental triangle using the random residual triangle
  5. Accumulate the sample incremental triangle into a cumulative triangle
  6. Project the sample cumulative data to ultimate using the chain ladder method
  7. Calculate the reserve point estimate for each accident year using the projected data
  8. Iterate through this process to create a distribution of reserves for each accident year
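The steps can be made concrete with a simulation sketch under the simplified GLM (chain-ladder) framework. Everything here is illustrative: the 4x4 triangle is made up, the residuals are unscaled Pearson residuals (no degrees-of-freedom or hat-matrix adjustment), and process variance in the projected cells is omitted:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 4x4 cumulative paid triangle (rows = accident years).
    # NaN marks the unobserved lower-right cells.
    tri = np.array([[100., 180., 210., 220.],
                    [110., 200., 235., np.nan],
                    [120., 215., np.nan, np.nan],
                    [130., np.nan, np.nan, np.nan]])
    n = tri.shape[0]

    # Step 1: volume-weighted age-to-age factors; the fitted cumulative
    # triangle is obtained by backing the latest diagonal down the triangle
    # with the chain-ladder factors.
    f = np.array([np.nansum(tri[:, j + 1]) / np.nansum(tri[:n - j - 1, j])
                  for j in range(n - 1)])
    fitted_cum = np.full_like(tri, np.nan)
    for i in range(n):
        last = n - 1 - i                      # last observed development column
        fitted_cum[i, last] = tri[i, last]
        for j in range(last, 0, -1):          # back-fill with the factors
            fitted_cum[i, j - 1] = fitted_cum[i, j] / f[j - 1]
    fitted_inc = np.diff(fitted_cum, prepend=0.0, axis=1)
    actual_inc = np.diff(tri, prepend=0.0, axis=1)

    # Step 2: unscaled Pearson residuals; pool the non-zero ones.
    resid = (actual_inc - fitted_inc) / np.sqrt(fitted_inc)
    pool = resid[~np.isnan(resid)]
    pool = pool[pool != 0.0]

    reserves = []
    for _ in range(1000):
        # Steps 3-4: sample residuals with replacement and invert the
        # residual definition to get a sample incremental triangle.
        r = rng.choice(pool, size=fitted_inc.shape)
        sample_inc = fitted_inc + r * np.sqrt(fitted_inc)
        # Step 5: accumulate (NaNs propagate through the unobserved tail).
        sample_cum = np.cumsum(sample_inc, axis=1)
        # Steps 6-7: refit factors on the sample, project to ultimate, and
        # take the reserve as projected ultimate less the sample diagonal.
        fs = np.array([np.nansum(sample_cum[:n - j - 1, j + 1]) /
                       np.nansum(sample_cum[:n - j - 1, j])
                       for j in range(n - 1)])
        ult = np.array([sample_cum[i, n - 1 - i] * np.prod(fs[n - 1 - i:])
                        for i in range(n)])
        latest = np.array([sample_cum[i, n - 1 - i] for i in range(n)])
        reserves.append(ult - latest)

    # Step 8: the iterations form a reserve distribution by accident year.
    reserve_dist = np.array(reserves)
    print(reserve_dist.mean(axis=0))

Percentiles of reserve_dist (by column, or of its row sums) then give the simulated unpaid claim distribution.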
2
Q

Identify one deterministic method for reducing the variability in the extrapolation of future incremental values

Explain how this method can be stochastic

A

Shapland

Bornhuetter-Ferguson method. In addition to specifying a priori loss ratios for the BF method, we can add a vector of standard deviations to go with these means. We can then assume a distribution and simulate a different a priori loss ratio for each iteration of the model.
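As a sketch of the stochastic step, assume lognormal a priori loss ratios; the loss ratios, standard deviations, and the lognormal choice below are illustrative assumptions, not from the source:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical a priori loss ratios and standard deviations by accident year.
    elr = np.array([0.65, 0.70, 0.75])
    sd = np.array([0.05, 0.06, 0.08])

    # Moment-match each (mean, sd) pair to lognormal parameters and draw one
    # a priori loss ratio per accident year for this iteration of the model.
    sigma2 = np.log(1.0 + (sd / elr) ** 2)
    mu = np.log(elr) - sigma2 / 2.0
    sim_elr = rng.lognormal(mean=mu, sigma=np.sqrt(sigma2))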

3
Q

Describe the MCMC methodology.

A

Verrall

MCMC (Markov chain Monte Carlo) methods simulate the posterior distribution of a random variable by breaking the process down into a number of simulations. This is achieved by using the conditional distribution of each parameter (given all of the others), making each simulation a draw from a univariate distribution. Considering each parameter in turn creates a Markov chain
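A toy Gibbs sampler illustrates the one-parameter-at-a-time idea. The bivariate normal target and the correlation value are illustrative assumptions; this is not Verrall's reserving model:

    import numpy as np

    rng = np.random.default_rng(2)
    rho = 0.8                      # assumed correlation of the bivariate normal target
    x, y = 0.0, 0.0                # arbitrary starting values
    draws = []

    for _ in range(10_000):
        # Each parameter is simulated from its conditional distribution given
        # the other, so every step is a univariate draw.
        x = rng.normal(rho * y, np.sqrt(1.0 - rho ** 2))
        y = rng.normal(rho * x, np.sqrt(1.0 - rho ** 2))
        draws.append((x, y))

    # Visiting the parameters in turn creates a Markov chain whose long-run
    # distribution is the joint target; discard early draws as burn-in.
    samples = np.array(draws[1_000:])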

4
Q

Fully describe the steps required to implement a Bayesian model for the Bornhuetter-Ferguson method. Assume that prior distributions are defined for the column parameters and the row parameters.

A

Verrall

  • Define improper prior distributions for the column parameters and estimate the column parameters first. Since we are using improper prior distributions with large variances, the estimates will be those implied by the chain-ladder method
  • Define prior distributions for the row parameters x(i). In practice, these are often defined as gamma distributions, where the beta parameter controls the level of confidence in the prior information
  • Using the x(i), re-parameterize the model in terms of lambda(i). We must do this since we defined prior distributions for the column parameters

5
Q

Negative incremental values can cause extreme outcomes in early development periods. In particular, they can cause large age-to-age factors. Describe four options for dealing with these extreme outcomes.

A

Shapland

  1. Identify the extreme iterations and remove them
     • Only remove unreasonably extreme iterations so that the probability of extreme outcomes is not understated
  2. Recalibrate the model
     • Identify the source of the negative incremental losses and remove it if necessary. For example, if the first row has negative incremental values due to sparse data, remove it and re-parameterize the model
  3. Limit incremental losses to zero
     • This involves replacing negative incremental values with zeroes in the original triangle, zeroes in the sampled triangles, OR zeroes in the projected future incremental losses. We can also replace negative incremental losses with zeroes based on their development column
  4. Use more than one model
     • For example, if negative values are caused by salvage/subrogation, we can model the gross losses and salvage/subrogation separately. Then, we can combine the iterations assuming 100% correlation
6
Q

Describe the process for using an N-year weighted average of losses when determining development factors under the following frameworks: a) GLM framework

A

Shapland

We use N years of data by excluding the first few diagonals in the triangle (which leaves us with N + 1 included diagonals). This changes the shape of the triangle to a trapezoid. The excluded diagonals are given zero weight in the model and fewer calendar year parameters are required

7
Q

Describe the process for using an N-year weighted average of losses when determining development factors under the following frameworks: b) Simplified GLM framework

A

Shapland

First, we calculate N-year average factors instead of all-year factors. Then, we exclude the first few diagonals when calculating residuals. However, when running the bootstrap simulations, we must still sample from the entire triangle so that we can calculate cumulative values. We use N-year average factors for projecting the future expected values as well

8
Q

Provide two reasons why the coefficient of variation may rise in the most recent accident years.

A

Shapland

  1. With an increasing number of parameters in the model, parameter uncertainty increases when moving from the oldest years to the most recent years. This parameter uncertainty may overpower the process uncertainty, causing an increase in variability
  2. The model may simply be overestimating the variability in the most recent years
9
Q

Describe two methods for combining the results of multiple stochastic models.

A

Shapland

  1. Run models with the same random variables
     • Each model is run with the exact same random variables. Once all of the models have been run, the incremental values for each model are weighted together (for each iteration by accident year)
  2. Run models with independent random variables
     • Each model is run with its own random variables. Once all of the models have been run, weights are used to select a model (for each iteration by accident year). The result is a weighted mixture of models
10
Q

Provide four reasons for fitting a curve to unpaid claim distributions.

A

Shapland

  1. Assess the quality of the fit
  2. Parameterize a DFA (dynamic financial analysis) model
  3. Estimate extreme values
  4. Estimate TVaR
11
Q

Describe the process for creating distribution graphs. Include a discussion of kernel density functions.

A

Shapland

We can create a total unpaid distribution histogram by dividing the range of all values generated by the simulation into 100 equally sized buckets and then counting the number of simulations that fall within each bucket. Since simulation results tend to appear jagged, a kernel density function can be fit to the data to provide a smoothed distribution. Each point of a kernel density function is estimated by weighting all of the values near that point, with less weight given to values further away
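A sketch of both graphing steps, assuming the simulated unpaid totals are already in hand; the stand-in simulation output, Gaussian kernel, and rule-of-thumb bandwidth are all illustrative choices:

    import numpy as np

    rng = np.random.default_rng(3)
    # Stand-in for the simulated total unpaid values from a bootstrap run.
    unpaid = rng.lognormal(mean=10.0, sigma=0.3, size=10_000)

    # Histogram: divide the simulated range into 100 equally sized buckets
    # and count the iterations falling in each.
    counts, edges = np.histogram(unpaid, bins=100)

    # Gaussian kernel density estimate: each point on the smoothed curve is
    # a weighted average over all simulated values, with less weight given
    # to values further from that point.
    grid = np.linspace(unpaid.min(), unpaid.max(), 200)
    h = 1.06 * unpaid.std() * len(unpaid) ** -0.2   # rule-of-thumb bandwidth
    weights = np.exp(-0.5 * ((grid[:, None] - unpaid[None, :]) / h) ** 2)
    density = weights.sum(axis=1) / (len(unpaid) * h * np.sqrt(2.0 * np.pi))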

12
Q

a) Briefly describe two methods for including correlation between bootstrap distributions for different business segments.
b) For each method, identify two advantages.

A

Shapland

Part a:

  1. Location mapping • Pick a business segment. For each bootstrap iteration, sample a residual and then note where it belonged in the original residual triangle. Then, sample each of the segments using the residuals at the same locations for their respective residual triangles. This preserves the correlation of the original residuals in the sampling process
  2. Re-sorting • To induce correlation among business segments in a bootstrap model, re-sort the residuals for each business segment until the rank correlation between each segment matches the desired correlation (a minimal re-sorting sketch follows part b below)

Part b:

  1. Location mapping • Can be easily implemented in a spreadsheet • Does not require a correlation matrix
  2. Re-sorting • Works for residual triangles with different shapes/sizes • Different correlation assumptions can be employed
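The re-sorting sketch referenced above, applied to simulated unpaid totals rather than residuals for brevity. The segment distributions, target rank correlation, and the normal-scores (Iman-Conover-style) re-sorting device are illustrative assumptions:

    import numpy as np

    rng = np.random.default_rng(4)

    # Stand-in bootstrap results: 5000 simulated unpaid totals per segment.
    seg_a = rng.lognormal(10.0, 0.3, 5000)
    seg_b = rng.lognormal(9.5, 0.4, 5000)

    target_rho = 0.5   # desired rank correlation between the segments

    # Draw correlated normal scores, then re-sort each segment so its
    # iterations follow the ranks of its score column.
    cov = np.array([[1.0, target_rho], [target_rho, 1.0]])
    scores = rng.multivariate_normal(np.zeros(2), cov, size=5000)
    ranks = scores.argsort(axis=0).argsort(axis=0)   # rank of each score
    seg_a_sorted = np.sort(seg_a)[ranks[:, 0]]
    seg_b_sorted = np.sort(seg_b)[ranks[:, 1]]

    # Aggregating the re-sorted segments induces (approximately) the target
    # rank correlation while preserving each marginal distribution.
    total = seg_a_sorted + seg_b_sorted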
13
Q

Define the following terms:

  • Claims portfolio
  • Valuation classes
  • Claim group
A

Marshall

Claims portfolio – the aggregate portfolio for which the risk margins must be estimated

Valuation classes – the portfolios that are considered individually as part of the risk margin analysis

Claim group – a group of claims with common risk characteristics

14
Q

Explain how scenario testing can be used in conjunction with risk margin analyses.

A

Marshall

Scenario testing can be used to determine how key assumptions underlying the risk margin calculation would need to change in order to produce a risk-loaded actuarial central estimate. These scenarios include changes in claim frequencies, claim severities, loss ratios, etc.

15
Q

Describe three tests for uniformity for n predicted percentiles.

A

Meyers

  1. Histogram – if the percentiles are uniformly distributed, the heights of the bars should be roughly equal
  2. p-p plot – sort the predicted percentiles into increasing order. The expected values of these percentiles are given by {e(i)} = 100 * {1/(n+1), 2/(n+1), …, n/(n+1)}. We then plot the expected percentiles on the x-axis and the sorted predicted percentiles on the y-axis. If the predicted percentiles are uniformly distributed, we expect this plot to lie along a 45-degree line
  3. K-S test – a formal test of uniformity. We reject the hypothesis that a set of percentiles is uniform at the 5% level if the K-S statistic D = max |p(i) − f(i)| is greater than its critical value 136/sqrt(n), where {f(i)} = 100 * {1/n, 2/n, …, n/n}. On a p-p plot, these critical values appear as 45-degree bands that run parallel to the line y = x; we reject the hypothesis of uniformity if the p-p plot lies outside the bands (see the sketch below)
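A minimal sketch of the p-p plot coordinates and the K-S statistic; the predicted percentiles below are made-up inputs:

    import numpy as np

    # Hypothetical predicted percentiles (one per triangle), on a 0-100 scale.
    p = np.array([12.0, 35.0, 41.0, 58.0, 63.0, 77.0, 85.0, 92.0])
    n = len(p)

    # p-p plot coordinates: expected percentiles vs sorted predicted percentiles.
    e = 100.0 * np.arange(1, n + 1) / (n + 1)
    p_sorted = np.sort(p)

    # K-S statistic and its 5% critical value.
    f = 100.0 * np.arange(1, n + 1) / n
    D = np.max(np.abs(p_sorted - f))
    critical = 136.0 / np.sqrt(n)
    reject_uniformity = D > critical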

16
Q

Briefly describe how the Mack model performed on the incurred loss data and the paid loss data analyzed in the paper. (Meyers)

A

Meyers

When used on incurred data, the Mack model produced light tails. When used on paid data, the Mack model produced expected loss estimates that were biased high.

17
Q

Briefly describe how the ODP model performed on the paid loss data analyzed in the paper. (Meyers)

A

Meyers

When used on paid data, the ODP model produced expected loss estimates that were biased high.

18
Q

Briefly describe two possible reasons why the Mack model does not validate against the incurred and/or paid data analyzed in the paper. (Meyers)

A

Meyers

  1. The insurance loss environment has experienced changes that are not yet observable
  2. There are other models that can be validated

19
Q

Briefly describe how the leveled chain ladder (LCL) model and the correlated chain ladder (CCL) model performed on the incurred loss data analyzed in the paper.

A
  • LCL model – increased variability relative to the standard Mack model but still understated variability by producing light tails. Failed the K-S test
  • CCL model – increased variability relative to the LCL model and passed the K-S test
20
Q

Briefly describe two formulations for the skew normal distribution.

(Meyers)

A

Meyers

  • One formulation produces the skew normal distribution by expressing it as a mixed truncated normal-normal distribution
  • Another formulation produces the skew normal distribution by expressing it as a mixed lognormal-normal distribution
21
Q

Briefly describe two Bayesian models that include a payment year trend and can be used to model paid losses. (Meyers)

A
  • Correlated incremental model – models incremental losses using a payment year trend and a mixed lognormal-normal distribution. It also allows for correlation between accident years
  • Leveled incremental trend model – models incremental losses using a payment year trend and a mixed lognormal-normal distribution. It does NOT allow for correlation between accident years
22
Q

Briefly describe how the leveled incremental trend (LIT) model and the correlated incremental trend (CIT) model performed on the paid loss data analyzed in the paper.

(Meyers)

A

Both models produced expected loss estimates that were biased high

23
Q

An actuary used Bayesian MCMC processes to simulate losses from a lognormal distribution with unknown parameters μ and σ.

Briefly describe how the posterior distributions of the parameters can be used to determine the total loss volatility.

(Meyers)

A

Sample from the posterior distributions for each parameter to create parameter sets. Simulate a random loss from a lognormal distribution for each parameter set. The variability in the simulated losses represents the total volatility of the losses
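A sketch under stated assumptions: the posterior parameter samples below are stand-ins (in practice they come from the MCMC output):

    import numpy as np

    rng = np.random.default_rng(5)

    # Stand-in posterior samples of the lognormal parameters.
    mu_post = rng.normal(8.0, 0.05, 10_000)
    sigma_post = np.abs(rng.normal(0.4, 0.02, 10_000))

    # One simulated loss per posterior parameter set: the spread of these
    # draws reflects the total volatility (process plus parameter).
    total_draws = rng.lognormal(mean=mu_post, sigma=sigma_post)
    total_sd = total_draws.std()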

24
Q

An actuary used Bayesian MCMC processes to simulate losses from a lognormal distribution with unknown parameters μ and σ.

Briefly describe how the posterior distributions of the parameters can be used to determine the parameter risk portion of the total loss volatility.

(Meyers)

A

Sample from the posterior distributions for each parameter to create parameter sets. Calculate the expected value of losses for each parameter set using the lognormal distribution. The variability in the expected values represents the parameter risk
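A companion sketch for the parameter risk piece, using the same stand-in posterior samples as the previous card:

    import numpy as np

    rng = np.random.default_rng(6)

    # Stand-in posterior samples of the lognormal parameters.
    mu_post = rng.normal(8.0, 0.05, 10_000)
    sigma_post = np.abs(rng.normal(0.4, 0.02, 10_000))

    # Expected loss for each posterior parameter set (the lognormal mean);
    # the spread of these expected values reflects parameter risk only.
    expected = np.exp(mu_post + sigma_post ** 2 / 2.0)
    parameter_sd = expected.std()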