Couret & Venter Flashcards

1
Q

Describe Couret & Venter’s main insight

A

Injury type correlations can be used to better predict the frequency of large claims, since fatalities and permanent disabilities are the costliest injury types.

Losses above a high loss limit can be difficult to estimate since excess losses are driven by a small number of very large claims, so a small error in the estimated frequency of large claims can have a significant impact on XS loss estimates.

C&V improved on NCCI's 7 hazard group segmentation by observing that, since the physical circumstances are similar for significant injury types, claim frequencies between these injury types are correlated.

They then relied on those correlations to build a model that better estimates the frequency of more serious injuries (which drive XS losses) based on less-severe injuries for which more data exists.

2
Q

Describe the data used in the analysis

A

7 policy years of undeveloped and untrended WC claim counts at the countrywide level, by class and injury type.

The most recent year is discarded as too immature, and the remaining 6 years are split into a modeling set and a holdout set.

3
Q

On what basis did C&V split the data between modeling and holdout datasets?

A

They chose to split the data based on even (modeling) and odd (holdout) years to help neutralize any differences in trend and development between the datasets.

They also tried splitting the data into the 4 oldest years for modeling and the latest 2 years for holdout; this approach gave similar results.
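
A minimal sketch of the split, assuming seven hypothetical policy-year labels (year values and variable names are illustrative only):

    # Hypothetical policy years; the most recent is discarded as too immature.
    years = [1995, 1996, 1997, 1998, 1999, 2000, 2001]
    usable = sorted(years)[:-1]  # keep the 6 oldest years

    # Even years -> modeling set, odd years -> holdout set, which helps
    # neutralize differences in trend and development between the two sets.
    modeling_years = [y for y in usable if y % 2 == 0]
    holdout_years = [y for y in usable if y % 2 == 1]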

4
Q

Briefly explain the relationship between injury type, frequency, and severity. Are there any exceptions?

A

As the severity of the injury increases, claim frequency decreases and claim severity increases.

There is one exception: the severity of PT claims is usually higher than that of F claims.

5
Q

Explain why less-severe injury types are predictive of more-severe injuries

A

Serious injury types are correlated, so a class with a high frequency of Major claims is likely to have a higher-than-average frequency of PT and F claims as well.

6
Q

Describe the multi-dimensional credibility approach

A

C&V constructed multivariate credibility formulas to estimate the true population mean injury type count ratios for each class, based on the various injury type count ratios for the class and its HG from the modeling data:

v_i = V_h + b(V_i - V_h) + c(W_i - W_h) + d(X_i - X_h) + e(Y_i - Y_h)

The procedure seeks credibility factors b, c, d, e that vary by injury type and by class i.
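
A minimal sketch of the estimator for a single class, assuming V, W, X, Y denote the four injury type ratios (taken relative to TT claim counts, e.g. F/TT, PT/TT, Major/TT, Minor/TT) and that the credibility factors b, c, d, e have already been derived; all values below are illustrative:

    def credibility_estimate(class_ratios, hg_ratios, factors):
        # class_ratios: (V_i, W_i, X_i, Y_i) observed for the class
        # hg_ratios:    (V_h, W_h, X_h, Y_h) for the class's hazard group
        # factors:      (b, c, d, e), credibility given to each class deviation
        V_h = hg_ratios[0]
        # Start from the HG mean and credit each deviation of the class from its HG.
        return V_h + sum(f * (ci - hi)
                         for f, ci, hi in zip(factors, class_ratios, hg_ratios))

    # Illustrative values only; with c = d = e = 0 this collapses to the
    # one-dimensional form b*V_i + (1 - b)*V_h.
    v_hat = credibility_estimate((0.050, 0.020, 0.100, 0.300),
                                 (0.030, 0.015, 0.080, 0.250),
                                 (0.40, 0.10, 0.05, 0.02))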

7
Q

How does the multi-dimensional approach tie in with the one-dimensional approach?

A

If injury types were uncorrelated, the credibilities given to the other injury types would be 0 and the formula would reduce to:
v_i = b V_i + (1 - b) V_h
(similar to the Robertson approach)
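
A quick worked example with illustrative numbers: with b = 0.3, V_i = 0.06, and V_h = 0.04, the one-dimensional estimate is v_i = 0.3(0.06) + 0.7(0.04) = 0.046.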

8
Q

Describe why C&V decided to use a multivariate version of B-S credibility

A

B-S credibility minimizes the least squared error (LSE) of the estimates.

Minimizing LSE might not be appropriate for heavy-tailed distributions such as WC losses, since the squared error can get quite large in the tail; but since C&V focus on claim frequency (which is not heavy-tailed), the use of squared error produces reasonable results.

Multi-dimensional credibility takes advantage of the extra claim frequency info for a class instead of simply relying on the HG average. This results in more accurate predictions of claim frequencies for a class.

9
Q

Briefly describe how C&V tested the results of their analysis

A

They assumed the raw injury type count ratios in the holdout sample are the true values and tried to best predict them using 3 approaches:
1. Raw class injury type ratios (V_i)
2. HG injury type ratios (V_h)
3. Injury type ratios resulting from the credibility procedure (v_i,est)

10
Q

Describe the 2 tests they used

A
  1. Sum of Squared Errors (SSE) Test
    Compare the injury type ratios from each of the 3 methods against the holdout sample ratios:

SSE(Raw) = Sum of (V_i - V_i,holdout)^2
SSE(HG) = Sum of (V_h - V_i,holdout)^2
SSE(Cred) = Sum of (v_i,est - V_i,holdout)^2

Lowest of the 3 is best.

  2. Quintiles Test
    a. Sort classes in both the modeling and holdout datasets in increasing order based on the injury type relativities produced by the credibility procedure
    b. Group the classes into 5 quintiles based on the sorted relativities
    c. Calculate V_quintile and V_quintile,holdout for each quintile (using TT counts as weights)

SSE(Raw) = Sum of (V_quin/V_h - V_quin,holdout/V_h,holdout)^2
SSE(HG) = Sum of (1 - V_quin,holdout/V_h,holdout)^2
SSE(Cred) = Sum of (v_quin,est/v_h,est - V_quin,holdout/V_h,holdout)^2

Lowest is best.
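
A minimal sketch of both tests, assuming class-level ratio arrays weighted by TT claim counts and a single HG for simplicity (the paper normalizes each HG separately); all names and values below are illustrative:

    import numpy as np

    def sse(pred, actual):
        # Sum of squared differences between predicted and holdout ratios.
        return float(np.sum((np.asarray(pred) - np.asarray(actual)) ** 2))

    # Illustrative inputs: credibility estimates (v_est), raw modeling ratios
    # (v_raw), holdout ratios (v_hold), and TT claim counts used as weights.
    v_est = np.array([0.021, 0.034, 0.052, 0.048, 0.030, 0.060, 0.025, 0.041, 0.037, 0.055])
    v_raw = np.array([0.020, 0.036, 0.050, 0.045, 0.028, 0.065, 0.022, 0.040, 0.035, 0.058])
    v_hold = np.array([0.022, 0.033, 0.051, 0.047, 0.031, 0.059, 0.026, 0.042, 0.036, 0.054])
    tt_model = np.array([120, 80, 50, 200, 90, 40, 150, 70, 110, 60])
    tt_hold = np.array([115, 85, 55, 190, 95, 45, 140, 75, 105, 65])

    # SSE Test: compare class-level ratios from each method to the holdout ratios.
    v_h = np.average(v_raw, weights=tt_model)       # HG ratio from modeling data
    sse_raw = sse(v_raw, v_hold)
    sse_hg = sse(np.full_like(v_raw, v_h), v_hold)
    sse_cred = sse(v_est, v_hold)

    # Quintiles Test: sort classes by credibility relativity, form 5 quintiles,
    # then compare TT-weighted quintile ratios normalized by the HG ratio.
    quintiles = np.array_split(np.argsort(v_est), 5)
    v_h_est = np.average(v_est, weights=tt_model)
    v_h_hold = np.average(v_hold, weights=tt_hold)
    raw_rel = [np.average(v_raw[q], weights=tt_model[q]) / v_h for q in quintiles]
    cred_rel = [np.average(v_est[q], weights=tt_model[q]) / v_h_est for q in quintiles]
    hold_rel = [np.average(v_hold[q], weights=tt_hold[q]) / v_h_hold for q in quintiles]
    sse_raw_q = sse(raw_rel, hold_rel)
    sse_hg_q = sse(np.ones(5), hold_rel)    # HG relativity is 1 by construction
    sse_cred_q = sse(cred_rel, hold_rel)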

11
Q

Which procedure was identified as best under the SSE Test?

A

The credibility procedure.

However, it does not show much of an improvement over the HG method.

2 explanations:
1. Estimators derived from the even-year data are designed to fit that data, not the holdout data
2. Class data by year is volatile

C&V made 2 adjustments to the data (claiming the true improvement was masked):
1. Group classes within each HG into quintiles (eliminates class-level volatility)
2. Normalize each value in the calculation at the HG level to eliminate differences between the modeling and holdout datasets

12
Q

Which procedure was preferred under the Quintiles Test?

A

The credibility procedure.

It showed a substantial reduction in SSE (this does not mean the procedure is better for class-level estimation, only that quintile injury type ratios are better than HG-level ratios).

The procedure did not show improvement for HG A for several injury types. C&V claimed this is because the classes in A are very homogeneous, so injury type ratios are not expected to vary much within the HG.

13
Q

State 2 advantages of the Quintiles Test over the SSE Test

A
  1. Grouping classes into quintiles helps reduce volatility in the data
  2. Relative incidence ratios are impacted by unknown covariates, with levels varying between odd and even years. Normalizing each dataset by HG makes the even and odd years more directly comparable.
14
Q

Briefly explain the conclusion of the C&V paper

A

Individual class experience contains info relevant to the future relative frequency of large claims.

A correlated credibility approach using the relationships among injury type frequencies within each class can utilize that info.

15
Q

List 3 recent innovations in XS rating

A
  1. Use more HGs to achieve more homogeneity in loss potential
  2. Look at possible differences in claim costs within an injury type across classes/HGs
  3. Look for better ways to combine data from different state systems.
16
Q

Explain how multi-dimensional credibility is an improvement over using HG for homogeneity

A

At the HG level, there is greater variance within each HG (since it contains multiple classes).

At the class level, we would expect lower within-group variance, and thus greater homogeneity.

17
Q

Explain how multi-dimensional credibility is an improvement over using HG for credibility.

A

Multi-dim cred takes advantage of extra claim frequency info for a class instead of simply relying on HG average.

This results in more accurate predictions of claim frequency for a class.

18
Q

Explain how multi-dimensional credibility is an improvement over HG for predictive stability

A

It balances the responsiveness of individual class injury type weights while maintaining stability by using the current HG XS ratio as the complement of credibility.

19
Q

State the necessary condition for proper credibility

A

The necessary condition for proper credibility is that debit and credit risks have the same permissible loss ratio.