Chapter 19: Methods of calculating the risk premium Flashcards
Burning cost
The actual cost of claims during a past period of years expressed as an annual rate per unit of exposure.
It is a simple method, based entirely on historical data.
Burning cost premium (BCP) calculation
BCP = (ΣClaims)/Total Exposed to Risk
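A minimal sketch of this calculation, using hypothetical claim totals and exposure figures:

```python
# Burning cost premium: total claims over the period divided by total exposure.
# All figures below are hypothetical, purely for illustration.

total_claims_by_year = {2020: 1_250_000, 2021: 1_400_000, 2022: 1_100_000}
exposure_by_year = {2020: 10_000, 2021: 11_000, 2022: 10_500}  # eg vehicle-years

bcp = sum(total_claims_by_year.values()) / sum(exposure_by_year.values())
print(f"Burning cost premium per unit of exposure: {bcp:.2f}")
```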
Effective burning cost
The burning cost calculated using unadjusted data.
In practice, claims are usually adjusted to allow for past inflation and IBNR (giving the indexed burning cost, below).
Indexed burning cost
The burning cost calculated using adjusted data.
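A minimal sketch of an indexed burning cost, assuming hypothetical claims and exposures and a flat 5% pa claims inflation rate used to revalue past claims to current terms (any IBNR loading is ignored here):

```python
# Indexed burning cost: past claims revalued to current terms before dividing
# by exposure. Figures and the 5% pa inflation assumption are hypothetical.

claims_by_year = {2020: 1_250_000, 2021: 1_400_000, 2022: 1_100_000}
exposure_by_year = {2020: 10_000, 2021: 11_000, 2022: 10_500}
inflation = 0.05          # assumed flat annual claims inflation
current_year = 2023       # year to which claims are revalued

indexed_claims = sum(
    amount * (1 + inflation) ** (current_year - year)
    for year, amount in claims_by_year.items()
)
indexed_bcp = indexed_claims / sum(exposure_by_year.values())
print(f"Indexed burning cost per unit of exposure: {indexed_bcp:.2f}")
```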
Why has the burning cost premium been criticised when applied to current figures without adjustments?
- we ignore trends such as claims inflation
- by taking current exposure (often premiums) and comparing it with current, undeveloped claims, we understate the ultimate position, so ultimate loss ratios turn out higher than expected
Burning cost approach:
Basic elements of the risk premium per unit of exposure
- expected claim frequency per policy
- average exposure per policy
- expected cost per claim
Burning cost approach:
Pure risk premium
(expected claim frequency per policy ÷ average exposure per policy) × (expected cost per claim)
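A minimal numeric sketch of the formula above, with all inputs assumed purely for illustration:

```python
# Pure risk premium per unit of exposure =
#   (expected claim frequency per policy / average exposure per policy)
#   x expected cost per claim.
# The inputs below are hypothetical.

expected_claim_frequency_per_policy = 0.12   # claims per policy per year
average_exposure_per_policy = 1.5            # eg average vehicle-years per policy
expected_cost_per_claim = 2_400.0

risk_premium_per_unit = (
    expected_claim_frequency_per_policy / average_exposure_per_policy
) * expected_cost_per_claim
print(f"Pure risk premium per unit of exposure: {risk_premium_per_unit:.2f}")
```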
Burning cost approach:
Information needed for each policy
- dates on cover
- all rating factors and exposure measure details
- details of premiums charged, unless these can be calculated from the rating factor and exposure details.
Burning cost approach:
When do we usually use this method?
- where little individual claims data are available
- where aggregate claims data by policy year are available
Burning cost approach:
Advantages
- simplicity
- needs relatively little data
- quicker than other methods to perform
- allows for experience of individual risks or portfolios
Burning cost approach:
Disadvantages
- it is harder to spot trends, so the method gives less insight into the changes affecting individual risks
- adjusting past data is hard
- adjusting for changes in cover, deductibles and so on may be hard as we often lack individual claims data
- it can be a very crude approach depending on what adjustments are made
Frequency-severity approach
We assess the expected loss for a particular insurance structure by estimating the distribution of expected claim frequencies and the distribution of claim severities for that structure, and combining the results.
Key assumption of the frequency-severity approach
The loss frequency and severity distributions are not correlated
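A minimal simulation sketch of combining frequency and severity: claim counts and claim amounts are drawn independently, reflecting the assumption above. The Poisson/lognormal choices and all parameters are illustrative, not prescribed by the approach:

```python
# Frequency-severity sketch: simulate claim numbers and claim amounts
# independently and combine them into aggregate losses.
# Distribution choices (Poisson, lognormal) and all parameters are illustrative.
import math
import random

random.seed(1)

def poisson_draw(lam: float) -> int:
    """Simple Poisson sampler (Knuth's method), avoiding external dependencies."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_aggregate_loss(expected_frequency: float,
                            severity_mu: float,
                            severity_sigma: float) -> float:
    """One simulated year: draw a claim count, then a severity for each claim."""
    n_claims = poisson_draw(expected_frequency)
    return sum(random.lognormvariate(severity_mu, severity_sigma)
               for _ in range(n_claims))

simulations = [simulate_aggregate_loss(5.0, 8.0, 1.2) for _ in range(10_000)]
print(f"Mean simulated aggregate loss: {sum(simulations) / len(simulations):,.0f}")
```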
Causes of frequency trends
Changes in:
- accident frequency
- the propensity to make a claim and other changes in the social and economic environment
- legislation
- the structure of the risk
Frequency-severity approach:
For each historical policy year, the frequency of losses is calculated as:
frequency =
(ultimate number of losses)/(exposure measure)
Frequency-severity approach:
A standard trend applied to the frequency is based on:
- an analysis of all the risks within an insurer’s portfolio
- external information, such as industry surveys
Drivers of severity trends
- economic inflation
- changes in court awards and legislation
- economic conditions
- changes to the structure of the risk
“from the ground up”
“From the ground up” claims data shows all claims, no matter how small they are, and shows the original claim amount. It is often used in reinsurance to refer to data which shows all claims, even though reinsurance is only required for large claims.
Frequency-severity approach:
For each historical policy year, the average severity of losses is calculated as:
average severity =
(ultimate cost of losses)/(ultimate number of losses)
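A minimal sketch applying the frequency and average severity definitions from the two cards above to each historical policy year; the ultimate figures and exposures are hypothetical:

```python
# Per-policy-year frequency and average severity, using the definitions above.
# The ultimate loss numbers, costs and exposures below are hypothetical.

history = {
    # policy year: (ultimate number of losses, ultimate cost of losses, exposure)
    2020: (120, 300_000, 1_000),
    2021: (135, 350_000, 1_100),
    2022: (128, 345_000, 1_050),
}

for year, (n_losses, cost, exposure) in history.items():
    frequency = n_losses / exposure     # losses per unit of exposure
    avg_severity = cost / n_losses      # average cost per loss
    print(f"{year}: frequency={frequency:.3f}, average severity={avg_severity:,.0f}")
```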
Possible drivers of frequency trends for employer’s liability insurance
- increasing compensation culture
- prevalence of no-win-no-fee arrangements
- growth of claims management companies
- changes in health and safety regulations
- court decisions
- changes in economic conditions
- emergence of latent claims
- changes in policy terms, conditions, excesses, limits, etc.
Possible drivers of severity trends for employer’s liability insurance
- salary inflation
- court decisions/inflation
- medical advances/medical inflation
- inflation of legal costs
- legislative changes
- interest rate changes
- changes in policy terms, conditions, excesses, limits, etc.
Methods used to develop individual losses for IBNER
- apply an incurred development factor to each individual loss (open and closed claims), reflecting its maturity, to estimate its ultimate settlement value
- a more realistic approach is to develop only the open claims, using "case estimate" development factors. These will be higher than the incurred development factors at the same maturity, to offset the effect of not developing the closed claims
- use stochastic development methods to allow for the variation that may occur in individual ultimate loss amounts around each of their expected values
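A minimal sketch of the first two bullets above, applying a hypothetical set of incurred development factors to every claim and, alternatively, case-estimate factors to open claims only; the factors and claim records are illustrative:

```python
# Developing individual losses to ultimate for IBNER.
# Development factors and claim records below are hypothetical.

# Factor by maturity (years since start of the policy year).
incurred_dev_factor = {1: 1.40, 2: 1.15, 3: 1.05, 4: 1.00}
case_estimate_dev_factor = {1: 1.60, 2: 1.25, 3: 1.08, 4: 1.00}  # higher; open claims only

claims = [
    # (incurred to date, maturity in years, is the claim still open?)
    (50_000, 2, True),
    (80_000, 2, False),
    (20_000, 1, True),
]

# Method 1: develop every claim (open and closed) with incurred factors.
ultimate_method_1 = sum(amount * incurred_dev_factor[maturity]
                        for amount, maturity, _ in claims)

# Method 2: develop only open claims, using the higher case-estimate factors.
ultimate_method_2 = sum(
    amount * (case_estimate_dev_factor[maturity] if is_open else 1.0)
    for amount, maturity, is_open in claims
)

print(f"Method 1 ultimate: {ultimate_method_1:,.0f}")
print(f"Method 2 ultimate: {ultimate_method_2:,.0f}")
```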
Aggregate deductible
The maximum amount that the insured can retain within their deductible when losses are aggregated.
Non-ranking deductible
The non-ranking component of the deductible does not contribute towards the aggregate deductible
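A small sketch of one common way these deductibles interact, under assumed figures: each loss is subject to a per-claim deductible, only the "ranking" part of each retained amount erodes the aggregate deductible, and the non-ranking part is retained by the insured regardless of the aggregate position. The amounts and mechanics shown are illustrative, not a definitive treatment:

```python
# Per-claim, aggregate and non-ranking deductibles: one common interpretation,
# with hypothetical figures. Only the ranking part of each retained amount
# counts towards (erodes) the aggregate deductible; the non-ranking part is
# always retained by the insured.

per_claim_deductible = 100_000
non_ranking_part = 25_000                  # does not erode the aggregate
aggregate_deductible = 150_000             # cap on total ranking retentions

losses = [90_000, 120_000, 60_000, 200_000]

aggregate_used = 0.0
total_retained = 0.0
for loss in losses:
    retained = min(loss, per_claim_deductible)
    ranking = max(retained - non_ranking_part, 0.0)
    # The ranking retention applies only while the aggregate remains unexhausted.
    ranking = min(ranking, aggregate_deductible - aggregate_used)
    aggregate_used += ranking
    total_retained += ranking + min(retained, non_ranking_part)

print(f"Total retained by the insured: {total_retained:,.0f}")
print(f"Aggregate deductible eroded:   {aggregate_used:,.0f}")
```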