Bahnemann Flashcards
Methods for Estimating Distribution Parameters
method of moments
maximum likelihood
minimum chi squared
minimum distance
truncation
discarding; typically applied to claims below a deductible
censoring
capping; typically applied to claims at the policy limit
shifting
usually with a straight deductible; claims larger than the deductible are reduced by the deductible amount
since limits reduce volatility of severity compared to unlimited data
- may be interested in computing the variability of losses in a layer
- can use the coefficient of variation to measure this variability for different distributions (see the sketch below)
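a minimal sketch (not from the text) of measuring this with the coefficient of variation of the limited severity, assuming a hypothetical lognormal severity and numerical integration for the limited moments:

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    # hypothetical lognormal severity; mu = 9.0, sigma = 1.5 are illustrative, not from Bahnemann
    sev = stats.lognorm(s=1.5, scale=np.exp(9.0))

    def limited_moments(dist, l):
        """first and second moments of the limited severity X ^ l (censored at l)"""
        m1 = quad(lambda x: x * dist.pdf(x), 0, l)[0] + l * dist.sf(l)
        m2 = quad(lambda x: x ** 2 * dist.pdf(x), 0, l)[0] + l ** 2 * dist.sf(l)
        return m1, m2

    for limit in [100_000, 500_000, 1_000_000, 5_000_000]:
        m1, m2 = limited_moments(sev, limit)
        cv = np.sqrt(m2 - m1 ** 2) / m1   # coefficient of variation of the limited severity
        print(f"limit {limit:>9,}: E[X^l] = {m1:10,.0f}   CV = {cv:.3f}")

the CV rises toward the unlimited value as the limit increases, illustrating why limited data is less volatile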
claim contagion parameter
accounts for claim counts not being independent of each other (e.g., one claim encourages others to file claims too)
- if claim counts follow a Poisson distribution, γ = 0
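a small sketch showing how γ can be backed out from the claim-count mean and variance, assuming the common contagion parameterization Var(N) = E[N](1 + γE[N]) (an assumption here, not a quote from the text):

    from scipy import stats

    # assumed contagion parameterization: Var(N) = E[N] * (1 + gamma * E[N])
    # Poisson: Var(N) = E[N]           -> gamma = 0
    # negative binomial: Var(N) > E[N] -> gamma > 0
    def implied_contagion(mean_n, var_n):
        """back out gamma from the claim-count mean and variance"""
        return (var_n - mean_n) / mean_n ** 2

    poisson = stats.poisson(mu=50)
    negbin = stats.nbinom(n=10, p=10 / 60)   # mean 50, variance 300

    for name, dist in [("Poisson", poisson), ("negative binomial", negbin)]:
        print(name, round(implied_contagion(dist.mean(), dist.var()), 3))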
final rate for a policy needs to incorporate
all other expenses and profit as well as a charge for risk
risk charge
premium amount used to cover contingencies such as:
- random deviations of losses from expected values (process risk)
- uncertainty in selection of parameters describing the loss process (parameter risk)
for the purpose of pricing, instead of publishing full rates for every limit
insurers usually publish rates for a basic limit and use relativities called ILFs (increased limit factors) to rate higher limits
ILFs can be determined using
empirical data directly, or from a theoretical curve fitted to empirical data; the latter approach is more common for the highest limits, where there is little empirical loss data (see the sketch after the assumptions below)
in determining the ILFs appropriate for each limit, the following assumptions are commonly made:
- all UW expenses and profit are variable and do not vary by limit (in practice, profit loads might be higher for higher limits since they are more volatile)
- frequency and severity are independent
- frequency is the same for all limits
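under these assumptions the ILF reduces to a ratio of limited expected severities, I(l) = E[X∧l] / E[X∧b] for basic limit b. a minimal sketch using a hypothetical Pareto (Lomax) curve fitted to empirical data (alpha and theta are illustrative, not from the text):

    from scipy import stats
    from scipy.integrate import quad

    alpha, theta = 2.0, 25_000                 # hypothetical fitted Pareto parameters
    sev = stats.lomax(c=alpha, scale=theta)

    def limited_expected_value(dist, l):
        """E[X ^ l] computed as the integral of the survival function from 0 to l"""
        return quad(lambda x: dist.sf(x), 0, l)[0]

    basic_limit = 100_000
    lev_basic = limited_expected_value(sev, basic_limit)

    # frequency is the same for all limits and variable expenses do not vary by limit,
    # so the ILF is just the ratio of limited expected severities
    for limit in [100_000, 250_000, 500_000, 1_000_000]:
        ilf = limited_expected_value(sev, limit) / lev_basic
        print(f"I({limit:,}) = {ilf:.3f}")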
ILFs must be (with Bahnemann's assumption that f_X(l) ≠ 0)
increasing at a decreasing rate
in terms of premium, the premium for successive layers of coverage of constant width
will be decreasing
checking that a set of ILFs satisfies the above criteria for I'(l) and I''(l)
performing a consistency test:
table columns: per-occurrence limit l | increased limit factor I(l) | marginal rate per $1,000 of coverage (change in I(l) per $1,000 of additional limit)
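a minimal sketch of the consistency test on a hypothetical ILF table (values are illustrative only): compute the marginal rate per $1,000 of additional coverage between successive limits and check that it is positive and non-increasing

    # hypothetical ILF table: per-occurrence limit -> I(l)
    ilfs = {100_000: 1.00, 250_000: 1.40, 500_000: 1.70, 1_000_000: 1.95}

    def consistency_test(ilf_table):
        """marginal increase in I(l) per $1,000 of additional coverage between successive limits;
        the test passes if these marginal rates are positive and non-increasing"""
        limits = sorted(ilf_table)
        marginals = []
        for lo, hi in zip(limits, limits[1:]):
            per_1000 = (ilf_table[hi] - ilf_table[lo]) / ((hi - lo) / 1_000)
            marginals.append((lo, hi, per_1000))
        rates = [m for _, _, m in marginals]
        passes = all(r > 0 for r in rates) and all(b <= a for a, b in zip(rates, rates[1:]))
        return marginals, passes

    marginals, passes = consistency_test(ilfs)
    for lo, hi, m in marginals:
        print(f"layer ({lo:,}, {hi:,}]: marginal rate per $1k of coverage = {m:.5f}")
    print("consistent" if passes else "not consistent")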
an exception to the consistency test
could occur if one of the ILF assumptions were violated; e.g., if liability lawsuits were influenced by the size of the limit, frequency would not be the same for all limits, so the formulas would not hold
one reason that a set of increased limit factors may fail this consistency test yet still generate actuarially reasonable prices
- adverse selection, which could happen if insureds that expect higher loss potential are more inclined to buy higher limits
- liability lawsuits being influenced by the size of the limit, so that frequency is not the same for all limits
how the consistency test has both a mathematical interpretation and a practical meaning
The practical interpretation is that as the limit increases, fewer losses are expected in the higher layers, so rates should not increase more per unit of coverage at higher limits than at lower limits.
The mathematical interpretation is that I'(l) ≥ 0 and I''(l) ≤ 0.
there is more volatility (process risk) for policies with
higher limits or higher attachment points, so insurers will also want to charge a risk load for these policies
- to do this, need to include a risk charge ρ(l)
risk charge ρ(l) options
old Miccolis method (aka variance method)
old ISO method (aka standard deviation method)
risk load increases as
the policy limit increases; it accounts for the higher process risk of policies with higher limits
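a rough sketch of risk-loaded ILFs, assuming Poisson counts (γ = 0), the same hypothetical Pareto severity as above, and judgmentally chosen multipliers k; here the variance method loads the second limited moment and the std dev method loads its square root, which is a simplification of the full formulas in the text:

    import numpy as np
    from scipy import stats
    from scipy.integrate import quad

    alpha, theta = 2.0, 25_000                 # hypothetical Pareto severity
    sev = stats.lomax(c=alpha, scale=theta)
    k_var, k_sd = 1e-6, 0.05                   # illustrative risk-load multipliers

    def limited_moments(dist, l):
        m1 = quad(lambda x: dist.sf(x), 0, l)[0]            # E[X ^ l]
        m2 = quad(lambda x: 2 * x * dist.sf(x), 0, l)[0]    # E[(X ^ l)^2]
        return m1, m2

    def risk_loaded_ilf(limit, basic, method):
        """ILF with a risk charge rho(l) added to the limited severity in numerator and denominator"""
        def loaded(l):
            m1, m2 = limited_moments(sev, l)
            rho = k_var * m2 if method == "variance" else k_sd * np.sqrt(m2)
            return m1 + rho
        return loaded(limit) / loaded(basic)

    for limit in [250_000, 500_000, 1_000_000]:
        print(f"{limit:>9,}  variance: {risk_loaded_ilf(limit, 100_000, 'variance'):.3f}"
              f"  std dev: {risk_loaded_ilf(limit, 100_000, 'stddev'):.3f}")

with either method the risk charge grows with the limit, so the risk-loaded ILFs exceed the pure loss ILFs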
deductibles typically reduce
the coverage limit, so the layer of coverage with deductible d and limit l is (d, l] and not (d, d+l]
3 types of deductibles:
straight
franchise
diminishing
straight deductible
loss is truncated and shifted by d such that net losses
X_d = 0 for X ≤ d;  X_d = X − d for X > d
franchise deductible
loss is truncated but not shifted by d such that
X_d = 0 for X ≤ d;  X_d = X for X > d
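a minimal sketch contrasting the two deductible types on individual losses (d = 500 is illustrative):

    def straight_deductible(x, d):
        """straight: losses at or below d are eliminated; larger losses are shifted down by d"""
        return 0.0 if x <= d else x - d

    def franchise_deductible(x, d):
        """franchise: losses at or below d are eliminated; larger losses are paid in full"""
        return 0.0 if x <= d else x

    d = 500
    for loss in [300, 500, 2_000]:
        print(loss, straight_deductible(loss, d), franchise_deductible(loss, d))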
- the formula for the LER (loss elimination ratio), even assuming ALAE is not additive, is long, so instead calculate the loss eliminated at each size-of-loss level as a % of total losses (see the sketch below)
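a small sketch of that empirical approach, assuming a hypothetical list of ground-up claim amounts and a straight deductible (ALAE ignored):

    # hypothetical ground-up claim amounts; illustrative only
    claims = [400, 750, 1_200, 3_500, 9_000, 25_000, 60_000]

    def loss_elimination_ratio(claims, d):
        """share of total ground-up losses eliminated by a straight deductible d:
        claims at or below d are eliminated in full, larger claims each contribute d"""
        eliminated = sum(min(x, d) for x in claims)
        return eliminated / sum(claims)

    for d in [500, 1_000, 5_000]:
        print(f"LER({d:,}) = {loss_elimination_ratio(claims, d):.3f}")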