Relationship between insurance premium and portfolio size
How would actuarial science refer to the idea that the premium should increase as the number of insured entities decreases?
In other words, what is the technical term for the intuition that I would charge a higher premium per car to insure just one car than to insure 1,000 cars, assuming such intuition is even justified?
As a sketch of my thinking on this, I have defined the risk-adjusted premium as the total premium needed to cover losses divided by the standard deviation of the total loss on the portfolio of policies. I believe both the numerator and the denominator can be deduced from the law of large numbers.
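To make that ratio concrete under one simple model (i.i.d. unit Bernoulli losses with probability $p$, premium $q$ per policy, $n$ policies; this notation is my own, not standard): the total premium is $nq$ and the standard deviation of the total loss is $\sqrt{np(1-p)}$, so
$$\frac{nq}{\sqrt{np(1-p)}} = q\,\sqrt{\frac{n}{p(1-p)}},$$
which grows like $\sqrt{n}$ for a fixed per-policy premium $q$; equivalently, holding the ratio constant lets $q$ fall as $n$ grows.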
Any insights will be appreciated.
Solution 1:
Say each car pays a premium of $q$ and has accident probability $p$ over some horizon. Every accident causes a total loss of the car's value (normalized to 1). All cars are identical, and accidents are independent across cars.
With $n$ cars insured, let $X_i$ be the loss on car $i$, for $i=1,2,\dots,n$.
You want to charge enough in premium so that your losses are covered with high probability. Say you want to choose $q$ so that $P\left(nq-\sum_{i=1}^n X_i>0\right)=0.99$. The term $\sum_{i=1}^n X_i$ is a binomial random variable with success probability $p$ and $n$ trials, so for this purpose we can approximate it by a normal random variable with mean $np$ and standard deviation $\sqrt{np(1-p)}$. The condition then becomes $nq \ge np + 2.33\sqrt{np(1-p)}$, where $2.33$ is the 99th percentile of the standard normal; dividing by $n$ shows that the premium $q$ needs to be set to at least $p+2.33\sqrt{p(1-p)/n}$. So there you see that the premium you need to charge (per car) goes down with $n$.
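Here is a quick numerical check of that formula (a minimal sketch; the 0.99 solvency target and the value $p = 0.05$ are my own assumptions, not part of the original setup):

```python
import math

def premium_per_car(p: float, n: int, z: float = 2.33) -> float:
    """Normal-approximation premium: p + z * sqrt(p * (1 - p) / n)."""
    return p + z * math.sqrt(p * (1 - p) / n)

p = 0.05  # assumed per-car accident probability
for n in (1, 10, 100, 1000, 10000):
    print(f"n = {n:>6}: premium per car = {premium_per_car(p, n):.4f}")
```

For $p = 0.05$ this gives roughly $0.558$ for a single car but about $0.066$ for 1,000 cars; the loading above the mean loss $p$ shrinks like $1/\sqrt{n}$.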
Solution 2:
The term used in the industry is "risk pooling", but as TickaJules points out above, it's actually the Central Limit Theorem in disguise. Most insurance premiums are calibrated as mean loss plus expenses plus a profit/contingency element, where the profit/contingency is intended both to cover rarer events and to provide a return to shareholders or the other owners of the surplus. One method of calibration is as Ticka stated: the contingency element is calibrated to a point on the CDF at which the company feels the risk-reward tradeoff is too great. For example, covering up to the 99th percentile may be reasonable, but covering up to the 99.9th percentile may cause policyholders to flee to cheaper pastures. A simpler method is to apply a "risk load" to a variability measure, often the standard deviation of the loss distribution.
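Both calibration methods can be illustrated with a small Monte Carlo sketch (the pool size, accident probability, and risk-load multiplier below are all assumed for illustration, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(seed=42)

n = 1000          # assumed pool size
p = 0.05          # assumed per-car accident probability (unit loss)
n_sims = 100_000  # number of simulated portfolio outcomes

# Simulate total portfolio losses: independent unit losses -> binomial.
total_losses = rng.binomial(n, p, size=n_sims)

mean_loss = total_losses.mean()
sd_loss = total_losses.std()

# Method 1: contingency calibrated to a chosen point on the CDF.
q99 = np.quantile(total_losses, 0.99)

# Method 2: a flat risk load applied to the standard deviation.
risk_load = 2.33  # assumed multiplier, roughly the 99th normal percentile
loaded = mean_loss + risk_load * sd_loss

print(f"mean total loss:          {mean_loss:.1f}")
print(f"99th-percentile premium:  {q99:.1f}")
print(f"mean + 2.33*sd premium:   {loaded:.1f}")
```

With independent unit losses the two methods land close together, since the binomial total is nearly normal at this pool size; they diverge when the loss distribution is skewed or heavy-tailed.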
Both of these contingency elements can be viewed as statistics estimated from a sample drawn from the universe of potential policyholders. The empirical sample variance is an estimator of the true variance, and the difference between empirical and true percentiles converges in distribution to a normal distribution.
As the pool of insureds grows, the uncertainty around these estimates therefore shrinks in proportion to the square root of the number of observations. The contingency element is sub-linear in the pool size: the portion of the uncertainty charged to each policyholder can be reduced even as the overall magnitude of the uncertainty increases.
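To see that sub-linearity directly (a sketch under the same assumed Bernoulli losses and risk-load multiplier as above):

```python
import math

p = 0.05   # assumed per-car accident probability (unit loss)
z = 2.33   # assumed risk-load multiplier (99th normal percentile)

for n in (10, 100, 1000, 10000):
    total_load = z * math.sqrt(n * p * (1 - p))  # grows like sqrt(n)
    per_policy = total_load / n                  # shrinks like 1/sqrt(n)
    print(f"n = {n:>6}: total load = {total_load:7.2f}, "
          f"per policy = {per_policy:.4f}")
```

The total contingency load keeps growing as the pool expands, but only like $\sqrt{n}$, so each additional policyholder's share of it keeps falling.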