
Assumption setting in an uncertain world: Beyond frameworks and false precision

2 July 2025

There's growing pressure from auditors and internal governance teams for actuaries to define and follow formal frameworks for setting and updating assumptions. At one level, this is entirely reasonable: Frameworks offer consistency, transparency, and control. They're also easier to audit. But there's a risk that in chasing tidy documentation and tick-box compliance, we over-engineer a process that unavoidably requires significant judgement. This article is a sceptical but constructive look at what really matters in assumption setting, especially in the wake of rare events, volatile experience, and evolving accounting regimes like IFRS 17.

Assumption setting is harder than ever

Recent years have seen economic shocks, COVID-19 lockdowns, rising retrenchment and disability claims, mounting pressures on premium collections, pockets of persistently high mortality experience, and uncertain growth prospects putting pressure on expenses and expense inflation. All of this has made assumption setting discussions more challenging, with IFRS 17's reporting consequences creating pressure to avoid assumption changes. This pressure cuts both ways: Delay updates too long, and you store up problems for future periods; react too quickly to volatile experience, and you introduce unnecessary noise.

Rare events should change your assumptions

When a rare event happens, your assumed frequency of that event should typically increase. This doesn’t mean one should include the raw experience unadjusted, but rather that observing the rare event will usually result in an increase in the Bayesian posterior estimate of the event's frequency.

Many frameworks, peer reviews, and committee discussions treat rare events as binary: Either they're ‘one-offs’ and ignored, or they're ‘recurring’ and fully embedded. Neither is good enough.

A better approach is to ask: How likely is this event to recur, and how much weight should we give it? If you've had five years of experience and a 1-in-20-year event occurs in one of those years, you might consider adding 1/20th of that excess experience to the assumption (but possibly adjusted for portfolio size—we'll come to that in a moment). Not all of it, not none of it. It's a small impact—but statistically appropriate. This is Bayesian in spirit and can be implemented via formal credibility-style adjustments that would give you a data-driven weight based on the variance of your estimator.
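A minimal sketch of that weighting, with purely illustrative numbers (the baseline rate, excess, and return period below are assumptions for the example, not figures from the article):

```python
# Hypothetical sketch: folding a 1-in-20-year event into a 5-year study.
years_observed = 5
event_return_period = 20          # assumed 1-in-20-year event
baseline_rate = 0.010             # prior assumed annual claim frequency
observed_excess = 0.008           # excess frequency in the rare-event year

# Weight the excess by the event's annual probability (1/20), not by its raw
# share of the observation window (1/5), so one bad year is not over-counted.
event_weight = 1 / event_return_period
updated_rate = baseline_rate + event_weight * observed_excess

print(f"Updated frequency: {updated_rate:.4%}")   # prints 1.0400%
```

A formal credibility approach would replace the fixed 1/20 weight with one derived from the variance of the estimator, but the direction is the same: some weight, neither zero nor one.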

Also worth clarifying: Best estimate means the probability-weighted mean, not the median and not the mode. For skewed risks, this distinction matters. A best estimate assumption should sit above the most likely single-year outcome if the distribution is positively skewed. It's not conservatism—it's just the mean.
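The gap between mean and median is easy to see with an illustrative positively skewed distribution (the lognormal parameters here are arbitrary, chosen only to demonstrate the point):

```python
import math

# Illustrative only: a positively skewed (lognormal) annual claims multiplier.
mu, sigma = 0.0, 0.5
median = math.exp(mu)                  # 1.000 — the 'typical' year
mean = math.exp(mu + sigma**2 / 2)     # ≈ 1.133 — the probability-weighted mean

# The best estimate is the mean, which sits above the median for skewed risks.
print(f"median={median:.3f}, mean={mean:.3f}")
```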

Context matters

Of course, not every rare event tells us the same thing. Some events are:

  • Catch-ups: revealing a risk that was always there but hadn't yet materialised (e.g., first observed case of internal fraud)
  • Structural breaks: signalling that future risk has changed (e.g., new regulation or claims behaviour shift)
  • Path-dependent: where the event itself changes the likelihood of it happening again (e.g., COVID-19 lockdowns may make future lockdowns less likely due to economic blowback)

All of these affect whether and how we respond in our assumptions.

Rare events break multiple assumptions at once

COVID-19 didn't just affect mortality—it simultaneously hit business interruption cover, morbidity, retrenchment experience, new business levels, premium collections, lapses (in complex ways), expenses, investment performance, and missed fraud. Changing interest rates haven't just affected investment assumptions—they've also changed lapse patterns, altered competitive dynamics in savings products, and shifted consumer behaviour toward different product types.

When updating assumptions after a rare event, resist the temptation to treat each assumption in isolation. Ask: What other assumptions might this event have revealed as inadequate?

Sometimes the bigger risk isn't getting one assumption wrong—it's missing the systematic pattern across several assumptions that points to a deeper shift in your business environment. A pandemic doesn't just change your mortality table—it might also signal higher future morbidity claims, altered policyholder behaviour, and elevated operational costs. Similarly, a regulatory change affecting one product line often has spillover effects on expenses, systems costs, and compliance burdens across the entire book.

The practical implication: When a rare event occurs, conduct a broader assumption review rather than a narrow, single-assumption adjustment. Look for correlation patterns in your experience data and consider whether the event suggests systematic rather than isolated risks.

Portfolio size and credibility

Smaller portfolios offer less information but are also more likely to generate greater deviations from expected in the first place. Most of their experience variance is noise, not signal. Large portfolios can reveal signal sometimes with as little as one year of data. So any observed deviation—positive or negative—should have lower credibility if the underlying data volume is small.

More directly: Don't just look at exposure size. For claim assumptions, look at actual and expected number of claims. High-frequency events generate credible experience more quickly. Rules of thumb for credibility can help, but they must account for event type—you can't treat retrenchment and mortality the same way.
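One standard way to turn claim counts into a credibility weight is the classical limited-fluctuation approach; this is a generic actuarial technique rather than a method the article prescribes, and the confidence and tolerance parameters below are conventional assumptions:

```python
import math

def credibility_weight(claim_count: int,
                       z: float = 1.96,     # 95% confidence (assumed)
                       k: float = 0.05) -> float:
    """Limited-fluctuation credibility for a Poisson claim count: full
    credibility once observed claims reach (z/k)^2; square-root rule below."""
    full_standard = (z / k) ** 2             # ≈ 1537 claims
    return min(1.0, math.sqrt(claim_count / full_standard))

# A mortality book with 400 observed deaths earns meaningful partial
# credibility; a low-frequency retrenchment book with 25 claims earns little.
print(f"{credibility_weight(400):.2f}")   # ≈ 0.51
print(f"{credibility_weight(25):.2f}")    # ≈ 0.13
```

This is why exposure size alone misleads: two books with the same exposure can sit at very different points on this curve depending on event frequency.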

This also ties to investigation period length. A two-year period is usually too short—even if the volume of data is high enough to be credible, the lack of time diversification means the experience may not reflect long-term future expectations. The danger is particularly elevated with 18-month or 30-month periods (sometimes forced by data limitations), which give unequal weight to different seasons.

In short: Credible assumptions need credible data volume—not just exposure size, but event count, frequency, and enough time to smooth volatility and capture recurring patterns.

Expense assumptions deserve special attention

In my experience—and backed by regulatory reviews and valuation basis surveys—the greatest risk of mis-estimating assumptions often lies in expenses. Management budgets are often stretch targets, not forecasts. This results in underestimation of expenses and overestimation of volumes—two compounding errors. When volumes fall short, unit costs rise, delivering a double whammy of acquisition overruns and inflated maintenance expenses.

Fixed costs are fixed, but that's the problem. With lower business volumes, the unit cost of servicing each policy increases. Meanwhile, so-called ‘variable costs’ have a nasty habit of being fixed on the downside and variable on the upside.
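The unit-cost arithmetic behind this 'double whammy' is simple; the figures below are invented purely to illustrate it:

```python
# Illustrative figures only: how a volume shortfall inflates unit costs.
fixed_costs = 10_000_000        # annual fixed maintenance costs
budgeted_policies = 100_000
actual_policies = 80_000        # 20% volume shortfall

budgeted_unit_cost = fixed_costs / budgeted_policies   # 100 per policy
actual_unit_cost = fixed_costs / actual_policies       # 125 per policy

# A 20% volume miss produces a 25% unit-cost overrun on the fixed block alone,
# before any 'variable' costs fail to scale down with it.
print(budgeted_unit_cost, actual_unit_cost)
```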

Another frequent error: excluding recurring but ‘one-off’ expenses from the assumption basis. While a specific item may not repeat, the class of expenses it belongs to often does. If something similar occurs year after year—just with a different label—it's not truly exceptional. Treating these as non-recurring leads to systematically underestimating actual costs.

Frameworks: Friend or foe?

Auditors love them—for good reason. Frameworks provide consistency, help align assumptions across products and teams, and allow easier governance checks: ‘Did you follow the framework?’ But they come with trade-offs.

  • Too rigid? You waste time justifying exceptions—or worse, make the wrong decision following a recipe blindly.
  • Too loose? It's just documentation theatre. A policy that never constrains has no value.

In larger organisations with dispersed actuarial teams and CFOs across multiple businesses or territories, frameworks can help align judgement across individuals with different risk appetites and mental models. That's their strongest justification. But let's not pretend they remove subjectivity entirely.

Use frameworks, but don't expect them to eliminate judgement—expect them to structure it.

Ranges and tolerance bands

There's value in setting tolerance ranges—say, only revisiting demographic assumptions when actual versus expected is outside 90%–105%, or outside 95%–103% for three consecutive years. This avoids whipsawing assumptions based on noise.
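A trigger rule like that is straightforward to make explicit. The bands below are the article's example values; the function itself is a hypothetical sketch of how such a rule might be encoded:

```python
def assumption_review_triggered(ae_history, band=(0.90, 1.05),
                                narrow_band=(0.95, 1.03), run_length=3):
    """Illustrative trigger: revisit the assumption if the latest
    actual/expected ratio breaches the wide band, or if the narrow band is
    breached for `run_length` consecutive periods."""
    lo, hi = band
    if not lo <= ae_history[-1] <= hi:
        return True
    n_lo, n_hi = narrow_band
    recent = ae_history[-run_length:]
    return (len(recent) == run_length
            and all(not n_lo <= ae <= n_hi for ae in recent))

print(assumption_review_triggered([1.01, 0.99, 1.02]))   # False: within bands
print(assumption_review_triggered([0.94, 0.94, 0.94]))   # True: 3 years outside 95%-103%
```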

Making many marginal adjustments to assumptions has a real cost. More changes mean more risk for human error, more runs to quantify the impact, more subjectivity on which changes to group, and more noise for those reviewing and interpreting the results.

There's also a deeper statistical argument for stability. Unless your new experience investigation is fully credible (which it never is), there's a case for moving only partway from your prior assumption toward the latest experience. If the new data suggests a mortality rate of 90% of a standard table, but your prior assumption was 100%, and you have limited confidence in the new data, why jump fully to 90%? A Bayesian approach might suggest moving somewhere in between—incorporating the new information while respecting the uncertainty in both your prior assumption and the new evidence.
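The partial move described above is just a credibility-weighted blend; the weight of 0.4 below is an assumption for illustration, in practice derived from the data volume:

```python
# Sketch of a credibility-weighted move, with illustrative numbers.
prior_assumption = 1.00       # 100% of a standard mortality table
new_experience = 0.90         # latest study suggests 90%
credibility = 0.4             # assumed weight on the new data (never 1 in practice)

updated = credibility * new_experience + (1 - credibility) * prior_assumption
print(f"{updated:.2%}")       # prints 96.00% — partway toward the data
```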

Yes, there may appear to be a mild prudence bias in those example approaches. But given skewed risk profiles, kurtosis, and the ever-present risk of events not in data (ENID), a slight apparent bias to prudence might actually be consistent with a realistic best estimate in practice. Auditors and regulators in different markets have different tolerances for slight prudence in assumptions, and the purpose of the assumption provides critical context.

The seductive illusion of basis stability

You'll often still hear: ‘As long as the overall basis is appropriate, don't worry about the unders and overs.’ That's fine until:

  • Cross-subsidies creep in between short- and long-boundary business, or between younger and older policyholders, where long-term projections aren't guaranteed to keep the mix across these categories stable.
  • Business mix by product changes or is projected to change over time.
  • New business pricing loses discipline.
  • Profitability reporting (including onerosity testing and reporting under IFRS 17) becomes misleading.
  • Capital allocation gets distorted, leading to decisions that optimise for flaws in your assumptions rather than the return-on-capital prospects of the products.

We should aim for both overall soundness and local appropriateness—by product, by assumption type. A consistent basis isn't just about the average—it must withstand scrutiny at each product level, align with how the business actually runs, and avoid false comfort from portfolio-level netting off.

In summary

Assumption setting is ultimately about managing uncertainty with judgement, not eliminating it with process. The best frameworks support good decisions; they don't make them for you.

