Insights

Nov 12, 2025

So... you've got a good parametric trigger. Now how do you price it?

David Schmid

One of the challenges holding back parametric insurance is often overlooked: the disconnect between how event severity is modelled during pricing and how it is calculated during payout. 

This disconnect arises because many triggers lack a consistent probabilistic counterpart for pricing. 

There are two main reasons for this: 

  1. No probabilistic model exists 

In such cases, pricing is often done by fitting an extreme-value distribution to a set of historical data. 

This approach can work, provided the historical data are consistent with the trigger itself (for example, using gridded precipitation data to assess a lack-of-rainfall trigger) and the threshold triggers frequently enough (i.e. “at the money”). 
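As a minimal sketch of this approach, the snippet below fits a Generalized Extreme Value (GEV) distribution to a stand-in historical record and estimates how often a hypothetical trigger threshold would be exceeded. The data, threshold, and parameter values are illustrative assumptions, not a pricing recipe:

```python
# Hedged sketch: fit a GEV distribution to annual rainfall-deficit maxima
# and estimate the annual exceedance probability of a hypothetical trigger.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(42)
# Stand-in for ~40 years of annual seasonal rainfall deficits (mm);
# real pricing would use the actual record underlying the trigger.
historical_deficits = rng.gumbel(loc=120.0, scale=30.0, size=40)

# Fit the GEV; scipy's genextreme parameterizes shape as c = -xi.
shape, loc, scale = genextreme.fit(historical_deficits)

trigger_threshold = 180.0  # hypothetical payout trigger (mm deficit)
annual_exceedance_prob = genextreme.sf(
    trigger_threshold, shape, loc=loc, scale=scale
)
return_period = 1.0 / annual_exceedance_prob

print(f"P(deficit > {trigger_threshold} mm) ~ {annual_exceedance_prob:.3f}")
print(f"Implied return period ~ {return_period:.1f} years")
```

Note that a 40-point sample is thin for tail estimation; the further the threshold sits out of the money, the wider the confidence interval on the fitted exceedance probability becomes.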

This also enables historical backtesting, as long as non-stationarity in the data is properly accounted for. 

However, in many cases, the historical data serve only as a proxy for the trigger because no direct historical record exists. 

This issue particularly affects triggers based on newer sensor technologies that lack a long observational history. 

If available, one can still use corresponding hazard maps as a reference, but that remains a broad approximation rather than a true representation of the trigger behaviour. 

  2. A probabilistic model exists, but was built for a different purpose. 

This is common when a model designed for indemnity pricing is repurposed to price parametric covers. 

Since parametric products are fundamentally hazard-driven, this approach introduces several problems: 

  • The hazard module of indemnity models is often (mis)used to adjust frequencies and calibrate losses to historical experience. This works for indemnity pricing but creates bias when used for parametric pricing. 

  • Even if the hazard module can be isolated cleanly from the broader indemnity-model framework, the methodology used to compute wind speeds in that model still differs substantially from the one applied during the payout calculation. 

These inconsistencies may seem harmless, but in practice they introduce inefficiencies into pricing and, more importantly, inflate premiums through large uncertainty loadings designed to compensate for the mismatch between modelled and realised payouts. 

In other words, it's a barrier to scale. 

If we want parametric insurance to grow into a mainstream risk-transfer instrument, this fundamental inconsistency needs to be resolved wherever possible, and certainly not at the expense of the trigger quality. 

My perspective – with pricing glasses on 

Having spent years pricing and structuring parametric covers for one of the world’s largest reinsurers, I’ve seen firsthand how methodological differences cascade into pricing uncertainty, often resulting in unattractive quotes. 

If your hazard model for pricing and your trigger mechanism for payout are not aligned, you end up managing two independent sources of uncertainty: 

  • one in frequency, and 

  • one in severity. 

In practice, this means higher technical pricing, wider confidence intervals, and reduced appetite to scale parametric products. 

When pricing and trigger methodologies are consistent, however, the equation simplifies: 

All uncertainty collapses into a single dimension: frequency. 

The severity component effectively cancels out, turning pricing into a pure frequency assessment. 
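To make the simplification concrete, here is a toy calculation for a binary parametric cover with a fixed payout. When the pricing hazard and the payout trigger come from the same methodology, severity drops out and the pure premium reduces to frequency times payout. All figures are illustrative assumptions, not market numbers:

```python
# Hedged sketch: with pricing and payout computed from the same hazard
# methodology, severity uncertainty cancels and the pure premium of a
# binary cover is a pure frequency question.
annual_trigger_frequency = 0.04   # modelled P(trigger fires in a year), illustrative
fixed_payout = 10_000_000         # contractual payout if the trigger fires

pure_premium = annual_trigger_frequency * fixed_payout
rate_on_line = pure_premium / fixed_payout  # recovers the frequency itself

print(f"Pure premium: {pure_premium:,.0f}")
print(f"Rate on line: {rate_on_line:.2%}")
```

With the severity dimension gone, the only modelling assumption left to challenge or load for uncertainty is the trigger frequency.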

Once you see it, you can’t unsee it, and you’ll love it. 

Reask’s consistent approach with Metryc and DeepCyc 

At Reask, we’ve eliminated that disconnect and we’ve proven it works in reality. 

Our extensive post-event validation against more than 500 observed anemometers across global basins shows excellent correlation between wind speeds measured on the ground and those produced by our models. 

Figure 2: Validation plots comparing modelled Metryc wind speeds with observed wind speeds, showing strong correlation (left: Metryc footprint of Hurricane Laura, 2020; right: comparison across multiple relevant US storms). 

This demonstrates the robustness of our wind-field methodology, which is identically applied in both our probabilistic model (DeepCyc, used for pricing) and our post-event product (Metryc, used for payout validation). 

That means the hazard intensity used to model your expected loss is generated in exactly the same way as the intensity that later determines your payout. For pricing actuaries on the risk-carrier side, the only remaining task is to challenge DeepCyc’s frequency assumptions. 

With Metryc and DeepCyc, consistency is a modelling principle that ensures what you price is what you get when the payout comes due. 

It removes unnecessary uncertainty loadings, enables simpler and more transparent pricing, and helps lower the barriers to scaling parametric insurance. 

Stay in the loop

Sign up for the Reask newsletter for the latest climate science, model updates, and industry insights *

* By subscribing, you agree to receive the Reask newsletter. You can unsubscribe at any time. For more details, see our Privacy Policy.

2025 © Reask

All rights reserved
