From disasters to data: The evolution of catastrophe modeling post-pandemic

Daniel Zitelli, SVP at Holborn, on the influx of ‘scientific methods’ in the risk game

By Chris Davis

Catastrophe modeling has seen significant evolution over the past few decades. Daniel Zitelli (pictured), SVP and co-head of catastrophe modeling at Holborn, has witnessed these transformations firsthand. Speaking to IB, he said that back in the early 2000s, catastrophe models were growing rapidly in both sophistication and adoption across the insurance marketplace.

“New model releases tended to be adopted very quickly, introducing fresh scientific methods that instilled confidence in model results,” he said.

At that time, out-of-the-box model results had a profound impact on risk pricing at both primary and reinsurance levels. The newfound ability to model granular losses by location or rating territories was groundbreaking and significantly influenced the market.

However, as technology progressed, so did the expectations and scrutiny surrounding these models.

“Models have become more commonplace today, and people now take a much more sophisticated view of the numbers generated,” Zitelli noted. No longer are out-of-the-box model numbers considered final answers; they serve as inputs for further study and analysis in risk evaluation. The integration of additional data sources, now more readily available and easily digestible, enables Holborn and the broader market to supplement model outputs with comprehensive context and a finer understanding of the actual risk.

Today’s users of these models blend outputs with historical weather patterns, claims data, and both government and private sector data, providing a multifaceted view of risk. Zitelli explained: “We combine model output with various data sources to understand the context of what those loss numbers mean.” This integration offers a more precise risk evaluation by aligning model outputs with real-world data.
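
One simple way to picture that blending is a credibility-weighted average of a modeled loss estimate and an experience-based estimate drawn from historical claims. This is a generic actuarial sketch, not Holborn's method; the figures, weighting rule, and function names are invented for illustration.

```python
# Illustrative sketch: blending a catastrophe-model loss estimate with
# historical claims experience via a simple credibility weight.
# The numbers and weighting rule are hypothetical, not Holborn's method.

def blend_loss_estimates(modeled_aal: float,
                         experience_aal: float,
                         years_of_history: int,
                         full_credibility_years: int = 30) -> float:
    """Credibility-weighted average: more history -> more weight on experience."""
    z = min(1.0, (years_of_history / full_credibility_years) ** 0.5)
    return z * experience_aal + (1.0 - z) * modeled_aal

# Example: a portfolio with a $4.0M modeled average annual loss (AAL)
# and $3.2M of observed average annual cat losses over 12 years.
blended = blend_loss_estimates(modeled_aal=4.0e6,
                               experience_aal=3.2e6,
                               years_of_history=12)
print(f"Blended AAL estimate: ${blended:,.0f}")
```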

Evolution of cat modeling

Advances in computing have played a crucial role in this evolution. Enhanced storage capacities, faster compute cycles, and user-friendly technological tools have empowered the market to integrate diverse data sources into their workflows. This capability allows for both broad and detailed risk assessments, facilitating evaluations at the portfolio level and for individual risks.

Zitelli illustrated this with an example: “Understanding the definition of a coastline is critical when evaluating hurricane risk,” he said. “Different definitions of what constitutes a coastline can significantly impact risk assessments.” Distance to the coast is often a key metric in underwriting and pricing, but it’s essential to know which coastline definition is being used. This understanding ensures that model outputs are accurately interpreted and adjusted according to real-world conditions.
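
To make the coastline point concrete, here is a minimal sketch: the same property's distance to the coast computed against two different coastline definitions, one smoothed and one that traces an inland bay. The coordinates and sampled coastline points are invented for the example.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def distance_to_coast(lat, lon, coastline):
    """Minimum distance from a point to a sampled coastline (lat/lon pairs)."""
    return min(haversine_miles(lat, lon, clat, clon) for clat, clon in coastline)

# Two hypothetical definitions of the same stretch of coast:
# a smoothed "general" shoreline vs. one that traces an inland bay.
smoothed_coast = [(29.00, -90.00), (29.00, -89.75), (29.00, -89.50)]
detailed_coast = smoothed_coast + [(29.40, -89.75)]  # bay reaches inland

property_loc = (29.50, -89.75)
print(distance_to_coast(*property_loc, smoothed_coast))  # ~35 miles
print(distance_to_coast(*property_loc, detailed_coast))  # ~7 miles
```

The same property looks far from the coast under one definition and nearly coastal under the other, which is exactly the kind of gap that has to be reconciled before the metric feeds underwriting or pricing.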

Flood risk assessment has similarly benefited from the integration of updated data. Previously, reliance on outdated FEMA maps limited the accuracy of flood risk evaluations. Now, incorporating recent data such as elevation levels and flood defenses provides a more accurate picture. “Being able to integrate new data into underwriting guidelines gives a much better idea of flood risk,” said Zitelli.
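
As an illustration of how such data might enter underwriting guidelines, the sketch below scores a location using its elevation relative to the base flood elevation (BFE) and the presence of a flood defense. The thresholds and adjustments are hypothetical, not actual underwriting rules.

```python
# Illustrative flood-risk score combining elevation data and flood defenses.
# Thresholds and the levee credit are assumptions made for this example.

def flood_risk_score(ground_elev_ft: float,
                     base_flood_elev_ft: float,
                     has_levee: bool) -> str:
    """Classify flood risk from freeboard (elevation above BFE) and defenses."""
    freeboard = ground_elev_ft - base_flood_elev_ft
    if has_levee:
        freeboard += 2.0  # credit for an accredited flood defense (assumed)
    if freeboard >= 3.0:
        return "low"
    if freeboard >= 0.0:
        return "moderate"
    return "high"

# A site 1 ft below BFE but protected by a levee scores "moderate"
# rather than "high" once the defense is credited.
print(flood_risk_score(ground_elev_ft=9.0, base_flood_elev_ft=10.0, has_levee=True))
```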

With a strong foundation in mathematics, Zitelli emphasizes the importance of understanding the theoretical underpinnings of models to apply them effectively in practice. “A sound understanding of mathematics is essential to comprehend why a model works the way it does and how to utilize its outputs,” he said. This knowledge is crucial for dissecting vendor models or developing proprietary ones, helping to explain the variances in loss results from different models.

The ability to interpret and communicate these differences is vital for clients. Zitelli often addresses questions about why one model’s loss number differs from another’s, attributing these discrepancies to variations in model architecture. “Understanding the mathematical choices made by model developers allows us to explain these differences to our clients,” he noted, enabling more informed decisions about which models to use in specific scenarios.
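
A simplified way to see how architectural choices produce different loss numbers: build loss statistics from two hypothetical event loss tables that agree on event severities but disagree on event frequencies, then compare the average annual loss and return periods each implies. The event sets and rates below are invented.

```python
# Two hypothetical event loss tables (annual rate, loss) for the same peril.
# The models agree on losses but disagree on event frequencies -- one of the
# architectural choices that makes their loss numbers diverge.
model_a = [(0.02, 500e6), (0.05, 200e6), (0.20, 50e6)]
model_b = [(0.01, 500e6), (0.04, 200e6), (0.30, 50e6)]

def average_annual_loss(elt):
    """AAL is the rate-weighted sum of event losses."""
    return sum(rate * loss for rate, loss in elt)

def exceedance_rate(elt, threshold):
    """Annual rate of events with loss >= threshold (occurrence basis)."""
    return sum(rate for rate, loss in elt if loss >= threshold)

for name, elt in [("Model A", model_a), ("Model B", model_b)]:
    aal = average_annual_loss(elt)
    rate_200m = exceedance_rate(elt, 200e6)
    print(f"{name}: AAL ${aal/1e6:.0f}M, "
          f"$200M+ loss roughly a 1-in-{1/rate_200m:.0f} year event")
```

Here Model A puts the $200M loss at roughly a 1-in-14-year event while Model B puts it at 1-in-20, even though both price the individual events identically; tracing a client's question back to that kind of frequency assumption is the sort of explanation Zitelli describes.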

Looking to the future, Zitelli envisions significant opportunities for AI in risk modeling. AI has the potential to revolutionize the entire risk evaluation landscape beyond just catastrophe models. “AI can help understand the context of loss risk, supplementing what models tell us,” he said. The insurance industry, operating in an information economy, stands to benefit immensely from AI’s ability to unlock hidden data within claims warehouses.

Large language models, for example, can extract insights from previously unusable data sources such as claims adjuster reports.

“AI can consume text and pull out insights that were previously inaccessible,” Zitelli explained.
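
A minimal sketch of what that could look like in practice: prompt a language model to pull structured fields out of free-text adjuster notes. The `call_llm` function below is a hypothetical stand-in for whatever model API is actually used; only the prompt-and-parse pattern is the point.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call -- replace with
    whichever provider's client is actually in use."""
    raise NotImplementedError

def extract_claim_facts(adjuster_note: str) -> dict:
    """Ask a language model to turn free text into structured fields."""
    prompt = (
        "Extract the following from this claims adjuster note and reply "
        "with JSON only, using keys: cause_of_loss, roof_damage (bool), "
        "water_intrusion (bool), estimated_severity (low/medium/high).\n\n"
        f"Note: {adjuster_note}"
    )
    return json.loads(call_llm(prompt))

# Example input: a note a model might turn into
# {"cause_of_loss": "hurricane wind", "roof_damage": true, ...}
note = ("Insured reports shingles torn off during the storm; interior "
        "ceiling staining in two bedrooms suggests water entered.")
```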
