
Hurricane Modeling: High-Tech Meets Local Insight

Sophisticated computer modeling has led to great advances in forecasting weather-related disasters and their potential human toll and economic impact. The predictive power of these models has given insurers the confidence to write coverage for risks – like flood – that were once considered untouchable and has enabled them to develop innovative products.

It can be tempting to think of hurricane forecasting and modeling as being all about high-resolution images, big data, and elaborate algorithms. While these technologies are critical to developing and implementing effective models, they depend heavily on local knowledge and “boots on the ground.” 

“After an event, we quickly send engineers to survey structural damage and look for linkages to the storm’s characteristics,” said Jeff Waters, senior product manager for risk modeler RMS. “Information gathered by our people on the ground is incorporated into our reconstruction of the event to help us identify drivers of the damage and inform our models.” 

Waters recounted how, in the wake of Hurricane Maria in 2017, an RMS team arrived in Puerto Rico on October 3 – 13 days after landfall – to validate a modeled loss estimate. During the week the team spent on the island, they found that damage to insured buildings was less than expected for a storm of Maria’s magnitude. They also observed that most insured buildings featured bunker-style reinforced-concrete construction and flat concrete roofs.  

“These buildings performed very well,” Waters said. “Reinforced concrete prevents significant structural damage, and, with less drywall and tiled flooring, interior damage from water intrusion is limited. Wood and light-metal structures – which tend to be in older neighborhoods where fewer properties are insured – fared far worse.”  

Such ground-level information not only helped validate RMS’s loss estimate but also feeds the model’s continuous improvement. You can read a more detailed account on the RMS blog.
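To make that feedback loop concrete, here is a minimal sketch – with invented numbers, and not a representation of RMS’s actual methodology – of how survey-observed damage ratios by construction class might be blended with modeled values to inform future vulnerability assumptions:

```python
# Illustrative only - not RMS's methodology. Blend modeled mean damage ratios
# with survey-observed ones, by construction class, using a simple credibility weight.

modeled = {"reinforced_concrete": 0.18, "wood_frame": 0.45, "light_metal": 0.40}   # assumed model output
observed = {"reinforced_concrete": 0.08, "wood_frame": 0.55, "light_metal": 0.50}  # assumed survey findings

CREDIBILITY = 0.3  # weight given to a single event's survey data (assumption)

def blend(model_value: float, survey_value: float, weight: float = CREDIBILITY) -> float:
    """Credibility-weighted blend of modeled and observed mean damage ratios."""
    return (1 - weight) * model_value + weight * survey_value

updated = {cls: round(blend(modeled[cls], observed[cls]), 3) for cls in modeled}
print(updated)
# {'reinforced_concrete': 0.15, 'wood_frame': 0.48, 'light_metal': 0.43}
```

In this toy version, the concrete buildings that outperformed expectations pull the modeled damage ratio down, while the wood and light-metal classes move up – the same direction of adjustment the field observations suggested.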

Recent research illustrates how advances in geospatial technologies allow qualitative local knowledge to be incorporated into mathematical models to evaluate potential outcomes of restoration and protection projects and to support plans for mitigation and recovery. Local knowledge mapping is one such approach, marrying modern technology – and the advanced analysis it facilitates – to the experiences of the individuals, communities, and businesses most affected by natural disasters.
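As a simplified sketch of the local-knowledge-mapping idea – with illustrative area names, weights, and scores, not any published method – a modeled hazard metric can be blended with a mapped community-concern score to rank areas for mitigation:

```python
# Illustrative sketch of local knowledge mapping: rank areas for mitigation by
# blending a modeled hazard metric with a qualitative community-concern score.

modeled_flood_depth_m = {"area_a": 0.4, "area_b": 1.2, "area_c": 0.9}  # model output (assumed)
community_concern     = {"area_a": 0.9, "area_b": 0.4, "area_c": 0.7}  # mapped local knowledge, 0-1 (assumed)

def priority(depth_m: float, concern: float, w_model: float = 0.6, w_local: float = 0.4) -> float:
    """Blend a normalized hazard score with a qualitative local-knowledge score."""
    hazard_score = min(depth_m / 2.0, 1.0)  # crude normalization to the 0-1 range
    return w_model * hazard_score + w_local * concern

ranking = sorted(
    modeled_flood_depth_m,
    key=lambda area: priority(modeled_flood_depth_m[area], community_concern[area]),
    reverse=True,
)
print(ranking)  # ['area_c', 'area_b', 'area_a'] - areas ordered by blended priority
```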

Bridging the Cyber Insurance Data Gap

Underwriting cyberrisk is beyond difficult. It’s a newer peril, and the nature of the threat is constantly changing – one day, the biggest worry is identity theft or compromise of personal data. Then, suddenly it seems, everyone is concerned about ransomware bringing their businesses to a standstill.

Now it’s cryptojacking and voice hacking – and all I feel confident saying about the next new risk is that it will be scarier in its own way than everything that has come before.

This is because, unlike most insured risks, these threats are designed. They’re intentional, unconstrained by geography or cost. They’re opportunistic and indiscriminate, exploiting random system flaws and lapses in human judgment.  Cheap to develop and deploy, they adapt quickly to our efforts to defend ourselves.

“The nature of cyberwarfare is that it is asymmetric,” wrote Tarah Wheeler last year in a chillingly titled Foreign Policy article, “In Cyber Wars, There Are No Rules.” “Single combatants can find and exploit small holes in the massive defenses of countries and country-sized companies. It won’t be cutting-edge cyberattacks that cause the much-feared cyber-Pearl Harbor in the United States or elsewhere. Instead, it will likely be mundane strikes against industrial control systems, transportation networks, and health care providers — because their infrastructure is out of date, poorly maintained, ill-understood, and often unpatchable.”

This is the world the cyber underwriter inhabits – the rare business case in which a military analogy isn’t hyperbole.

We all need data — you share first

In an asymmetric scenario – where the enemy could as easily be a government operative as a teenager in his parents’ basement – the primary challenge is to have enough data of sufficiently high quality to understand the threat you face. Catastrophe-modeling firm AIR aptly described the problem cyber insurers face in a 2017 paper that still rings true:

“Before a contract is signed, there is a delicate balance between collecting enough appropriate information on the potential insured’s risk profile and requesting too much information about cyber vulnerabilities that the insured is unwilling or unable to divulge…. Unlike property risk, there is still no standard set of exposure data that is collected at the point of underwriting.”

Everyone wants more, better data; no one wants to be the first to share it.

As a result, the AIR paper continues, “cyber underwriting and pricing today tend to be more art than science, relying on many subjective measures to differentiate risk.”
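For a sense of what exposure data collected at the point of underwriting might even look like, here is a hypothetical minimal record. The fields are illustrative assumptions on my part, not an industry standard and not AIR’s or any insurer’s actual intake form:

```python
# Hypothetical minimal cyber exposure record - illustrative only, not an
# industry standard and not any insurer's actual intake form.
from dataclasses import dataclass

@dataclass
class CyberExposureRecord:
    insured_name: str
    industry_code: str            # e.g., a NAICS code
    annual_revenue_usd: float
    employee_count: int
    records_held: int             # personally identifiable records stored
    mfa_enforced: bool            # multi-factor authentication in place
    backup_frequency_days: int
    prior_incidents_3yr: int

record = CyberExposureRecord(
    insured_name="Example Manufacturing Co.",
    industry_code="332999",
    annual_revenue_usd=25_000_000,
    employee_count=140,
    records_held=50_000,
    mfa_enforced=True,
    backup_frequency_days=1,
    prior_incidents_3yr=0,
)
print(record)
```

Even a short form like this runs into the tension AIR describes: every additional field sharpens the risk picture but raises the odds the applicant can’t – or won’t – answer.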

Anonymity is an incentive

To help bridge this data gap, Verisk – parent of both AIR and insurance data and analytics provider ISO – yesterday announced the launch of Verisk Cyber Data Exchange. Participating insurers contribute their data to the exchange, which ISO manages – aggregating, summarizing, and developing business intelligence that it provides to those companies via interactive dashboards.

Anonymity is designed into the exchange, Verisk says, with all data aggregated so it can’t be traced back to a specific insurer. The hope is that, by creating an incentive for cyber insurers to share data, Verisk can provide insights that will help them quantify this evolving risk for strategic planning, model calibration, and underwriting.
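As a rough sketch of the aggregate-and-suppress concept – not Verisk’s actual implementation, and with invented data and thresholds – the example below pools contributed claim records, summarizes them by segment, and withholds any segment with too few contributors:

```python
# Sketch of aggregate-and-suppress anonymization - not Verisk's actual implementation.
from collections import defaultdict

contributed = [  # (contributor_id, industry_segment, incident_type, loss_usd) - illustrative
    ("ins_1", "healthcare", "ransomware", 250_000),
    ("ins_2", "healthcare", "ransomware", 400_000),
    ("ins_3", "healthcare", "ransomware", 175_000),
    ("ins_1", "retail", "data_breach", 90_000),
]

MIN_CONTRIBUTORS = 3  # suppression threshold (assumption)

groups = defaultdict(list)
for contributor, segment, incident, loss in contributed:
    groups[(segment, incident)].append((contributor, loss))

summary = {}
for key, rows in groups.items():
    if len({contributor for contributor, _ in rows}) < MIN_CONTRIBUTORS:
        continue  # suppress cells with too few contributors to preserve anonymity
    losses = [loss for _, loss in rows]
    summary[key] = {"claims": len(losses), "avg_loss": sum(losses) / len(losses)}

print(summary)
# {('healthcare', 'ransomware'): {'claims': 3, 'avg_loss': 275000.0}}
```

The retail segment, contributed by a single insurer, is dropped from the summary – the kind of design choice that lets participants benefit from pooled intelligence without exposing their own book.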