
Bridging the Cyber Insurance Data Gap

Underwriting cyber risk is beyond difficult. It’s a newer peril, and the nature of the threat is constantly changing – one day, the biggest worry is identity theft or compromise of personal data. Then, suddenly it seems, everyone is concerned about ransomware bringing their businesses to a standstill.

Now it’s cryptojacking and voice hacking – and all I feel confident saying about the next new risk is that it will be scarier in its own way than everything that has come before.

This is because, unlike most insured risks, these threats are designed. They’re intentional, unconstrained by geography or cost. They’re opportunistic and indiscriminate, exploiting random system flaws and lapses in human judgment. Cheap to develop and deploy, they adapt quickly to our efforts to defend ourselves.

“The nature of cyberwarfare is that it is asymmetric,” wrote Tarah Wheeler last year in a chillingly titled Foreign Policy article, “In Cyber Wars, There Are No Rules.” “Single combatants can find and exploit small holes in the massive defenses of countries and country-sized companies. It won’t be cutting-edge cyberattacks that cause the much-feared cyber-Pearl Harbor in the United States or elsewhere. Instead, it will likely be mundane strikes against industrial control systems, transportation networks, and health care providers — because their infrastructure is out of date, poorly maintained, ill-understood, and often unpatchable.”

This is the world the cyber underwriter inhabits – the rare business case in which a military analogy isn’t hyperbole.

We all need data — you share first

In an asymmetric scenario – where the enemy could as easily be a government operative as a teenager in his parents’ basement – the primary challenge is to have enough data of sufficiently high quality to understand the threat you face. Catastrophe-modeling firm AIR aptly described the problem cyber insurers face in a 2017 paper that still rings true:

“Before a contract is signed, there is a delicate balance between collecting enough appropriate information on the potential insured’s risk profile and requesting too much information about cyber vulnerabilities that the insured is unwilling or unable to divulge…. Unlike property risk, there is still no standard set of exposure data that is collected at the point of underwriting.”

Everyone wants more, better data; no one wants to be the first to share it.

As a result, the AIR paper continues, “cyber underwriting and pricing today tend to be more art than science, relying on many subjective measures to differentiate risk.”
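To make that gap concrete: if a standard set of point-of-underwriting exposure data did exist, it might look something like the minimal Python sketch below. This is a hypothetical illustration only, not an ISO, AIR, or Verisk schema; every field name is an assumption about what an underwriter might ask an applicant to divulge.

```python
from dataclasses import dataclass

# Hypothetical cyber exposure record -- no industry-standard schema exists
# at the point of underwriting, so these fields are illustrative only.
@dataclass
class CyberExposureRecord:
    industry_code: str          # insured's sector (e.g., a NAICS code)
    annual_revenue_usd: float   # rough proxy for the size of the target
    records_held: int           # personal records stored, a breach-cost driver
    uses_mfa: bool              # multi-factor authentication in place?
    patch_cadence_days: int     # typical days to apply security patches
    backup_cadence_days: int    # how often critical systems are backed up
    prior_incidents_3yr: int    # self-reported incidents, past three years

# Example: the kind of profile an applicant might (or might not) share.
applicant = CyberExposureRecord(
    industry_code="62",         # health care -- the "often unpatchable" sector
    annual_revenue_usd=40e6,
    records_held=250_000,
    uses_mfa=False,
    patch_cadence_days=90,
    backup_cadence_days=30,
    prior_incidents_3yr=1,
)
```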

Anonymity is an incentive

To help bridge this data gap, Verisk – parent of both AIR and insurance data and analytics provider ISO – yesterday announced the launch of Verisk Cyber Data Exchange. Participating insurers contribute their data to the exchange, which ISO manages – aggregating, summarizing, and developing business intelligence that it provides to those companies via interactive dashboards.

Anonymity is designed into the exchange, Verisk says, with all data aggregated so it can’t be traced back to a specific insurer. The hope is that, by creating an incentive for cyber insurers to share data, Verisk can provide insights that will help them quantify this evolving risk for strategic, model calibration, and underwriting purposes.
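Verisk hasn’t published the mechanics, but the aggregate-before-release idea can be sketched simply: suppress any pooled statistic unless enough insurers contribute that no single company’s figures can be backed out. The threshold, names, and structure below are illustrative assumptions, not Verisk’s actual design.

```python
from statistics import mean

# Minimal sketch of a k-contributor suppression rule: publish a pooled
# statistic only when at least MIN_CONTRIBUTORS insurers are represented.
# The threshold of 5 is an illustrative assumption.
MIN_CONTRIBUTORS = 5

def pooled_loss_ratio(submissions: dict[str, float]) -> float | None:
    """Average loss ratio across insurers, or None if the pool is too
    small to release without risking re-identification."""
    if len(submissions) < MIN_CONTRIBUTORS:
        return None  # suppressed: a small pool could be traced back
    return mean(submissions.values())

pool = {"ins_a": 0.62, "ins_b": 0.71, "ins_c": 0.55, "ins_d": 0.68}
print(pooled_loss_ratio(pool))   # None -- only four contributors
pool["ins_e"] = 0.60
print(pooled_loss_ratio(pool))   # ~0.632 -- five contributors, releasable
```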

Tapping the insurance ecosystem for insights

I had the pleasure last week of attending “Data in the New: Transforming Insurance” – the third annual insurtech-related thought leadership event held by St. John’s University’s Tobin Center for Executive Education and School of Risk Management.

To distill the insights I collected would take far more than one blog post. Speakers, panelists, and attendees spanned the insurance “ecosystem” (a word that came up a lot!) – from CEOs, consultants, and data scientists to academics, actuaries, and even a regulator or two to keep things real. I’m sure the presentations and conversations I participated in will feed several posts in weeks to come.

Herbert Chain, executive director of the Center for Executive Education of the Tobin College of Business, welcomes speakers and attendees.

Just getting started

Keynote speaker James Bramblet, Accenture’s North American insurance practice lead, “set the table” by discussing where the industry has been and where some of the greatest opportunities for success lie. He described an evolution from functional silos (data hiding in different formats and databases) through the emergence of function-specific platforms (more efficient, better-organized silos) to today’s environment, characterized by “business intelligence and reporting overload.”

Accenture’s James Bramblet discusses the history and future of data in insurance.

“Investment in big data is just getting started,” Jim said, adding that he expects the next wave of competitive advantage to be “at the intersection of customization and real time” – facilitating service delivery in the manner and with the speed customers have come to expect from other industries.

Jim pointed to several areas in which insurers are making progress and flagged one – workforce effectiveness – that he considers a “largely untapped” area of opportunity. Panelists and audience members seemed to agree that, while insurers are getting better at aggregating and analyzing vast amounts of data, their operations still look much as they always have: paper-based and labor-intensive. While technology and process improvement methodologies that could address this exist, several attendees said they found organizational culture to be the biggest obstacle, with one citing Peter Drucker’s observation that “culture eats strategy for breakfast.”

Lake or pond? Raw or cooked?

Paul Bailo, global head of digital strategy and innovation for Infosys Digital, threw some shade on big data and the currently popular idea of “data lakes” stocked with raw, unstructured data. Paul said he prefers “to fish in data ponds, where I have some idea what I can catch.”

Data lakes, he said, lack the context to deliver real business insights. Data ponds, by contrast, “contain critical data points that drive 80-90 percent of decisions.”

Stephen Mildenhall, assistant professor of risk management and insurance and director of insurance data analytics at the School of Risk Management, went as far as to say the term “raw data” is flawed.

“Deciding to collect a piece of data is part of a structuring process,” he said, adding that, to be useful, “all data should be thoroughly cooked.”
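Both points, the pond and the cooking, amount to the same discipline: validate and structure records on the way in, rather than piling raw dumps into a lake and hoping context survives. A minimal sketch of that idea follows; the field names, controlled vocabulary, and rules are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClaimRecord:
    claim_id: str
    incident_type: str  # e.g., "ransomware", "data_breach"
    reported: date
    paid_usd: float

# Illustrative controlled vocabulary -- part of the "structuring process"
# that begins the moment you decide what to collect.
VALID_TYPES = {"ransomware", "data_breach", "cryptojacking"}

def cook(raw: dict) -> ClaimRecord | None:
    """Validate and type one raw record; reject anything that lacks the
    context to support a decision, instead of letting it silt up a lake."""
    try:
        incident = str(raw["incident_type"]).strip().lower()
        if incident not in VALID_TYPES:
            return None
        return ClaimRecord(
            claim_id=str(raw["claim_id"]),
            incident_type=incident,
            reported=date.fromisoformat(raw["reported"]),
            paid_usd=float(raw["paid_usd"]),
        )
    except (KeyError, ValueError):
        return None

raw_lake = [
    {"claim_id": "C1", "incident_type": "Ransomware",
     "reported": "2019-03-02", "paid_usd": "125000"},
    {"claim_id": "C2", "incident_type": "unknown"},  # no usable context
]
pond = [r for r in map(cook, raw_lake) if r is not None]
print(pond)  # only the fully "cooked" record makes it into the pond
```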

Innovation advice

Practical advice was available in abundance for the 80-plus attendees, as was recognition of technical and regulatory challenges to implementation. James Regalbuto, deputy superintendent for insurance with the New York State Department of Financial Services, explained – thoroughly and with good humor – that regulators really aren’t out to stifle innovation. He provided several examples of privacy and bias concerns inherent in some solutions intended to streamline underwriting and other functions.

Perhaps the most broadly applicable advice came from Accenture’s Jim Bramblet, who cautioned against overthinking the features and attributes of the many solutions available to insurers.

“Pick your platform and go,” Jim said. “Create a runway for your business and ‘use case’ your way to greatness.”