Executive Exchange: Insuring AI-Related Risks

By Lewis Nibbelin, Contributing Writer, Triple-I

Attracting millions of weekly users who send over a billion messages every day, the generative AI chatbot ChatGPT became one of the fastest-growing consumer applications of all time, helping to lead the charge in AI’s transformation of business operations across industries worldwide. With generative AI’s rise, however, came a host of accuracy, security, and ethical concerns, presenting new risks that many organizations may be ill-equipped to address.

Enter Insure AI, a collaboration between Munich Re and Hartford Steam Boiler (HSB) that structured its first insurance product for AI performance errors in 2018. Initially covering only model developers, the coverage later expanded to include potential losses from using AI models, because mistakes are inevitable even when organizations have substantial oversight in place.

“Even the best AI governance process cannot avoid AI risk,” said Michael Berger, head of Insure AI, in a recent Executive Exchange interview with Triple-I CEO Sean Kevelighan. “Insurance is really needed to cover this residual risk, which…can further the adoption of trustworthy, powerful, and reliable AI models.”

Speaking about his team’s experiences, Berger explained that most claims stem not from “negligence,” but from “data science-related risks, statistical risks, and random fluctuation risks, which led to an AI model making more errors than expected” – particularly in situations where “the AI model sees more difficult transactions compared to what it saw in its training and testing data.”

Such errors can underlie every AI model and are therefore the most fundamental to insure, but Insure AI is currently working with clients to develop coverage for discrimination and copyright infringement risks as well, Berger said.

Berger also discussed the insurance industry’s extensive history of disseminating technological advancements, from helping to usher in the Industrial Revolution with steam-engine insurance to insuring renewable energy projects to facilitate sustainability today. Like other tech innovations, AI is creating risks that insurers are uniquely positioned to assess and mitigate.

“This is an industry that’s been based on using data and modeling data for a very long time,” Kevelighan agreed. “At the same time, this industry is extraordinarily regulated, and the regulatory community may not be as up to speed with how insurers are using AI as they need to be.”

Though no federal AI regulations currently exist in the United States, several states have already introduced their own, following the comprehensive AI Act enacted in Europe last year. With more legislation on the horizon, insurers must help guide these conversations to ensure that AI regulations suit the complex needs of insurance – a position Triple-I advocated for in a report with SAS, a global leader in data and AI.

“We need to make sure that we’re cultivating more literacy around [AI] for our companies and our professionals and educating our workers in terms of what benefits AI can bring,” Kevelighan said, noting that more transparent discussion around AI is crucial to “getting the regulatory and the customer communities more comfortable with how we’re using it.”

Learn More:

Insurtech Funding Hits Seven-Year Low, Despite AI Growth

Actuarial Studies Advance Discussion on Bias, Modeling, and A.I.

Agents Skeptical of AI but Recognize Potential for Efficiency, Survey Finds

Insurers Need to Lead on Ethical Use of AI
