
Algorithms, A.I. and Insurance: Promise and Peril

By Max Dorfman, Research Writer

A couple of articles crossed our desk recently that discussed the benefits and pitfalls of algorithms and artificial intelligence (AI). Neither discussed insurance, but they offered important lessons for the industry.


An algorithm is a simple set of instructions for a computer. Artificial intelligence is a group of algorithms that can modify themselves and create new algorithms as they process data. Broadly, these smart technologies can drive untold change for the industry.
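As a toy illustration of the first definition, consider a hypothetical claims-routing rule (purely illustrative – the function, thresholds, and categories here are invented for this example, not any insurer's actual logic):

```python
# A toy "algorithm" in the plain sense used above: a fixed set of
# instructions the computer follows step by step.
# All names and thresholds are hypothetical.

def triage_claim(amount, has_photos):
    """Route a claim based on two simple inputs."""
    if amount < 1000 and has_photos:
        return "auto-approve"
    if amount < 10000:
        return "fast-track review"
    return "full adjuster review"

print(triage_claim(500, True))    # auto-approve
print(triage_claim(5000, False))  # fast-track review
```

AI systems differ in that the rules themselves are learned from data rather than written by hand, which is what makes them both powerful and harder to inspect.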

As the Financial Times wrote earlier this year, “Insurance claims are, by their nature, painful processes. They happen only when something has gone wrong and they can take months to resolve.”

Chinese insurer Ping An uses AI to accelerate decision making, and New York-based insurance start-up Lemonade employs algorithms and AI to help pay clients more quickly. Other insurers use smart technologies for fraud detection, risk management, marketing, and other functions.

What could go wrong?

Algorithms and AI can work quickly, but they aren’t perfect. A recent article by Osonde A. Osoba, an information scientist and professor with the RAND Corporation, details what data scientists call an “algorithm audit”: a review and testing of an algorithm’s underlying data and outputs to detect biases or blind spots that skew results.
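One simple form such an audit can take is an outcome comparison across groups. The sketch below is a minimal, hypothetical illustration (the data, function name, and the 80% threshold – a common disparate-impact heuristic – are assumptions for this example, not part of Osoba’s article):

```python
# Minimal sketch of an outcome-based algorithm audit (hypothetical data).
# It compares approval rates across groups and flags any group whose rate
# falls well below the overall rate.

def audit_approval_rates(decisions, threshold=0.8):
    """decisions: list of (group, approved) pairs, approved is 0 or 1.
    Flags groups whose approval rate is below `threshold` times the
    overall approval rate (the "four-fifths rule" heuristic)."""
    overall = sum(approved for _, approved in decisions) / len(decisions)
    flagged = {}
    for group in {g for g, _ in decisions}:
        outcomes = [a for g, a in decisions if g == group]
        rate = sum(outcomes) / len(outcomes)
        if rate < threshold * overall:
            flagged[group] = round(rate, 2)
    return flagged

# Group A is approved 80% of the time, group B only 40%.
decisions = ([("A", 1)] * 80 + [("A", 0)] * 20 +
             [("B", 1)] * 40 + [("B", 0)] * 60)
print(audit_approval_rates(decisions))  # {'B': 0.4}
```

A real audit would go much further – examining inputs, proxies, and error rates, not just outcomes – but even this simple check can surface the kind of disparity that sparked the controversy below.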

In the case Osoba discusses, Apple Card was assailed on Twitter by tech executive David Heinemeier Hansson for giving him a credit limit 20 times larger than his wife’s, despite the couple sharing all of their assets. Hansson concluded that the algorithm was sexist – causing a furor on the social media platform among those who vehemently agreed and those who disagreed with him.

Apple said the card doesn’t use information about applicants’ gender or marital status. Yet no one from Apple could explain why Hansson received a significantly higher credit limit. The response was simply: “Credit limits are determined by an algorithm.”

Still, these algorithms and AI are informed by something – perhaps the implicit biases of the programmers. For example, systems using facial recognition software have yielded decisions that appear biased against darker-skinned women.

Are algorithms easier to fix than people?

An article in The New York Times by Sendhil Mullainathan, a professor of computation and behavioral science at the University of Chicago, discusses human and algorithmic biases. He cites a study in which he and his co-authors examined an algorithm commonly used to determine who requires extra levels of health care services – one that has affected approximately 100 million people in the U.S. In this case, black patients were routinely rated as lower risk. However, the algorithm was inherently flawed: it used past health care spending as its measure of health care need.

Black patients already spend less money on health care than white patients with the same chronic conditions, so the algorithm only served to reinforce this disparity. Indeed, the study estimated that, absent the algorithmic bias, the number of black patients receiving extra care would more than double. Yet Mullainathan believes the algorithm can be fixed fairly easily.
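The mechanics of this proxy problem can be sketched in a few lines. The numbers below are invented for illustration only; the point is simply that ranking by spending and ranking by actual illness can produce opposite orderings:

```python
# Sketch of the proxy problem described above (illustrative numbers only):
# ranking patients by past spending instead of by actual illness
# under-prioritizes anyone who spends less for the same level of sickness.

patients = [
    # (name, chronic_conditions, past_spending_usd)
    ("patient_1", 4, 9000),   # sicker, but lower past spending
    ("patient_2", 2, 15000),  # less sick, higher past spending
]

by_spending = sorted(patients, key=lambda p: p[2], reverse=True)
by_need = sorted(patients, key=lambda p: p[1], reverse=True)

print([p[0] for p in by_spending])  # ['patient_2', 'patient_1']
print([p[0] for p in by_need])      # ['patient_1', 'patient_2']
```

This is also why the fix is comparatively easy: swapping the proxy (spending) for a better target (a direct measure of health need) changes the ranking without redesigning the whole system.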

Contrast this with a 2004 study Mullainathan conducted. He and his co-author responded to job listings with fabricated resumes: half the time they sent resumes with distinctively black names, the other half with distinctively white names. Resumes with black names received far fewer responses than those with white names.

This bias was verifiably human and, therefore, much harder to fix.

“Humans are inscrutable in a way that algorithms are not,” Mullainathan says. “Our explanations for our behavior are shifting and constructed after the fact.”

Don’t write algorithms off

As RAND’s Osoba writes, algorithms and AI “help speed up complex decisions, enable wider access to services, and in many cases make better decisions than humans.” It’s the last point to be especially mindful of: while algorithms can reproduce and intensify the biases of their programmers, they don’t possess inherent prejudices the way people do.

As Mullainathan puts it, “Changing algorithms is easier than changing people: software on computers can be updated; the ‘wetware’ in our brains has so far proven much less pliable.”

Much Ado About AI at I.I.I. Joint Industry Forum

By Lucian McMahon

You’re familiar with the buzzwords by now. Internet of things. Blockchain. Artificial intelligence.

At the 2019 I.I.I. Joint Industry Forum, a panel on artificial intelligence and insurance cut through the buzz. How can AI be used to help build resilient communities? And how can the insurance industry leverage AI to better help customers address risk?

Pictured left to right: Andrew Robinson, Sean Ringsted, Ahmad Wani, Kyle Schmitt, James Roche

New products, more resilience

Regarding resilience, Ahmad Wani, CEO and co-founder of One Concern, said that AI is being used to harness vast troves of data to identify, on a “hyperlocal level,” the impact of a whole range of hazards. His company is already doing just that, partnering with local governments and insurance companies to better plan for future losses. “We don’t need to wait for disasters to happen to prioritize the response, we need to make changes and to act now before the disaster,” Wani said.

Sean Ringsted, executive vice president, chief digital officer and chief risk officer at the Chubb Group, also pointed out that insurers are already expanding their product offerings thanks to AI and big data. Take contingent business interruption, for example: the sheer volume of available data now allows insurers to effectively analyze supply-chain risks and price them accordingly.

Transparency and fairness are top of mind

But as Ringsted said, “it’s not all good news and roses.” What sorts of concerns should insurers and the public have about using AI?

Kyle Schmitt, managing director of the global insurance practice at J.D. Power, cited consumer concerns with the data and algorithms used for AI-enabled products. Consumers are deeply concerned about the security and privacy of any data they share with insurers. Per Schmitt, consumers also worry about the fairness of AI products when algorithms, rather than people, make decisions in an opaque way.

This is the so-called “black box problem” of AI, in which complex algorithms will arrive at answers or decisions without anyone being able to explain how they did so. Ringsted stressed that, for AI to be a viable tool, its mechanisms will need to be explainable to regulators and the public.

James Roche, vice president, personal lines product development at ISO, echoed this sentiment: social responsibility requires both robust data stewardship and strict control over AI outputs to ensure that outcomes are fair and within ethical standards.

From a consumer perspective, ensuring ethical use of AI is critical. Schmitt said that at the end of the day consumers are open-minded, “but they just want some ground rules.”