
Agents Skeptical of AI but Recognize Potential for Efficiency, Survey Finds

Despite the rapid advancement of artificial intelligence (AI) in the insurance industry, only 6% of agency principals have implemented an AI solution, with many expressing concerns about its impact on their agency’s operations, according to the 2024 Agent-Customer Connection Study by Liberty Mutual and Safeco Insurance.

The study — which surveyed more than 1,000 independent agency leaders, agency staff members, and consumers — reveals a complex relationship between AI and the insurance sector, indicating a need for effective strategies to harness AI’s potential and address prevalent concerns about AI accuracy and data privacy.

Low current adoption but growing interest

So far, AI tools remain on the fringes in most independent agencies, according to the report. While only 6% of agency principals surveyed said they have already implemented an AI solution in their agency, more than one in three (36%) said they are likely to be using AI in their business within the next five years.

The research found that agent sentiment on AI is split. Sixty-four percent of agency principals said they are interested in how AI can improve their business, but only 17% of agents said they trust AI technology, and 27% view AI as a threat.

“I’m still learning a lot about the impact that AI will have,” said one of the agents surveyed, “but from what I’ve learned so far, it could revolutionize the way we service clients and bring many new efficiencies to our service platform.”

For many agents, a lack of understanding about AI is holding back adoption. Forty-five percent of the agents surveyed said they don’t know enough about AI to make business decisions about the technology.

Concerns about AI accuracy and data privacy are also prevalent. Nearly one in three agents said they are unlikely to incorporate AI into their business practices in the next five years, citing a lack of trust and concerns about data privacy as the top reasons.

Potential benefits of AI for insurance agencies

AI technology has the potential to drive significant efficiency gains and time savings for insurance agencies, the Liberty Mutual/Safeco report’s authors stated. AI-powered chatbots alone could save businesses up to 30% in customer support costs. The report also cited a recent survey by Section, which found that professionals adept at working with AI report saving up to 12 hours per week by leveraging the technology.

Half of agency principals said they believe AI can make their business more efficient, and 43% of agency principals surveyed said using AI will help their agency better serve customers and grow the business in the future.

The top five areas where agents envision AI providing a boost are:

  • Identifying cross-selling opportunities (58%)
  • Assisting with marketing content creation (53%)
  • Automating routine service tasks (52%)
  • Creating personalized customer communications (46%)
  • Automating administrative tasks (44%)

The efficiency and service enhancements enabled by AI will be key to meeting the rising expectations of insurance customers in the digital age, according to the report. Seventy-seven percent of independent agency customers said it’s very valuable or critical for their agent to be highly responsive to requests, and 67% want their agent to proactively understand their needs. The ability to contact an agent 24/7, something AI can help facilitate, would make 39% of customers more likely to choose a particular agent over another.

View the complete Liberty Mutual/Safeco survey report here.

Algorithms, A.I. and Insurance: Promise and Peril

By Max Dorfman, Research Writer

A couple of articles crossed our desk recently that discussed the benefits and pitfalls of algorithms and artificial intelligence (AI). Neither discussed insurance, but they offered important lessons for the industry.


An algorithm is a simple set of instructions for a computer. Artificial intelligence is a group of algorithms that can modify themselves and create new algorithms as they process data. Broadly, these smart technologies can drive sweeping change for the industry.
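To make the distinction concrete, here is a minimal Python sketch. The claim-flagging rule, thresholds, and data below are all invented for illustration; the point is only that the fixed rule never changes, while the learning version revises its own rule as data arrives.

```python
# A minimal sketch of the distinction above. The claim-flagging rule,
# thresholds, and data are all invented for illustration.

# An algorithm: a fixed set of instructions that never changes.
def flag_claim_fixed(amount):
    """Flag any claim over a hard-coded $10,000 threshold."""
    return amount > 10_000

# A learning system: the rule itself is revised as data arrives.
class AdaptiveFlagger:
    """Nudges its threshold toward claim amounts that reviewers
    later confirmed as fraudulent."""

    def __init__(self, threshold=10_000, learning_rate=0.1):
        self.threshold = threshold
        self.learning_rate = learning_rate

    def flag(self, amount):
        return amount > self.threshold

    def update(self, amount, was_fraud):
        # Move the threshold toward confirmed-fraud amounts so that
        # similar claims are flagged in the future.
        if was_fraud:
            self.threshold += self.learning_rate * (amount - self.threshold)

flagger = AdaptiveFlagger()
for amount, was_fraud in [(4_000, True), (6_000, True), (15_000, False)]:
    print(amount, flagger.flag(amount))
    flagger.update(amount, was_fraud)

print(round(flagger.threshold))  # 9060: the rule has rewritten itself
```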

As the Financial Times wrote earlier this year, “Insurance claims are, by their nature, painful processes. They happen only when something has gone wrong and they can take months to resolve.”

Chinese insurer Ping An uses AI to accelerate decision making, and New York-based insurance start-up Lemonade employs algorithms and AI to help pay clients more quickly. Other insurers use smart technologies for fraud detection, risk management, marketing, and other functions.

What could go wrong?

Algorithms and AI can work quickly, but they aren’t perfect. A recent article by Osonde A. Osoba, an information scientist and professor with the RAND Corporation, details what data scientists call an “algorithm audit”: a review and test of an algorithm’s underlying data and outputs to detect the biases or blind spots that skew its results.
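To illustrate, here is a minimal sketch of one early step in such an audit: checking whether a model’s decisions differ across groups. The records, field names, and numbers are hypothetical; a real audit would go much deeper into the underlying data and features.

```python
# A minimal audit step: compare decision rates across groups.
# Records and field names are hypothetical.
from collections import defaultdict

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    approvals[r["group"]] += r["approved"]

rates = {g: round(approvals[g] / totals[g], 2) for g in totals}
print(rates)  # {'A': 0.67, 'B': 0.33} -- a gap worth investigating

# A large gap does not prove bias by itself, but it flags where the
# underlying data and features need review and testing.
```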

In the case Osoba discusses, Apple Card was assailed on Twitter by tech executive David Heinemeier Hansson after it gave him a credit limit 20 times larger than his wife’s, despite their sharing all assets, among other factors. Hansson concluded that the algorithm was sexist – causing a furor on the social media platform among those who vehemently agreed and those who disagreed.

Apple said it doesn’t have information about applicants’ gender or marital status. Yet no one from the company could explain why Hansson received a significantly higher credit limit. The response: “Credit limits are determined by an algorithm.”

Still, these algorithms and AI are informed by something – perhaps the implicit biases of the programmers. For example, systems using facial recognition software have yielded decisions that appear biased against darker-skinned women.

Are algorithms easier to fix than people?

An article in The New York Times by Sendhil Mullainathan, a professor of behavioral and computational science at the University of Chicago, discusses human and algorithmic biases. He cites a study in which he and his co-authors examined an algorithm commonly used to determine which patients need extra levels of health care services – one that has affected approximately 100 million people in the U.S. The algorithm routinely rated black patients as lower risk. But it was inherently flawed: it used past health care spending as a proxy for medical need.

Less money is already spent on the health care of black patients than on white patients with the same chronic conditions, so the algorithm simply reinforced that disparity. Indeed, the study estimated that without the algorithmic bias, the number of black patients receiving extra care would more than double. Yet Mullainathan believes the algorithm can be fixed fairly easily.
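A small sketch illustrates the proxy problem the study describes. The patients, need scores, and dollar figures below are invented; the point is that ranking on a spending proxy can exclude patients who are just as sick, while ranking on need does not.

```python
# Hedged sketch of the proxy problem: ranking patients by past
# spending rather than by medical need. All numbers are invented.

# Equally sick patients, but historically less is spent on the
# care of group "b" patients.
patients = [
    {"id": 1, "need": 9, "spend": 9_000, "group": "w"},
    {"id": 2, "need": 9, "spend": 5_000, "group": "b"},
    {"id": 3, "need": 5, "spend": 6_000, "group": "w"},
    {"id": 4, "need": 5, "spend": 3_000, "group": "b"},
]

def top_half(key):
    """Select the half of patients ranked highest on the given field."""
    ranked = sorted(patients, key=lambda p: p[key], reverse=True)
    return {p["id"] for p in ranked[: len(ranked) // 2]}

# Ranking on the spending proxy selects patients 1 and 3 (both "w"),
# missing patient 2, who is just as sick as patient 1.
print(top_half("spend"))  # {1, 3}

# Relabeling the target from spending to need fixes the selection.
print(top_half("need"))   # {1, 2}
```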

Contrast this with a 2004 study Mullainathan conducted. He and his co-author responded to job listings with fabricated resumes: half carried distinctively black names, the other half distinctively white names. Resumes with black names received far fewer responses than those with white names.

This bias was verifiably human and, therefore, much harder to fix.

“Humans are inscrutable in a way that algorithms are not,” Mullainathan says. “Our explanations for our behavior are shifting and constructed after the fact.”

Don’t write algorithms off

As RAND’s Osoba writes, algorithms and AI “help speed up complex decisions, enable wider access to services, and in many cases make better decisions than humans.” It’s the last point one must be particularly mindful of: while algorithms can reproduce and intensify the biases of their programmers, they don’t possess inherent prejudices the way people do.

As Mullainathan puts it, “Changing algorithms is easier than changing people: software on computers can be updated; the ‘wetware’ in our brains has so far proven much less pliable.”

Much Ado About AI at I.I.I. Joint Industry Forum

By Lucian McMahon

You’re familiar with the buzzwords by now. Internet of things. Blockchain. Artificial intelligence.

At the 2019 I.I.I. Joint Industry Forum, a panel on artificial intelligence and insurance cut through the buzz. How can AI be used to help build resilient communities? And how can the insurance industry leverage AI to better help customers address risk?

Panelists, pictured left to right: Andrew Robinson, Sean Ringsted, Ahmad Wani, Kyle Schmitt, and James Roche.

New products, more resilience

Regarding resilience, Ahmad Wani, CEO and co-founder of One Concern, said that AI is being used to harness vast troves of data to identify, on a “hyperlocal level,” the impact of a whole range of hazards. His company is already doing just that, partnering with local governments and insurance companies to better plan for future losses. “We don’t need to wait for disasters to happen to prioritize the response, we need to make changes and to act now before the disaster,” Wani said.

Sean Ringsted, executive vice president, chief digital officer and chief risk officer at the Chubb Group, also pointed out that insurers are already expanding their product offerings thanks to AI and big data. Contingent business interruption coverage, for example: the sheer volume of available data now allows insurers to effectively analyze supply chain risks and price them accordingly.

Transparency and fairness are top of mind

But as Ringsted said, “it’s not all good news and roses.” What sorts of concerns should insurers and the public have about using AI?

Kyle Schmitt, managing director of the global insurance practice at J.D. Power, cited consumer concerns with the data and algorithms used for AI-enabled products. Consumers are deeply concerned about the security and privacy of any data they share with insurers. Per Schmitt, consumers also worry about the fairness of AI products when algorithms, rather than people, are making decisions in an opaque way.

This is the so-called “black box problem” of AI, in which complex algorithms arrive at answers or decisions without anyone being able to explain how they did so. Ringsted stressed that, for AI to be a viable tool, its mechanisms will need to be explainable to regulators and the public.

James Roche, vice president, personal lines product development at ISO, echoed this sentiment: social responsibility requires both robust data stewardship and strict control over AI outputs to ensure that outcomes are fair and within ethical standards.

From a consumer perspective, ensuring ethical use of AI is critical. Schmitt said that at the end of the day consumers are open-minded, “but they just want some ground rules.”