By Lewis Nibbelin, Contributing Writer, Triple-I
Technological innovations — particularly generative AI — are revolutionizing insurance operations and risk management more quickly than the industry can fully accommodate them, necessitating more proactive involvement in their implementation, according to participants in Triple-I’s 2024 Joint Industry Forum.
Such involvement can ensure that the ethical implications of AI remain integral to its continued evolution.
Benefits of AI
Increasingly sophisticated AI models have expedited data processing across the insurance value chain, reshaping underwriting, pricing, claims, and customer service. Some models automate these processes entirely, with one automated claims review system – co-developed by Paul O’Connor, vice president of operational excellence at ServiceMaster – streamlining claims processing through to payment, thereby “removing the friction from the process of disputes,” said O’Connor.
“We’re at an inflection point of seeing losses dramatically reduced,” said Kenneth Tolson, global president for digital solutions at Crawford & Co., as AI promises to “dramatically mitigate or even eliminate loss” by enabling insurers to resolve problems more efficiently.
Novel insurance products also cover more risk, said Majesco’s chief strategy officer Denise Garth, who pointed to usage-based insurance (UBI) as more appealing to younger buyers. UBI emerged from telematics, which can leverage AI to track actual driving behavior and has been found to encourage significant safety-related changes.
Alongside lower operational costs resulting from AI efficiency gains, such policies suggest a possibility for reduced premiums and, consequently, a diminished protection gap, Garth said.
Utilizing AI offers “the first time in decades that we have the opportunity to truly optimize our operations,” she added.
Industry hurdles
For Patrick Davis, senior vice president and general manager of Data & Analytics at Majesco, developing effective AI strategies hinges not on massive budgets or teams of data scientists, but on the internal organization of existing data.
AI models fail when base datasets are inaccessible or ill-defined, he explained. This is especially true of generative AI, which supports decision-making by generating new content in response to conversational prompts.
“Extremely well-described data” is essential to receiving meaningful, accurate responses, Davis said. Otherwise, “it’s garbage in, garbage out.”
Outdated technology and business practices, however, impede successful AI integration throughout the insurance industry, Davis and Garth agreed.
“We have, as an industry, a lot of legacy,” Garth said. “If we don’t rethink how we’re going about our products and processes, the technology we apply to them will keep doing the same things, and we won’t be able to innovate.”
Beyond stifling innovation, cultural resistance to change can delay organizations in proactively balancing their unique risks and goals against AI’s likely inevitable influence, leaving both themselves and insureds at a disadvantage.
“We’re not going to stop change,” said Reggie Townsend, vice president and head of the data ethics practice at SAS, “but we have to figure out how to adapt to the pace of change in a way that allows us to govern our risk in acceptable ways.”
Ethical implications
Responsible innovation, Townsend said, entails “making sure, when we have changes, that they have a material benefit to human beings” – benefits which an organization clearly defines while being considerate of potential downsides.
Improperly managed data invites such downsides, fueling pervasive bias and privacy concerns in AI models.
Augmenting base datasets with demographic trend information, for example, may be “tempting,” O’Connor explained, “but where does this data go, once it gets outside our boundaries and augmented elsewhere? Vigilance is absolutely required.”
Organizational oversight committees are crucial to ensuring any major technological advancements remain intentional and ethical, as they encourage innovators to “overcommunicate the ‘why,’” said discussion moderator Peter Miller, president and CEO of The Institutes.
Tolson reaffirmed this point in discussing how his organization’s AI council holds him accountable by fostering “diligence and openness” around an “articulated vision,” further fueling collaborative sharing of data cross-organizationally. Collaboration and transparency around AI are key, he stressed, “so that we don’t have to learn the same lesson the hard way twice.”
Looking ahead
Though they do not currently exist in the U.S. on a federal level, AI regulations have already been introduced in some states, following a comprehensive AI Act enacted earlier this year in Europe. With more legislation on the horizon, insurers must help lead these conversations to ensure that AI regulations suit the complex needs of insurance, without hindering the industry’s commitments to equity and security.
A recent report by Triple-I and SAS, a global leader in data and AI, centers the insurance industry’s role in guiding conversations around ethical AI implementation on a global, multi-sector scale. Defending this position, Townsend explained how the industry “has put a lot of rigor in place already” to eradicate bias and preserve data integrity “because [it’s] been so highly regulated for a long time,” creating an opportunity to educate less experienced businesses.
As rapid technological advancement produces ever-growing mountains of data, more underinformed industries will turn to AI to make sense of it — making the insurance industry’s assumption of that educational responsibility all the more imperative.
Learn More:
Insurers Need to Lead on Ethical Use of AI
JIF 2024: What Resilience Success Looks Like
Changing Risks, Rising Costs Drive Insurance Transformation for 2025: Majesco
Executive Exchange: Using Advanced Tools to Drill Into Flood Risk