Category Archives: Technology

California Finalizes Updated Modeling Rules, Clarifies Applicability Beyond Wildfire

California’s Department of Insurance last week posted long-awaited rules that remove obstacles to profitably underwriting coverage in the wildfire-prone state. Among other things, the new rules eliminate outdated restrictions on use of catastrophe models in setting premium rates.

The measure also extends language related to catastrophe modeling to “nature-based flood risk reduction.” In the original text, “the only examples provided of the kinds of risk mitigation measures that would have to be considered in this context involved wildfire. However, because the proposed regulations also permit catastrophe modeling with respect to flood lines, it was appropriate to add language to this subdivision relating to flood mitigation.”

The relevant language applies “generally to catastrophe modeling used for purposes of projecting annual loss,” according to documents provided by the state Department of Insurance.

Benefits for policyholders

As a result, the department said in a press release, “Homeowners and businesses will see greater availability, market stability, and recognition for wildfire safety through use of catastrophe modeling.”

For the past 30 years, California regulations – specifically, Proposition 103 – have required insurance companies to apply a catastrophe factor to insurance rates based on historical wildfire losses. In a dynamically changing risk environment, historical data alone is not sufficient for determining fair, accurate insurance premiums. According to Cal Fire, five of the largest wildfires in the state’s history have occurred since 2017. 

The state’s evolving risk profile, combined with the underwriting and pricing constraints imposed by Proposition 103, has led to rising premium rates and, in some cases, insurers deciding to limit or reduce their business in the state.

With fewer private insurance options available, more Californians have been resorting to the state’s FAIR Plan, which offers less coverage for a higher premium. This isn’t a tenable situation.

“Put simply, increasing the number of policyholders in the FAIR Plan threatens the solvency of insurance companies in the voluntary market,”  California Insurance Commissioner Ricardo Lara explained to the State Assembly Committee on Insurance. “If the FAIR Plan experiences a massive loss and cannot pay its claims, by law, insurance companies are on the hook for the unpaid FAIR Plan losses…. This uncertainty is driving insurance companies to further limit coverage to at-risk Californians.”

“Including the use of catastrophe modeling in the rate making process will help stabilize the California insurance market,” said Janet Ruiz, Triple-I’s California-based director of strategic communication. “Homeowners in California will be able to better understand their individual risk and take steps to strengthen their homes.”

The new measure also requires major insurers to write comprehensive policies in wildfire-distressed areas at a level equivalent to no less than 85 percent of their statewide market share. Smaller and regional insurance companies must also increase their writing.

Requirements for insurers

It also requires catastrophe models used by insurers to account for mitigation efforts by homeowners, businesses, and communities – something not possible under the outdated regulations being replaced.

Moves like this by state governments – combined with increased availability of more comprehensive and granular data tools to inform underwriting and mitigation investment – will go a long way toward improving resilience and reducing losses.

Learn More:

Triple-I “Trends and Insights” Issues Brief: California’s Risk Crisis

Triple-I “Trends and Insights” Issues Brief: Proposition 103 and California’s Risk Crisis

Triple-I “State of the Risk” Issues Brief: Wildfire

Triple-I “State of the Risk” Issues Brief: Flood

JIF 2024: Panel Highlights Human-Centered Use of Advanced Technology

By Lewis Nibbelin, Contributing Writer, Triple-I

Technological innovations — particularly generative AI — are revolutionizing insurance operations and risk management more quickly than the industry can fully accommodate them, necessitating more proactive involvement in their implementation, according to participants in Triple-I’s 2024 Joint Industry Forum.

Such involvement can ensure that the ethical implications of AI remain integral to its continued evolution.

Benefits of AI

Increasingly sophisticated AI models have expedited data processing across the insurance value chain, reshaping underwriting, pricing, claims, and customer service. Some models automate these processes entirely, with one automated claims review system – co-developed by Paul O’Connor, vice president of operational excellence at ServiceMaster – streamlining claims processing through to payment, thereby “removing the friction from the process of disputes,” said O’Connor. 

“We’re at an inflection point of seeing losses dramatically reduced,” said Kenneth Tolson, global president for digital solutions at Crawford & Co., as AI promises to “dramatically mitigate or even eliminate loss” by enabling insurers to resolve problems more efficiently.

Novel insurance products also cover more risk, said Majesco’s chief strategy officer Denise Garth, who pointed to usage-based insurance (UBI) as more appealing to younger buyers. UBI emerged from telematics, which can leverage AI to track actual driving behavior and has been found to encourage significant safety-related changes.

Alongside lower operational costs resulting from AI efficiency gains, such policies suggest a possibility for reduced premiums and, consequently, a diminished protection gap, Garth said.

Utilizing AI presents “the first time in decades that we have the opportunity to truly optimize our operations,” she added.

Industry hurdles

For Patrick Davis, senior vice president and general manager of Data & Analytics at Majesco, developing effective AI strategies hinges not on massive budgets or teams of data scientists, but on the internal organization of existing data.

AI models fail when base datasets are inaccessible or ill-defined, he explained. This is especially true of generative AI, which encourages decision-making by producing new data via conversational prompting.

 “Extremely well-described data” is essential to receiving meaningful, accurate responses, Davis said. Otherwise, “it’s garbage in, garbage out.”

Outdated technology and business practices, however, impede successful AI integration throughout the insurance industry, Davis and Garth agreed.

“We have, as an industry, a lot of legacy,” Garth said. “If we don’t rethink how we’re going about our products and processes, the technology we apply to them will keep doing the same things, and we won’t be able to innovate.”

Beyond stifling innovation, cultural resistance to change can delay organizations in preemptively balancing their unique risks and goals against the likely inevitable influence of AI, leaving themselves and their insureds at a disadvantage.

“We’re not going to stop change,” said Reggie Townsend, vice president and head of the data ethics practice at SAS, “but we have to figure out how to adapt to the pace of change in a way that allows us to govern our risk in acceptable ways.”

Ethical implications

Responsible innovation, Townsend said, entails “making sure, when we have changes, that they have a material benefit to human beings” – benefits which an organization clearly defines while being considerate of potential downsides.

Improperly managed data drives many of these downsides in AI models, contributing to pervasive bias and privacy concerns.

Augmenting base datasets with demographic trend information, for example, may be “tempting,” O’Connor explained, “but where does this data go, once it gets outside our boundaries and augmented elsewhere? Vigilance is absolutely required.”

Organizational oversight committees are crucial to ensuring any major technological advancements remain intentional and ethical, as they encourage innovators to “overcommunicate the ‘why,’” said discussion moderator Peter Miller, president and CEO of The Institutes.

Tolson reaffirmed this point in discussing how his organization’s AI counsel holds him accountable by fostering “diligence and openness” around an “articulated vision,” further fueling collaborative sharing of data across organizations. Collaboration and transparency around AI are key, he stressed, “so that we don’t have to learn the same lesson twice, the hard way twice.”

Looking ahead

Though they do not currently exist in the U.S. on a federal level, AI regulations have already been introduced in some states, following a comprehensive AI Act enacted earlier this year in Europe. With more legislation on the horizon, insurers must help lead these conversations to ensure that AI regulations suit the complex needs of insurance, without hindering the industry’s commitments to equity and security.

A recent report by Triple-I and SAS, a global leader in data and AI, centers the insurance industry’s role in guiding conversations around ethical AI implementation on a global, multi-sector scale. Defending this position, Townsend explained how the industry “has put a lot of rigor in place already” to eradicate bias and preserve data integrity “because [it’s] been so highly regulated for a long time,” creating an opportunity to educate less experienced businesses.

The immense volumes of data produced by rapid technological advancement mean that more and more under-informed industries will turn to AI to make sense of them, making it all the more imperative for insurers to assume this educational responsibility.

Learn More:

Insurers Need to Lead on Ethical Use of AI

JIF 2024: What Resilience Success Looks Like

Changing Risks, Rising Costs Drive Insurance Transformation for 2025: Majesco

Executive Exchange: Using Advanced Tools to Drill Into Flood Risk

2024’s Nat Cats: A Scholarly View

By Lewis Nibbelin, Contributing Writer, Triple-I

Triple-I recently kicked off a new webinar series featuring its Non-Resident Scholars. The first episode focused on the rising severity of natural catastrophes and innovative data initiatives these scholars are engaged in to help mitigate the impact of these perils. 

Moderated by Triple-I’s Chief Economist and Data Scientist Michel Léonard, the panel included:

  • Phil Klotzbach, Senior Research Scientist in the Department of Atmospheric Science at Colorado State University;
  • Victor Gensini, meteorology professor at Northern Illinois University and leading expert in convective storm research;
  • Seth Rachlin, social scientist, business leader, and entrepreneur currently active as a researcher and teaching professor; and
  • Colby Fisher, Managing Partner and Director of Research and Development at Hydronos Labs.

“Wild and crazy”

Klotzbach discussed “the wild and crazy 2024 Atlantic hurricane season,” which he called “the strangest above-normal season on record.”

Abnormally fluctuating periods of activity this year created “a story of three hurricane seasons,” reflecting a broader trend of decreasing storm frequency and increasing storm severity, Klotzbach said.

While Klotzbach and his forecasting team’s “very aggressive prediction for a very busy season” was validated by Hurricane Beryl, the earliest Category 5 Atlantic hurricane on record — followed by Debby and Ernesto — “we went through this period from August 20 to September 23 where we had almost nothing. It was extremely quiet.”

After extensive media coverage claiming the forecasts were a “massive bust,” along came Hurricane Helene, which developed into the “strongest hurricane to make landfall in the Big Bend of Florida since 1851.” Helene drove powerful, destructive flooding inland – most notably in Asheville, N.C., and surrounding communities. Then came Hurricane Milton, which was noteworthy for spawning numerous fatal tornadoes.

“Most tornadoes that happen with hurricanes are relatively weak – EF0, EF1, perhaps EF2,” Gensini – the panel’s expert on severe convective storms (SCS) – added. “Milton had perhaps a dozen EF3 tornadoes.”

Costly and underpublicized

Severe convective storms – which include tornadoes, hail, thunderstorms with lightning, and straight-line winds – accounted for 70 percent of insured losses globally in the first half of 2024. And in 2023, U.S. insured SCS-caused losses exceeded $50 billion in a single year for the first time on record.

Hailstorms are especially destructive, accounting for as much as 80 percent of SCS claims in any given year. Yet their relative brevity and limited scope compared with large-scale disasters earn them far less public and industry attention.

“We haven’t had a field campaign dedicated to studying hail in the United States since the 1970s,” Gensini explained, “so it’s been a long time since we’ve had our models updated and validated.”

Data-driven solutions

To close this knowledge gap, the In-situ Collaborative Experiment for the Collection of Hail in the Plains (or ICECHIP) will send Gensini and some 100 other scientists into the Great Plains next year to chase hailstorms and collect granular data from them. Beyond advancing hail science, their goal is to improve hail forecasting and thereby reduce hail damage.

Gensini pointed to another project, the Center for Interdisciplinary Research on Convective Storms (or CIRCS), a prospective academic-industry consortium for developing multidisciplinary research on SCS. Informed by diverse partnerships, such research could foster resilience and recovery strategies that “move forward the entire insurance and reinsurance industry,” he said.

Rachlin and Fisher echoed this emphasis on strengthening the insurance industry’s role in risk mitigation in their presentation on Hydronos Labs, an environmental software development and consulting firm that uses open-source intelligence (OSINT).

The costs and variability of climate and weather information have created “a data arms race” among insurance carriers, and aggregating and analyzing publicly available information is an untapped solution to that imbalance, they explained.

The company’s end goal, Rachlin added, is to promote an insurance landscape centered around “spending less money on [collecting] data and more money using data.”

All panelists stressed the ongoing need for more reliable, comprehensive data to steer industry strategies for effective mitigation. Investing in such data now costs far less than the post-disaster recovery that will otherwise continue to burden more and more communities in a rapidly evolving climate.

Register here to listen to the entire webinar on demand.

Learn More:

Triple-I “State of the Risk” Issues Brief: Hurricanes

Triple-I “State of the Risk” Issues Brief: Flood

Triple-I “State of the Risk” Issues Brief: Severe Convective Storms

Outdated Building Codes Exacerbate Climate Risk

JIF 2024: Collective, Data-Driven Approaches Needed to Address Climate-Related Perils

Climate Resilience and Legal System Abuse Take Center Stage in Miami

Triple-I Experts Speak on Climate Risk, Resilience

NAIC, FIO to Collaborate on Data Collection Around Climate Risk

Actuarial Studies Advance Discussion on Bias, Modeling, and A.I.

The Casualty Actuarial Society (CAS) has added four new reports to its growing body of research aimed at helping actuaries detect and address potential bias in property/casualty insurance pricing. The latest reports explore different aspects of unintentional bias and offer forward-looking solutions.

The first – “A Practical Guide to Navigating Fairness in Insurance Pricing” – addresses regulatory concerns about how the industry’s increased use of models, machine learning, and artificial intelligence (AI) may contribute to or amplify unfair discrimination. It provides actuaries with information and tools to proactively consider fairness in their modeling process and navigate this new regulatory landscape.

The second new paper – “Regulatory Perspectives on Algorithmic Bias and Unfair Discrimination” – presents the findings of a survey of state insurance commissioners that was designed to better understand their concerns about discrimination. The survey found that, of the 10 insurance departments that responded, most are concerned about the issue but few are actively investigating it. Most said they believe the burden should be on the insurers to detect and test their models for potential algorithmic bias.

The third paper – “Balancing Risk Assessment and Social Fairness: An Auto Telematics Case Study” – explores the possibility of using telematics and usage-based insurance technologies to reduce dependence on sensitive information when pricing insurance. Actuaries commonly rely on demographic factors, such as age and gender, when deciding insurance premiums. However, some people regard that approach as an unfair use of personal information. The CAS analysis found that telematics variables – such as miles driven, hard braking, hard acceleration, and days of the week driven – significantly reduce the need to include age, sex, and marital status in the claim frequency and severity models.
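
The CAS paper’s exact model specification isn’t reproduced here, but the general approach it describes (fitting claim frequency on telematics features rather than demographic ones) can be sketched in a few lines. The sketch below is purely illustrative: the column names, data, and Poisson specification are assumptions, not the study’s actual methodology.

```python
# Illustrative sketch only: a Poisson claim-frequency GLM using telematics
# features (miles driven, hard braking, weekend driving share) in place of
# age, sex, and marital status. Data and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "claims":        [0, 1, 0, 2, 0, 1],              # claim counts per policy term
    "exposure":      [1.0, 0.5, 1.0, 1.0, 0.8, 1.0],  # earned car-years
    "miles_k":       [4, 9, 6, 15, 3, 12],            # thousands of miles driven
    "hard_brakes":   [2, 14, 5, 30, 1, 22],           # events per 1,000 miles
    "pct_weekend":   [0.10, 0.35, 0.20, 0.40, 0.05, 0.30],
})

X = sm.add_constant(df[["miles_k", "hard_brakes", "pct_weekend"]])
model = sm.GLM(
    df["claims"], X,
    family=sm.families.Poisson(),
    offset=np.log(df["exposure"]),   # exposure offset, standard in frequency models
).fit()
print(model.summary())
```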

Finally, the fourth paper – “Comparison of Regulatory Framework for Non-Discriminatory AI Usage in Insurance” – provides an overview of the evolving regulatory landscape for the use of AI in the insurance industry across the United States, the European Union, China, and Canada. The paper compares regulatory approaches in those jurisdictions, emphasizing the importance of transparency, traceability, governance, risk management, testing, documentation, and accountability to ensure non-discriminatory AI use. It underscores the necessity for actuaries to stay informed about these regulatory trends to comply with regulations and manage risks effectively in their professional practice.

There is no place for unfair discrimination in today’s insurance marketplace. In addition to being fundamentally unfair, to discriminate on the basis of race, religion, ethnicity, sexual orientation – or any factor that doesn’t directly affect the risk being insured – would simply be bad business in today’s diverse society.  Algorithms and AI hold great promise for ensuring equitable risk-based pricing, and insurers and actuaries are uniquely positioned to lead the public conversation to help ensure these tools don’t introduce or amplify biases.

Learn More:

Insurers Need to Lead on Ethical Use of AI

Bringing Clarity to Concerns About Race in Insurance Pricing

Actuaries Tackle Race in Insurance Pricing

Calif. Risk/Regulatory Environment Highlights Role of Risk-Based Pricing

Illinois Bill Highlights Need for Education on Risk-Based Pricing of Insurance Coverage

New Illinois Bills Would Harm — Not Help — Auto Policyholders

Insurers Need to Lead on Ethical Use of AI

 

Every major technological advancement prompts new ethical concerns or shines a fresh light on existing ones. Artificial intelligence is no different in that regard. As the property/casualty insurance industry taps the speed and efficiency generative AI offers and navigates the practical complexities of the AI toolset, ethical considerations must remain in the foreground.  

Traditional AI systems recognize patterns in data to make predictions. Generative AI goes beyond predicting – it generates new data as its primary output.  As a result, it can support strategy and decision making through conversational, back-and-forth “prompting” using natural language, rather than complicated, time-consuming coding.
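
To make that contrast concrete, here is a minimal, purely illustrative sketch: a traditional predictive model must be explicitly coded and trained on structured features, while a generative model can be steered with a natural-language prompt. The `generate()` helper is a hypothetical stand-in for whatever large-language-model interface an insurer actually uses.

```python
# Illustrative contrast only; generate() is a placeholder, not a real API.
from sklearn.linear_model import LogisticRegression

# Traditional AI: a hand-built predictive model on structured features
# (claim amount, days to report) predicting a fraud-referral flag.
X = [[1200, 2], [9800, 30], [400, 1], [15000, 45]]
y = [0, 1, 0, 1]
clf = LogisticRegression().fit(X, y)
print(clf.predict([[7000, 25]]))          # predicted referral label

# Generative AI: a similar question posed through conversational prompting,
# with no feature engineering or training loop on the user's side.
prompt = (
    "A water-damage claim for $7,000 was reported 25 days after the loss. "
    "List three follow-up questions an adjuster should ask before approving it."
)
# response = generate(prompt)   # hypothetical call to a generative model
# print(response)
```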

A recently published report by Triple-I and SAS, a global leader in data and AI, discusses how insurers are uniquely positioned to advance the conversation for ethical AI – “not just for their own businesses, but for all businesses; not just in a single country, but worldwide.” 

AI inevitably will influence the insurance sector, whether through the types of perils covered or by influencing how insurance functions like underwriting, pricing, policy administration, and claims processing and payment are carried out. By shaping an ethical approach to implementing AI tools, insurers can better balance risk with innovation for their own businesses, as well as for their customers.

Conversely, failure to help guide AI’s evolution could leave insurers — and their clients — at a disadvantage. Without proactive engagement, insurers will likely find themselves adapting to practices that might not fully consider the specific needs of their industry or their clients. Further, if AI is regulated without insurers’ input, those regulations could fail to account for the complexity of insurance – leading to guidelines that are less effective or equitable.

“When it comes to artificial intelligence, insurers must work alongside regulators to build trust,” said Matthew McHatten, president and CEO of MMG Insurance, in a webinar introducing the report. “Carriers can add valuable context that guides the regulatory conversation while emphasizing the value AI can bring to our policyholders.” 

During the webinar, Peter L. Miller, CPCU, president and CEO of The Institutes, noted that generative AI already is helping insurers “move from repairing and replacing after a loss occurs to predicting and preventing losses from ever happening in the first place,” as well as enabling efficiencies across the risk-management and insurance value chain.

Jennifer Kyung, chief underwriting officer for USAA, discussed several use cases involving AI, including analyzing aerial images to identify exposures for her company’s members. If a potential condition concern is identified, she said, “We can trigger an inspection or we can reach out to those members and have a conversation around mitigation.”

USAA also uses AI to transcribe customer calls and “identify themes that help us improve the quality of our service.”  Future use cases Kyung discussed include using AI to analyze claim files and other large swaths of unstructured data to improve cost efficiency and customer experience.

Mike Fitzgerald, advisory industry consultant for SAS, compared the risks associated with generative AI to the insurance industry’s early experience with predictive models in the early 2000s. Predictive models and insurance credit scores are two innovations that have benefited policyholders but have not always been well understood by consumers and regulators.  Such misunderstandings have led to pushback against these underwriting and pricing tools that more accurately match risk with price.

Fitzgerald advised insurers to “look back at the implementation of predictive models and how we could have done that differently.”

When it comes to AI-specific perils, Iris Devriese, underwriting and AI liability lead for Munich Re, said, “AI insurance and underwriting of AI risk is at the point in the market where cyber insurance was 25 years ago. At first, cyber policies were tailored to very specific loss scenarios… You could really see cyber insurance picking up once there was a spike of losses from cyber incidents. Once that happened, cyber was addressed in a more systematic way.”

Devriese said lawsuits related to AI are currently “in the infancy stage. We’ve all heard of IP-related lawsuits popping up and there’ve been a few regulatory agencies – especially here in the U.S. – who’ve spoken out very loudly about bias and discrimination in the use of AI models.”

She noted that AI regulations have recently been introduced in Europe.

“This will very much spur the market to form guidelines and adopt responsible AI initiatives,” Devriese said.

The Triple-I/SAS report recommends that insurers lead by example by developing their own detailed plans to deliver ethical AI in their own operations. This will position them as trusted experts to help lead the wider business and regulatory community in the implementation of ethical AI. The report includes a framework for implementing an ethical AI approach.

LEARN MORE AT JOINT INDUSTRY FORUM

Three key contributors to the project – Peter L. Miller, Matthew McHatten, and Jennifer Kyung – will share their insights on AI, climate resilience, and more at Triple-I’s Joint Industry Forum in Miami on Nov. 19-20.

Cellphones Leading Cause of Distracted Driving; Telematics Can Help

By Max Dorfman, Research Writer, Triple-I

Distracted driving—which has increased significantly since the coronavirus pandemic—is most strongly driven by cellphone use, according to a new Issues Brief by Triple-I.

The report, Distracted Driving: State of the Risk, states that cellphone use – which includes dialing, texting, and browsing – was among the most ubiquitous and highest-risk behaviors found in governmental and private-sector studies. According to a 2022 national observational survey from the National Highway Traffic Safety Administration (NHTSA), 2.5 percent of drivers stopped at intersections were talking on hand-held phones at any given moment during the day in 2021.

The brief also found that the U.S. personal auto insurance industry’s combined ratio—a measure that represents underwriting profitability—increased dramatically from 2022, to 112.2. A combined ratio below 100 indicates an underwriting profit, while one above 100 indicates an underwriting loss.
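
For readers unfamiliar with the metric, the combined ratio is incurred losses plus expenses divided by earned premium, expressed as a percentage. The figures in the sketch below are invented solely to show how a 112.2 ratio translates into an underwriting loss.

```python
# Hypothetical figures illustrating how a combined ratio is read.
incurred_losses_and_lae = 90_000_000    # claims plus loss-adjustment expenses
underwriting_expenses   = 22_200_000
earned_premium          = 100_000_000

combined_ratio = 100 * (incurred_losses_and_lae + underwriting_expenses) / earned_premium
print(f"Combined ratio: {combined_ratio:.1f}")   # 112.2 -> 12.2 cents lost per premium dollar
```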

“As drivers returned to the roads following the pandemic, distracted driving surged, causing higher rates of accidents, injuries, and deaths. This high-risk behavior has worsened in the years since, having huge implications for the insurance industry and their policyholders,” stated Dale Porfilio, chief insurance officer, Triple-I.

The report notes that telematics and usage-based insurance can potentially help insurers—and their policyholders—better understand a driver’s risk profile and tailor auto insurance rates based on individual driving habits.

Indeed, according to a 2022 Insurance Research Council survey, 45 percent of drivers said they made significant safety-related changes in how they drove after participating in a telematics program, and an additional 35 percent said they made small changes in their driving behavior. Policyholders also became more comfortable with having their insurer monitor their driving behavior during the onset of the pandemic, when doing so could result in lower insurance costs.

“If telematics can influence drivers to change behaviors and reduce the number of accidents, the nation’s roadways will be safer and auto insurance can be more affordable,” Porfilio concluded.

Learn More:

Facts + Statistics: Distracted driving | III

Louisiana Still Least Affordable State for Personal Auto, Homeowners Insurance

Surge in U.S. Auto Insurer Claim Payouts Due to Economic and Social Inflation

Predict & Prevent: From Data to Practical Insight

By Bob Marshall, co-founder and CEO, Whisker Labs

The insurance industry’s shift from assessing and pricing risks to predicting and preventing losses – thereby improving insurance availability and affordability – is well underway. Even a casual look at the trade press reveals insurers adopting technologies and data-driven strategies that help businesses, families, and communities improve their risk profiles.

This data-driven movement does more than simply contain insurance costs – it’s driving improved customer engagement, affinity, and retention and creating opportunities beyond the transactional. Data clarity is crucial for all stakeholders, from insurers to first responders, utilities, policymakers and – most important – homeowners. Accurate data enables proactive measures that can prevent fires from happening.

We’re seeing this with our insurance IoT offering, Ting. Ting prevents home fires by identifying unique signals generated by tiny electrical arcs, the precursors to imminent fire risks. These signals are incredibly small but are clearly visible to Ting’s advanced detection technology. Ting has been found to prevent 80 percent of home electrical fires – and, beyond its ability to predict and prevent, we have found that Ting holds even greater significance for organizations that want to bring greater clarity and value to their current data ecosystems.
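
Ting’s detection technology is proprietary, so the following is not its method; it is only a naive, illustrative sketch of the general idea of flagging brief, high-frequency transients riding on an otherwise smooth voltage waveform. All signal parameters and thresholds here are invented.

```python
# Purely illustrative transient detector, NOT Ting's proprietary approach.
# Flags samples whose high-frequency residual exceeds a robust threshold.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 0.1, 10_000)                     # 0.1 s of a 60 Hz waveform
voltage = 170 * np.sin(2 * np.pi * 60 * t) + rng.normal(0, 0.5, t.size)
voltage[6200:6210] += rng.normal(0, 15, 10)         # inject a brief arc-like burst

# Remove the slow mains component with a moving average, then flag residual
# spikes several median-absolute-deviations above the noise floor.
window = 50
baseline = np.convolve(voltage, np.ones(window) / window, mode="same")
residual = voltage - baseline
core = residual[window:-window]                     # ignore filter edge effects
mad = np.median(np.abs(core - np.median(core)))
spikes = np.where(np.abs(core) > 8 * mad)[0] + window
print("Possible arc activity near samples:", spikes[:10])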

Over the past few years, we’ve built the world’s most knowledgeable electrical fire prevention team, which has been instrumental in the evolution of Ting’s machine learning and AI. Our Fire Safety Team has found that existing electrical fire data, while helpful and directional, needs greater accuracy and completeness. This is not due to a lack of care. We’re talking about an exceptionally hard problem – codifying fires after the fact. It is at this critical point where data from IoT devices like Ting becomes indispensable.

More than 50 percent of insurance claims for fire are coded in the “unknown/undetermined” category. Of these, fire chiefs and forensic fire engineers suggest more than half are likely electrical-related, but a lack of resources prevents them from determining exact causation beyond a reasonable doubt, so they simply default to “unknown.” Ting data continues to document important, first-of-its-kind findings about the origin of electrical fires.

Our ‘why’ behind predict and prevent

A horrific loss from an electrical fire in my family prompted the question: “Why can’t faults be identified well before they can evolve into a fire?”

Electricity is one of the most dangerous forces in nature, yet one of our most critical resources; our growing reliance poses increasing risks to homes, businesses, and communities. Recent U.S. Fire Administration data reveals a sobering trend. The 10 years from 2012 through 2021 saw reduced cooking, smoking, and heating fires; however, in stark contrast, electrical fires saw an 11 percent increase over that same period. Fire ignitions with an undetermined cause increased equally by 11 percent.  

Our pursuit to address these trends has brought us and our insurance partners here: nearly 400,000 home-years of data; 6,000 remediated hazards; an insurance-forward IoT and telematics platform with full turnkey delivery; and, most notably, hundreds of thousands of customers thrilled that their insurance company is doing more for them than reactively paying claims.

Beyond the home’s walls

But Ting’s value is not limited to inside the home. While every Ting sensor is monitoring each home’s electrical activity to help predict and prevent fires, collectively the Ting network is aggregating data from across the broader utility grid. Specifically, it can help predict and prevent faults on the grid, enabling operators to proactively address risks that might otherwise lead to catastrophic, loss-generating events like wildfires.


Data drives insights

Given that many electrical-related fires are coded not as electrical but as “unknown” in fire incident databases, we’ve learned that comparing “prevented fires” to claims after a fire must consider a broader set of fire claims across a book of business, not just those with a secondary cause of “electrical.” All unknown fires and any claim that could even be electrical-related should be included in the broader set of claims. Excluding claims that can reasonably and accurately be removed — such as arson, lightning, earthquakes, and wildfire-related home fires — the data reveals a one-third reduction in the broader category of fires across the “Ting cohort” versus the “non-Ting cohort.” This results in a strong ROI for insurers.
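
A stylized version of that cohort comparison, using invented claim counts, shows how a one-third reduction figure would be derived once clearly non-electrical causes are excluded.

```python
# Hypothetical claim counts illustrating the cohort comparison described above.
# "Broader" fire claims exclude causes that clearly aren't electrical
# (arson, lightning, earthquake, wildfire) but keep "unknown" causes.
ting_cohort     = {"homes": 100_000, "broader_fire_claims": 40}
non_ting_cohort = {"homes": 100_000, "broader_fire_claims": 60}

ting_rate     = ting_cohort["broader_fire_claims"] / ting_cohort["homes"]
non_ting_rate = non_ting_cohort["broader_fire_claims"] / non_ting_cohort["homes"]
reduction = 1 - ting_rate / non_ting_rate
print(f"Relative reduction in fire claims: {reduction:.0%}")   # ~33%
```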

Beyond prevention metrics, we’ve learned a lot, and Ting continues to learn daily and provide statistically significant actuarial impacts. With fully documented and mitigated hazards identified in 1 in 68 homes, the cases – or “saves” – are documented in detail in a peer-reviewed whitepaper, the latest version published on June 1, 2023. By design, each identified and remediated hazard is carefully reported through a highly standardized process to ensure high-quality, consistent data. 

Upon analyzing this statistically significant data, a recurring theme surfaced: The longstanding perception of the electrical fire problem requires new thinking. Below, I highlight three surprising, objective observations revealed by Ting data that support this notion:

  1. There is a common misconception that electrical fires are largely due to older home wiring infrastructure. Yet, we have found that 50 percent of home electrical fire hazards stem from failing or defective devices and appliances, with the other half attributed to home wiring and outlets. This finding is reflected in the chart below, breaking down the location and types of home electrical fire hazards, with a breakout of those stemming from devices and appliances.
  2. What may seem more surprising is that the electric utility grid can be a significant fire risk factor inside the home – not just a community fire risk. Nearly 50 percent of all hazard cases trace back to a root cause outside the house in the form of a grid equipment fault. These faults allow dangerous power to enter the home, endangering the home and its occupants: they can create a shock hazard, damage equipment and sensitive electronics, and, worse, ignite a fire. Utility repair crews often share that a hazard impacted multiple homes in the immediate area, not just the home protected by Ting.
  3. One last finding that runs counter to conventional thinking about electrical fire risk comes in the form of a home-age “bias.” Logically, most of us assume the older the home, the higher the risk. In general, this holds when considering the effects of age and use on existing wiring infrastructure – all other things being equal. However, this assumption falls apart when considering all other factors, such as materials, build quality, and the standards and codes of the time. In fact, with the prevention data that flows in each day from our Fire Safety Team, we have built predictive models for home fire risk; early indications are that these models are demonstrating skill and will lead to a better, more informed view of risk – and, of course, even better prevention.

I’m amazed at how our initial objective of preventing residential fires has evolved to take on such a broad scope. New data spawns new thinking and new opportunities. Objective data is essential to validating the efficacy of any initiative seeking to prevent losses. Predicting and preventing fires is in the interest of all – especially homeowners and their families.

Colorado’s Life Insurance Data Rules Offer Glimpse of Future for P&C Writers

The Colorado Division of Insurance’s recent adoption of regulations to govern life insurers’ use of any external consumer data and information sources is the first step in implementing legislation approved in 2021 aimed at protecting consumers in the state from insurance practices that might result in unfair discrimination.

Property/casualty insurers doing business in Colorado should be keeping an eye on how the legislation is implemented, as rules governing their use of third-party data will certainly follow.

The implementation regulations, which have been characterized as a “scaling back” of a prior draft released in February, require life insurers using external data to establish a risk-based governance and risk-management framework to determine whether such use might result in unfair discrimination with respect to race and to remediate any unfair discrimination that is detected. If the insurer uses third-party vendors and other external resources, it is responsible under the new rules for ensuring all requirements are met.

Life insurers must test their algorithms and models to evaluate whether any unfair discrimination results and implement controls and processes to adjust their use of AI as necessary. They also must maintain documentation, including descriptions and explanations of how external data is being used and how they are testing that use for unfair discrimination. The documentation must be available upon the regulator’s request, and each insurer must report its progress toward compliance to the Division of Insurance.

The revised draft no longer focuses on “disproportionately negative outcomes” that would have included results or effects that “have a detrimental impact on a group” of protected characteristics “even after accounting for factors that define similarly situated consumers.” Removing that term altogether, the revised draft shifts focus to requiring “risk-based” governance and management frameworks.

This change is significant. As Triple-I has expressed elsewhere, risk-based pricing of insurance is a fundamental concept that might seem intuitively obvious when described – yet misunderstandings about it regularly sow confusion. Simply put, it means offering different prices for the same level of coverage, based on risk factors specific to the insured person or property. If policies were not priced this way – if insurers had to come up with a one-size-fits-all price for auto coverage that didn’t consider vehicle type and use, where and how much the car will be driven, and so forth – lower-risk drivers would subsidize riskier ones.

Risk-based pricing allows insurers to offer the lowest possible premiums to policyholders with the most favorable risk factors. Charging higher premiums to insure higher-risk policyholders enables insurers to underwrite a wider range of coverages, thus improving both availability and affordability of insurance. This straightforward concept becomes complicated when actuarially sound rating factors intersect with other attributes in ways that can be perceived as unfairly discriminatory.
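
A stylized example of the cross-subsidy described above, using invented expected-loss figures: under a flat premium, the lower-risk driver overpays relative to expected losses and effectively subsidizes the higher-risk driver.

```python
# Invented figures illustrating risk-based vs. one-size-fits-all pricing.
expected_annual_loss = {"low_risk_driver": 400, "high_risk_driver": 1600}
expense_loading = 1.25   # covers expenses and profit margin

# Risk-based premiums track each driver's own expected losses.
risk_based = {k: v * expense_loading for k, v in expected_annual_loss.items()}

# A flat premium charges everyone the portfolio average instead.
flat = sum(expected_annual_loss.values()) / len(expected_annual_loss) * expense_loading

for driver, premium in risk_based.items():
    gap = flat - premium
    verb = "subsidizes others by" if gap > 0 else "is subsidized by"
    print(f"{driver}: risk-based ${premium:.0f}, flat ${flat:.0f}, {verb} ${abs(gap):.0f}")
```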

Algorithms and machine learning hold great promise for ensuring equitable pricing, but research has shown these tools also can amplify any biases in the underlying data. The insurance and actuarial professions have been researching and attempting to address these concerns for some time (see list below).

Want to know more about the risk crisis and how insurers are working to address it? Check out Triple-I’s upcoming Town Hall, “Attacking the Risk Crisis,” which will be held Nov. 30 in Washington, D.C.

Triple-I Research

Issues Brief: Risk-Based Pricing of Insurance

Issues Brief: Race and Insurance Pricing

Research from the Casualty Actuarial Society

Defining Discrimination in Insurance

Methods for Quantifying Discriminatory Effects on Protected Classes in Insurance

Understanding Potential Influences of Racial Bias on P&C Insurance: Four Rating Factors Explored

Approaches to Address Racial Bias in Financial Services: Lessons for the Insurance Industry

From the Triple-I Blog

Illinois Bill Highlights Need for Education on Risk-Based Pricing of Insurance Coverage

How Proposition 103 Worsens Risk Crisis in California

It’s Not an “Insurance Crisis” – It’s a Risk Crisis

IRC Outlines Florida’s Auto Insurance Affordability Problems

Education Can Overcome Doubts on Credit-Based Insurance Scores, IRC Survey Suggests

Matching Price to Peril Helps Keep Insurance Available and Affordable

Keep It Simple: Security System Complexity Correlates With Breach Costs

By Max Dorfman, Research Writer, Triple-I

Artificial intelligence is helping to limit the costs associated with data breaches, a recent study by IBM and the Ponemon Institute found. While these costs continue to rise, they are increasing more slowly for some organizations – in particular, those using less-complex, more-automated security systems.

According to the study, the average cost of a data breach was $4.45 million in 2023, a 2.3 percent increase from the 2022 cost of $4.35 million. The 2023 figure represents a 15.3 percent increase from 2020, when the average breach was $3.86 million.

However, not all organizations surveyed by the study experienced the same kinds of breaches – or the same costs. Organizations with “low or no security system complexity” – systems in which it is easier to identify and manage threats – experienced far smaller losses than those with high system complexity. The average 2023 breach cost $3.84 million for the former and a staggering $5.28 million for the latter. For organizations with high system complexity, that figure represents an increase of more than 31 percent from the year before and roughly $1.44 million more, on average, than a breach at a low-complexity organization.

As David W. Viel, founder and CEO of Cognoscenti Systems, put it: “The size and complexity of a system directly results in a greater number of defects and resulting vulnerabilities as these quantities grow. On the other hand, the number of defects and cybersecurity vulnerabilities shrinks as the system or component is made smaller and simpler. This strongly suggests that designs and implementations that are small and simple should be very much favored over large and complex if effective cybersecurity is to be obtained.”

The research also noted that organizations that involved law enforcement in ransomware attacks experienced lower costs. The 37 percent of survey respondents that did not contact law enforcement paid 9.6 percent more than those that did, and their breaches lasted an average of 33 days longer. Longer breaches tended to cost organizations far more: breaches with identification and containment times under 200 days averaged $3.93 million, while those taking more than 200 days cost $4.95 million.

AI and automation are proving key

Security AI and automation both proved to be significant factors in lowering costs and reducing the time needed to identify and contain breaches. Organizations using these tools reported containment times 108 days shorter and data breach costs $1.76 million lower than organizations that did not use them. Organizations with no use of security AI and automation experienced an average of $5.36 million in data breach costs, 18.6 percent more than the average 2023 cost of a data breach.

Now, most respondents are using some level of these tools, with a full 61 percent using AI and automation. However, only 28 percent of respondents extensively used these tools in their cybersecurity processes, and 33 percent had limited use. The study noted that this means almost 40 percent of respondents rely only on manual inputs in their security operations.

Cyber insurance demand is growing

A recent study by global insurance brokerage Gallagher showed that the vast majority of business owners in the U.S. – 74 percent – expressed extreme or very high concern about the impact of cyberattacks on their businesses. Indeed, a study by MarketsandMarkets found that the cyber insurance market is projected to grow from $10.3 billion in 2023 to $17.6 billion by 2028, noting that the rise in threats like data breaches, ransomware, and phishing attacks is driving demand.

Organizations are now responding more thoroughly to these threats, with increased underwriting rigor helping clients progress in cyber maturity, according to Aon’s 2023 Cyber Resilience Report. Aon states that several cybersecurity factors, including data security, application security, remote work, access control, and endpoint and systems security – all of which experienced the greatest improvement among Aon’s clients – must be continually monitored and evaluated, particularly for evolving threats.

Insurers and their customers need to work together to more fully address the risks and damages associated with cyberattacks as these threats continue to grow and businesses rely ever more heavily on technology.

Digital Tools Help Agency Revenues, But Cybercrime Concerns May Hamper Adoption

By Max Dorfman, Research Writer, Triple-I

Insurance agencies that adopt digital methods to interact with customers have seen their revenues grow faster than their less digitally sophisticated competitors, according to new research by Liberty Mutual and Safeco Insurance. However, the research also indicates that digital adoption by agencies has slowed in recent years.

The study, The State of Digital in Independent Insurance Agencies, found that “highly digital adopter” agencies — based on a 10-point scale related to the number and complexity of the tools the agency uses — experienced a 70 percent growth rate, as opposed to 17 percent for “high digital adopters”, and a mere 10 percent for “low” and “medium” digital adopters.

But while digital adoption has gained traction, it has declined as a priority in agencies’ plans. In the latter part of 2020, 58 percent of agencies said improving digital capabilities was part of their five-year growth plans, according to the Liberty Mutual/Safeco study. However, by late 2021, this had decreased to 47 percent, approximately the same as in 2017.

The digital tools that have seen a decrease in use range from social media to live online chats. Additionally, many agencies said they are not tracking which digital tools are driving growth.

The survey found that 60 percent of digitally focused agencies said they planned to invest in new digital capabilities within their five-year agency growth plans. Only 42 percent of slow and steady growth agencies said the same. Growth-focused agencies have used several tools to increase their reach and revenue. Self-service portals, video calls, live online chats, video quotes, and policy reviews have all driven significant improvement among these agencies.

These, however, are not the only tools being recommended and used. Artificial intelligence, machine learning, Internet of Things, and big data analytics are all being considered and used to increase engagement with customers and prospects.

Cybercrime may be a factor hampering growth in digital adoption. Indeed, global cybercrime costs are predicted to hit $10.5 trillion annually by 2025, according to Cybersecurity Ventures. Additionally, more than half of all consumers have experienced a cybercrime at some point, according to a 2021 survey by Norton.

Agents remain alert to cyber threats. The Liberty Mutual/Safeco study found that 57 percent of survey respondents anticipated that cyber liability would have a major impact on their agencies by 2025, an increase from 46 percent in 2017.