Category Archives: Technology

Colorado’s Life Insurance Data Rules Offer Glimpse of Future for P&C Writers

The Colorado Division of Insurance’s recent adoption of regulations governing life insurers’ use of external consumer data and information sources is the first step in implementing legislation approved in 2021. The law aims to protect consumers in the state from insurance practices that might result in unfair discrimination.

Property/casualty insurers doing business in Colorado should be keeping an eye on how the legislation is implemented, as rules governing their use of third-party data will certainly follow.

The implementation regulations, which have been characterized as a “scaling back” of a prior draft released in February, require life insurers using external data to establish a risk-based governance and risk-management framework to determine whether such use might result in unfair discrimination with respect to race, and to remediate any unfair discrimination that is detected. If the insurer uses third-party vendors and other external resources, it is responsible under the new rules for ensuring all requirements are met.

Life insurers must test their algorithms and models to evaluate whether any unfair discrimination results and implement controls and processes to adjust their use of AI, as necessary. They also must maintain documentation including descriptions and explanations of how external data is being used and how they are testing their use of external data for unfair discrimination. The documentation must be available upon the regulator’s request, and each insurer must report its progress toward compliance to the Division of Insurance.

The revised draft no longer focuses on “disproportionately negative outcomes” that would have included results or effects that “have a detrimental impact on a group” of protected characteristics “even after accounting for factors that define similarly situated consumers.” Removing that term altogether, the revised draft shifts focus to requiring “risk-based” governance and management frameworks.

This change is significant. As Triple-I has expressed elsewhere, risk-based pricing of insurance is a fundamental concept that might seem intuitively obvious when described – yet misunderstandings about it regularly sow confusion. Simply put, it means offering different prices for the same level of coverage, based on risk factors specific to the insured person or property. If policies were not priced this way – if insurers had to come up with a one-size-fits-all price for auto coverage that didn’t consider vehicle type and use, where and how much the car will be driven, and so forth – lower-risk drivers would subsidize riskier ones.

Risk-based pricing allows insurers to offer the lowest possible premiums to policyholders with the most favorable risk factors. Charging higher premiums to insure higher-risk policyholders enables insurers to underwrite a wider range of coverages, thus improving both availability and affordability of insurance. This straightforward concept becomes complicated when actuarially sound rating factors intersect with other attributes in ways that can be perceived as unfairly discriminatory.
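To make the subsidy arithmetic concrete, here is a minimal sketch using hypothetical drivers and made-up expected claim costs (illustrative numbers only, not actual actuarial figures):

```python
# Hypothetical expected annual claim costs for three drivers.
# Illustrative numbers only -- not actual actuarial data.
expected_costs = {"low_risk": 400, "medium_risk": 800, "high_risk": 1800}

# One-size-fits-all pricing: everyone pays the pool average.
flat_premium = sum(expected_costs.values()) / len(expected_costs)

for driver, cost in expected_costs.items():
    # Positive overpayment means this driver subsidizes riskier ones.
    overpay = flat_premium - cost
    print(f"{driver}: risk-based premium {cost}, "
          f"flat premium {flat_premium:.0f}, overpayment {overpay:+.0f}")
```

Under the flat price, the low-risk driver pays well above their expected cost while the high-risk driver pays well below it; risk-based pricing removes that cross-subsidy.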

Algorithms and machine learning hold great promise for ensuring equitable pricing, but research has shown these tools also can amplify any biases in the underlying data. The insurance and actuarial professions have been researching and attempting to address these concerns for some time (see list below).

Want to know more about the risk crisis and how insurers are working to address it? Check out Triple-I’s upcoming Town Hall, “Attacking the Risk Crisis,” which will be held Nov. 30 in Washington, D.C.

Triple-I Research

Issues Brief: Risk-Based Pricing of Insurance

Issues Brief: Race and Insurance Pricing

Research from the Casualty Actuarial Society

Defining Discrimination in Insurance

Methods for Quantifying Discriminatory Effects on Protected Classes in Insurance

Understanding Potential Influences of Racial Bias on P&C Insurance: Four Rating Factors Explored

Approaches to Address Racial Bias in Financial Services: Lessons for the Insurance Industry

From the Triple-I Blog

Illinois Bill Highlights Need for Education on Risk-Based Pricing of Insurance Coverage

How Proposition 103 Worsens Risk Crisis in California

It’s Not an “Insurance Crisis” – It’s a Risk Crisis

IRC Outlines Florida’s Auto Insurance Affordability Problems

Education Can Overcome Doubts on Credit-Based Insurance Scores, IRC Survey Suggests

Matching Price to Peril Helps Keep Insurance Available and Affordable

Keep It Simple: Security System Complexity Correlates With Breach Costs

By Max Dorfman, Research Writer, Triple-I

Artificial intelligence is helping to limit the costs associated with data breaches, a recent study by IBM and the Ponemon Institute found. While these costs continue to rise, they are increasing more slowly for some organizations – in particular, those using less-complex, more-automated security systems.

According to the study, the average cost of a data breach was $4.45 million in 2023, a 2.3 percent increase from the 2022 cost of $4.35 million. The 2023 figure represents a 15.3 percent increase from 2020, when the average breach was $3.86 million.
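Those percentage figures follow from simple arithmetic on the reported averages; a quick sketch:

```python
def pct_change(old: float, new: float) -> float:
    """Percent change from old to new."""
    return (new - old) / old * 100

# Average breach costs reported by the IBM/Ponemon study, in millions of USD.
cost_2020, cost_2022, cost_2023 = 3.86, 4.35, 4.45

print(f"2022 -> 2023: {pct_change(cost_2022, cost_2023):.1f}%")  # ~2.3%
print(f"2020 -> 2023: {pct_change(cost_2020, cost_2023):.1f}%")  # ~15.3%
```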

However, not all organizations surveyed by the study experienced the same kinds of breaches – or the same costs. Organizations with “low or no security system complexity” – systems in which it is easier to identify and manage threats – experienced far smaller losses than those with high system complexity. The average 2023 breach cost $3.84 million for the former and a staggering $5.28 million for the latter, a difference of $1.44 million. For organizations with high system complexity, this represents an increase of more than 31 percent from the year before.

As David W. Viel, founder and CEO of Cognoscenti Systems, put it: “The size and complexity of a system directly results in a greater number of defects and resulting vulnerabilities as these quantities grow. On the other hand, the number of defects and cybersecurity vulnerabilities shrinks as the system or component is made smaller and simpler. This strongly suggests that designs and implementations that are small and simple should be very much favored over large and complex if effective cybersecurity is to be obtained.”

The research also noted that organizations that involved law enforcement in ransomware attacks experienced lower costs. The 37 percent of survey respondents that did not contact law enforcement paid 9.6 percent more than those that did, and their breaches lasted an average of 33 days longer. Longer breaches tended to cost organizations far more: breaches with identification and containment times under 200 days averaged $3.93 million, while those over 200 days averaged $4.95 million.

AI and automation are proving key

Security AI and automation both proved to be significant factors in lowering costs and reducing the time to identify and contain breaches. Organizations using these tools reported containment times 108 days shorter and data breach costs $1.76 million lower than organizations that did not. Organizations that made no use of security AI and automation experienced an average of $5.36 million in data breach costs, 18.6 percent more than the average 2023 cost of a data breach.

Now, most respondents are using some level of these tools, with a full 61 percent using AI and automation. However, only 28 percent of respondents extensively used these tools in their cybersecurity processes, and 33 percent had limited use. The study noted that this means almost 40 percent of respondents rely only on manual inputs in their security operations.

Cyber insurance demand is growing

A recent study by global insurance brokerage Gallagher showed that the vast majority of business owners in the U.S. – 74 percent – expressed extreme or very high concern about the impact of cyberattacks on their businesses. Indeed, a study by MarketsandMarkets found that the cyber insurance market is projected to grow from $10.3 billion in 2023 to $17.6 billion by 2028, noting that the rise in threats like data breaches, ransomware, and phishing attacks is driving demand.

Organizations are now responding more thoroughly to these threats, with increased underwriting rigor helping clients progress in cyber maturity, according to Aon’s 2023 Cyber Resilience Report. Aon states that several cybersecurity factors, including data security, application security, remote work, access control, and endpoint and systems security – all of which experienced the greatest improvement among Aon’s clients – must be continually monitored and evaluated, particularly for evolving threats.

Insurers and their customers need to work together to more fully address the risks and damages associated with cyberattacks as these threats continue to grow and businesses rely ever more heavily on technology.

Digital Tools Help Agency Revenues, But Cybercrime Concerns May Hamper Adoption

By Max Dorfman, Research Writer, Triple-I

Insurance agencies that adopt digital methods to interact with customers have seen their revenues grow faster than their less digitally sophisticated competitors, according to new research by Liberty Mutual and Safeco Insurance. However, the research also indicates that digital adoption by agencies has slowed in recent years.

The study, The State of Digital in Independent Insurance Agencies, found that “highly digital adopter” agencies — based on a 10-point scale related to the number and complexity of the tools the agency uses — experienced a 70 percent growth rate, as opposed to 17 percent for “high digital adopters”, and a mere 10 percent for “low” and “medium” digital adopters.

But while digital adoption has gained traction, it has declined as a priority in agencies’ plans. In the latter part of 2020, 58 percent of agencies said improving digital capabilities was part of their five-year growth plans, according to the Liberty Mutual/Safeco study. However, by late 2021, this had decreased to 47 percent, approximately the same as in 2017.

The digital tools that have seen a decrease in use range from social media to live online chats. Additionally, many agencies said they are not tracking which digital tools are driving growth.

The survey found that 60 percent of digitally focused agencies said they planned to invest in new digital capabilities within their five-year agency growth plans. Only 42 percent of slow and steady growth agencies said the same. Growth-focused agencies have used several tools to increase their reach and revenue. Self-service portals, video calls, live online chats, video quotes, and policy reviews have all driven significant improvement among these agencies.

These, however, are not the only tools being recommended and used. Artificial intelligence, machine learning, Internet of Things, and big data analytics are all being considered and used to increase engagement with customers and prospects.

Cybercrime may be a factor hampering growth in digital adoption. Indeed, global cybercrime costs are predicted to hit $10.5 trillion annually by 2025, according to Cybersecurity Ventures. Additionally, more than half of all consumers have experienced a cybercrime at some point, according to a 2021 survey by Norton.

Agents remain alert to cyber threats. The Liberty Mutual/Safeco study found that 57 percent of survey respondents anticipated that cyber liability would have a major impact on their agencies by 2025, an increase from 46 percent in 2017.

Crash-Avoidance Features Complicate Auto Repairs But Still Are Valued

Max Dorfman, Research Writer, Triple-I

As more new vehicles become equipped with crash-avoidance features, some owners report significant issues with the technologies after repairs, according to a recent report from the Insurance Institute for Highway Safety (IIHS).

In the survey, approximately half of owners who reported an issue with front crash prevention, blind-spot detection, or rearview or other visibility-enhancing cameras said at least one of those systems presented problems after the repair job was completed.

Nevertheless, many owners remained eager to have a vehicle with these features and were pleased with the out-of-pocket cost, according to Alexandra Mueller, IIHS senior research scientist.

“These technologies have been proven to reduce crashes and related injuries,” Mueller said. “Our goal is that they continue to deliver those benefits after repairs and for owners to be confident that they’re working properly.”

Still, as problems with these technologies persist, the study notes that it is important to track repair issues to further the adoption of crash avoidance features. IIHS research has shown that front-crash prevention, blind-spot detection, and rearview cameras all substantially reduce the types of crashes they are designed to address. For example, IIHS said, automatic emergency braking reduces police-reported rear-end crashes by 50 percent.

An analysis conducted by the IIHS-affiliated Highway Loss Data Institute (HLDI) showed the reduction in insurance claims associated with Subaru and Honda crash-avoidance systems remained essentially constant, even in vehicles more than five years old. But repairs can make it necessary to calibrate the cameras and sensors that the features rely on to work properly, making repairs complicated and costly.

For example, a simple windshield replacement can cost as little as $250, while a separate HLDI study found vehicles equipped with front crash prevention were much more likely to have glass claims of $1,000 or more. Much of that higher cost is likely related to calibration.

The new IIHS study found that owners often had more than one reason requiring repairs to these safety features. Most had received a vehicle recall or service bulletin about their feature, but that was rarely the sole reason they brought their vehicles in for service or repair.

“Other common reasons — which were not mutually exclusive — included windshield replacement, crash damage, a recommendation from the dealership or repair shop, and a warning light or error message from the vehicle itself,” according to the study.

Repair difficulties could motivate drivers to turn off crash-avoidance features, potentially making collisions more likely. Despite the post-repair issues, however, the study found that only slightly more than 5 percent of owners would opt not to purchase another vehicle with the repaired feature. As reckless driving and traffic fatalities continue to rise, advanced driver-assistance systems will only become more important for roadway safety, necessitating reliable technology.

Learn More:

Personal Auto Insurers’ Losses Keep Rising Due to Multiple Factors

IRC Releases State Auto Insurance Affordability Rankings

IRC Study: Public Perceives Impact of Litigation on Auto Insurance Claims

Why Personal Auto Insurance Rates Are Likely to Keep Rising

Acting to Curb Rising Auto Fatalities

“A.I. Take the Wheel!” Drivers Put Too Much Faith in Assist Features, IIHS Survey Suggests

Too many car owners are too comfortable leaving their vehicles’ driver-assist features in charge, potentially putting themselves and others at risk, according to the Insurance Institute for Highway Safety (IIHS).

IIHS said a survey of about 600 regular users of General Motors Super Cruise, Nissan/Infiniti ProPILOT Assist, and Tesla Autopilot found they were “more likely to perform non-driving-related activities like eating or texting while using their partial automation systems than while driving unassisted.”

“The big-picture message here is that the early adopters of these systems still have a poor understanding of the technology’s limits,” said IIHS President David Harkey.

The study reports that 53 percent of Super Cruise users, 42 percent of Tesla Autopilot users, and 12 percent of Nissan’s ProPilot Assist users were comfortable letting the system drive without watching what was happening on the road. Some even described being comfortable letting the vehicle drive during inclement weather.

These systems combine adaptive cruise control and lane-keeping systems, primarily to keep a car in a lane and following traffic on the highway. All require an attentive human driver to monitor the road and take full control when called for.

“None of the current systems is designed to replace a human driver or to make it safe for a driver to perform other activities that take their focus away from the road,” IIHS said in announcing the results of its survey.

While all three automakers caution drivers about the systems’ limits, confusion remains. Tesla’s driver-assist system, which it calls “full self-driving,” has received much scrutiny over the years, as auto safety experts say the name is misleading and risks worsening road safety.

The U.S. government has set no standards for these features, which are among the newest technologies on vehicles today. A patchwork of state laws and voluntary federal guidelines attempts to cover the testing and eventual deployment of autonomous vehicles in the United States.

Learn More:

Background on: Self-driving cars and insurance

Tech Gains Traction in Fight Against Insurance Fraud

By Max Dorfman, Research Writer, Triple-I

Insurance fraud costs the U.S. $308.6 billion a year, according to recent research by the Coalition Against Insurance Fraud (CAIF).  And, while staffing within insurers’ Special Investigation Units (SIU) is a pain point, CAIF found that use of anti-fraud technology is on the rise.

CAIF notes that the hardest-hit insurance lines are:

  • Life insurance, at $74.7 billion annually;
  • Medicare and Medicaid, at $68.7 billion; and
  • Property and casualty, at $45 billion.

“There is a huge and monumental impact that insurance fraud causes to American citizens, American families, and to our economy every single year,” said Matthew Smith, the coalition’s executive director.

Another recent CAIF study looked at SIUs and insurers’ response to fraud. The study found that SIU staff grew at 1.4 percent from 2021 to 2022, slower than the 2.5 percent growth rates from two previous studies addressing this issue. Staffing and talent are among the top concerns of anti-fraud leaders CAIF surveyed.

However, an additional CAIF study found that anti-fraud technology is increasingly being used—a positive sign in the fight against these crimes. Among the key findings of that report is that 80 percent of respondents use predictive modeling to detect fraud, up from 55 percent in 2018.

Insurance fraud is not a victimless crime. According to the FBI, the average American family spends an extra $400 to $700 on premiums every year because of fraud. Most of these costs are derived from common frauds, including inflating actual claims; misrepresenting facts on an insurance application; submitting claims for injuries or damage that never occurred; and staging accidents.

To further combat insurance fraud, there are ways to file complaints, including contacting your state’s fraud bureau; contacting your insurer to see if a fraud system is in place; using the National Insurance Crime Bureau (NICB) “Report Fraud” button; and reporting it to a local FBI branch.

“Insurance fraud is the crime we all pay for,” CAIF’s Smith added. “Ultimately, it’s American policyholders and consumers that pay the high cost of insurance fraud.”

Learn More:

Fraud, Litigation Push Florida Insurance Market to Brink of Collapse

Study: Insurers Suspect Rise in Fraudulent Claims Since Start of Pandemic

The Battle Against Deepfake Threats

Cellphone Bans Cut Crashes; Telematics Can Help Reduce Distracted Driving

Max Dorfman, Research Writer, Triple-I

State prohibitions on cellphone use while driving correlate with reduced crash rates, according to recent research by the Insurance Institute for Highway Safety (IIHS). However, overall results were mixed among the states studied, with differences in legal language, degrees of enforcement, and penalty severity providing possible explanations for the differing outcomes.

The study observed crash rate changes in California, Oregon, and Washington after legislation to prevent cellphone calls and texting while driving was enacted in 2017, with the research looking at overall numbers from 2015 to 2019. These numbers were compared to control states Idaho and Colorado.

Notably, the study found:

  • A 7.6 percent reduction in the rate of monthly rear-end crashes of all severities relative to the rates in the control states;
  • Law changes in Oregon and Washington were associated with significant reductions of 8.8 percent and 10.9 percent, respectively;
  • California did not experience changes in the rates of rear-end crashes – of all severities or with injuries – associated with the strengthened law.

Still, state governments face several hurdles in their efforts to prevent crashes caused by cellphone use.

“Technology is moving much faster than the laws,” said Ian Reagan, a senior research scientist at IIHS. “Our findings suggest that other states could benefit from adopting broader laws against cellphone use while driving, but more research is needed to determine the combination of wording and penalties that is most effective.”

Distracted driving remains a major issue

Distracted driving remains a significant problem on roads nationwide. Indeed, distracted driving increased more than 30 percent from February 2020 to February 2022, due largely to changes in driving patterns spurred by the coronavirus pandemic, according to research by telematics service provider Cambridge Mobile Telematics.

The Governors Highway Safety Association (GHSA) reported that more than 3,100 people died in distraction-related accidents in 2020, with an estimated 400,000 people injured each year in such crashes. The true numbers, according to the study, are likely higher due to underreporting. The report also found that cell dial, cell text, and cell-browse were among the most prevalent and highest-risk behaviors.

Telematics can help

Telematics, which uses mobile technology to track driver behavior and provide financial incentives to drive less and more carefully, can help reduce dangerous driving. The more positively consumers respond to the incentive, the less they pay for their insurance.
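As a rough illustration of how such an incentive might work – the function, thresholds, and rates below are invented for the example, not any insurer’s actual rating plan:

```python
def telematics_premium(base: float, miles: float, hard_brakes_per_100mi: float,
                       phone_use_minutes: float) -> float:
    """Toy usage-based premium: hypothetical adjustments, not a real rating plan."""
    adj = 0.0
    if miles < 8000:                 # low annual mileage earns a discount
        adj -= 0.10
    if hard_brakes_per_100mi < 1.0:  # smooth braking earns a discount
        adj -= 0.05
    if phone_use_minutes > 30:       # distracted driving adds a surcharge
        adj += 0.15
    return round(base * (1 + adj), 2)

print(telematics_premium(1200, miles=6000, hard_brakes_per_100mi=0.5,
                         phone_use_minutes=10))  # 1200 * 0.85 = 1020.0
```

The monitored behaviors feed directly into the multiplier, so safer, lower-mileage driving translates into a lower premium.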

Research from the Insurance Research Council – like Triple-I, a nonprofit affiliate of The Institutes – focused on this exact issue, studying public perception and use of telematics. The study found that 45 percent of drivers surveyed said they made significant safety-related changes in the way they drove after participating in a telematics program. Another 35 percent said they made small changes in the way they drive.

During the pandemic, insurance consumers’ comfort with the idea of letting their driving be monitored in exchange for a better premium appeared to improve. In May 2019, mobility data and analytics firm Arity surveyed 875 licensed drivers over the age of 18 to find out how comfortable they would be having their premiums adjusted based on telematics variables. Between 30 and 40 percent said they would be either very or extremely comfortable sharing this data. In May 2020, they ran the survey again with more than 1,000 licensed drivers.

“This time,” Arity said, “about 50 percent of drivers were comfortable with having their insurance priced based on the number of miles they drive, where they drive, and what time of day they drive, as well as distracted driving and speeding.”

The Battle Against Deepfake Threats

By Max Dorfman, Research Writer, Triple-I

Some good news on the deepfake front: Computer scientists at the University of California, Riverside, have been able to detect manipulated facial expressions in deepfake videos with higher accuracy than current state-of-the-art methods.

Deepfakes are intricate forgeries of an image, video, or audio recording. They’ve existed for several years, and versions exist in social media apps like Snapchat, which has face-changing filters. However, cybercriminals have begun using them to impersonate celebrities and executives, creating the potential for more damage from fraudulent claims and other forms of manipulation.

Deepfakes also have the dangerous potential to be used in phishing attempts that manipulate employees into allowing access to sensitive documents or passwords. As we previously reported, deepfakes present a real challenge for businesses, including insurers.

Are we prepared?

A recent study by Attestiv, which uses artificial intelligence and blockchain technology to detect and prevent fraud, surveyed U.S.-based business professionals concerning the risks to their businesses connected to synthetic or manipulated digital media. More than 80 percent of respondents recognized that deepfakes presented a threat to their organization, with the top three concerns being reputational threats, IT threats, and fraud threats.

Another study, conducted by CyberCube, a cybersecurity and technology firm that specializes in insurance, found that the melding of domestic and business IT systems created by the pandemic, combined with the increasing use of online platforms, is making social engineering easier for criminals.

“As the availability of personal information increases online, criminals are investing in technology to exploit this trend,” said Darren Thomson, CyberCube’s head of cyber security strategy. “New and emerging social engineering techniques like deepfake video and audio will fundamentally change the cyber threat landscape and are becoming both technically feasible and economically viable for criminal organizations of all sizes.”

What insurers are doing

Deepfakes could facilitate the filing of fraudulent claims, the creation of counterfeit inspection reports, and possibly the faking of assets, or the condition of assets, that are not real. For example, a deepfake could conjure images of damage from a nearby hurricane or tornado or create a non-existent luxury watch that was insured and then lost. For an industry that already suffers from $80 billion in fraudulent claims, the threat looms large.

Insurers could use automated deepfake protection as a potential solution against this novel mechanism for fraud. Yet questions remain about how it can be integrated into existing procedures for filing claims. Self-service-driven insurance is particularly vulnerable to manipulated or fake media. Insurers also need to consider the possibility that deepfake technology could create large losses if used to destabilize political systems or financial markets.

AI and rules-based models to identify deepfakes in all digital media remain a potential solution, as does digital authentication of photos or videos at the point of capture to “tamper-proof” the media, preventing the insured from uploading their own photos. Using a blockchain or other unalterable ledger also might help.
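The point-of-capture authentication idea can be sketched with a cryptographic hash: fingerprint the media when it is captured, record the fingerprint in an append-only ledger, and compare fingerprints at claim time. A minimal illustration follows – the in-memory dict here is just a stand-in for a real immutable ledger:

```python
import hashlib

ledger: dict[str, str] = {}  # stand-in for an append-only/blockchain ledger

def register_capture(media_id: str, media_bytes: bytes) -> None:
    """Record the media's SHA-256 fingerprint at the point of capture."""
    ledger[media_id] = hashlib.sha256(media_bytes).hexdigest()

def verify_at_claim(media_id: str, media_bytes: bytes) -> bool:
    """True only if the submitted media matches the registered fingerprint."""
    return ledger.get(media_id) == hashlib.sha256(media_bytes).hexdigest()

original = b"photo bytes captured by the insurer's app"
register_capture("claim-123", original)
print(verify_at_claim("claim-123", original))           # True
print(verify_at_claim("claim-123", b"tampered bytes"))  # False
```

Any post-capture manipulation, deepfake or otherwise, changes the hash and fails verification, which is why authentication at capture time is attractive for self-service claims.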

As Michael Lewis, CEO at Claim Technology, states, “Running anti-virus on incoming attachments is non-negotiable. Shouldn’t the same apply to running counter-fraud checks on every image and document?”

The research results at UC Riverside may offer the beginnings of a solution, but as Amit Roy-Chowdhury, one of the co-authors, put it: “What makes the deepfake research area more challenging is the competition between the creation and detection and prevention of deepfakes which will become increasingly fierce in the future. With more advances in generative models, deepfakes will be easier to synthesize and harder to distinguish from real.”

Data Visualization: An Important Tool for Insurance, Risk Management

By Max Dorfman, Research Writer, Triple-I

Data visualization has become an increasingly important tool for understanding and communicating complex risks and informing plans to address them.

Simply put, data visualization is the depiction of data through static or interactive charts, maps, infographics, and animations. Such displays help clarify multifaceted data relationships and convey data-driven insights.

The origins of data visualization could be traced back to the 16th century and the evolution of cartography. However, modern data visualization is considered to have emerged in the 1960s, when researcher John W. Tukey published his paper The Future of Data Analysis, which advocated for the acknowledgement of data analysis as a branch of statistics separate from mathematical statistics. Tukey helped invent graphic displays, including stem-and-leaf plots, boxplots, hanging rootograms, and two-way table displays, several of which have become part of the statistical vocabulary and software implementation.

Since Tukey’s advancements, data visualization has progressed in extraordinary ways. Matrices, histograms, and scatter plots (both 2D and 3D) can illustrate complex relationships among different pieces of data. And in an age of big data, machine learning, and artificial intelligence, the possible applications of data science and data analytics have only expanded, helping curate information into easier-to-understand formats and giving insight into trends and outliers. Indeed, a good visualization possesses a narrative, eliminating the extraneous aspects of the data and emphasizing the valuable information.

Whether for tracking long-term rainfall trends, monitoring active wildfires, or getting out in front of cyber threats, data visualization has proved itself tremendously beneficial for understanding and managing risk.

The Triple-I uses data visualization in its Resilience Accelerator to better illustrate the risks many communities face from natural disasters, particularly hurricanes and floods, and to present resilience ratings. Spearheaded by Dr. Michel Leonard, Chief Economist and Data Scientist and head of the Economics and Analytics Department at the Triple-I, these data visualizations provide a much-needed way to more effectively communicate these hazards, expanding the knowledge base of insurers, consumers, and policymakers.

To further understand data visualization, we sat down with Dr. Leonard.

Why is data visualization so essential in preparing for and responding to catastrophes?
What immediately comes to mind is maps. We can make spreadsheets of policies and claims, but how do you express the relationships between each row in these spreadsheets? We can use data visualization to show how houses closest to a river are most at risk during a flood or show the likely paths of wildfires through a landscape. Before a catastrophe, these tools help us identify at-risk zones to bolster resilience. After a catastrophe, they help us identify the areas that most need help to rebuild.

How can data visualization help change the way insurers confront the challenges of catastrophes?
The most crucial aspect of data visualization for insurers is the potential to explore “what-if” scenarios with interactive tools. Understanding risk means understanding what range of outcomes is possible and what is most likely to happen. Once we start accounting for joint outcomes and conditional probabilities, spreadsheets turn into mazes. Thus, it’s important to illustrate the relationship between inputs and outputs in a way that is reasonably easy to understand.

With the increasing threat of climate risk, how much more significant do you anticipate data visualization will become?
I’m reminded of the writings of the philosopher Timothy Morton, who described climate change as a “hyper-object”: a multifaceted network of interacting forces so complex, and with so many manifestations, that it is almost impossible to fully conceptualize it in your head at once.

Climate change is complicated, and communicating about the risks it creates is a unique problem. Very few people have time to read through a long technical report on climate risk and how it might affect them. Thus, the question becomes: How do we communicate to people the information they need in a way that is not only easy to understand but also engaging?

Images or infographics have always been compelling tools; however, we prefer interactive data visualization tools for their ability to capture attention and curiosity and make an impression.

How does the Resilience Accelerator fit into the sphere of data visualization? With the Resilience Accelerator, we wanted to explore the interplay between insurance, economics, and climate risk, and present our findings in an engaging, insightful way. It was our goal from the beginning to produce a tool with which policymakers, insurers, and community members could find their counties, see their ratings, compare them with those of neighboring counties, and see what steps they could take to improve them.

What motivated this venture into data visualization – and how can it help change the ways communities, policymakers, and insurers prepare for natural disasters? It’s our job to help our members understand climate-related risks to their business and to their policyholders. Hurricanes and floods are only the first entry in a climate risk series we are working on. We want our data to drive discussion about climate and resilience. We hope the fruits of those discussions are communities that are better protected from the dangers of climate change.

Where do you see data visualization going in the next five to 10 years?
I’m interested in seeing what comes from the recent addition of GPU acceleration to web browsers and the shift of internet infrastructure to fiber optics. GPU acceleration is the practice of using a graphics processing unit (GPU) in addition to a central processing unit (CPU) to speed up processing-intensive operations. Both of these technologies are necessary for creating a 3-D visualization environment with streaming real-time data.

Study Highlights Cost of Data Breaches in a Remote-Work World

By Max Dorfman, Research Writer, Triple-I (04/27/2022)

A recent study by IBM and the Ponemon Institute quantifies the rising cost of data breaches as workers moved to remote environments during the coronavirus pandemic.

According to the report, an average data breach in 2021 cost $4.24 million – up from $3.86 million in 2020. However, where remote work was a factor in causing the breach, the cost increased by $1.07 million. At organizations with 81-100 percent of employees working remotely, the total average cost was $5.54 million.

To combat the risks associated with the rise of remote work, the study highlights the importance of fully deploying security artificial intelligence (AI) and automation, whereby security technologies are enabled to supplement or substitute for human intervention in identifying and containing incidents and intrusion attempts.

Indeed, organizations with fully deployed security AI/automation saw the average cost of a data breach decrease to $2.90 million. The breach lifecycle was also substantially shorter, taking an average of 184 days to identify and 63 days to contain a breach, compared with an average of 239 days to identify and 85 days to contain for organizations without these technologies.

Organizations continue to struggle with breaches

In 2021 and 2022, several high-profile data breaches have illustrated the major risks cyberattacks represent. These include a January 2022 attack on 483 users’ wallets on Crypto.com, which resulted in the loss of $18 million in Bitcoin and $15 million in Ethereum and other cryptocurrencies.

In February, the International Committee of the Red Cross (ICRC) was targeted by a cyberattack in which hackers accessed the personal information of more than 515,000 people being helped by a humanitarian program; the intruders maintained access to ICRC servers for 70 days after the initial breach.

And in April, an SEC filing revealed that Block, the company that owns Cash App, had been breached by a former employee in December 2021. The leak included customers’ names, brokerage account numbers, portfolio values, and stock trading activity for more than 8 million U.S. users.

Insurers play a key role in helping organizations

The increasing frequency and seriousness of cyberattacks has led more organizations to purchase cyber insurance, with 47 percent of insurance clients using this coverage in 2020, up from 26 percent in 2016, according to the U.S. Government Accountability Office. This shift includes insurers offering more policies specific to cyber risk, instead of including this risk in packages with other coverage.

The insurance industry offers first-party coverage, which typically provides financial assistance to help an insured business with recovery costs, as well as cybersecurity liability coverage, which safeguards a business if a third party files a lawsuit against the policyholder for damages resulting from a cyber incident.

A third option, technology errors and omissions coverage, can safeguard small businesses that offer technology services when cybersecurity insurance doesn’t offer coverage. This kind of coverage is triggered if a business’s product or service results in a cyber incident that involves a third party directly.

Still, the primary focus for organizations looking to defend themselves from cyberattacks is implementing a rigorous cyber defense system.