Category Archives: Technology

Data Visualization: An Important Tool for Insurance, Risk Management

By Max Dorfman, Research Writer, Triple-I

Data visualization has become an increasingly important tool for understanding and communicating complex risks and informing plans to address them.

Simply put, data visualization is the depiction of data through static or interactive charts, maps, infographics, and animations. Such displays help clarify multifaceted data relationships and convey data-driven insights.

The origins of data visualization can be traced back to the 16th century and the evolution of cartography. Modern data visualization, however, is generally considered to have emerged in the 1960s, when researcher John W. Tukey published his paper The Future of Data Analysis, which advocated for the recognition of data analysis as a branch of statistics separate from mathematical statistics. Tukey helped invent graphic displays, including stem-and-leaf plots, boxplots, hanging rootograms, and two-way table displays, several of which have become part of the statistical vocabulary and standard software implementations.

Since Tukey’s advancements, data visualization has progressed in extraordinary ways. Matrices, histograms, and scatter plots (both 2D and 3D) can illustrate complex relationships among different pieces of data. And, in an age of big data, machine learning, and artificial intelligence, the possible applications of data science and data analytics have only expanded, helping curate information into easier-to-understand formats and giving insight into trends and outliers. Indeed, a good visualization possesses a narrative, eliminating the extraneous aspects of the data and emphasizing the valuable information.
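As a minimal illustration, here is a sketch in Python (using matplotlib and entirely hypothetical sample data) of the kind of display described above: a scatter plot that makes a relationship visible at a glance where a spreadsheet of the same numbers would not.

```python
# A sketch with hypothetical data: houses closer to a river tend to have
# higher flood-claim costs; a scatter plot makes the trend obvious at a glance.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)
distance_km = rng.uniform(0.1, 5.0, 200)                       # distance from river (km)
claim_cost = 10_000 / distance_km + rng.normal(0, 1_500, 200)  # illustrative trend plus noise

fig, ax = plt.subplots()
ax.scatter(distance_km, claim_cost, alpha=0.5)
ax.set_xlabel("Distance from river (km)")
ax.set_ylabel("Annual flood claim cost ($)")
ax.set_title("Hypothetical flood risk vs. distance from river")
plt.show()
```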

Whether for tracking long-term rainfall trends, monitoring active wildfires, or getting out in front of cyber threats, data visualization has proved itself tremendously beneficial for understanding and managing risk.

The Triple-I uses data visualization in its Resilience Accelerator to better illustrate the risks many communities face from natural disasters, particularly hurricanes and floods, alongside their resilience ratings. Spearheaded by Dr. Michel Leonard, Chief Economist and Data Scientist and Head of the Economics and Analytics Department at the Triple-I, these data visualizations provide a much-needed way to communicate these hazards more effectively, expanding the knowledge base of insurers, consumers, and policymakers.

To further understand data visualization, we sat down with Dr. Leonard.

Why is data visualization so essential in preparing for and responding to catastrophes?
What immediately comes to mind is maps. We can make spreadsheets of policies and claims, but how do you express the relationships between each row in these spreadsheets? We can use data visualization to show how houses closest to a river are most at risk during a flood or show the likely paths of wildfires through a landscape. Before a catastrophe, these tools help us identify at-risk zones to bolster resilience. After a catastrophe, they help us identify the areas that most need help to rebuild.

How can data visualization help change the way insurers confront the challenges of catastrophes?
The most crucial aspect of data visualization for insurers is the potential to explore “what-if” scenarios with interactive tools. Understanding risk means understanding what range of outcomes is possible and what is most likely to happen. Once we start accounting for joint outcomes and conditional probabilities, spreadsheets turn into mazes. Thus, it’s important to illustrate the relationship between inputs and outputs in a way that is reasonably easy to understand.

With the increasing threat of climate risk, how much more significant do you anticipate data visualization will become?
I’m reminded of the writings of the philosopher Timothy Morton, who described climate change as a “hyper-object”: a multifaceted network of interacting forces so complex, and with so many manifestations, that it is almost impossible to conceptualize fully in your head at once.

Climate change is complicated and communicating about the risks it creates is a unique problem. Very few people have time to read through a long technical report on climate risk and how it might affect them. Thus, the question becomes: How do we communicate to people the information they need in a way that is not only easy to understand but also engaging?

Images or infographics have always been compelling tools; however, we prefer interactive data visualization tools for their ability to capture attention and curiosity and make an impression.

How does the Resilience Accelerator fit into the sphere of data visualization?
With the Resilience Accelerator, we wanted to explore the interplay between insurance, economics, and climate risk, and present our findings in an engaging, insightful way. It was our goal from the beginning to produce a tool with which policymakers, insurers, and community members could find their counties, see their ratings, compare their ratings with those of neighboring counties, and see what steps they should take to improve their ratings.

What motivated this venture into data visualization – and how can it help change the ways communities, policymakers, and insurers prepare for natural disasters?
It’s our job to help our members understand climate-related risks to their business and to their policyholders. Hurricanes and floods are only the first entry in a climate risk series we are working on. We want our data to drive discussion about climate and resilience. We hope the fruits of those discussions are communities that are better protected from the dangers of climate change.

Where do you see data visualization going in the next five to 10 years?
I’m interested in seeing what comes from the recent addition of GPU acceleration to web browsers and the shift of internet infrastructure to fiber optics. GPU acceleration is the practice of using a graphics processing unit (GPU) in addition to a central processing unit (CPU) to speed up processing-intensive operations. Both of these technologies are necessary for creating a 3-D visualization environment with streaming real-time data.

Study Highlights Cost of Data Breaches in a Remote-Work World

By Max Dorfman, Research Writer, Triple-I (04/27/2022)

A recent study by IBM and the Ponemon Institute quantifies the rising cost of data breaches as workers moved to remote environments during the coronavirus pandemic.

According to the report, an average data breach in 2021 cost $4.24 million – up from $3.86 million in 2020. However, where remote work was a factor in causing the breach, the average cost was $1.07 million higher. At organizations with 81 to 100 percent of employees working remotely, the total average cost was $5.54 million.

To combat the risks associated with the rise of remote work, the study highlights the importance of fully deployed security artificial intelligence (AI) and automation – technologies that supplement or substitute for human intervention in the identification and containment of incidents and intrusion attempts.

Indeed, organizations with fully deployed security AI/automation saw the average cost of a data breach decrease to $2.90 million. The duration of the breach was also substantially lower, taking an average of 184 days to identify the breach and 63 days to contain the breach, as opposed to an average of 239 days to identify the breach and 85 days to contain the breach for organizations without these technologies.
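The figures above support a quick back-of-the-envelope comparison (a sketch, using only the numbers cited in this article; the $4.24 million figure is the overall 2021 average):

```python
# Back-of-the-envelope comparison of the breach figures cited above.
with_ai    = {"identify": 184, "contain": 63}   # days, fully deployed security AI/automation
without_ai = {"identify": 239, "contain": 85}   # days, organizations without these technologies

for label, d in (("with AI/automation", with_ai), ("without", without_ai)):
    print(f"{label}: {d['identify'] + d['contain']} days total breach lifecycle")
# -> 247 vs. 324 days: a 77-day shorter breach lifecycle with AI/automation.

overall_avg, with_ai_cost = 4.24, 2.90          # $ millions, per the report
print(f"cost {(overall_avg - with_ai_cost) / overall_avg:.0%} below the 2021 overall average")  # ~32%
```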

Organizations continue to struggle with breaches

In 2021 and 2022, several high-profile data breaches illustrated the major risks cyberattacks represent. These include a January 2022 attack on 483 users’ wallets on Crypto.com, which resulted in the loss of $18 million in Bitcoin and $15 million in Ethereum and other cryptocurrencies.

In February, the International Committee of the Red Cross (ICRC) was targeted by a cyberattack that resulted in the hackers accessing personal information of more than 515,000 people being helped by a humanitarian program, with the intruders maintaining access to ICRC’s servers for 70 days after the initial breach.

And in April, an SEC filing revealed that the company Block, which owns Cash App, had been breached by a former employee in December of 2021. This leak included customers’ names, brokerage account numbers, portfolio value, and stock trading activity for over 8 million U.S. users.

Insurers play a key role in helping organizations

The increasing frequency and seriousness of cyberattacks has led more organizations to purchase cyber insurance, with 47 percent of insurance clients using this coverage in 2020, up from 26 percent in 2016, according to the U.S. Government Accountability Office. This shift includes insurers offering more policies specific to cyber risk, instead of including this risk in packages with other coverage.

The insurance industry offers first-party coverage, which typically provides financial assistance to help an insured business with recovery costs, as well as cybersecurity liability coverage, which safeguards a business if a third party files a lawsuit against the policyholder for damages resulting from a cyber incident.

A third option, technology errors and omissions coverage, can safeguard small businesses that offer technology services when cybersecurity insurance doesn’t offer coverage. This kind of coverage is triggered if a business’s product or service results in a cyber incident that involves a third party directly.

Still, the primary focus for organizations looking to defend themselves from cyberattacks is implementing a rigorous cyber defense system.  

Invest in Technology — But Don’t Forget to Invest in People

A recent survey of insurance underwriters found that 40 percent of their time is spent on “tasks that are not core” to underwriting. The top three reasons they cited are:

  • Redundant inputs/manual processes;
  • Outdated/inflexible systems; and
  • Lack of information/analytics at the point of need.

The survey – conducted by The Institutes and Accenture – also found that underwriting quality processes and tools are at their lowest point since the survey was first conducted in 2008. Only 46 percent of the 434 underwriters who responded said they believe their frontline underwriting practices are “superior” – down 17 percent from 2013.

“While underwriters believe technology changes have improved underwriting performance, 64 percent said their workload has increased or had no change with technology investments,” Christopher McDaniel, president at The Institutes Catastrophe Resiliency Council, told attendees at Triple-I’s Joint Industry Forum.

The survey’s findings with respect to talent may shed some light on this. The number of organizations viewed as having “superior” talent management capabilities for underwriting fell 50 percent since 2013 across almost every measure of performance evaluated.

“Training, recruiting, and retention planning had some of the biggest drops, particularly for personal lines,” McDaniel said. About a quarter of personal lines underwriters said they view their company’s talent management programs as deficient. That rate rose to 41 percent for talent retention, 37 percent for succession planning, 33 percent for training, and 30 percent for recruiting.

“While technology investment may have improved underwriting performance” in terms of risk evaluation, quoting, and selling, McDaniel said, those improvements “appear to have come at the expense of training and retaining underwriting talent.”

Even before the pandemic and “the great resignation,” insurance faced a talent gap.  Part of the challenge has been finding replacements for a rapidly retiring workforce, as the median age of insurance company employees is higher than in other financial sectors.

A McKinsey study that assessed the potential impact of automation on functions like underwriting, actuarial, claims, finance, and operations at U.S. and European companies found that as underwriting becomes more technical in nature, it also will require more social skills and flexibility. Respondents to the McKinsey survey said automation and analytics-driven processes will produce a greater need for “soft skills” to shape and interpret quantitative outputs. Adaptability will also become more important for underwriters to stay responsive to changing risks and learn new techniques as technology changes.

“Underwriters will not become programmers themselves,” the McKinsey report said, “but they will work extensively with colleagues in newer digital and data-focused roles to develop and manage underwriting solutions.”

NFT & Insurance: Is It “A Thing”?

Non-fungible tokens (NFTs) are a hot topic, gaining attention from pop culture to the business press. Most of this notoriety has been associated with the buying and selling of digital collectibles, but the underlying blockchain technology and this specific application of it have implications for tangible assets and for insuring both digital and physical properties.

For this reason, the Institutes RiskStream Collaborative – the risk-management and insurance industry’s first enterprise-level blockchain consortium – recently launched a free educational series about NFTs.

What are NFTs?

“Non-fungible” means an object is unique and can’t be replaced with something else. A dollar is fungible – you can trade it for another dollar bill or four quarters or specific numbers of other coins, and you still have exactly one dollar.  An individual bitcoin is fungible. A one-of-a-kind trading card isn’t fungible – if you trade it for a different card, you would have a different thing, and you would lose possession of your original card.

NFTs are unique digital markers that can be associated with an asset to identify it as one-of-a-kind.
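A toy sketch of that idea follows. This is not a production NFT standard (real NFTs follow on-chain specifications such as ERC-721); it only illustrates the essential property that each token ID is unique and maps to exactly one owner:

```python
# Toy sketch of non-fungibility: each token ID is unique and maps to
# exactly one owner; tokens are transferred whole, never merged or split.
class ToyNFTRegistry:
    def __init__(self):
        self.owners = {}  # token_id -> current owner

    def mint(self, token_id: str, owner: str) -> None:
        if token_id in self.owners:
            raise ValueError(f"token {token_id} already exists")  # uniqueness guarantee
        self.owners[token_id] = owner

    def transfer(self, token_id: str, new_owner: str) -> None:
        if token_id not in self.owners:
            raise KeyError(f"unknown token {token_id}")
        self.owners[token_id] = new_owner

registry = ToyNFTRegistry()
registry.mint("card-1909-honus-wagner", "alice")
registry.transfer("card-1909-honus-wagner", "bob")  # alice no longer holds the original
```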

Want to understand more? Watch the first episode.

Insurance potential

In the second episode, the RiskStream Collaborative brings in Jakub Krcmar, CEO of Veracity Protocol, to discuss the concepts of computer vision, digital twins, and NFTs of physical products. The ability to create a unique digital twin even of exact replicas – like identical baseball cards or identical automobile gears – and tie it to an NFT may have major insurance implications. One example discussed was the potential for NFTs to be associated with high-value physical objects to demonstrate authenticity of ownership and reduce or eliminate fraud opportunities.

Episode three features Natalia Karayaneva, CEO of Propy, who explains the potential for NFTs in real estate transactions. She highlights some of the benefits of the NFT approach, underscoring the efficiencies brought to primarily paper-intensive processes. The potential for insurance also is discussed.

In episode four, Kaleido CEO Steve Cerveny wraps up the series by describing the tokens themselves. He highlights the ability to create NFTs to represent any asset. These tokens are programmable “things” on a blockchain, which can help with business processes. Blockchains are basically ledgers or databases. Like any ledger, they record transactions; unlike traditional ledgers, however, blockchains are distributed across networked computer systems. Anyone with an internet connection and access to the blockchain can view and transact on the chain.

This open, consensus-based nature of blockchain – with everyone on the chain checking the validity of every transaction according to an established set of rules – enables conflicts to be resolved automatically and transparently to all participants. This dispenses with the need for a central authority to enforce trust and allows participants to build in automation through smart contracts.
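A minimal sketch of that chained-ledger idea (illustrative only; real blockchains add distributed consensus, digital signatures, and replication across many networked nodes):

```python
# Minimal hash-chain sketch: each block commits to the previous block's
# hash, so tampering with any recorded transaction breaks the chain.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

chain: list = []
append_block(chain, [{"from": "alice", "to": "bob", "asset": "token-42"}])
append_block(chain, [{"from": "bob", "to": "carol", "asset": "token-42"}])

# Verification: any participant can recompute each link; editing an earlier
# block changes its hash and invalidates every block after it.
for i in range(1, len(chain)):
    assert chain[i]["prev_hash"] == block_hash(chain[i - 1])
```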

The RiskStream Collaborative is the largest blockchain consortium in insurance, with over 30 carriers, brokers, and reinsurers as members who lead governance and activity. An “associate member ecosystem” is beginning to be established, and RiskStream is exploring use cases in personal lines, commercial lines, reinsurance, and life and annuities.

As Cybercriminals Act More Like Businesses, Insurers Must Think More Like Criminals

Credit for all photos in this post: Don Pollard

Cybersecurity is no longer an emerging risk but a clear and present one for organizations of all sizes, panelists at Triple-I’s Joint Industry Forum (JIF) said. This is due in large part to the fact that cybercriminals are increasingly thinking and behaving like businesspeople.

“We’ve seen a large increase in ransomware attacks for the sensible economic reason that they are lucrative,” said Milliman managing director Chris Beck. Cybercriminals also are becoming more sophisticated, adapting their techniques to every move insurers, insureds, and regulators make in response to the latest attack trends. “Because this is a lucrative area for cyber bad actors to be in, specialization is happening. The people behind these attacks are becoming better at their jobs.”

As a result, the challenges facing insurers and their customers are increasing and becoming more complex and costly. Cyber insurance purchase rates reflect the growing awareness of this risk, with one global insurance broker finding that the percentage of its clients who purchased this coverage rose from 26 percent in 2016 to 47 percent in 2020, the U.S. Government Accountability Office (GAO) stated in a May 2021 report.

Panel moderator Dale Porfilio, Triple-I’s chief insurance officer, asked whether cyber is even an insurable risk for the private market. Panelist Paul Miskovich, global business leader for the Pango Group, said cyber insurance has been profitable almost every year for most insurers. Most cyber risk has been managed through more controls in underwriting, changes in cybersecurity tools, and modifications in IT maintenance for employees, he said.

By 2026, projections indicate insurers will be writing $28 billion annually in gross written premium for cyber insurance, according to Miskovich. He said he believes all the pieces are in place for insurers to adapt to the challenges presented by cyber and that part of the industry’s evolution will rely on recruiting new talent.

“I think the first step is bringing more young people into the industry who are more facile with technology,” he said. “Where insurance companies can’t move fast enough, we need partnerships with managing general agents, with technology and data analytics, who are going to bring in data and new information.”

“Reinsurers are in the game,” said Catherine Mulligan, Aon’s global head of cyber, stressing that reinsurers have been doing a lot of work to advance their understanding of cyber issues. “The attack vectors have largely remained unchanged over the last few years, and that’s good news because underwriters can pay more attention to those particular exposures and can close that gap in cybersecurity.”

Mulligan said reinsurers are committed to the cyber insurance space and believe it is insurable. “Let’s just keep refining our understanding of the risk,” she said.

When thinking about the future, Milliman’s Beck stressed the importance of understanding the business-driven logic of the cybercriminals.

If, for example, insurance contracts will not pay when the insured pays the ransom, the logic for the bad actor becomes: “I need to come up with a ransom scheme in which I still make money” – one priced low enough that the insured can pay without using the insurance contract.

This could lead to a scenario in which the ransom demands become smaller, but the frequency of attacks increases. Under such circumstances, insurers might have to respond to demand for a new kind of product.

Learn More about Cyber Risk on the Triple-I Blog

Cyberattacks on Health Facilities: A Rising Danger

Cyber Insurance’s “Perfect Storm”

“Silent” Echoes of 9/11 in Today’s Management of Cyber-Related Risks

Brokers, Policyholders Need Greater Clarity on Cyber Coverage

Cyber Risk Gets Real, Demands New Approaches

Executive Exchange: Pandemic Lockdown Speeds Insurance Digitization Growth

The global pandemic accelerated many technological advances that were already underway in the insurance industry – changes that are likely to pick up speed as COVID-19 recedes, according to Rohit Verma, CEO, Crawford & Co.

Triple-I CEO Sean Kevelighan recently spoke with Verma about the dramatic changes taking place as virtual interactions became more necessary and expected by consumers – especially in the early stages of the COVID-19 crisis. Crawford is a global provider of claims management solutions. 

“We have a self-service app which had probably a seven to 10 percent adoption rate,” Verma said in the conversation, which you can watch in full below. “Within the first three months, that adoption rate went up from seven to 10 percent to about 35 to 40 percent.”

Verma said he expects further acceleration of digitization in insurance, with start-ups partnering with larger, established companies to transform how insurance is done. The biggest obstacle, as he sees it, is the question: “Do we approach problems with a viewpoint of how we solve them digitally, or do we approach them based on how we solve them traditionally?”

Verma will be one of five senior executives participating in the C-Suite Panel on Resilience at Triple-I’s 2021 Joint Industry Forum on Thursday, Dec. 2, in New York City.

This Just In: Insurance Isn’t Boring

I just learned that November 3 is National Cliché Day. Who knew?

So, what better time than now (before it’s too late!) to bust the cliché that insurance is a boring industry.

The cliché might be rooted in the idea that insurance is all about remaining cozily in some imaginary “safety zone”.  Or maybe in the fact that the industry’s visual surface tends to be one of dull-looking paperwork full of fine print.

But think about it: the entire industry is rooted in risk!

Automobile accidents and other forms of property damage are only the start of it. There’s liability risk – the risk of being sued: product liability, professional liability, employment practices, directors and officers, errors and omissions, medical malpractice – the list goes on, and insurance professionals have to understand these areas of risk intimately to price policies, set aside appropriate reserves, and pay claims in a timely fashion.

Is climate-related risk keeping you up at night? You’re not alone. Insurers have been working on that one for decades, empowered by sophisticated modeling and analytics capabilities.  They aren’t just worrying about extreme weather and climate – they’re partnering with other industries, communities, and governments to do something about it.  

And, speaking of sophisticated technology – what about cyber risk? The average cost of a data breach rose year over year in 2021 from $3.86 million to $4.24 million, according to a recent report by IBM and the Ponemon Institute — the highest in the 17 years that this report has been published. These kinds of numbers add up quickly. Unlike flood and fire – perils for which insurers have decades of data to help them accurately measure and price policies – cyber threats are comparatively new and constantly evolving. The presence of malicious intent results in their having more in common with terrorism than with natural catastrophes.

These are just a few of the risk types insurance professionals look in the eye daily, working with a wide range of experts across industries and disciplines to meet them. From the individual and family level to businesses large and small to the global economy, insurers play a critical role as both risk-management partners and financial first responders.

Keep these things in mind next time you catch yourself stifling a yawn at the mention of insurance!

Insuretech Connect: Showcasing Innovation

Sean Kevelighan leads Climate Risk and Resilience panel. (Photo/videos by Scott Holeman, Media Relations Director, Triple-I)

By Loretta Worters, Vice President, Media Relations, Triple-I

Insuretech Connect – the world’s largest gathering of insurance leaders and innovators – last week brought together insurance technology stakeholders to network, share insights, and learn about leading-edge technology across all insurance lines.

Conference participants included Pete Miller, president and CEO of The Institutes, who discussed risk mitigation through new technology. 

“Capturing data about the things we do and then allowing us to mitigate risk before we even get to the insurance function, that’s really where I think this industry is going,” he said.

One panel, Climate Risk and Resilience, focused on the importance of Insurtech and innovation to the success and sustainability of the industry. Moderated by Triple-I CEO Sean Kevelighan, the panel included Sean Ringsted, chief digital officer at Chubb; Christie McNeill, associate partner with McKinsey & Company and leader of ESG and Climate Change for the Insurance Practice in North America; Alisa Valderrama, CEO and co-founder of FutureProof Technologies, a venture-backed financial analytics software company specializing in climate risk; and Susan Holliday, Triple-I nonresident scholar and senior advisor to the International Finance Corporation (IFC) and the World Bank, where she focuses on insurance and Insuretech.

“Insurers are no stranger to climate and extreme weather,” Kevelighan said. “They have had a financial stake in it for decades.”

He noted that insured losses caused by natural disasters have grown by nearly 700 percent since the 1980s and four of the five costliest natural disasters in U.S. history have occurred over the past decade.

U.S. insurers paid out $67 billion in 2020 due to natural disasters. The insured losses emerged in part as the result of 13 hurricanes, five of the six largest wildfires in California’s history, and a derecho that caused significant damage in Iowa. 

This year’s Hurricane Ida is expected to cost insurers at least $31 billion and to push Hurricane Andrew out of the top five most damaging storms. 2021 has been another record year for wildfires: from January 1 to September 19, 2021, there were 45,118 wildfires, compared with 43,556 in the same period in 2020.

The panelists talked about how insurers have long been aware of climate risk and – to the extent that existing data-gathering and modeling technologies allowed – considered it in risk pricing and reserving. As information storage and processing have vastly improved, the industry has not only gotten better at underwriting and reserving for these risks – it has identified opportunities in areas it once could only view as problems.

Improved modeling, for example, has increased insurers’ comfort with and appetite for writing flood coverage and spurred the development of new products. 

“Insurers are and always will be financial first responders, but there’s a growing realization that risk transfer alone isn’t enough,” Kevelighan said.  “Insurance is one important step toward resilience.  It’s well documented that better-insured communities recover faster from disasters.  But more is required to address increasingly complex global risks.”

Deepfake: A Real Hazard

By Maria Sassian, Triple-I consultant

Videos and voice recordings manipulated with previously unheard-of sophistication – known as “deepfakes” – have proliferated and pose a growing threat to individuals, businesses, and national security, as Triple-I warned back in 2018.

Deepfake creators use machine-learning technology to manipulate existing images or recordings to make people appear to do and say things they never did. Deepfakes have the potential to disrupt elections and threaten foreign relations. Already, a suspected deepfake may have influenced an attempted coup in Gabon and a failed effort to discredit Malaysia’s economic affairs minister, according to the Brookings Institution.

Most deepfakes today are used to degrade, harass, and intimidate women. A recent study determined that up to 95 percent of the thousands of deepfakes on the internet were pornographic and up to 90 percent of those involved nonconsensual use of women’s images.

Businesses also can be harmed by deepfakes. In 2019, an executive at a U.K. energy company was tricked into transferring $243,000 to a secret account by what sounded like his boss’s voice on the phone but was later suspected to be thieves armed with deepfake software.

“The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” said a spokesperson for Euler Hermes SA, the unnamed energy company’s insurer. Security firm Symantec said it is aware of several similar cases of CEO voice spoofing, which cost the victims millions of dollars.

A plausible – but still hypothetical – scenario involves manipulating video of executives to embarrass them or misrepresent market-moving news.

Insurance coverage still a question

Cyber insurance or crime insurance might provide some coverage for damage due to deepfakes, but it depends on whether and how those policies are triggered, according to Insurance Business. While cyber insurance policies might include coverage for financial loss from reputational harm due to a breach, most policies require network penetration or a cyberattack before they will pay a claim. Such a breach isn’t typically present in a deepfake.

The theft of funds by using deepfakes to impersonate a company executive (what happened to the U.K. energy company) would likely be covered by a crime insurance policy.

Little legal recourse

Victims of deepfakes currently have little legal recourse. Kevin Carroll, security expert and partner at Wiggin and Dana, a Washington, D.C., law firm, said in an email: “The key to quickly proving that an image or especially an audio or video clip is a deepfake is having access to supercomputer time. So, you could try to legally prohibit deepfakes, but it would be very hard for an ordinary private litigant (as opposed to the U.S. government) to promptly pursue a successful court action against the maker of a deepfake, unless they could afford to rent that kind of computer horsepower and obtain expert witness testimony.”

An exception might be wealthy celebrities, Carroll said, who could use existing defamation and intellectual property laws to combat, for example, deepfake pornography that uses their images commercially without their authorization.

A law banning deepfakes outright would run into First Amendment issues, Carroll said, because not all of them are created for nefarious purposes. Political parodies created by using deepfakes, for example, are First Amendment-protected speech.

It will be hard for private companies to protect themselves from the most sophisticated deepfakes, Carroll said, because “the really good ones will likely be generated by adversary state actors, who are difficult (although not impossible) to sue and recover from.”

Existing defamation and intellectual property laws are probably the best remedies, Carroll said.

Potential for insurance fraud

Insurers need to become better prepared to prevent and mitigate the fraud that deepfakes can facilitate, as the industry relies heavily on customers submitting photos and videos in self-service claims. Only 39 percent of insurers said they are either taking or planning steps to mitigate the risk of deepfakes, according to a survey by Attestiv.

Business owners and risk managers are advised to read and understand their policies and meet with their insurer, agent or broker to review the terms of their coverage.

Insurance Careers Corner: Q&A with Sunil Rawat, Co-Founder and CEO of Omniscience

By Marielle Rodriguez, Social Media and Brand Design Coordinator, Triple-I

Sunil Rawat

Triple-I’s “Insurance Careers Corner” series was created to highlight trailblazers in insurance and to spread awareness of the career opportunities within the industry.

This month we interviewed Sunil Rawat, Co-Founder and CEO of Omniscience, a Silicon Valley-based AI startup that specializes in Computational Insurance. Omniscience offers five “mega-services” – underwriting automation, customer intelligence, claims optimization, risk optimization, and actuarial guidance – to help insurance companies improve their decision-making and achieve greater success.

We spoke with Rawat to discuss his technical background, the role of Omniscience technology in measuring and assessing risk, and the potential flaws in underwriting automation.

Tell me about your interest in building your business. What led you to your current position and what inspired you to found your company?

I’m from the technology industry. I worked for Hewlett Packard for about 11 years, and hp.com grew about 100,000% during my tenure there. Then I helped Nokia build out what is now known as Here Maps, which in turn powers Bing Maps, Yahoo Maps, Garmin, Mercedes, Land Rover, Amazon, and other mapping systems.

I met my co-founder, Manu Shukla, several years ago. He’s more of the mad scientist, applied mathematician. He wrote the predictive caching engine in the Oracle database, the user profiling system for AOL, and the recommender system for Comcast. For Deloitte Financial Advisory Services, he wrote the text mining system used in the Lehman Brothers probe, the Deepwater Horizon probe and in the recent Volkswagen emissions scandal. He’s the ‘distributed algorithms guy’, and I’m the ‘distributed systems guy’. We’re both deeply technical and we’ve got this ability to do compute at a very high scale.

We see an increasing complexity in the world, whether it’s demographic, social, ecological, political, technological, or geopolitical. Decision-making has become much more complex. Where human lives are at stake, or where large amounts of money are at stake on each individual decision, each individual decision’s accuracy must be extremely high. That’s where we can leverage our compute, taken from our learnings over the last 20 years, and bring it to the insurance domain. That’s why we founded the company — to solve these complex risk management problems. We’re really focused on computational finance, and more specifically, computational insurance.

What is Omniscience’s overall mission?

It’s to become the company that leaders go to when they want to solve complex problems. It’s about empowering leaders in financial services to improve risk selection through hyperscale computation.

What are your main products and services and what role does Omniscience technology play?

One of our core products is underwriting automation. We like to solve intractable problems. When we look at underwriting, we think about facultative underwriting for life insurance, where you need human underwriters. The decision-making heuristic is so complex. Consider somebody who’s a 25-year-old nonsmoker asking for a 10-year term policy of $50,000 — it’s kind of a no-brainer and you can give them that policy. On the other hand, if they were asking for $50 million, you’re certainly going to ask for a blood test, a psychological exam, a keratin hair test, and everything in between. You need humans to make these decisions. We managed to take that problem and use our technology to digitize it. If you take a few hundred data fields and a few hundred thousand cases to build an AI model, it quickly becomes completely intractable from a compute standpoint. That’s where we can use our technology to look at all the data in all its facets — we automate and use all of it.

Once you’ve got an AI underwriter’s brain in software, you think from the customer intelligence standpoint. You’ve got all this rich transaction data from your customers to pre-underwrite, qualify, and recommend them for different products. We’ve also built a great capability in the data acquisition area. For workers comp and general liability, we have the data that improves the agent experience. We can also correctly classify any NAICS codes and can help with claims avoidance and finding hidden risk. We’ve also got a great OCR capability. In terms of digitization of text, we can take complex tabular data and digitize it without any human in the loop. We’re able to do this worldwide, even in complex Asian languages. We also do a lot of work in asset and liability management and can do calculations that historically have been done in a very low-powered, inaccurate manner. We can run these calculations daily or weekly, vs annually, which makes a big difference for insurance companies.

We also work in wildfire risk. A lot of wildfire spread models look at a ZIP+4 or a zip code level, and they take about four hours to predict one hour of wildfire spread, so about 96 hours to predict one day of wildfire spread at a zip code level. In California, where I am, we had lots of wildfires last year. When you double the density of the grid, the computation goes up 8x. What we were able to do is improve on that and look at the grid at 30 meters square, almost at an individual property size, so you can look at the risk of individual houses. At a 30-meter level, we can do one hour of wildfire propagation in 10 seconds, basically one day in about four minutes.
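The scaling Rawat cites follows from the geometry: doubling grid density doubles the cell count along each horizontal axis and, to keep the simulation stable, roughly halves the time step, so the work grows by about 2 × 2 × 2 = 8. A quick sketch of the arithmetic behind the runtimes he quotes (no simulation involved, just the numbers in his answer):

```python
# Illustrative check of the runtimes quoted above.
coarse = 4 * 24      # 4 compute-hours per simulated hour -> 96 per simulated day
print(f"coarse zip-code model: {coarse} compute-hours per simulated day")

refinement = 2 ** 3  # doubling grid density in x, y, and time step => ~8x the work
print(f"doubling grid density multiplies the work by {refinement}x")

fine = 10 * 24 / 60  # 10 seconds per simulated hour -> 4 minutes per simulated day
print(f"30-meter model: {fine:.0f} minutes per simulated day")
```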

Are there any potential flaws in relying too much on automation technology that omits the human element?

Absolutely. The problem with AI systems is they may generally be only as good as the data that they’re built on. The number one thing is that because we can look at all the data and all its facets, we can get to 90+ percent accuracy on each individual decision. You also need explainability. It’s not like an underwriter decides in a snap and then justifies the decision. What you need from a regulatory or an auditability standpoint is that you must document a decision as you go through the decision-making process.

If you’re building a model off historical data, how do you make sure the model isn’t biased against certain groups? You need bias testing. Explainability, transparency, scalability, adjustability — these are all very important. From a change management and risk management standpoint, you have the AI make the decision, and then you’ll have a human review it. After you’ve done that process for some months, you can introduce this in a very risk-managed way. Every AI should also state its confidence in its decision. It’s very easy to decide, but you also must be able to state your confidence number, and humans must always pay attention to that confidence number.
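A sketch of the human-in-the-loop pattern Rawat describes (the threshold, labels, and function name here are hypothetical illustrations, not Omniscience’s actual system):

```python
# Hypothetical human-in-the-loop routing: the AI decides, states its
# confidence, and low-confidence cases are routed to a human underwriter.
CONFIDENCE_THRESHOLD = 0.90  # assumed value; tuned per line of business

def route_case(ai_decision: str, confidence: float) -> str:
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {ai_decision} (confidence {confidence:.2f})"
    return f"human review required (confidence {confidence:.2f})"

print(route_case("approve", 0.97))  # handled automatically
print(route_case("approve", 0.72))  # routed to a human underwriter
```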

What is traditional insurance lacking in terms of technology and innovation? How is your technology transforming insurance?

Insurers know their domain better than any insurtech can ever know their domain. In some ways, insurance is the original data science. Insurers are very brilliant people, but they don’t have experience with software engineering and scale computing. The first instinct is to look at open-source tools or buy some tools from vendors to build their own models. That doesn’t work because the methods are so different. It’s kind of like saying, “I’m not going to buy Microsoft Windows, I’m going to write my own Microsoft Windows”, but that’s not their core business. They should use their Microsoft Windows to run Excel to build actuarial models, but you wouldn’t try to write your own programs.

We are good at system programming and scale computing because we’re from a tech background. I wouldn’t be so arrogant to think that we know as much about insurance as any insurance company, but it’s through that marriage of domain expertise in insurance and domain expertise in compute that leaders in the field can leapfrog their competitors.

Are there any projects you’re currently working on, and any trends you see in big data that you’re excited about?

Underwriting and digitization, cat management, and wildfire risk are exciting, and so is some work that we’re doing in ALM calculations. When regulators are asking you to show that you have enough assets to meet your liabilities for the next 60 years on a nested quarterly basis, that becomes very complex. That’s where our whole mega-services come in — if you can tie together all of your underwriting, claims, and capital management, then you can become much better at selection, and you can decide how much risk you want to take in a very dynamic way, as opposed to a very static way.

The other thing we’re excited about is asset management. We are doing some interesting work with a very large insurer, where we’ve been able to boost returns through various strategies. That’s another area we expect to grow quite rapidly in the next year.

What are your goals for 2021 and beyond?

It’s about helping insurers develop this multi-decade compounding advantage through better selection, and we’re just going to continue to execute. We’ve got a lot of IP and technology developed, and we’ve got pilot customers in various geographies that have used our technology. We’ve got the proof points and the case studies, and now we’re just doubling down on growing our business, whether it’s with the same customers we have or going into more product lines. We are focused on serving those customers and signing on a few more in the three areas where we are active: Japan, Hong Kong and China, and North America. We are focused on methodically executing on our plan.