
What AI Means for Cyber Liability and Risk Management

While artificial intelligence (AI) enhances operational efficiency and security, it also widens the cyberattack surface, leading to complex challenges for organizations.

More than 80% of enterprises will have used generative artificial intelligence (AI) or deployed generative AI-enabled applications by 2026, according to Gartner. AI technology continues to evolve rapidly, and organizations are adopting it with fervor, using chatbots such as ChatGPT and other tools for everything from improving customer service experiences and creating operating efficiencies to powering intelligent security tools and services.

But, like most technological advancements, AI implementation brings complex challenges, particularly when it comes to cybersecurity. The use of AI is transforming the cybersecurity market, with a trickle-down impact on the cyber liability insurance market. As AI becomes commonplace in many organizations, it will bring both benefits and threats.

“We know that AI will absolutely increase the cyberattack surface for businesses,” says Shawn Ram, head of insurance at Coalition. “We've already observed attackers using AI tools in the wild to extort money from our customers.”

One headline-making example of a cyberattack using AI technology occurred in February, when an employee of Arup, the British multinational design and engineering giant, was duped into paying $25.6 million to fraudsters. Deepfake technology was used to impersonate the company's chief financial officer on a video conference call, leading one of the company's Hong Kong employees to make the wire transfer.

“Incidents like that have occurred and that's been on the radar,” says Katie Pope, vice president, executive lines, at The Liberty Company Insurance Brokers. “That would likely be covered by the social engineering coverage, which sits on a cyber policy and can also sit on a crime policy. It is important that the two policies are talking to each other on the cybercrime coverage overlap. Additionally, 'cybercrime' can encompass a few different coverages, and it is important this is looked at closely.”

Agents can ensure their clients have the necessary coverage for social engineering claims by understanding their clients' needs, particularly as “on both policies, [coverage] is severely sublimited at $250,000,” Pope says. “You can build it out in excess, but $250,000 is normally the starting point.”

However, it is imperative that companies “are buyers of cyber liability coverage in general, particularly as we continue to evolve around AI and any other types of technology,” says Derek Kilmer, associate managing director, Burns & Wilcox. “We still have a large number of small and medium-sized businesses (SMBs) that are mostly non-buyers, so there's not even an opportunity for coverage from that standpoint. Luckily, the industry continues to evolve from a cybercrime standpoint in regard to AI.”

Because the use of AI presents both threats and benefits for organizations, independent agents can assist their clients by highlighting both and by offering advice on AI's use and application.

“When using AI, the No. 1 thing that organizations need to be aware of is where they're using it and to what extent,” says Tim Zellman, vice president, global product owner—cyber and privacy insurance, HSB. “Agents should be aware of how their customers are using AI, particularly in areas where services were previously provided using good old-fashioned human intelligence and are now using artificial intelligence services in its place.”

Agents can encourage their clients to “have some type of internal policy in place for the use of AI by employees, including what they can and can't be doing and how they're using AI,” Pope says. “Additionally, offering some way to monitor its use is necessary.”

Further, “one of the biggest risks associated with companies using AI is privacy,” Ram says. “Certain AI use cases may operate without express consumer consent and others may identify personal traits or interests that impact the consumer.”

“It's been a huge issue, particularly for non-breach-related privacy claims,” Pope says. “I've seen quite a few claims that have alleged violations of the Video Privacy Protection Act (VPPA), which was enacted in the late '80s so that, basically, people's Blockbuster rental histories weren't wrongfully disclosed. They've been trying to apply the VPPA and other privacy laws to entities that pixel track; it could be a similar infringement with AI.”

Despite these complexities, AI is becoming an essential tool for organizations in the fight against cybercrime. “We see reasons for optimism, given defenders can also use AI for quicker threat detection and incident response,” Ram says.

“AI can be used defensively so you spot the vulnerabilities and patch,” Pope says. “In that realm, it's similar to what they're already doing with cybersecurity, where they're already trying to find a vulnerability in their system before the bad actor.”

Ultimately, “companies that use AI need to weigh the risks and benefits for their business,” Ram adds.

Olivia Overman is IA content editor. 

Monday, October 7, 2024