Is AI a Risk or a Resource?

Artificial Intelligence (AI) has rapidly moved from buzzword to baseline. From AI-driven underwriting to chatbots and policy automation, it's everywhere. But with great tech comes greater risk, and a new wave of coverage confusion that the industry can't afford to ignore.

It’s time to agree that AI isn’t always right! 

While AI can process data at incredible speed, it doesn't inherently know what's true. One of the most serious concerns is "AI hallucination," a term for when generative AI tools produce confident, human-like outputs that may be partially or entirely wrong. In sectors like insurance, where precision matters, this is more than an exposure; it's a liability.

Imagine a claim being denied because an AI engine misrepresented injury statistics, or an underwriting error caused by outdated risk data. These aren't hypotheticals. They're real risks in the age of unreviewed automation.

Verification is no longer optional; it's essential

As PEOs and insurance professionals integrate more AI-driven tools, the importance of credible data is increasing. The best predictive models are only as strong as the source data behind them. Outdated, unchecked, or biased inputs can cause serious downstream errors in quoting, binding, and claims. 

What should that tell us? AI can accelerate processes, but it cannot replace human validation. Structured data, clean sources, and consistent review must sit at the core of any AI deployment. 

The Coverage Conundrum 

The industry is now confronting a new question: Is AI-related liability even covered? 

Just like cyber risks outgrew bundled coverages and evolved into their own specialized policies, AI is now following the same path. With insurer AI adoption climbing from 48% to 71% in early 2025 alone, and new standalone AI policies already entering the market, the demand for dedicated AI coverage has clearly arrived. While some assume AI incidents are covered under existing GL, cyber, or E&O policies, the reality is murkier, and the coverage gaps are growing.

In fact, certain insurers, such as W.R. Berkley, have already introduced absolute AI exclusions, removing coverage for any claims arising from the development, deployment, or even indirect use of AI. These exclusions are broad, covering everything from chatbot misstatements to flawed algorithmic decision-making and internal training gaps.

If you haven’t reviewed your policies with AI in mind, now is the time! 

What’s next for the PEO and Insurance Industry? 

We're witnessing the early days of AI as a standalone line of exposure and, eventually, of insurance. Much like employment practices liability (EPLI) evolved from a throw-in to a complex, modeled product, AI will demand its own dedicated underwriting and policy language.

For brokers, carriers, and PEOs, this means: 

  • Proactively identifying AI touchpoints within operations. 

  • Auditing existing policies for exclusions or gray areas. 

  • Working with carriers that understand and are actively addressing AI risk. 

  • Prioritizing structured, credible data as the foundation for every model or tool used.

Final Thought 

AI is powerful. But if it isn't backed by accurate data, expert human review, and clear coverage, it becomes a risk multiplier, not a solution. As our industry leans deeper into automation and predictive tools, the focus must shift from speed alone to accuracy, clarity, and insurability.

Let’s not just innovate faster but smarter. 
