
By: Jennifer Gilligan, IntegraMSP President
Why businesses should pay attention before carriers start asking harder questions
For years, cybersecurity was treated as a technical issue. Then, cyber insurance carriers started asking questions.
Do you have multifactor authentication? Endpoint protection? Backup testing? Incident response plans?
What was once considered “best practice” quickly became a business requirement because insurers no longer wanted to absorb unmanaged cyber risk.
Artificial intelligence is moving in the same direction.
Businesses are adopting AI rapidly, often without governance, oversight, or clear ownership. Employees are using generative AI tools daily. AI is already embedded in Microsoft 365, CRMs, marketing platforms, security tools, and operational software. In many organizations, adoption is happening faster than leadership can track it. That creates a growing problem for insurers, regulators, and legal teams alike.
Recent reporting from the Financial Times found that insurers are already beginning to limit payouts tied to AI-related cyber events, including “LLMjacking” and regulatory exposure connected to AI misuse. (Financial Times) Willis Towers Watson also noted that AI liability concerns are forcing organizations and insurers to rethink how coverage gaps, governance failures, and accountability are handled. (WTW)
This should sound familiar because the same pattern emerged during the rise of cyber insurance.
Insurers discovered they were covering enormous cyber-related losses under policies that were never designed for ransomware, business email compromise, or nation-state attacks. Over time, carriers responded by tightening underwriting standards, requiring controls, and introducing exclusions for unmanaged risk. AI is becoming the next version of that story.
The challenge is that AI risk does not fit neatly into one category. It can involve cybersecurity, privacy, professional liability, intellectual property, compliance, and operational risk all at once. Recent insurance analysis shows carriers are increasingly concerned about governance failures tied to AI-generated errors, data leakage, discrimination claims, and autonomous decision-making without human review. (FRANKI T)
That concern is accelerating because AI systems are becoming more powerful and more integrated into core business operations. Last week, the International Monetary Fund warned that advanced AI systems could create systemic cyber risks capable of disrupting financial systems and shared infrastructure. (Financial Times)
The practical implication for businesses is straightforward: organizations will increasingly need to demonstrate that AI usage is governed, documented, and controlled. That does not mean companies need to stop using AI. In fact, the opposite is true.
Most businesses should focus first on using AI already embedded inside trusted software platforms rather than rushing to build unmanaged internal systems or allowing unrestricted use of public tools. AI capabilities inside platforms like Microsoft, Salesforce, ServiceNow, and enterprise security suites typically come with stronger controls around identity management, logging, tenant protections, compliance, and governance.
For many organizations, that is the most defensible starting point. The bigger issue is visibility.
Many businesses still do not know:
- Which AI tools employees are using
- What company data is being entered into those systems
- Which departments are relying on AI outputs
- Whether AI-generated content is being reviewed before use
That lack of visibility creates a "shadow AI" problem, much like the shadow IT and unsanctioned cloud application challenges businesses faced years ago. This is where governance becomes operational rather than theoretical.
Organizations should already be taking several foundational steps:
- Creating an internal AI acceptable use policy
- Defining approved and prohibited AI tools
- Restricting sensitive data from unapproved AI platforms
- Establishing human review requirements for AI-generated outputs
- Assigning ownership for AI governance and risk management
- Tracking where AI is being used across the organization
None of these controls are extreme. They are the early-stage equivalent of requiring MFA before obtaining cyber insurance coverage. And that is likely where this is heading.
Insurance carriers do not want uncertainty that they cannot model. As AI-related litigation, compliance obligations, and cyber risks increase, underwriting will almost certainly shift toward organizations that can demonstrate structured governance and documented controls.
The companies that adapt early will be in a far stronger position than organizations that wait for insurers, regulators, or lawsuits to force the conversation.
The goal is not to slow AI adoption. It is to make AI governable, defensible, and safe enough to use at scale.
Free Download: Basic AI Acceptable Use Policy
If your organization has started using AI but has not yet formalized governance, policy, or usage standards, we put together a basic AI Acceptable Use Policy as a practical starting point.
It is designed to help businesses begin documenting:
- Approved AI usage
- Employee responsibilities
- Human oversight expectations
- Data protection requirements
- Governance considerations
Download the free sample policy here:
AI Acceptable Use Policy Sample
If you would like to discuss where your business currently stands with AI, we would love to chat.
