Shadow AI Is a Legal Risk for Your Business, Not Just an IT Problem

Employees using unapproved AI tools can trigger HIPAA, GLBA, and state breach law violations. Learn how to protect your business with an AI use policy.

April 7, 2026

Your employees are probably using AI tools right now. They are pasting customer data into ChatGPT, running financial projections through free AI apps, and summarizing client communications with browser extensions you have never heard of. They are not doing it to cause harm. They are doing it to get their work done faster.

But here is the problem: when employees use unapproved AI tools without your knowledge, your business takes on serious legal risk. This is called shadow AI, and it is not just an IT headache. It is a compliance issue that can trigger violations of HIPAA, the Gramm-Leach-Bliley Act (GLBA), state data breach notification laws, and your own contractual obligations to clients and vendors.

The fix is straightforward. You need an employee AI policy that tells your team what is allowed, what is off-limits, and what happens if the rules are broken. And you need it before a breach forces your hand.

What Is Shadow AI?

Shadow AI refers to employees using artificial intelligence tools that have not been reviewed, approved, or even noticed by the business. Think of it as the AI version of shadow IT, where workers adopt their own software without going through official channels.

Common examples include:

  • Pasting client emails or contracts into ChatGPT to get a quick summary or draft a response.
  • Uploading spreadsheets with financial data into AI-powered analytics tools.
  • Using AI browser extensions that read and process everything on screen, including sensitive documents.
  • Running AI transcription on recorded calls that may contain protected health information or financial details.

None of these tools have been vetted by your company. None of them are covered by your vendor agreements. And in most cases, the data your employees share becomes part of the AI provider's training set, meaning it is no longer private.

The Numbers Tell a Clear Story

This is not a hypothetical risk. Industry surveys consistently suggest that a significant percentage of employees share sensitive company data with AI tools without their employer's knowledge. Some estimates put this figure near half of all AI-using workers. That means a large portion of your workforce may already be feeding confidential data into systems you have never evaluated.

Research also indicates that breaches involving shadow AI cost organizations substantially more than breaches that do not involve unauthorized AI use, once you factor in investigation, notification, regulatory penalties, and lost business.

For a small or mid-sized business, the financial impact of an unexpected data breach can be devastating. And the direct costs do not include the cost of litigation if a client or business partner sues you for mishandling their data.

Which Laws Does Shadow AI Put at Risk?

The legal exposure from shadow AI depends on your industry, your clients, and the type of data your employees handle. But several major regulations come into play for most businesses.

HIPAA (Health Insurance Portability and Accountability Act)

If your business handles protected health information (PHI) in any capacity, HIPAA applies to you. This includes healthcare providers, insurance companies, HR departments that manage employee health plans, and any vendor or subcontractor that touches PHI.

When an employee pastes patient data or health plan information into an AI chatbot, that is a disclosure of PHI to an unauthorized third party. There is no business associate agreement with the AI provider. There are no safeguards in place that meet HIPAA's Security Rule requirements. The data may be stored on servers you cannot audit.

HIPAA penalties range from $100 to $50,000 per violation, with annual caps up to $1.5 million per category (these amounts are adjusted periodically for inflation). Willful neglect penalties can go higher.

GLBA (Gramm-Leach-Bliley Act)

Financial institutions, including banks, insurance companies, accounting firms, and businesses that offer financial products, must comply with GLBA's Safeguards Rule. This rule requires you to protect customer financial information with administrative, technical, and physical safeguards.

If an employee at your accounting firm uploads a client's tax documents to an AI tool that has not been vetted, you have failed to safeguard that information. The FTC enforces GLBA, and penalties can include civil fines up to $100,000 per violation for the institution, fines for the officers and directors personally responsible, and criminal penalties in cases of knowing and intentional misconduct.

State Data Breach Notification Laws

All 50 states have data breach notification statutes. These laws generally require businesses to notify affected individuals when their personal information has been acquired by an unauthorized person. Most states also set notification deadlines and define what constitutes personal information. The details vary, but the core obligation is the same everywhere: if personal data is compromised, you must act.

When your employee shares personal data with an AI tool, that data has been transmitted to and stored by a third party your business has no agreement with. Depending on how the AI provider handles that data, this could meet the legal definition of a breach, triggering your obligation to investigate, notify affected parties, and report to the state attorney general.

Contractual Obligations

Many businesses have contracts with clients, partners, or vendors that include confidentiality provisions, data handling requirements, or restrictions on how information can be processed. If your employee runs a client's proprietary data through an unapproved AI tool, you may be in breach of that contract, even if no government regulation was violated.

Contract breaches can lead to lawsuits, loss of clients, and damage to your reputation. For businesses that rely on trust and long-term relationships, this may be the most costly consequence of all.

Need help with this topic?

Not sure where your business stands on AI compliance? Schedule a free consultation with Surge Business Law to find out.

Why Employees Use Unapproved AI Tools

Before you can fix the problem, it helps to understand why it happens. Employees turn to shadow AI for a few common reasons.

  • No policy exists. If you have not told your team what AI tools are acceptable, they will choose their own. Most employees assume that if a tool is free and publicly available, it is fine to use at work.
  • Approved tools are too slow or limited. If your company's official software does not keep up with what employees need, they will find alternatives.
  • They do not understand the risk. Most employees have no idea that pasting data into a chatbot could violate federal law or breach a client contract. They are not being reckless. They just have not been told.
  • There is pressure to perform. AI tools genuinely make people more productive. Employees who feel pressure to deliver results quickly will use whatever gets the job done.

The solution is not to punish employees for using AI. The solution is to give them clear rules and approved alternatives.

What Your AI Use Policy Needs to Cover

An effective AI use policy does not have to be long or complicated. But it does need to address a few key areas.

Define Approved and Prohibited Tools

List the AI tools employees are allowed to use. Specify which tools are prohibited. If you want employees to use a particular internal tool or an enterprise version of a chatbot that has data protections, name it. If free consumer AI tools are off-limits for work tasks, say so clearly.

Specify What Data Cannot Be Shared

Be explicit about the types of information that must never be entered into any AI tool. This includes personally identifiable information (PII), protected health information, financial records, client data, trade secrets, and any information covered by a nondisclosure agreement.

Require Disclosure of AI-Assisted Work

Decide whether employees need to disclose when they have used AI to produce a deliverable. For some businesses, this matters for quality control. For others, it matters because clients have a right to know.

Establish Consequences for Violations

Like any workplace policy, your AI use policy should outline what happens if an employee breaks the rules. This could range from additional training to termination, depending on the severity of the violation.

Assign Responsibility for AI Governance

Someone in your organization needs to own the policy. This person reviews new AI tools, updates the approved list, and handles incidents. For small businesses, this might be the owner or a designated manager. For larger teams, it might be a compliance officer or outside counsel.

Do Not Forget Your Vendor Agreements

If your business does decide to adopt AI tools officially, those tools need proper vendor agreements. A standard terms-of-service page is not enough.

Your agreement with an AI vendor should cover:

  • Data ownership: confirm who owns the data your employees input and whether the AI provider can use it for training
  • Data retention and deletion: verify how long the provider keeps your data and whether you can request deletion
  • Security standards: confirm the provider meets your industry requirements (SOC 2, HIPAA, etc.)
  • Breach notification: require the provider to notify you promptly if your data is compromised
  • Liability and indemnification: define what happens if the provider's system causes a breach or produces inaccurate output that harms your business

Many AI providers, especially free or consumer-grade tools, will not agree to these terms. That is exactly why they should not be used for business purposes.

What About Intellectual Property?

Shadow AI also creates intellectual property risks that many business owners overlook. When an employee uses an AI tool to draft a contract, write marketing copy, or generate code, the ownership of that output is uncertain. Most AI providers' terms of service give the user a license to the output, but the legal landscape around AI-generated content and copyright is still evolving.

If your business relies on original content, proprietary processes, or trade secrets, you need to control how AI fits into the creation process. Your intellectual property protections should account for AI-assisted work.

How Surge Business Law Helps You Get Ahead of This

Shadow AI is a problem that sits at the intersection of employment law, data privacy, and business contracts. It is not something a one-size-fits-all template can solve. Surge Business Law builds practical, enforceable AI governance frameworks that fit your specific operations.

AI Governance Policy Drafting

We draft AI use policies that are tailored to your business, your industry, and the regulations that apply to you. These policies are written in plain language so your employees actually understand and follow them. They cover tool approval, data handling, disclosure requirements, and enforcement.

Vendor Agreement Review

If you want to adopt AI tools for your business, we review the vendor's terms to make sure your data is protected. We negotiate better terms when possible and advise you on which providers meet your compliance requirements and which ones do not.

Employment Policy Updates

Your employee handbook and onboarding documents need to reflect the reality of AI in the workplace. We update your existing employment agreements, confidentiality clauses, and acceptable use policies to address AI-related risks.

Ongoing Support Through Momentum

AI tools and regulations change fast. As a Momentum member, you get ongoing access to legal advice as new tools emerge and new rules take effect. You do not have to guess whether your policy is still current. You can ask us.

Our transparent pricing means you know what this costs before you commit. No hourly billing surprises.

Take Action Before a Breach Forces Your Hand

Shadow AI is not going away. Your employees will keep using AI tools whether you have a policy or not. The question is whether you are going to manage that risk proactively or wait until a breach, a lawsuit, or a regulatory investigation makes the decision for you.

An AI use policy is not a luxury. It is a basic compliance requirement for any business where employees have access to a web browser. The cost of putting a policy in place is a fraction of the cost of cleaning up after a data breach.

If you are a business owner and you do not have an AI use policy yet, now is the time to fix that.

Book a free consultation with Surge Business Law to get your AI use policy started. We will help you understand your exposure, draft a policy that fits your business, and make sure your team knows the rules.

Frequently Asked Questions

What is shadow AI?

Shadow AI is when employees use artificial intelligence tools that have not been approved, vetted, or monitored by the business. It includes free chatbots, browser extensions, transcription tools, and other AI applications used without the employer's knowledge.

Is shadow AI actually illegal?

Shadow AI itself is not illegal, but it can cause your business to violate laws like HIPAA, GLBA, and state data breach statutes. It can also put you in breach of contracts with clients and vendors.

Do small businesses really need an AI use policy?

Yes. If your employees have access to the internet, they have access to AI tools. Without a policy, you have no control over what data leaves your organization. The size of your business does not reduce your legal obligations.

What should an AI use policy include?

At a minimum, it should list approved and prohibited tools, define what data cannot be shared with AI, establish disclosure requirements for AI-assisted work, and outline consequences for violations.

How much does it cost to get an AI policy drafted?

Surge Business Law offers flat-fee policy drafting. Visit our pricing page for current rates, or contact us for a free consultation to discuss your needs.

Can I just ban AI tools entirely?

You can, but a total ban is difficult to enforce and may push AI use further underground. A better approach is to approve specific tools with appropriate safeguards and clearly prohibit the rest.
