AI Security for Business: What You Need to Know
Your team is already using AI. Even if you haven’t officially rolled it out, I can almost guarantee that someone in your business is pasting customer data, internal documents, or sensitive information into ChatGPT right now.
That’s not a hypothetical risk — it’s a Tuesday.
AI security doesn’t need to be complicated, and it shouldn’t stop you from using these tools. But you do need basic guardrails. The businesses that get burned by AI security issues are the ones that pretend the risk doesn’t exist.
This guide covers the practical security considerations for SMEs using AI — no scaremongering, no enterprise-level paranoia, just what you actually need to know and do.
The Core Risk: Where Does Your Data Go?
When you type something into an AI tool, that data goes to a server. The question is: what happens to it there?
How the Major Platforms Handle Your Data
ChatGPT (OpenAI):
- Free tier: Your conversations may be used for training (opt-out available in settings)
- Plus/Pro: Conversations are not used for training by default
- Team/Enterprise: Data is never used for training, additional security controls
- Action: If using Plus, go to Settings → Data Controls → Turn off “Improve the model”
Claude (Anthropic):
- Free and Pro: Conversations are not used for training by default (you have to opt in)
- API usage: Data is not stored beyond 30 days, never used for training
- Action: None needed — Claude’s defaults are the most privacy-friendly of the three
Gemini (Google):
- Free tier: Conversations may be used to improve products
- Advanced: Can opt out, but data is still processed by Google
- Workspace integration: Subject to Google Workspace data handling agreements
- Action: Review and configure data settings in your Google account
The bottom line: Paid tiers with data training disabled are significantly more secure than free tiers. For business use, always use paid subscriptions with appropriate privacy settings.
What You Should Never Put Into AI Tools
Regardless of which tool you use or what their privacy policy says, do not paste the following into any AI tool:
Customer Personal Data
- Names + contact details
- Financial information
- Health information
- Any data covered by GDPR (which is basically all personal data for UK/EU residents)
Instead: Anonymise first. “Summarise this customer complaint” works fine with names, account numbers, and identifying details removed.
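The anonymisation step can be partially automated. Below is a minimal sketch (my own illustration, not a product recommendation) that masks structured identifiers — emails, phone numbers, long account-style digit runs — before text leaves your machine. Note that regexes cannot reliably catch free-text names; those still need a manual pass or a dedicated PII-detection tool.

```python
import re

# Minimal redaction sketch: masks common structured identifiers before
# text is pasted into an AI tool. Patterns are illustrative, not
# exhaustive -- free-text names are NOT caught and must be removed
# by hand.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d \-()]{8,}\d"),   # 10+ char phone-like runs
    "ACCOUNT_NO": re.compile(r"\b\d{6,}\b"),         # long digit runs
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

complaint = "Jane Doe (jane.doe@example.com, account 12345678) was overcharged."
print(redact(complaint))
# "Jane Doe" survives the regex pass -- names need a manual check.
```

Even a rough filter like this catches the identifiers people most often paste by accident; treat it as a safety net, not a substitute for reading what you're about to share.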
Login Credentials
This sounds obvious, but people do it. “Can you check why this code isn’t connecting to our database?” followed by pasting a config file with API keys and passwords. Don’t.
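A quick pre-paste check helps here. The sketch below (an illustration with an assumed, non-exhaustive keyword list) flags config lines that look like credentials so they can be stripped before the snippet goes anywhere:

```python
import re

# Sketch of a pre-paste check: flags lines in a config snippet that
# look like credentials. The keyword list is illustrative -- extend it
# for your own stack.
SECRET_KEYS = re.compile(
    r"(password|passwd|secret|api[_-]?key|token|private[_-]?key)\s*[=:]",
    re.IGNORECASE,
)

def flag_secrets(config_text: str) -> list[str]:
    return [
        line.strip()
        for line in config_text.splitlines()
        if SECRET_KEYS.search(line)
    ]

snippet = """
DB_HOST=db.internal
DB_PASSWORD=hunter2
API_KEY: sk-demo-not-a-real-key
"""
for line in flag_secrets(snippet):
    print("Remove before sharing:", line)
```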
Proprietary Business Information
- Unreleased product details
- Pricing strategies not yet public
- Financial projections and internal reports
- Competitive intelligence
- Trade secrets
Instead: Use your AI tool for structure and drafting, but keep the sensitive specifics out. “Write a pricing proposal template” is fine. “Here’s our exact pricing for Client X including our margins” is not.
Legal Documents Under NDA
If you’re reviewing a contract that’s under NDA, pasting it into an AI tool could technically breach the NDA. Check with your legal advisor.
Creating an AI Usage Policy (It’s Simpler Than You Think)
You don’t need a 50-page governance document. You need a one-page policy that answers these questions:
1. Which Tools Are Approved?
Specify which AI tools your team can use for business purposes. Standardise on paid tiers with appropriate privacy settings.
Example: “All team members should use Claude Pro for business tasks. Personal ChatGPT accounts should not be used for company work.”
2. What Data Can Be Shared?
Create a simple traffic light system:
🟢 OK to share:
- General industry information
- Publicly available content
- De-identified data and anonymised examples
- Draft content for review
- General business questions
🟡 Share with caution (remove identifying details first):
- Customer scenarios (anonymised)
- Internal processes and workflows
- Business performance data (aggregated, not specific)
🔴 Never share:
- Customer personal data
- Passwords, API keys, credentials
- Financial details (specific revenue, margins, salaries)
- Content under NDA
- Proprietary algorithms or trade secrets
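If you want the red list to be hard to ignore, it can be encoded as a simple pre-send gate. This is a sketch only — the keyword heuristics are my illustrative assumptions, and a real policy relies on people's judgment, not regexes:

```python
import re

# Sketch: the "never share" list as a pre-send check. Each category
# uses a crude keyword heuristic for illustration -- tune the patterns
# to your own business before relying on them.
RED_FLAGS = {
    "credentials": re.compile(r"password|api[_-]?key|private key", re.I),
    "personal data": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # emails as a proxy
    "financials": re.compile(r"margin|salary|revenue", re.I),
}

def traffic_light(text: str) -> str:
    hits = [name for name, rx in RED_FLAGS.items() if rx.search(text)]
    return "RED: " + ", ".join(hits) if hits else "GREEN"

print(traffic_light("Draft a blog post about invoicing best practice"))
print(traffic_light("Our margin on Client X is 40%"))
```

A check like this won't catch everything on the red list (trade secrets and NDA content have no keyword signature), but it turns the policy from a document into a habit.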
3. Who Reviews AI Outputs?
All AI-generated content intended for external use must be reviewed by a human before sending/publishing. This isn’t just about security — it’s about quality and accuracy.
4. When Do You Disclose AI Use?
Decide your position:
- Do you tell clients when AI assisted the work?
- Do you disclose AI-generated content on your website?
- Are there legal requirements in your industry? (Some sectors have emerging AI disclosure rules)
My view: Be honest. Most clients appreciate transparency, and AI assistance doesn’t diminish the value of expert-reviewed work.
GDPR Considerations
If you’re processing personal data (and if you’re a UK/EU business, you almost certainly are), AI use intersects with GDPR in several ways:
Data Processing
Sending personal data to an AI tool is a form of data processing. Your privacy policy and data processing records should reflect this.
Practical steps:
- Update your Record of Processing Activities (ROPA) to include AI tools
- Ensure your privacy notice mentions AI assistance where relevant
- Use Data Processing Agreements (DPAs) with AI providers — all major providers offer these
Right to Erasure
If a customer requests data deletion (their GDPR right), you need to consider whether their data exists in any AI conversation logs. With proper anonymisation (as recommended above), this shouldn’t be an issue.
Data Minimisation
GDPR requires you to process only the data you need. Don’t paste entire customer records into AI when you only need a summary. This is good practice regardless of AI.
Practical Security Setup (30 Minutes)
Here’s what to do this week:
For the Business Owner
- Choose an approved AI tool and pay for business-tier access
- Disable data training in the tool’s settings
- Write a one-page AI policy using the framework above
- Brief your team (a 15-minute conversation is enough)
- Update your privacy policy if AI processes any customer-related information
For Each Team Member
- Use only the approved tool for business tasks
- Anonymise before pasting — remove names, account numbers, identifying details
- Never share credentials — no passwords, API keys, or internal system details
- Review all outputs before sending externally
- When in doubt, ask before sharing sensitive information
For the IT Person (If You Have One)
- Set up a business account on the approved platform
- Configure SSO if available (Team/Enterprise tiers)
- Review data handling agreements from the AI provider
- Monitor usage — periodic checks on what’s being shared (some enterprise tiers offer audit logs)
AI Security Myths
“AI tools steal your ideas”
The major AI providers have clear policies about not using business-tier conversations for training. Your marketing strategy isn’t going to appear in someone else’s ChatGPT output. Use paid tiers with data training disabled and this risk is negligible.
“AI is too risky for business”
The risk of not using AI (falling behind competitors who are) is arguably greater than the risk of using it with sensible guardrails. Risks exist — they’re manageable.
“We need enterprise security before we can start”
For most SMEs, a paid subscription with data training disabled, combined with a simple usage policy, provides adequate security. You don’t need SOC 2 compliance and a CISO to use ChatGPT for writing blog posts.
“If we ban AI, we’re safe”
Your team will use it anyway, on personal accounts, with no guardrails. It’s better to provide approved tools with clear policies than to pretend you can prevent usage.
Want Help Implementing AI Securely?
At Black Sheep Marketing, our ATLAS framework includes AI governance as a core component. We help businesses implement AI tools with appropriate security measures — not over-engineered enterprise solutions, but practical guardrails that protect your business without slowing you down.
If you’re using AI (or want to start) but need confidence that you’re doing it safely, let’s talk.
Book a Free ATLAS Consultation →