Lesson 2 of 5 · Premium Track

Protecting your privacy with AI

⏱ 25 min 🔒 Data reality check 📋 Personal audit

When you type a message to ChatGPT, Claude, or Gemini — where does that go? When Grammarly reads your emails, what does it see? When you use an AI image tool, who owns the output?

Most people using AI tools every day have genuinely no idea what's being collected, stored, or used. This lesson won't make you paranoid — it'll make you informed. Informed is different. You can make real decisions when you know the facts.

📌 The core principle

Your conversations with AI are not like conversations with a friend. They're more like conversations in a room with a transcript, a legal team, and a product roadmap. That doesn't mean you shouldn't use AI — it means you should know what you're sharing and with whom.


What the major AI tools actually collect

For each tool below, here's what's really happening behind the interface. This isn't scaremongering: it's drawn from their own privacy policies and terms of service.

💬
ChatGPT (OpenAI)
High exposure
Stores chats?
Yes, by default. All conversations are saved and can be reviewed by OpenAI.
Used for training?
Yes, unless you opt out in settings. Your conversations help train future models.
Data location
US servers. Subject to US law and potential government requests.
Shared with?
OpenAI staff, contractors, and potentially partners. Anonymised for research.
Opt out?
Yes — Settings → Data Controls → turn off "Improve the model for everyone". Chat history can be disabled separately.
🔒 Practical step: Opt out of model training in ChatGPT's settings. Never share passwords, financial details, medical information, or anything you'd be uncomfortable seeing on someone else's screen.
🤖
Claude (Anthropic)
Medium exposure
Stores chats?
Yes, on claude.ai. API calls are not stored by default.
Used for training?
Consumer conversations may be used for safety and model improvement. Opt-out available.
Data location
US-based. Anthropic has stricter safety review processes than most competitors.
Privacy stance
Anthropic's published policies are explicit that sensitive data is not used for training without consent.
Opt out?
Account settings on claude.ai. Paid plans offer stronger data controls.
🔒 Practical step: For sensitive work, use the API via a tool with a strong data processing agreement, or a business/enterprise account where training opt-outs are guaranteed.
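For illustration, here's a minimal sketch of going through the API directly with Anthropic's official Python SDK, assuming you have an API key and (for sensitive work) a data processing agreement in place. The model name is illustrative; check Anthropic's current documentation before relying on it.

```python
# pip install anthropic
# Assumes ANTHROPIC_API_KEY is set in your environment.
import anthropic

client = anthropic.Anthropic()

# API traffic falls under Anthropic's commercial terms rather than the
# consumer claude.ai terms; confirm the current data-handling terms yourself.
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model name; check the docs
    max_tokens=500,
    messages=[{"role": "user", "content": "Proofread this paragraph: ..."}],
)
print(response.content[0].text)
```

The point isn't the code itself: it's that routing work through an API account, rather than a consumer chat login, puts your data under a different (and usually stricter) set of terms.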
✏️
Grammarly
High exposure
What it reads
Everything you type: emails, documents, messages, even passwords entered in forms. Any text field in your browser is in scope.
Stores content?
Yes. Documents you write are stored on Grammarly's servers.
Used for training?
Yes, to improve suggestions. Anonymisation is claimed but not independently verifiable.
Risk level
Very high for professionals — confidential client emails, unreleased documents, personal messages all pass through Grammarly.
Alternatives
LanguageTool (self-hosted option), ProWritingAid with stricter enterprise controls, or Claude/ChatGPT for specific editing tasks.
⚠️ Critical note: Many organisations explicitly ban Grammarly on work devices because it reads confidential documents. Check your employer's policy before using it professionally.
🎨
Midjourney / DALL-E / image AI
Medium exposure
Your prompts
Stored and typically public by default on Midjourney. DALL-E prompts are stored by OpenAI.
Image ownership
Varies significantly. On Midjourney's free tier, the platform retains usage rights; on paid tiers, you own commercial rights. Read the specific T&Cs before commercial use.
Training use
All prompts and generations are used to improve models. No opt-out on most platforms.
Face images
Uploading photos of real people (including yourself) for AI processing has significant privacy implications in many jurisdictions, including the EU.
🔒 Practical step: Never upload photos of other people to image AI without their knowledge and consent. Be aware that Midjourney prompts are visible to other users by default unless you have a private subscription.
📊
Microsoft Copilot (in Office)
Lower exposure
Data handling
Enterprise version: data stays within your organisation's Microsoft 365 tenant. Not used for OpenAI model training.
Consumer version
More similar to ChatGPT — conversations stored by Microsoft, used for improvement.
Compliance
Enterprise: GDPR compliant, SOC 2 certified, data residency options. One of the stronger enterprise data protection stances.
Who controls
Your IT admin controls what Copilot can access within your organisation's data.
🔒 If your organisation uses M365 Enterprise Copilot, this is generally the safest option for sensitive professional work — assuming your IT team has configured it correctly.

Your personal privacy audit

Work through this checklist for your own life. Every item you tick is a concrete step that reduces your exposure. You don't have to do everything — but you should make the decision consciously.

🔒 Privacy audit
Your AI privacy checklist
Tick each item you've done or commit to doing
Account settings
Opt out of model training on ChatGPT
Settings → Data Controls → Improve the model for everyone → Off
Review chat history settings on every AI tool you use
Decide whether stored history poses a risk for you specifically
Read the privacy policy of your most-used AI tool
At least the "What we collect" and "How we use it" sections — takes 5 minutes
Use separate accounts for personal and professional AI use
Keeps your work data and personal data in separate risk buckets
What you share
Never paste passwords, API keys, or credentials into an AI chat
Even "to check" or "for context" — these go to servers you don't control
Remove identifying details before pasting documents about other people
Replace names, emails, and addresses with [NAME], [EMAIL], etc. before asking AI to process them (see the sketch after this checklist)
Check your company policy before using AI for client work
Many organisations have explicit policies — violating them can be a disciplinary issue
Don't upload photos of other people to AI tools without their consent
Especially for face-based tools — privacy laws in many regions require explicit consent
Be cautious with medical or financial information in AI chats
These are the highest-sensitivity categories. Anonymise where possible.
Browser & apps
Audit browser extensions that have "read all site data" permissions
Grammarly, Jasper, and writing AI extensions read everything you type. Know which ones you have.
Remove AI extensions from your browser when doing sensitive work
Or use a separate browser profile without extensions for confidential tasks
Check what AI your phone keyboard is sending to servers
iOS predictive keyboard, Gboard, and others all have AI features with data implications
Review microphone and camera permissions for AI apps
Voice AI tools especially — know what's being recorded and when
Your data rights
Know that you can request deletion of your data from AI companies
GDPR (EU/UK), CCPA (California), and similar laws give you the right to request deletion
Download your data archive from AI tools you use frequently
ChatGPT, Claude, and others let you export all your conversation history — useful to see what they have
Delete conversation history you wouldn't be comfortable with others seeing
Old sensitive conversations don't need to live on AI servers indefinitely
Understand that "free" AI tools are often funded by your data
The business model matters. Know whether you're the customer or the product.
Set a recurring reminder to review your AI tool privacy settings
Privacy policies change. A quarterly check takes 10 minutes and keeps you current.
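To make the anonymisation item above concrete, here is a minimal sketch in Python. The redact helper and its patterns are hypothetical and deliberately simple; real documents contain identifiers these patterns will miss, so always review the output before pasting it into an AI chat.

```python
import re

def redact(text, names=()):
    """Swap obvious identifiers for placeholders. Illustrative, not exhaustive."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)   # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[PHONE]", text)      # phone-like numbers
    for name in names:                                            # names you list yourself
        text = re.sub(re.escape(name), "[NAME]", text, flags=re.IGNORECASE)
    return text

print(redact(
    "Email Jane Doe at jane.doe@example.com or call +44 20 7946 0958.",
    names=["Jane Doe"],
))
# -> Email [NAME] at [EMAIL] or call [PHONE].
```

For anything high-stakes, a manual read-through beats any pattern matching.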
⚡ The practical rule

Don't share anything with an AI that you wouldn't be comfortable sharing with a stranger who works at a tech company. Not because AI companies are malicious — most aren't — but because data you share can be accessed, leaked, or repurposed in ways you didn't intend. This one test catches about 90% of the situations where people overshare.

Key takeaway

Privacy isn't about being paranoid. It's about knowing what you're trading and deciding whether it's worth it — on your terms, not theirs.

Previous lesson: Spotting AI misinformation