AI Privacy Face-Off: Who’s Using Your Data—and Who Isn’t | Splat

Privacy and AI: Which AI Platforms Deserve Your Trust — And Which Don’t



Like everyone else, I’ve spent a lot of time in the interfaces of all the major AI platforms. AI is revolutionizing nearly every field; marketing in particular is experiencing cataclysmic changes.

It’s impossible to have been doing this as long as I have and remain skepticism-free about the personal data that finds its way into the databases of the major online players.

We’ve all seen it: you Google a health symptom once and spend the next two weeks being followed around by ads for prescription medications. You browse a pair of boots and suddenly Instagram thinks you’re on a wilderness expedition. The connection between what we search and what shows up in our feeds isn’t a coincidence—it’s business.

In the interest of indulging my skepticism, and of demystifying the data privacy standards prevalent in AI today, I spent some time reviewing the privacy policies of today’s major AI platforms.

The verdict? A few good citizens. And a few platforms you might want to keep at arm’s length.

Specifically, I dug into the fine print on ChatGPT, Claude, Gemini, Meta AI, Perplexity, and DeepSeek. Some platforms make strong commitments to privacy. Others bury broad rights in legal language or casually mention they might share your data with “trusted partners.”

This post breaks it down—platform by platform.


What’s Really Happening With Your Data?

Here’s what’s really happening: almost every AI system collects your inputs. That’s the point. Your prompts teach the model how to respond better.

But here’s what you might not know:

  • Some of that data gets stored—indefinitely.
  • Some gets used to train future versions.
  • Some gets passed to affiliates or outside vendors.
  • And depending on the platform, deleting it might be more fiction than fact.

So before you use AI to plan a strategy, troubleshoot code, or draft something sensitive, ask: where does this go next?


Key Privacy Questions You Should Be Asking

  • Who owns the data I input into the system?
    Some platforms let you retain ownership. Others claim broad rights over your content, especially if it’s used for training.
  • Is my data used to train future AI models?
    Unless you opt out, it often is. That means your prompts could be absorbed into future versions—even if anonymized.
  • Can I delete my data completely?
    Some platforms allow it, others offer partial deletion, and a few provide no clear path at all.
  • What kind of data is collected beyond the prompt?
    Think device metadata, IP address, geolocation, and session history. Some platforms go even further.
  • Is my data shared with anyone else?
    Affiliates. Service providers. Analytics partners. The language is often vague, but the sharing is real.
  • Are security protocols strong enough to protect my data?
    Encryption is typically used in transit and at rest, but leaks can still happen—especially during training or storage.
  • What happens if the company is sold or shuts down?
    Unless their policy says otherwise, your data might be transferred to new owners.
  • Is the AI provider compliant with global privacy regulations?
    GDPR, CCPA, HIPAA—all matter, depending on what data you input and where you’re based.

The Comparative Table: What the Major Platforms Really Say

Below is the side-by-side comparison you won’t find on the homepage. It lays out how each major AI platform actually handles your data.

ChatGPT (OpenAI)
  • User input ownership: User retains ownership; data may be used for training unless opted out
  • Data resale: No
  • Policy permanence: Subject to change
  • Data deletion: Yes; user can delete data
  • Used for training: Yes, unless opted out
  • Data shared with: No resale; may use for training unless opted out

Claude (Anthropic)
  • User input ownership: User owns outputs; data not used for training by default
  • Data resale: No
  • Policy permanence: Subject to change
  • Data deletion: Yes; data deleted upon request
  • Used for training: No, unless opted in
  • Data shared with: None by default

Perplexity
  • User input ownership: User retains ownership; data not sold
  • Data resale: No
  • Policy permanence: Subject to change
  • Data deletion: Yes; user can delete account and data
  • Used for training: No, unless opted in
  • Data shared with: None unless opted in

DeepSeek
  • User input ownership: Broad rights claimed; user inputs may be used
  • Data resale: Yes; may be shared with third parties
  • Policy permanence: Subject to change
  • Data deletion: Limited information
  • Used for training: Yes
  • Data shared with: Third parties and affiliates

Gemini (Google)
  • User input ownership: User retains ownership; data used to improve services
  • Data resale: No
  • Policy permanence: Subject to change
  • Data deletion: Yes; user can delete data
  • Used for training: Yes
  • Data shared with: Affiliates and service providers

Meta AI
  • User input ownership: User retains ownership; public data may be used
  • Data resale: No direct resale; data shared with affiliates/service providers
  • Policy permanence: Subject to change
  • Data deletion: Yes; data can be deleted through account settings
  • Used for training: Yes
  • Data shared with: Affiliates and service providers

Some platforms—Claude, Perplexity, and to some extent ChatGPT—do a decent job of minimizing risk. Others, like DeepSeek, offer little clarity and reserve sweeping rights over your input. That’s not a good sign.

Meta avoids the word “sell,” but its use of vague terms like “affiliates” and “trusted partners” should raise your eyebrows. Gemini (Google), for all its institutional polish, still uses your data to improve services—and doesn’t always clarify where that line gets drawn.

The vaguer the policy, the more caution you should exercise.


Closing Thought

AI isn’t magic. It’s software. It’s code. And it runs on the words, patterns, and data you feed it.

That means every time you use an AI platform—whether you’re writing a sales email or mapping out a new idea—you’re handing over information. The question is: do you trust the people behind the curtain?

Before you hit return, stop and ask:

Who else is seeing this?

Because privacy isn’t paranoia. It’s preparation.
