Safe AI Usage: A Lesson from ChatGPT


2025-08-13


As AI tools like ChatGPT become increasingly embedded in our personal and professional workflows, it's critical to understand the privacy trade-offs—especially when using the free or standard versions.

What many users don’t realize is that chats shared from ChatGPT have been publicly accessible, searchable, and even indexed on platforms like Google.

This case study highlights a growing privacy concern that should serve as a warning to anyone inputting sensitive or proprietary information into AI platforms.

Shared Chats Are Publicly Searchable

Unless you're using an Enterprise version of ChatGPT—which makes up less than 0.01% of users—your data and prompts might not be private.


OpenAI was quick to patch this vulnerability, but until recently, if you used the built-in “Share Chat” feature, your conversations became publicly accessible and discoverable on Google.

It was as simple as a search like “site:chatgpt.com/share ‘google maps api customization’”, which yielded hundreds of pages of user chats with ChatGPT.

Example Topics Found Publicly Shared:

  • “Early AI in Games”

  • “Google Maps API Customization”

  • “Dealing with Guilt and Shame”

Some of these include highly personal reflections, while others may contain sensitive business or technical information.

The visibility of these chats raises serious red flags about using AI tools without strict security controls. Fortunately, OpenAI limited public discoverability once the issue was discovered. But the fix does not change the root problem: by default, ChatGPT does not treat your chats as private, and that hasn’t changed.

[Screenshot: shared ChatGPT chats appearing in Google Search results, with examples]

Treat AI Tools Like Public Forums

Here’s the harsh truth: If you wouldn’t post it on a public blog or paste it into Google Search, don’t enter it into ChatGPT.

While it's tempting to use AI for brainstorming client proposals, analyzing confidential data, or offloading emotional struggles, doing so without understanding how your data is handled could have real-world consequences—legal, reputational, and personal.

In industries like healthcare, finance, or legal services, entering personally identifiable information (PII) into an unsecured AI platform could even violate privacy laws like HIPAA, PIPEDA, or GDPR.

Minimizing Risk Without an Enterprise License

If you still want to use ChatGPT for general assistance but want to reduce your exposure, follow these steps to tighten your account's privacy settings:


Step 1: Disable Model Training

Turn off OpenAI’s use of your chats to improve future model performance.

  • Go to Settings > Data Controls > Improve the Model for Everyone and toggle it Off.
    (By default, this is On.)

Step 2: Delete Public Shared Links

Remove any shared links you may have unintentionally made public.

  • Go to Settings > Data Controls > Manage Shared Links and delete any entries listed.

These steps do not guarantee full privacy, but they offer some protection.

Your data may still be stored and accessible under certain conditions, especially if shared or used in ways not covered by OpenAI’s security promises to free-tier users.

Security Best Practices for Any AI Tool

Whether you’re using ChatGPT, Gemini, Claude, or any other AI platform, these best practices apply:

  • Data Minimization

    • Only input the minimum necessary data. Avoid including client names, proprietary code, addresses, or internal strategies.

  • Masking and Anonymization

    • Before sharing datasets or prompts, remove or mask PII and sensitive details. For example, replace names with placeholders or obscure identifying variables.

  • Enterprise Configuration

    • For businesses, investing in an Enterprise AI license can provide better access controls, data privacy assurances, and audit logs.
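To make the masking and anonymization step concrete, here is a minimal sketch of a pre-processing pass you could run on a prompt before pasting it into any AI tool. The `mask_pii` helper and its regex patterns are illustrative assumptions, not part of any AI platform’s API; pattern-based masking catches obvious identifiers like emails and phone numbers, but names and context-dependent details still need manual review.

```python
import re

def mask_pii(text: str) -> str:
    """Replace common PII patterns with placeholders before sending text to an AI tool.

    This is a simple regex-based sketch: it handles email addresses and
    North American phone numbers. Names, addresses, and proprietary terms
    are NOT detected here and require manual review or a dedicated
    PII-detection tool.
    """
    # Email addresses -> [EMAIL]
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    # North American phone numbers (e.g. 416-555-0199) -> [PHONE]
    text = re.sub(
        r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b",
        "[PHONE]",
        text,
    )
    return text

prompt = "Contact Jane at jane.doe@acme.com or 416-555-0199 about the Q3 numbers."
print(mask_pii(prompt))
# Note: "Jane" is left untouched -- regexes alone won't catch names.
```

Running this yields a prompt with the email and phone number replaced by placeholders, which you can then safely paste into a chat; keep a local mapping if you need to restore the originals in the response.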

Key Takeaways

ChatGPT is powerful—but it is not a secure, private workspace by default.

Understanding the visibility and usage of your data is essential.

As AI becomes more integrated into work and life, your awareness of how it can inadvertently compromise you or your organization must grow with it.

Until strong, user-level privacy controls become the standard, treat ChatGPT like an open forum, not a safe or confidential notebook.
