Major consumer AI systems (ChatGPT, Gemini, Copilot, Meta AI, etc.) do more than ‘answer your questions’: they also learn from you. Every message, every preference, every personal detail may be reused to train their models, and most people either do not know this or do not know how to opt out.
The CNIL (Commission nationale de l'informatique et des libertés) is the French public authority responsible for protecting personal data and ensuring privacy in the digital world. It does more than regulate: it also empowers users.
It provides a practical guide (available here: https://www.cnil.fr/fr/ia-comment-sopposer-la-reutilisation-de-ses-donnees-personnelles-entrainement-agent-conversationnel) that explains in concrete terms how to object to the reuse of personal data by the main AI services.
The constant development of conversational agents (ChatGPT, Gemini, Copilot, Meta AI, etc.) raises an important question: what happens to our personal data once we have provided it to these tools? More and more companies are reusing conversation histories, user profiles and even public content to train and improve their artificial intelligence models.
The CNIL now offers a practical, very concrete resource that explains, service by service, how to object to this reuse of personal data. The guide does not take a position on whether these practices comply with the GDPR; it simply gives users immediate means of action.

This guide provides step-by-step instructions on where to click and which options to disable on the main AI platforms on the market. For example:

- Google: how to disable ‘Activity in Gemini apps’, at the risk of deleting your conversation history;
- Meta: how to fill out the objection forms to refuse the use of your information (even if you don’t have an account);
- ChatGPT: where to uncheck the ‘Improve the model for everyone’ option in the settings;
- LinkedIn: how to withdraw permission to use your profile data to train its generative AI models.

The document also covers Copilot (Microsoft), Grok (X), DeepSeek, Le Chat (Mistral), Claude (Anthropic) and WhatsApp, with precise instructions on which menus to open, the exact names of the sections and the forms to use.
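For compliance teams that want to track whether these opt-outs have actually been applied across an organisation, a simple machine-readable checklist can help. The sketch below is a minimal, hypothetical Python example: the service names and setting labels are taken from the guide's descriptions above, while the checklist structure and the `remaining` helper are purely illustrative inventions, not anything the CNIL provides.

```python
# Hypothetical opt-out checklist. Service names and setting labels come from
# the CNIL guide; the tracking structure itself is purely illustrative.
OPT_OUT_CHECKLIST = {
    "ChatGPT (OpenAI)": "Uncheck 'Improve the model for everyone' in settings",
    "Gemini (Google)": "Disable 'Activity in Gemini apps' (may delete history)",
    "Meta AI / WhatsApp": "Objection form (works even without an account)",
    "LinkedIn": "Withdraw permission to train generative AI on profile data",
    "Copilot (Microsoft)": "See CNIL guide for the exact menu path",
    "Grok (X)": "See CNIL guide for the exact menu path",
    "DeepSeek": "See CNIL guide for the exact menu path",
    "Le Chat (Mistral)": "See CNIL guide for the exact menu path",
    "Claude (Anthropic)": "See CNIL guide for the exact menu path",
}

def remaining(confirmed: set[str]) -> list[str]:
    """Return the services whose opt-out has not yet been confirmed."""
    return [service for service in OPT_OUT_CHECKLIST if service not in confirmed]

if __name__ == "__main__":
    reviewed = {"ChatGPT (OpenAI)", "LinkedIn"}  # example: already done
    for service in remaining(reviewed):
        print(f"TODO: {service} -> {OPT_OUT_CHECKLIST[service]}")
```

In practice, the authoritative reference remains the CNIL guide itself, since menu names and form locations change frequently.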
In practical terms, this resource allows everyone to regain a minimum of control: limiting data collection, preventing the reuse of their exchanges for training purposes, and exercising their right to object. It is useful both for the general public (who simply want to know ‘where to uncheck the box’) and for professionals and compliance teams who need to warn employees about the risks of sharing sensitive data in these tools. In a context where AI is becoming ubiquitous (at work, in messaging, on social media), clear instructions focused on user rights are essential; the guide was last updated on 16 October 2025.
For a long time, AI has been presented as a black box: powerful, fascinating, inevitable, but not necessarily negotiable. The CNIL's guide shifts that balance: it reminds us that your personal data is not free currency to be spent on training models without your consent, and that you have the right to object.
In other words: ‘AI needs data to learn, but your data needs limits.’
Adjusting these settings is not paranoia. It is basic digital hygiene, just like choosing a strong password or enabling two-factor authentication.
The CNIL guide provides you with technical instructions, service by service. It is then up to you to decide, with full knowledge of the facts, what you agree to share — and what you refuse to share.