Most generative AI services rely on user data to train their chatbots. For this, they may turn to public or private data. Some services are less invasive and more flexible about how they collect data from their users. Others, not so much. A new report from data-deletion service Incogni looks at the best and worst AI services when it comes to respecting your personal data and privacy.
For its report "Gen AI and LLM Data Privacy Ranking 2025," Incogni examined nine popular generative AI services and applied 11 different criteria to measure their data privacy practices. The criteria covered the following questions:
- What data is used to train models?
- Can user conversations be used to train models?
- Can prompts be shared with non-service providers or other reasonable entities?
- Can user personal information be deleted from the training data set?
- How clear is it whether prompts are used for training?
- Is it easy to find information on how models have been trained?
- Is there a clear privacy policy for data collection?
- How readable is the privacy policy?
- What sources are used to collect user data?
- Is data shared with third parties?
- What data do the AI apps collect?
The providers and AIs included in the research were Mistral AI's Le Chat, OpenAI's ChatGPT, xAI's Grok, Anthropic's Claude, Inflection AI's Pi, DeepSeek, Microsoft Copilot, Google Gemini, and Meta AI. Each AI fared well on some questions and not so well on others.
Also: Want AI to work for your business? Then privacy needs to come first
For example, Grok earned a good grade for how clearly it conveys that prompts are used for training, but fared less well on the readability of its privacy policy. As another example, the grades given to ChatGPT and Gemini for mobile app data collection differed slightly between the iOS and Android versions.
Across the group, however, Le Chat took first prize as the most privacy-friendly AI service. Though it lost a few points for transparency, it still performed well in that area. In addition, its data collection is limited, and it earned high marks on other AI-specific privacy issues.
ChatGPT ranked second. Incogni's researchers were slightly concerned about how OpenAI's models are trained and how user data interacts with the service. But ChatGPT clearly presents the company's privacy policies, lets you understand what happens to your data, and provides clear ways to limit how your data is used.
(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Grok came in third place, followed by Claude and Pi. Each had issues in certain areas but, overall, respected the privacy of its users.
"Mistral AI's Le Chat is the least invasive platform, with ChatGPT and Grok following closely behind," Incogni said in its report. "These platforms ranked highest when it comes to how transparent they are about how they use and collect data, and how easy it is to opt out of having personal data used to train underlying models. ChatGPT turned out to be the most transparent about whether prompts will be used for model training and had a clear privacy policy."
As for the lower half of the list, DeepSeek took sixth place, followed by Copilot, then Gemini. That left Meta AI in last place, rated the least privacy-friendly AI service of the group.
Also: How Apple plans to train its AI on your data without sacrificing your privacy
Copilot scored the worst of the nine services on the AI-specific criteria, such as what data is used to train the models and whether user conversations can be used in training. Meta AI earned the worst grade for its overall data collection and sharing practices.
"The platforms developed by the biggest tech companies turned out to be the most privacy-invasive, with Meta AI (Meta) being the worst, followed by Gemini (Google) and Copilot (Microsoft)," Incogni said. "Gemini, DeepSeek, Pi AI, and Meta AI do not seem to allow users to opt out of having prompts used to train the models."
In its research, Incogni noted that AI companies share data with various parties, including service providers, law enforcement, members of the same corporate group, research partners, affiliates, and third parties.
"Microsoft's privacy policy implies that user prompts may be shared with 'third parties that perform online advertising services for Microsoft or that use Microsoft's advertising technologies,'" Incogni said in the report. "DeepSeek's and Meta's privacy policies indicate that prompts can be shared with companies within their corporate group. Meta's and Anthropic's privacy policies can reasonably be understood to indicate that prompts are shared with research collaborators."
With some services, you can prevent your prompts from being used to train the models. This is the case with ChatGPT, Copilot, Mistral AI, and Grok. With other services, however, stopping this type of data collection does not seem possible, according to their privacy policies and other resources. These include Gemini, DeepSeek, Pi AI, and Meta AI. On this question, Anthropic said that it never collects user prompts to train its models.
Also: Your data is probably not ready for AI - here's how to make it trustworthy
Finally, a transparent and readable privacy policy goes a long way toward helping you determine what data is collected and how to opt out.
"Having an easy-to-use, simple help section that allows users to search for answers to privacy-related questions proved to greatly improve transparency and clarity, as long as it's kept up to date," Incogni said. "Many platforms have similar data handling practices; however, companies like Microsoft, Meta, and Google suffer from having a single privacy policy covering all of their products, and a long privacy policy doesn't necessarily mean it's easy to find answers to users' questions."