A wave of confusion hit the internet after claims spread that Google had used people’s Gmail messages to train its Gemini AI. The bold claim triggered discussions across social media platforms and forums. In response, Google has denied that Gmail content was used to train Gemini and has worked to correct the rumors. The whole fiasco began with a misinterpretation of Gmail’s Smart Features.

Google responds after claims Gemini was trained using Gmail messages

The initial wave of confusion emerged when Malwarebytes published a story claiming that Google had quietly changed Gmail settings in a way that could allow user messages to be used to train Gemini. The timing and phrasing of the claim were enough to trigger panic across social media, and many users began questioning Google about its data handling and privacy practices.

Thankfully, Google acted quickly to address the confusion. Google spokesperson Jenny Thomson clarified the issue in a statement, confirming that the circulating reports were inaccurate and based on a misunderstanding. She emphasized that Smart Features in Gmail have been available for years, that the settings were not altered without user consent, and that Gmail messages are not used to train the company’s Gemini model.

Gemini is trained on a separate dataset

In the official statement, Google confirms that Gemini is trained on a completely separate dataset and that personal Gmail accounts are never a part of it. Malwarebytes later updated its article to correct the misunderstanding, though it blamed Google’s wording for the confusion.

The publication reviewed additional documents and confirmed that while Gmail does scan messages to enable built-in features, this does not mean Gmail content is used to train generative models like Gemini. Moreover, Google reiterated that Smart Features are opt-in: users are given full control, and those who do not enable the feature are not enrolled.