Google has introduced an enhanced version of its Gemini Deep Research agent, arriving around the same time as the release of OpenAI's GPT-5.2 model. Powered by the capabilities of Gemini 3 Pro, the release targets complex, multi-step investigation and information synthesis, initially for developers and soon for consumer applications.
The agent is designed to manage tasks that require gathering and synthesizing large amounts of context over long periods. It operates through an autonomous reasoning core that plans its own investigation. Upon receiving a prompt, the agent iteratively formulates queries, reads the results, identifies any knowledge gaps in the collected information, and searches again. This self-guided process allows the system to navigate complex data landscapes with enhanced accuracy.
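The self-guided loop described above — formulate queries, read the results, find knowledge gaps, and search again — can be sketched as a simple Python function. This is a hypothetical illustration of the general pattern, not Google's implementation; `search`, `read`, `find_gaps`, and `synthesize` are placeholder callables standing in for the agent's internal tools.

```python
# Hypothetical sketch of an iterative deep-research loop (illustrative only):
# plan, search, read, identify gaps, and repeat until no gaps remain.

def deep_research(prompt, search, read, find_gaps, synthesize, max_rounds=5):
    """Run a self-guided research loop over a pluggable search backend.

    All tool arguments are assumed placeholder callables, not real APIs.
    """
    notes = []                  # accumulated context across rounds
    queries = [prompt]          # first query derived from the user prompt
    for _ in range(max_rounds):
        for q in queries:
            notes.extend(read(search(q)))   # gather and read results
        gaps = find_gaps(prompt, notes)     # what is still unknown?
        if not gaps:
            break                           # enough context collected
        queries = gaps                      # search again to fill the gaps
    return synthesize(prompt, notes)        # produce the final report
```

The loop bounds itself with `max_rounds` so a noisy gap detector cannot keep the agent searching forever — a practical guard for any long-running autonomous process.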
Google vs. OpenAI: Gemini’s upgraded Deep Research agent challenges GPT-5.2 in accuracy
The main goal of this upgrade is to reduce risk. Google trained the Deep Research agent specifically to limit AI hallucinations, cutting down on the false information the model generates. For deep-reasoning tasks, where even a small mistake at the beginning of a multi-step investigation can render the final report useless, a high level of factual accuracy is key.
Google validated the agent's performance by posting strong results on advanced reasoning benchmarks. These include Humanity's Last Exam (HLE), which measures general knowledge and complex reasoning, and the open-sourced DeepSearchQA benchmark, designed by Google itself to evaluate comprehensive, multi-step web research.
This high precision is already proving useful in demanding fields. Early testing shows the agent is effective in financial services, where it can automate the initial, labor-intensive stages of due diligence, and in biotech, where it helps accelerate drug safety research by analyzing vast biomedical literature.
Integration and accessibility
To facilitate external development, Google simultaneously launched the new Interactions API. This is a unified interface for developers to work with the Gemini models and integrate the Deep Research agent directly into their applications. The move accelerates the ability of third parties to build custom, AI-powered research tools.
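An integration along these lines might assemble a request for a long-running research task and hand it to the API. The sketch below is purely illustrative: the payload fields, agent selector, and model name are assumptions for demonstration, not the documented Interactions API schema.

```python
# Hypothetical sketch of preparing a deep-research request for a unified
# agent API. Field names and values are illustrative assumptions only.
import json

def build_research_request(question, model="gemini-3-pro"):
    """Assemble an illustrative request body for a long-running research task."""
    return {
        "model": model,              # assumed model identifier
        "agent": "deep-research",    # assumed agent selector
        "input": question,
        "background": True,          # research runs can take minutes, so
                                     # the task would execute asynchronously
    }

payload = build_research_request("Summarize recent biosimilar approvals")
print(json.dumps(payload, indent=2))
```

In a real integration, a developer would serialize such a payload and poll or stream the agent's progress until the final report is ready; the exact mechanics depend on the published API.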
For consumers, Google plans to integrate the Deep Research agent’s capabilities into its flagship services. The list includes the main Gemini app, Google Search, and its research application, NotebookLM. The result should be the AI handling the preliminary heavy lifting of research, changing how users interact with information online.