Google Announces Deep Research Max: Autonomous Research Agents on Gemini 3.1 Pro

📋 Official-source news. The content is reported neutrally and does not constitute an editorial endorsement.

Google has announced Deep Research Max, the latest generation of its autonomous research agents built on Gemini 3.1 Pro. The announcement, published April 21, 2026, introduces two distinct configurations: Deep Research, optimized for speed and interactivity, and Deep Research Max, designed for maximum comprehensiveness through extended test-time compute.

The video published on the Google for Developers channel features testimonials from early adopters in the financial and pharmaceutical sectors, who describe tackling complex research questions in days rather than weeks.

Speed vs. thoroughness: the dual agent approach

According to Google, Deep Research (the standard tier) replaces the December release, offering reduced latency and cost with improved quality, which makes it well suited to interactive user interfaces. Deep Research Max leverages extended test-time compute to iteratively reason, search, and refine the final report. For Max, Google reports scores of 93.3% on DeepSearchQA and 54.6% on HLE (Humanity’s Last Exam).

The distinction between the two agents reflects a structural tradeoff in AI agent design: speed versus thoroughness. As VentureBeat noted, Deep Research is intended for financial dashboards and near-real-time responses, while Max targets asynchronous workflows such as overnight due diligence reports ready by morning.
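The announcement does not include code, but the tier choice can be pictured as a single parameter swap. The sketch below is illustrative only: it assumes an interactions-style endpoint in the google-genai Python SDK, and the agent identifiers, `background` flag, and method names are assumptions rather than documented API surface.

```python
# Illustrative sketch only: the endpoint, agent identifiers, and flags
# below are assumptions about the Interactions API, not documented names.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

QUESTION = "Summarize the competitive landscape for solid-state batteries."

# Standard tier: lower latency and cost, suited to interactive UIs.
fast = client.interactions.create(
    agent="deep-research",        # assumed identifier
    input=QUESTION,
)

# Max tier: extended test-time compute, suited to asynchronous jobs
# such as an overnight due-diligence report.
thorough = client.interactions.create(
    agent="deep-research-max",    # assumed identifier
    input=QUESTION,
    background=True,              # assumed flag for async execution
)
```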

MCP: opening up to proprietary data

One of the most significant additions is support for the Model Context Protocol (MCP), which allows Deep Research to query private databases, internal document repositories, and third-party data services without sensitive information leaving its source environment. Google is collaborating with FactSet, S&P Global, and PitchBook to integrate their financial data streams via MCP servers.
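Google has not published the exact configuration shape, but connecting an MCP server would plausibly look like registering it as a tool on the request. Everything below beyond the basic SDK client is an assumption: the `mcp_server` tool type, its field names, and the endpoint URL are illustrative placeholders.

```python
# Illustrative sketch: the "mcp_server" tool type and its fields are
# assumptions; the endpoint URL is a placeholder for a private server.
from google import genai

client = genai.Client()

interaction = client.interactions.create(
    agent="deep-research-max",                      # assumed identifier
    input="Compare Q3 revenue across our portfolio companies.",
    tools=[{
        "type": "mcp_server",                       # assumed tool type
        "url": "https://mcp.example.internal/sse",  # placeholder endpoint
    }],
)
# The agent queries the MCP server directly, so records stay at the source
# and only retrieved results enter the research context.
```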

Multimodality and native visualizations

For the first time in the Gemini API, Deep Research natively generates charts and infographics within reports, rendering them as HTML or producing them with Nano Banana, Google’s image-generation model. The agent also accepts multimodal inputs (PDFs, CSVs, images, audio, and video) as grounding context for the research.
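As a rough illustration of multimodal grounding, the sketch below uploads a PDF and a CSV and passes them alongside the prompt. The Files API upload call exists in the google-genai SDK today; the interactions call and the way attachments are threaded into it are assumptions.

```python
# The Files API upload exists in the google-genai SDK; the interactions
# call and how attachments are passed with the prompt are assumptions.
from google import genai

client = genai.Client()

filing = client.files.upload(file="10k_filing.pdf")
metrics = client.files.upload(file="metrics.csv")

interaction = client.interactions.create(
    agent="deep-research-max",    # assumed identifier
    input=[
        "Analyze the revenue risks disclosed in the filing and chart "
        "them against the attached metrics.",
        filing,
        metrics,
    ],
)
# Per the announcement, the resulting report can embed charts as HTML
# or as Nano Banana images; the exact return shape is not specified.
```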

Partner testimonials

In the video, FactSet representatives emphasize the importance of data reliability in the financial sector: “You can have innovation and the most advanced features, but if the data is not rock-solid, our customers will not use it.” Axiom, which works on predicting clinical trial outcomes, highlights the ability to surface information buried in complex documents: “Often what you need to know is on page 80 of a very long PDF.” The multimodal capability, which combines sentiment from video and voice with quantitative data, is described as adding a narrative richness that traditional research lacks.

Availability

Both agents are available in public preview through paid tiers of the Gemini API, accessible via the Interactions API introduced in December 2025. Additional features include collaborative planning (reviewing the research plan before execution), real-time streaming of intermediate steps, and multimodal input grounding.
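A hedged sketch of how collaborative planning and streaming might fit together follows; the `plan_only` flag, the `stream` method, and the event objects are assumptions about the Interactions API, not documented parameters.

```python
# Illustrative sketch: plan_only, stream, and the event objects are
# assumptions about how collaborative planning and streaming could look.
from google import genai

client = genai.Client()

# Collaborative planning: request the research plan before execution.
plan = client.interactions.create(
    agent="deep-research",        # assumed identifier
    input="Map 2025 M&A activity in European fintech.",
    plan_only=True,               # assumed flag
)
print(plan.output_text)           # review or edit the plan here

# Approve the plan and stream intermediate steps in real time.
for event in client.interactions.stream(previous_interaction=plan.id):
    print(event)                  # searches, reasoning, and draft sections
```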
