
Gemini 2.5 Pro launched with a native 1M-token context window, enough to process entire books, large codebases, or hours of video with subtitles in a single prompt while maintaining coherent multimodal reasoning. On long-document QA, code comprehension, and video event-chain benchmarks, it significantly outperformed both its predecessors and competing models.
