Google unveiled a flood of AI announcements at I/O 2025, designed to arm developers with next-generation models and tools.
Mat Velloso, VP of Product for the AI Developer Platform at Google DeepMind, said: “We believe developers are the architects of the future. That’s why Google I/O is our most anticipated event of the year, and a perfect moment to bring developers together and share our efforts for all the amazing builders out there.”
Hot on the heels of an upgrade to the Gemini 2.5 Pro Preview a few weeks back, which sharpened its coding capabilities, Google has now pulled back the curtain on a comprehensive suite of enhancements spanning its developer ecosystem. The clear objective? To make building AI applications a smoother, more powerful, and more intuitive process.
Gemini 2.5 Flash Preview: Sharper, faster, more controllable
Leading the charge is an updated version of the Gemini 2.5 Flash Preview model. Google announced that this new iteration boasts “stronger performance on coding and complex reasoning tasks that is optimised for speed and efficiency.” This offers developers a potent blend of high-end capability with the agility needed for rapid development and deployment.
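For developers keen to try the preview straight away, a minimal sketch using the google-genai Python SDK might look like the following. The model identifier is an assumption based on the preview naming and may differ from the id listed in Google AI Studio.

```python
# Minimal sketch: calling the updated Gemini 2.5 Flash Preview via the
# google-genai Python SDK. The model id is an assumption for illustration;
# check Google AI Studio for the current preview name.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # or set the GOOGLE_API_KEY env var

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-05-20",  # assumed preview model id
    contents="Write a Python function that merges two sorted lists.",
)
print(response.text)
```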
It’s not just about raw power, though. Google is also championing greater insight and command over its models.
“Thought summaries are now available across our 2.5 models,” the company revealed, adding that they “will bring thinking budgets to 2.5 Pro Preview soon to help developers further manage costs and control how our models think before they respond.”
This directly addresses developers’ desires for more granular control and better cost-efficiency – crucial for real-world application.
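As a rough illustration of what that control could look like in practice, here is a sketch using the thinking configuration exposed by the google-genai Python SDK. The field names and budget value are assumptions for illustration, not parameters confirmed by the announcement.

```python
# Sketch: capping the model's "thinking" and requesting thought summaries
# via the google-genai Python SDK. Configuration fields shown here are
# assumptions for illustration; consult the SDK docs for supported options.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-05-20",  # assumed preview model id
    contents="Plan a migration from REST to gRPC for a payments service.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(
            thinking_budget=1024,   # cap reasoning tokens to manage cost
            include_thoughts=True,  # return a summary of the model's reasoning
        ),
    ),
)
print(response.text)
```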
For those eager to get their hands on these, both the updated Gemini 2.5 Flash and the existing 2.5 Pro are currently in Preview within Google AI Studio and Vertex AI. Wider general availability for Flash is pencilled in for early June, with Pro set to follow suit soon after.
New models for diverse developer needs
Recognising that one size rarely fits all in the dynamic world of AI development, Google has expanded its model arsenal considerably.
First up is Gemma 3n, which Google describes as its “latest fast and efficient open multimodal model engineered to run smoothly on your phones, laptops, and tablets.” Aiming to be a multimodal all-rounder, Gemma 3n handles audio, text, image, and video inputs.
Joining the lineup is PaliGemma, a new vision-language model tuned for tasks like image captioning and visual question-answering. This will be a boon for developers working on applications that need to ‘see’ and understand visual information.
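To give a sense of how a vision-language model like this is typically used, here is a brief captioning sketch with the PaliGemma classes in Hugging Face transformers. The checkpoint id is an earlier public release used as a placeholder, not the model announced here.

```python
# Sketch: image captioning with a PaliGemma checkpoint via Hugging Face
# transformers. The checkpoint id is a placeholder for whichever
# PaliGemma weights you have access to.
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"  # placeholder checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("street_scene.jpg")  # any local image
inputs = processor(text="caption en", images=image, return_tensors="pt")

output = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output[0], skip_special_tokens=True))
```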
For those needing to generate images at breakneck speed, Gemini Diffusion has been introduced. An experimental demo showcased Gemini Diffusion generating content at five times the speed of Google’s previous flagship model, all while matching its coding performance.
Creative developers haven’t been forgotten. Lyria RealTime is an “experimental interactive music generation model that allows anyone to interactively create, control and perform music in real time.” This opens up exciting new avenues for interactive audio experiences.
The versatile Gemma family continues to branch out, offering more tailored solutions:
- MedGemma: This is being pitched as Google’s “most capable open model for multimodal medical text and image comprehension.” It’s designed for developers to adapt and build innovative health applications, such as those involving the intricate analysis of medical images.
- SignGemma: An upcoming open model with a vital purpose: translating sign languages into spoken language text. Currently best at American Sign Language to English, its aim is to “enable developers to create new apps and integrations for Deaf and Hard of Hearing users.”
Google sharpens AI tools for developers at I/O 2025
Beyond the models themselves, Google is rolling out updates and new tools designed to take the friction out of AI development.
A “new, more agentic Colab” is on the horizon. Google promises this will “soon be a new, fully agentic experience. Simply tell Colab what you want to achieve, and watch as it takes action in your notebook, fixing errors and transforming code to help you solve hard problems faster.”
Gemini Code Assist, the AI-coding companion for individual developers, and its counterpart for collaborative work, Gemini Code Assist for GitHub, have both now hit general availability. In a key upgrade, “Gemini 2.5 now powers Gemini Code Assist, and a 1 million token context window will come to Gemini Code Assist Standard and Enterprise developers when it’s available on Vertex AI.”
Making the journey from concept to full-stack AI app even smoother is Firebase Studio, a new cloud-based AI workspace. Developers can “bring Figma designs to life right in Firebase Studio using the builder.io plugin.” As of the announcement, Firebase Studio is also beginning to roll out the ability to detect when an app needs a backend and provision one automatically.
Asynchronous coding agent Jules is now available to everyone. The idea behind Jules is that it “gets out of your way, so you can focus on the coding you want to do, while Jules picks up the random tasks that you’d rather not.”
Jules can tackle bug backlogs, juggle multiple tasks, and even take a first pass at building out new features. It integrates directly with GitHub, cloning repositories to a Cloud VM and preparing pull requests.