Google I/O 2025: 5 Key Takeaways as Google Goes All In on Gemini

May 22, 2025
  • Google provided updates on the latest developments across its software platforms on Day 1 of its annual developer conference.
  • The key message from Google is clear – Gemini will be part of the Google experience, regardless of service and device form factor.
  • Google also gave a brief demo of how it is extending Android XR beyond smartphones to AI glasses and other form factors.


At Google’s ongoing annual developer conference, I/O 2025, the search giant showcased a more powerful, contextual and personal Gemini on Day 1 of the event. Google also highlighted changes in Search, along with new creative and developer tools – all driven by GenAI. With all this progress, Google is looking to create new user experiences that are helpful and proactive – while at the same time tying users, businesses and developers to the company’s services as a one-stop shop for all things AI.

Here are our top 5 highlights from Day 1 of Google I/O 2025:

Gemini – A Proactive AI Assistant

Google envisions Gemini as the vehicle that delivers agentic AI experiences to its user base. To achieve this, Google wants to build Gemini into a "world model" – an AI assistant capable of not just responding, but understanding context, making plans and completing tasks on behalf of the user. The key to this is bringing capabilities derived from Project Astra – video understanding, screen sharing, and enhanced memory – directly into Google products like Gemini Live and Search Live. If users allow Gemini to access their content across Google apps, Gemini can start anticipating upcoming requests, such as initiating learning sessions ahead of an upcoming test.

Google’s Gemini products including Gemini Live, Veo 3, Gemini in Chrome, Imagen 4, Deep Research, Canvas, and Agent Mode. Source: Google.

For enterprises, Gemini 2.5 Flash and Pro are also receiving substantial upgrades – including Thought summaries, Deep Think mode, and Advanced security. These features let businesses build sophisticated and secure AI-driven solutions, particularly in coding and tasks requiring advanced reasoning, by providing documentation for how a solution was developed and the reasoning behind it.

Google Puts AI in Search and Shopping

Google Search is evolving from an information retriever into an intelligence engine. The company started this transformation with AI Overviews; these enhanced responses to queries are now available in over 200 countries and more than 40 languages. Importantly, Google is starting to roll out ‘AI Mode’ in Search in the US. AI Mode takes Search beyond links, providing answers in a more holistic view, including additional content that goes beyond the main question. ‘Shop with AI Mode’ will add an agentic experience to reduce friction during checkout in the US. Google Labs users in the US can ‘try on’ clothing items using a photo of themselves as a model. The on-screen view does not simply superimpose clothes over the uploaded image; it adjusts the clothes to individual body shapes – in theory delivering a more personalized result.

Google presenter speaking at Google I/O about the Gemini App’s body-mapping feature.

Next-gen AI Media Models and Tools

Google is adding AI tools for creators and developers. Google’s cloud console tool Vertex AI features new GenAI media models – Imagen 4 for higher quality image generation, Veo 3 for video generation with integrated audio and speech, and Lyria 2 for greater creative control in music generation. The company also announced ‘Flow,’ an AI-powered filmmaking tool designed for Google’s most advanced models. Flow lets storytellers combine AI-generated content with existing material, for example, to expand the scope of a shot or create new characters based on existing material.

For developers, the new Agent Development Kit (ADK), Agent Engine UI, and enhancements to the Agent2Agent (A2A) protocol (with support from partners like Microsoft, SAP, and Zoom) enable connections between ecosystems. This is one of the main announcements from the keynote, as it creates a path for collaboration between the growing number of AI models and platforms. It is imperative that AI agents and platforms can function together across multiple ecosystems to fully deliver on the promise of AI. If cross-platform collaboration is not enabled, each system stays limited to its own silo.

AI Expanding Into XR

Google’s AI is extending into XR. Google showed off how Android XR will integrate Gemini in glasses and headsets, with partners like Samsung (Project Moohan). Gemini will enhance XR experiences by understanding what users are viewing and enabling hands-free actions for content consumption and work-related tasks.

Slide presentation on future Google headsets and smart glasses. Source: Google.

Gemini Beyond Smartphones

Ahead of I/O, Google made exciting announcements related to Android and other devices in a dedicated online event. Gemini connects all things Google. Not only will Android devices gain enhanced features via the latest updates to Gemini, but WearOS and cars will also gain agentic AI capabilities in the near future. In vehicles, Gemini and Gemini Live will be available via voice and touch prompts as part of the user experience. Users can ask the same queries as on a handset, search for points of interest, navigate to them and even add stops. However, Gemini will also be integrated into the vehicle system and will have knowledge about the vehicle – allowing drivers to change settings in the car (like air conditioning) and ask questions about the vehicle (like those found in the manual).

The ability to share a screen or camera view with Gemini is a major enhancement for accessibility. Gemini Live’s conversational abilities give users detailed information about their surroundings and also allow them to ask related questions.

Slide presentation on pricing for the Google AI Pro and Google AI Ultra subscription plans. Source: Google.

Lastly, Google changed the parameters of AI subscription pricing. The company introduced Google AI Ultra, a new subscription plan for $249.99/month in the US. This plan provides the highest usage limits and access to Google's most capable models and premium AI features, catering to power users.

This year's I/O event was not about individual product updates but was rather a statement about Google's ambition to weave AI into each of its offerings, aiming to make technology more intuitive, helpful, and more human.


Published

May 22, 2025

Author

Gerrit Schneemann

Gerrit has 17 years of experience in the telecoms and consumer electronics industry. With a long history of covering the global smartphone market, he provides clients with strategic insights and advice impacting short- and long-term business needs and decisions. Before joining Counterpoint Research, he spent over a decade at iSuppli, IHS Markit and finally Omdia, before a short stint at GfK Boutique.