My top 5 Google I/O demos, from Gemini robots to virtual dressing rooms

The headlining event of Google I/O 2025, the live keynote, is officially in the rearview. However, if you’ve followed I/O before, you may know there’s a lot more happening behind the scenes than what you can find live-streamed on YouTube. There are demos, hands-on experiences, Q&A sessions, and more happening at Shoreline Amphitheatre near Google’s Mountain View headquarters.

We’ve recapped the Google I/O 2025 keynote and given you hands-on scoops about Android XR glasses, Android Auto, and Project Moohan. For those interested in the nitty-gritty demos and experiences happening at I/O, here are five of my favorite things I saw at the annual developer conference today.

Controlling robots with your voice using Gemini

Robot arms picking up objects with Gemini.

(Image credit: Brady Snyder / Android Central)

Google briefly mentioned during its main keynote that its long-term goal for Gemini is to make it a “universal AI assistant,” and robotics has to be a part of that. The company says that its Gemini Robotics division “teaches robots to grasp, follow instructions and adjust on the fly.” I got to try out Gemini Robotics myself, using voice commands to direct two robot arms and move objects hands-free.

The demo uses a Gemini model, a camera, and two robot arms to move things around. The model's multimodal capabilities, namely the live camera feed and microphone input, make it easy to control the robot arms with simple instructions. In one instance, I asked the robot to move the yellow brick, and the arm did exactly that.
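To give a rough sense of how a setup like this fits together, here's a minimal sketch in Python using the publicly available Gemini API. The RobotArm class, the JSON action format, and the model choice are hypothetical placeholders for illustration only; Google hasn't published the actual Gemini Robotics pipeline shown in the demo.

```python
# Illustrative sketch only: the public google-generativeai SDK handles the
# multimodal call, while RobotArm and the JSON action format are hypothetical
# stand-ins for whatever Gemini Robotics uses internally.
import json
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-flash")  # a public multimodal model, not the robotics model

class RobotArm:
    """Hypothetical controller standing in for the real robot-arm API."""
    def move(self, target: str, destination: str) -> None:
        print(f"Moving {target} to {destination}")

def handle_command(frame: Image.Image, spoken_command: str, arm: RobotArm) -> None:
    # Turn a camera frame plus a transcribed voice command into a structured action.
    prompt = (
        "You control a robot arm. Given the camera image and this instruction: "
        f"'{spoken_command}', reply with only JSON like "
        '{"target": "...", "destination": "..."}'
    )
    response = model.generate_content([frame, prompt])
    action = json.loads(response.text)  # simplified; real output may need cleanup
    arm.move(action["target"], action["destination"])

# Example usage with a saved camera frame and a transcribed command:
# handle_command(Image.open("workspace.jpg"), "move the yellow brick to the left bin", RobotArm())
```

The key idea is the same one the demo leans on: the model sees the scene and hears the instruction, then hands a structured action to the hardware layer.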

Gemini's robot arms picking up a gift bag.

(Image credit: Brady Snyder / Android Central)

It felt responsive, although there were some limitations. At one point, I told Gemini to move the yellow piece back to where it was before, and quickly learned that this version of the AI model doesn’t have a memory. But considering Gemini Robotics is still an experiment, that isn’t exactly surprising.
