
MAY 2025 TECH REPORT: GOOGLE I/O, AI FILMMAKING & SMART WEARABLES


May delivered a wave of breakthrough tools that blur the lines between AI creation, physical interaction, and consumer reality. Google’s I/O showcase dominated headlines with integrated AI workflows, while smart glasses evolved from prototype to practical. Meanwhile, immersive installations proved that spatial storytelling doesn’t always need complex tech – sometimes it’s about clever perspective and emotional design.

Here’s what caught our attention this month – and how we see it reshaping interactive experiences.

🤖 GOOGLE I/O 2025 SHOWCASE

Google’s annual developer conference dominated May with a comprehensive AI and XR showcase. The event revealed integrated workflows, practical wearables, and commerce-ready tools that signal AI moving from experimental to essential.

1 - Google Flow – Complete AI Filmmaking Platform

What’s new?

Google unveiled Flow at I/O 2025 – a unified platform combining Veo (video), Imagen (image), and Gemini (text) into one filmmaking interface. Creators can explore over 600 films and their exact prompts via Flow TV, making AI video creation transparent and accessible.

Why it matters

This is the first serious platform designed to make complex multi-modal AI creation accessible to brand teams and creative professionals. It removes technical barriers and makes high-quality video prototyping available to anyone who can write a prompt.

Where it’s useful

  • Client presentations: Create compelling mood films and concept videos in minutes, not days
  • Campaign ideation: Test visual styles and narratives before committing to production budgets
  • Social content: Generate branded short-form videos tailored to platform requirements
  • Stakeholder communication: Show rather than tell when presenting experiential concepts to decision-makers

 

Our opinion

We’re excited by Flow’s transparency – seeing prompts alongside outputs is rare in AI tooling and makes it easier for teams to learn and iterate. This will reshape the industry by democratising high-end creative capabilities, allowing brands to prototype and iterate concepts at unprecedented speed. We see this sparking a new era of creative experimentation where the barrier between idea and execution almost disappears.

2 - Google Veo 3 – Physics-Aware Video Generation

What’s new?

DeepMind’s Veo 3 generates video with realistic physics, sound effects, and dialogue from a single prompt. Key breakthrough: hyperreal water physics and multi-modal synchronisation that keeps visuals, audio, and movement aligned.

Why it matters

This elevates AI video from “interesting toy” to “usable tool.” The physics engine and audio sync mean prototypes can feel cinematic and believable – crucial for pre-sale presentations and creative boards.

Where it’s useful

  • Product demonstration videos with realistic physics
  • Atmospheric brand films for mood and tone-setting
  • Quick environmental storytelling for experiential concepts
  • Social content with believable motion and sound design

 

Our opinion

This is the most impressive text-to-video we’ve seen, and it’s clearly only the start. We’re super excited to be working with this around new mediums of creative storytelling. It won’t make content good on its own – you still need craft and creative skills alongside it – but it’s a powerful new tool in the arsenal. If AI progress holds, this is the worst it will ever be, which is still incredible.
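Because everything hangs on a single prompt, it helps to draft prompts in a structured way – subject, camera, physics cues, audio cues. Here is a minimal sketch of that habit; the field names are purely our own convention, not part of any Veo or Flow API:

```python
# Illustrative only: a structured way to draft multi-modal video prompts.
# Field names are our own convention, not part of any Veo/Flow API.
from dataclasses import dataclass

@dataclass
class VideoPrompt:
    subject: str   # what the camera sees
    camera: str    # framing and movement
    physics: str   # motion cues the model should honour
    audio: str     # sound design and dialogue cues

    def render(self) -> str:
        # Collapse the fields into a single prompt string.
        return (f"{self.subject}. Camera: {self.camera}. "
                f"Physics: {self.physics}. Audio: {self.audio}.")

prompt = VideoPrompt(
    subject="A glass of water tipping over on a wooden table",
    camera="slow-motion macro, shallow depth of field",
    physics="realistic water spill, droplets catching the light",
    audio="soft splash, room tone, no music",
)
print(prompt.render())
```

Keeping the physics and audio cues explicit plays to Veo 3’s multi-modal synchronisation, and makes iterations between team members easier to compare.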

3 - Google Android XR Glasses with Gemini AI

What’s new?

Google teased smart glasses with deep Gemini AI integration ahead of I/O 2025. Expected features include real-time translation, environmental understanding, and hands-free contextual assistance.

Why it matters

This technology moves beyond prototype stage to deliver real utility that consumers will use daily. For brands, it opens opportunities for persistent ambient experiences that layer helpful information into physical spaces without requiring app downloads or QR codes.

Where it’s useful

  • Retail flagship stores: Provide product information and personalised recommendations through subtle visual overlays
  • Live events and festivals: Offer real-time navigation, artist information, and contextual content without disrupting the experience
  • Tourism and hospitality: Enable multilingual support and location-based storytelling in hotels, museums, and attractions
  • Training and onboarding: Deliver hands-free, contextual guidance in warehouse, retail, or industrial environments

 

Our opinion

All of this is heading in the direction of wearable technology, which we’ve been highlighting for months. We believe a shift is happening where this will potentially become the norm – ambient, always-on information that feels natural rather than intrusive.

Credit: Android XR

4 - Google AI Try-On Integration

What’s new?

Google’s AI-powered “Try-On” feature integrated directly into Search allows users to visualise clothing across different poses and body types using generative modelling and personalisation.

Why it matters

This integration fundamentally changes customer expectations for online shopping. When try-on experiences become as standard as product photos, brands that don’t offer visual interaction will feel outdated. It also reduces purchase hesitation and return rates – solving real business problems.

Where it’s useful

  • E-commerce optimisation: Reduce return rates and increase conversion by letting customers see products on themselves before buying
  • Product launch campaigns: Create interactive reveals where audiences can immediately try new collections
  • Retail store integration: Bridge online and offline with digital mirrors that access full inventory
  • Social commerce: Enable try-on experiences within social media shopping flows for impulse purchases

 

Our opinion

The fact that this is native to Google Search changes everything. No app downloads, no filters – just utility where people already shop. Retail brands should explore integrating their catalogues immediately and test conversion lift versus standard product pages.
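Testing conversion lift is the easy part: split traffic between try-on-enabled and standard product pages, then compare rates. A minimal sketch with illustrative numbers (a real test also needs a significance check):

```python
# Minimal conversion-lift check for an A/B test (try-on vs. standard page).
# All numbers are illustrative; a real test needs a significance analysis.
def conversion_rate(conversions: int, visitors: int) -> float:
    return conversions / visitors

def relative_lift(control_rate: float, variant_rate: float) -> float:
    """Relative uplift of the variant over the control, e.g. 0.25 = +25%."""
    return (variant_rate - control_rate) / control_rate

control = conversion_rate(conversions=180, visitors=10_000)  # standard pages
variant = conversion_rate(conversions=234, visitors=10_000)  # try-on enabled
print(f"lift: {relative_lift(control, variant):+.1%}")  # → lift: +30.0%
```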

👓 WEARABLE TECH BEYOND GOOGLE

5 - Reebok x Innovative Eyewear – AI Sport Glasses

What’s new?

Reebok launched AI-integrated smart eyewear designed specifically for sports performance, featuring high-fidelity speakers, outdoor-tuned amplifiers, and real-time coaching capabilities.

Why it matters

Fitness wearables are evolving into performance tools. This opens new terrain for real-time coaching, biometric analysis, and ambient content delivery during workouts or sport-related brand experiences.

Where it’s useful

  • Immersive fitness experiences and guided workouts
  • Real-time performance coaching in sports activations
  • Audio-guided brand experiences in outdoor settings
  • Hands-free content delivery during physical activities

 

Our opinion

We love what Reebok are doing here and why they’re doing it. With our experience in fitness, athletics, and sports, we’re looking forward to continuing to explore wearable technology and eyewear that provides data streams. Biometric and performance data can be fed into interactive, immersive experiences – creating personalised moments that respond to your actual physical state and performance.
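As a sketch of that feedback loop: a live heart-rate reading can be normalised into a 0–1 “intensity” parameter that visuals or audio in an installation respond to. The zone bounds below are illustrative, not from any Reebok spec:

```python
# Sketch: map a live heart-rate reading onto a 0-1 "intensity" parameter
# that an installation's visuals/audio could respond to. The resting and
# max heart-rate bounds are illustrative defaults, not a product spec.
def intensity(heart_rate_bpm: float, rest: float = 60.0, max_hr: float = 190.0) -> float:
    """Normalise heart rate into [0, 1], clamped at the zone bounds."""
    t = (heart_rate_bpm - rest) / (max_hr - rest)
    return min(1.0, max(0.0, t))

for bpm in (55, 90, 150, 200):
    print(bpm, round(intensity(bpm), 2))
```

The same mapping works for any streamed metric – cadence, pace, breathing rate – which is what makes eyewear-as-sensor interesting for experiential work.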


6 - AI-Enhanced First-Person Capture

What’s new?

FOOH (Fake Out Of Home) experiences typically require heavy 3D rendering, animation, and compositing into scenes. However, new workflows now combine first-person footage from Meta Ray-Ban glasses with AI-powered generative editing tools. This allows creators to capture authentic moments passively, then enhance with AI post-production in a relatively straightforward process.

Why it matters

This “record now, edit later” model streamlines content production for live environments. It combines authentic capture with AI-driven scene manipulation for agile creative workflows.

Where it’s useful

  • Authentic UGC-style content with post-production polish
  • Live event documentation with cinematic enhancement
  • Travel and lifestyle brand content creation
  • Ambassador programs with professional-grade output

 

Our opinion

This is another angle showing why AI is fast-tracking creative capabilities. We’re excited to continue working with this approach, especially as we’ve had experience working in FOOH before. It’s a game-changer for content creation workflows.

Credit: Runway, Cristóbal Valenzuela
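The “record now, edit later” model is really just a three-stage pipeline: capture everything, gate for quality, then enhance. A toy sketch below; the enhance step is a placeholder for whatever generative tool sits in your workflow, and nothing here calls a real API:

```python
# "Record now, edit later" as a toy pipeline: filter raw first-person
# clips, then apply a post-production step. The enhance function is a
# placeholder for a generative editing tool; no real API is called.
from typing import Callable

def run_pipeline(clips: list[str],
                 keep: Callable[[str], bool],
                 enhance: Callable[[str], str]) -> list[str]:
    """Keep only clips that pass the quality gate, then enhance each one."""
    return [enhance(c) for c in clips if keep(c)]

raw = ["walk_01.mp4", "walk_02_shaky.mp4", "plaza_03.mp4"]
edited = run_pipeline(
    raw,
    keep=lambda c: "shaky" not in c,                    # crude quality gate
    enhance=lambda c: c.replace(".mp4", "_enhanced.mp4"),  # stand-in AI step
)
print(edited)  # → ['walk_01_enhanced.mp4', 'plaza_03_enhanced.mp4']
```

The value of the pattern is that capture stays passive and authentic; all creative decisions move downstream, where they’re cheap to redo.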

🌊 IMMERSIVE SPACES & VISUAL STORYTELLING

7 - Height-Based Immersive Installations

What’s new?

Viral installations using transparent surfaces and projected visuals create powerful depth illusions. Visitors stand on raised platforms while digital content below induces emotional intensity through clever spatial design and perspective tricks.

Why it matters

This proves that powerful emotional reactions don’t require complex interactive systems. Simple spatial design combined with digital motion and psychological perception can create unforgettable moments.

Where it’s useful

  • Museum exhibits and cultural installations
  • Retail showrooms with memorable brand moments
  • Pop-up experiences focused on emotional impact
  • Event spaces where visitors become part of the story

 

Our opinion

Sometimes you just need really cool-looking content for installations, and this really delivers on that. The user’s emotional response is triggered not by jump scares but by depth illusion and height. It’s a relatively straightforward experience that works at both small and large scales – you can imagine this concept adapted for intimate gallery spaces or massive event installations.

Video Credit – bbanzzak_mom
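The illusion leans on basic perspective: under a pinhole model, projected size falls off with 1/distance, so drawing layered content as if it sits at increasing depths below the platform sells real depth. A minimal sketch (focal length is an arbitrary unit here):

```python
# Why looking "down" into projected content feels deep: under a pinhole
# camera model, apparent size falls off with 1/distance, so content layered
# at fake depths reads as real depth. Focal length is an arbitrary unit.
def projected_size(real_size: float, distance: float, focal: float = 1.0) -> float:
    """Apparent size of an object at a given distance from the viewer."""
    return real_size * focal / distance

# The same 1 m shape drawn as if 2 m, 4 m and 8 m below the platform:
for d in (2, 4, 8):
    print(f"{d} m -> {projected_size(1.0, d):.3f}")
```

Halving the apparent size per layer is enough for the brain to read a flat projection as a drop, which is why these installations need no headsets or tracking.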

🧱 INTERACTIVE SPACES & GESTURE TRACKING

8 - Real-Time Light and Sound Installations

What’s new?

A perfect example of low-complexity, high-impact content designed for installations and experiences. It showcases the interplay between lasers, light, sound, and physical objects – with LED strips embedded in specific objects so that everything reacts and lights up in harmony.

Why it matters

This is a perfect use case for real-time, user-driven visuals that relate to audio, light, and human haptic feedback. The technical overhead is minimal, but the output creates dynamic, responsive environments.

Where it’s useful

  • Small-scale events requiring tactile, interactive moments
  • Gallery installations with ambient, responsive lighting
  • Brand experiences that respond to presence and movement
  • Event spaces where physical interaction drives visual storytelling

 

Our opinion

We see these experiences bringing together multiple elements – installation and fabrication, lighting, LED strips, integrated tech, motion, and audio input – all combining to create a real-time sense of presence. For small-scale events that need to be tactile and fun, it shows how simple tech can create profound interactive moments when thoughtfully orchestrated.

Video Credit – Justin Kittell & James Sartor
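The audio-to-light link at the heart of these setups is simple: take a window of audio samples, compute its loudness (RMS), and map that onto an LED brightness value. A sketch of that mapping; a real installation would read a live input and drive hardware (e.g. via DMX or an LED controller):

```python
# Sketch of the audio-to-light link: compute the loudness (RMS) of a window
# of audio samples and map it onto an 8-bit LED brightness level. A real
# installation would read a live input and drive actual LED hardware.
import math

def rms(samples: list[float]) -> float:
    """Root-mean-square loudness of a window of [-1, 1] audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def brightness(samples: list[float]) -> int:
    """Map RMS loudness onto an LED level in 0-255."""
    return min(255, int(rms(samples) * 255))

quiet = [0.05 * math.sin(i / 10) for i in range(100)]  # low-level hum
loud = [0.9 * math.sin(i / 10) for i in range(100)]    # strong signal
print(brightness(quiet), brightness(loud))
```

Swap brightness for hue, strobe rate, or laser angle and the same few lines describe most of what these installations do per frame – which is exactly why the technical overhead stays minimal.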

9 - HYPERVSN Hologram Displays – Mobile 3D DOOH

What’s new?

HYPERVSN launched the first mobile 3D hologram truck, displaying moving holographic content while driving through major cities – taking high-precision holographic displays to new locations, scenes, and experiences.

Why it matters

This is just another step forward in how you can utilise high-precision displays and holographic technology in different environments. It creates fun, interesting, eye-catching visuals that bring immersive media to high-traffic areas without requiring fixed infrastructure.

Where it’s useful

  • Product launches requiring maximum attention
  • Campaign stunts in areas without screen real estate
  • Event marketing with portable, striking visuals
  • Seasonal brand roadshows and touring activations

 

Our opinion

We’ve worked with HYPERVSN on many occasions, and this is just another medium – another way of getting your message, brand, or experience out there in a striking way. It’s portable spectacle that demonstrates the evolution of display technology.

🎮 CREATOR-LED GAMING WORLDS

10 - Photoreal Creator Worlds in Gaming

What’s new?

Gaming platform creators are building photoreal environments that rival traditional game studios, shifting from simple aesthetics to detailed, immersive worlds while maintaining rapid iteration and creator-first economics.

Why it matters

The “YouTube-ification” of gaming is becoming reality. UGC is surpassing AAA production cycles for some content types, meaning brand world-building might shift from major studio production into creator-led ecosystems.

Where it’s useful

  • Youth-focused brand worlds and virtual experiences
  • Product placement in high-engagement gaming environments
  • Community-driven brand storytelling and user-generated content
  • Virtual events and concerts within gaming platforms

 

Our opinion

There’s so much you can do on Roblox: if you have a clear idea of the content you want to create, the tools let you reduce barriers and be as creative as you like – while tapping into massive traction, reach, and user engagement. It’s becoming a serious creative platform, not just a game.

Video Credit: Stephen Dypiangco

🧰 INDUSTRY COMMENTARY & TRENDS

11 - 3D Digitisation Technology for Immersive Experiences

What’s new?

Advanced 3D scanning technology creates incredibly detailed, fully explorable 3D models with spatial annotations and real parallax effects that run directly in web browsers.

Why it matters

This technology creates incredibly detailed experiences that run in the browser with low barrier to entry, but you can still get up close and personal with all the information and detail. It’s photoreal fidelity without requiring downloads or special software.

Where it’s useful

  • Museums and galleries for interactive exhibitions
  • Sporting stadiums and behind-the-scenes experiences
  • Event venues and locker room tours
  • Heritage sites and cultural storytelling
  • Product showcases requiring detailed inspection

 

Our opinion

Picture this across installations, museums, galleries, but also sporting stadiums, events, locker rooms, behind-the-scenes experiences – there’s so much you can do. The quality is impressive, and spatial annotations add real editorial depth for any environment you want to digitise and share.

Credit: Arrival.Space
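The “real parallax” these browser viewers deliver comes from a depth-dependent screen shift: when the viewpoint slides sideways, nearer points move further across the screen than deeper ones, so annotations pinned at different depths separate convincingly. A minimal sketch (units are arbitrary):

```python
# The parallax cue in a nutshell: for a lateral camera move, a point's
# on-screen shift is inversely proportional to its depth, so annotations
# pinned at different depths separate convincingly. Units are arbitrary.
def parallax_shift(camera_shift: float, depth: float, focal: float = 1.0) -> float:
    """Screen-space shift of a point at a given depth for a lateral camera move."""
    return camera_shift * focal / depth

near = parallax_shift(0.1, depth=1.0)
far = parallax_shift(0.1, depth=10.0)
print(near, far)  # near points shift 10x more than points 10x deeper
```

Because this is just per-point arithmetic, it runs comfortably in a browser at interactive frame rates – which is what keeps the barrier to entry so low.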

12 - Interactive Experiences for the Web – Pushing Creative Boundaries

What’s new?

Advanced web technologies are enabling breakthrough interactive experiences that create stunning organic visuals driven entirely by code and creative algorithms, with no heavy asset files required.

Why it matters

The latest web technology capabilities allow for real-time creative experiences without asset dependency. This means truly responsive, beautiful visuals using just creative code and the latest browser capabilities – making powerful interactive experiences accessible to any device with a web browser.

Where it’s useful

  • Brand microsites with generative, ambient visuals
  • Web-based installations and interactive galleries
  • Lightweight immersive experiences on mobile
  • Live, responsive websites that adapt to user interaction

 

Our opinion

This shows how far web technology has advanced – you can now create lightweight, fully interactive, and beautiful experiences that work across all devices. It opens up live, generative websites for brands demanding high visual impact with minimal technical overhead.

Credit: Niklas Niehus
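Asset-free generative visuals are just math evaluated per frame: code emits geometry, the browser draws it, and nothing is downloaded. As a small illustration of the idea, here is a Lissajous-style curve generator sketched in Python; a web version would compute the same points and draw them to a canvas on each animation frame:

```python
# Asset-free visuals are just math evaluated per frame. This sketch emits
# the points of an evolving Lissajous-style closed curve; a browser version
# would compute the same points and draw them to a canvas each frame.
import math

def curve_points(t: float, n: int = 200) -> list[tuple[float, float]]:
    """Sample an organic closed curve; `t` is time, so the shape animates."""
    pts = []
    for i in range(n):
        a = 2 * math.pi * i / n
        # Harmonics modulate the base curve to give it an organic wobble.
        x = math.sin(3 * a + t) * (1 + 0.3 * math.sin(5 * a))
        y = math.cos(2 * a) * (1 + 0.3 * math.cos(7 * a + t))
        pts.append((x, y))
    return pts

frame = curve_points(t=0.0)
print(len(frame))  # one frame's worth of points, no assets involved
```

A few dozen lines like this, re-evaluated with a new `t` every frame, is the whole payload – which is why these experiences load instantly on any device with a browser.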

May 2025 saw significant advancements in AI and wearable technology.

These technologies are becoming increasingly practical and impactful, transforming how we create, interact, and experience the world. The lines between physical and digital are blurring, and brands are finding new ways to connect with audiences through personalised and immersive experiences.

This progress signals a future where innovation is accelerating, and the possibilities for creative expression and engagement are expanding rapidly.


LET’S TALK