April was the month XR stopped pretending it lived behind a screen. Wearables got more capable, gesture spaces got more responsive, and a few of the big platforms started behaving like they expect people to actually use this stuff in public. Here's what earned a place in our toolkit.
Twelve things worth your time this month
Smart glasses, wearable AI and personal AR
1. Meta Ray-Ban smart glasses, EU expansion
Meta rolled live translation, visual search and object identification out to more EU countries on the Ray-Ban line. The hardware story is unchanged; the interesting shift is that the prompts are landing in someone's eyeline rather than their pocket. We've been prototyping audio-glance-gesture loops for cultural venues for a while, and this is the first month the off-the-shelf kit feels close to deployable for a brief that ships. Meta AI rollout.
2. The AR glasses race: Meta vs Google vs Apple
Google previewed its Gemini-powered glasses at TED 2025 with on-object information and live translation. Apple's Project N50 keeps cooking. Meta's Orion and Ray-Ban lines keep scaling. Three platform owners visibly committed to glasses isn't just a hardware bet; it's an ecosystem bet, and the ecosystem is what actually decides whether briefs get built. We've already pushed our R&D toward activations that don't assume a phone in your hand. April made that look less like a guess.
Conversational AI
3. GPT-4.1, web search plus brand memory
OpenAI's 4.1 release blends live web search with internal file memory in the same response. For us that's the difference between a chatbot that sounds clever and one that can plausibly hold down a job at a brand activation. We delivered a project this month that ran exactly this pattern, internal training plus live data, and the experience read more like a knowledgeable host than a kiosk. Voice-driven product advisors, generative narration on demand and branded interview booths all become more honest pitches now. GPT-4.1 release.
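The pattern itself is simple enough to sketch. Here's a toy version in Python with every name and data source invented for illustration: one answer composed from a static "brand memory" plus a stubbed live lookup, standing in for the way the release blends file memory with web search.

```python
# Hypothetical sketch of the "internal training plus live data" pattern.
# BRAND_MEMORY stands in for uploaded brand files; fetch_live_data stands
# in for a web search tool call. None of this is OpenAI's actual API.

BRAND_MEMORY = {
    "returns": "Items can be returned within 30 days with a receipt.",
    "origin": "The brand was founded in 2012 in Bristol.",
}

def fetch_live_data(topic):
    # Stand-in for a live search lookup; a real build would call a tool here.
    live = {"stock": "The flagship model is back in stock today."}
    return live.get(topic)

def answer(topic):
    # Compose one reply from both sources, the way a host would.
    parts = []
    if topic in BRAND_MEMORY:
        parts.append(BRAND_MEMORY[topic])
    live = fetch_live_data(topic)
    if live:
        parts.append(live)
    return " ".join(parts) if parts else "Let me check with a colleague."

print(answer("returns"))
print(answer("stock"))
```

The point of the sketch is the composition step: neither source alone makes a convincing host, but one reply drawing on both reads like someone who works there.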
Multi-sensory immersion and simulated motion
4. Robotic-arm motion rides
A demo doing the rounds this month strapped a VR headset to an industrial robotic arm and ran a high-speed dragon ride with full-body G-force, wind and sound in sync with the visuals. It's a familiar trick from theme parks, but the precision and footprint have improved enough that we're now scoping a scaled-down version for corporate events. Movement as a narrative driver, not a finishing flourish on top of a film.
Interactive spaces and gesture tracking
5. Gesture walls and spatial games
Two activations crossed our feed this month using real-time body tracking to power gamified walls and gesture-based painting. No handsets, no buttons, just movement. The reason it matters for our work is dwell time. In our recent fan-park installs, body-led interactions hold a visitor two to three times longer than tap-and-go ones, and they extend naturally to a queue.
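Dwell time is also trivially measurable from the same tracking feed. A minimal sketch, assuming enter/exit events per tracked visitor (the event format and visitor IDs here are invented; a real install would read skeleton IDs from the tracking SDK):

```python
from collections import defaultdict

# Hypothetical event log: (visitor_id, event, timestamp_seconds).
events = [
    ("visitor_1", "enter", 0.0), ("visitor_1", "exit", 42.5),
    ("visitor_2", "enter", 5.0), ("visitor_2", "exit", 19.0),
]

def dwell_seconds(events):
    """Total seconds each tracked visitor spent in the interaction zone."""
    entered = {}
    totals = defaultdict(float)
    for visitor, kind, t in events:
        if kind == "enter":
            entered[visitor] = t
        elif kind == "exit" and visitor in entered:
            totals[visitor] += t - entered.pop(visitor)
    return dict(totals)

print(dwell_seconds(events))
```

Numbers like the two-to-three-times figure above come from exactly this kind of log, compared across interaction types.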
6. Platform 9¾ illusions, magic on a budget
Universal Orlando's Platform 9¾ queue uses an angled mirror and good lighting to make guests appear to vanish through a brick wall. No sensors. No headset. The reason we keep flagging things like this is that low-tech stagecraft tends to outlast its high-tech equivalents on uptime, maintenance and the actual feeling in the room. Worth remembering when the brief defaults to "and then add AR".
7. AI-driven visual storytelling
A clutch of installations this month paired AI-generated visuals with live sensor input, so the projection responds to who's in the room rather than running a fixed loop. The use case we keep coming back to is museums, where content has to feel fresh for the third visit, not just the first. We're prototyping adaptive visual systems on exactly this brief, and it's where the technique earns its keep.
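The control loop is simple to sketch. Assuming a sensor feed that reports room occupancy and recognised return visits (both invented inputs here, with placeholder thresholds), the generation parameters come from the room rather than a playlist:

```python
def scene_params(occupancy, is_return_visit):
    """Map live sensor readings to generation parameters.

    Thresholds and parameter names are illustrative placeholders,
    not tuned values from a real install.
    """
    density = "sparse" if occupancy < 3 else "dense"
    palette = "variant" if is_return_visit else "house"
    return {"density": density, "palette": palette}

# A busy room on a repeat visit gets denser, varied visuals.
print(scene_params(occupancy=5, is_return_visit=True))
```

Everything downstream, the actual image generation, hangs off a mapping like this; the mapping is what makes the third visit feel different from the first.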
Immersive training and XR simulation
8. Virtual fire drills and safety simulation in headsets
VR drills are getting good enough to stand in for real-world training: warehouse fires, evacuations, decision-making under pressure, with metrics out the back. Quest and Vision Pro get most of their press for entertainment, but the steady, unglamorous client demand we're seeing sits in training: repeatable, measurable, no risk to the trainee.
Industry signals
9. Meta opens scene-understanding APIs on Quest
Quest 3 and 3S developers now get scene-understanding APIs that anchor and persist virtual content in physical space across sessions. Persistent anchoring has been the missing piece for mixed-reality experiences that feel like part of a room rather than a thing floating in it. We're already retrofitting an upcoming installation around it, focused on content that reacts to the visitor's actual environment. Forbes coverage.
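The persistence pattern these APIs make practical can be sketched in a few lines. This is our own illustration, not Meta's SDK: key each virtual object's pose to a stable anchor ID, serialise the map at session end, restore it on relaunch.

```python
import json
import uuid

def save_anchor(anchors, position, rotation):
    """Record a pose and return the ID a later session can re-anchor by."""
    anchor_id = str(uuid.uuid4())
    anchors[anchor_id] = {"position": position, "rotation": rotation}
    return anchor_id

def persist(anchors):
    return json.dumps(anchors)   # written to storage at session end

def restore(blob):
    return json.loads(blob)      # read back on the next launch

# One session places a prop; the next finds it where it was left.
session_one = {}
prop_id = save_anchor(session_one, [0.0, 1.2, -0.5], [0, 0, 0, 1])
session_two = restore(persist(session_one))
print(session_two[prop_id]["position"])
```

The real APIs handle the hard part, relocalising the anchor against the physical room; what installations supply is the bookkeeping above, content keyed to anchors that survive a restart.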
10. Humane AI Pin, a useful failure
TechCrunch's autopsy of the Humane AI Pin landed this month: short battery, fuzzy purpose, ambitious concept, expensive lesson. Worth reading if you're scoping anything wearable. The takeaway for us isn't that wearable AI is broken; it's that "shrink the existing interface" isn't a product strategy. Clarity of purpose is the thing visitors register first, and it's what we keep optimising for in our own installs. TechCrunch podcast.
11. Meta Reality Labs layoffs
Meta cut staff inside Reality Labs, mostly around the Supernatural fitness app, while keeping core VR/AR investment intact. Read it as the XR market consolidating around models that actually generate revenue, not as Meta walking away. The practical lesson for clients is one we've been giving for a year: design XR content that travels across platforms, because any single ecosystem could shift under you. The Verge coverage.
12. 100 uses for a robot vacuum
Researchers at the University of Hertfordshire published a list of 100 things a domestic robot vacuum could do beyond cleaning floors: entertaining the cat, watering plants, ferrying reminders, choreographing ambient movement around a room. It reads like a parlour game. It's also a textbook reframing exercise: the hardware was already in the house, the imagination wasn't. We bookmark these because creative repurposing usually beats new hardware on a fixed budget. BBC News.
What we're taking into May
The thread for April is that XR is leaving the screen behind in three places at once: on the face, in the room, and inside the system prompt. Wearables, gesture spaces and brand-trained AI are all maturing toward something that holds up in front of an actual audience. Get in touch if any of this maps to something you're trying to build.



