Google – Morphing Clay

// services
Interactive Installation
Experiential Design
Real-Time Visuals
Real-time gesture-driven AI ceramics experience built with Future Deluxe and Bit Studio for Google. Participants morph AI-generated clay forms using hand gestures, exhibited across two live events.
Shaping AI with your hands
Morphing Clay was built with Future Deluxe and Bit Studio for Google, exhibited across two live events as part of a programme exploring the creative possibilities of AI. The premise was direct: participants stand in front of a screen, make gestures, and watch AI-generated clay forms respond and transform in real time.

The interaction feels physical even though nothing physical is happening. That tension between gesture and digital material was the design intent from the start. We wanted people to feel like they were sculpting, not operating a screen.

For Google, the experience needed to communicate a specific idea: that AI is a medium for human creativity, not a replacement for it. The clay metaphor does that work without requiring any explanation. You shape it. It responds to you.
Real-time gesture recognition at event scale
Running a real-time AI experience in a busy event environment is a different engineering problem from running it in a controlled demo context. Lighting conditions change. Multiple people may be in frame. The system has to remain stable and responsive across hours of continuous use.

We built the gesture recognition layer to handle noisy conditions robustly, with fallback behaviours that degrade gracefully rather than breaking. The AI model generating the clay forms was optimised for low-latency inference so the response felt immediate rather than computational.

Calibration between gesture input and visual output was one of the most time-intensive parts of the build. The relationship between hand movement and clay deformation needed to feel intuitive on first contact, without any onboarding.
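The fallback behaviour described above can be sketched in miniature. This is a hypothetical illustration, not the installation's actual code: the names (`GestureFilter`, `IDLE_POSE`, the thresholds) are all invented for the example. It shows the general pattern of smoothing noisy tracker samples and easing back to an idle pose when tracking confidence drops, so the visuals never snap or freeze in front of a crowd.

```python
from dataclasses import dataclass

# All constants below are illustrative assumptions, not production values.
IDLE_POSE = (0.5, 0.5)   # hypothetical resting position for the clay's control point
CONFIDENCE_FLOOR = 0.6   # below this, treat the tracker output as noise
SMOOTHING = 0.25         # EMA weight for new samples; lower = steadier but laggier
DECAY = 0.05             # how quickly we drift back to idle when tracking is lost


@dataclass
class GestureFilter:
    """Smooths noisy hand positions and degrades gracefully when tracking drops."""
    x: float = IDLE_POSE[0]
    y: float = IDLE_POSE[1]

    def update(self, sample, confidence):
        if sample is not None and confidence >= CONFIDENCE_FLOOR:
            # Blend the new sample in; an exponential moving average
            # suppresses per-frame jitter from changing lighting or
            # multiple hands crossing the frame.
            sx, sy = sample
            self.x += SMOOTHING * (sx - self.x)
            self.y += SMOOTHING * (sy - self.y)
        else:
            # No trustworthy hand in frame: ease back toward the idle
            # pose instead of snapping, so the clay keeps moving
            # naturally rather than stuttering or jumping.
            self.x += DECAY * (IDLE_POSE[0] - self.x)
            self.y += DECAY * (IDLE_POSE[1] - self.y)
        return self.x, self.y
```

The same shape of logic applies regardless of the tracking library: accept input only above a confidence floor, smooth what you accept, and make the "no input" branch an animation rather than a halt.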
What it demonstrated for Google
The events gave Google a live demonstration of AI creativity that was participatory rather than observational. Attendees were not watching a model generate images; they were co-authoring something with it, in public, in front of other people.

That format generates a different quality of attention and memory than a passive demo. People tried things, compared results with other participants, and formed opinions about what the AI could and could not do.

Across two events the installation handled a high volume of participants without performance degradation. The technical reliability mattered as much as the creative concept, because an AI experience that stutters in front of a crowd communicates the wrong message entirely.