Is Real-Time AI the Game-Changer for Experiential Spaces in 2024?

Real-Time AI in Experiential Spaces

Image generators have been part of the studio's toolkit for a while, but they've always lived offline. Prompt, wait, render, review. What's shifting in 2024 is the wait. Real-time AI is closing the gap between input and output to the point where the audience can sit inside the loop, and that changes what an experiential space can be.

Why real-time AI matters for experiential in 2024

Current state of play

Stable Diffusion, Runway, Midjourney and DALL-E have widened the creative ceiling for everyone. They're also, mostly, offline tools. You queue a render and come back to it. That cadence works fine for a finished campaign asset, less so for a gallery floor where the visitor expects the work to react to them. Data visualisation in exhibitions has gotten more sophisticated year on year, but the prompt-and-instantly-generate loop hasn't really arrived in the room with the audience yet.

The next step forward in AI

This is the year that starts to change. StreamDiffusion and the systems chasing it are pushing frame-by-frame generation toward something fast enough to feel responsive. The hard problem isn't model quality any more; it's latency. Close that gap and the whole shape of an activation changes: a screen the audience prompts, a surface that paints itself in response to the room, a brand world generated live for whoever's standing in front of it.
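To make the latency trade-off concrete, here's a stripped-down sketch of that kind of frame loop. It leans on the off-the-shelf diffusers library and the public SD-Turbo checkpoint rather than StreamDiffusion's full optimisation stack, and the prompt, frame source and step counts are placeholders, not production settings.

```python
# Minimal near-real-time image-to-image loop: one effective denoising step,
# no classifier-free guidance, and a per-frame latency readout.
import time
import torch
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image

pipe = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/sd-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

source = load_image("camera_frame.png").resize((512, 512))  # stand-in for a live feed

while True:
    start = time.perf_counter()
    frame = pipe(
        prompt="ink-wash landscape, soft light",   # placeholder prompt
        image=source,
        num_inference_steps=2,                     # steps * strength >= 1 for SD-Turbo
        strength=0.5,
        guidance_scale=0.0,                        # guidance off means one model pass per step
    ).images[0]
    print(f"frame in {(time.perf_counter() - start) * 1000:.0f} ms")
```

The number that print statement reports is the latency budget the rest of the experience has to live inside.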

User experience in real time

Picture an interactive space where every movement turns into an image. That's the territory real-time AI opens up. Visuals that shift with a gesture, prompts a visitor types or speaks, styles that morph from one second to the next. Sound, motion and language all feeding the same generative pipe. The piece a brand commissions stops being a finished video and becomes a set of rules the audience finishes themselves, every visit different from the last.
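What those rules might look like is easy to sketch, even if the real mapping ends up subtler. The function below is purely illustrative: the input names are made up, and the numbers are stand-ins for whatever a given piece actually needs.

```python
# Hypothetical mapping from live audience inputs to per-frame generation settings.
# gesture_energy and audio_level are assumed to be normalised to 0..1 upstream.

def build_generation_params(spoken_prompt: str, gesture_energy: float, audio_level: float) -> dict:
    """Fold language, motion and sound into one set of parameters for the next frame."""
    return {
        # The visitor's words drive the prompt, with a house style appended.
        "prompt": f"{spoken_prompt}, painterly, volumetric light",
        # Bigger gestures push the output further from the incoming camera frame.
        "strength": min(0.3 + 0.5 * gesture_energy, 0.9),
        # Louder moments trade steps for speed so the visuals keep pace with the sound.
        "num_inference_steps": 2 if audio_level > 0.6 else 4,
    }
```

Feed that dictionary into the generation call each frame and the piece becomes exactly that: a set of rules, finished by whoever is in the room.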

Real-time AI generative visuals concept for experiential design

Striking a balance between potential and limitations

The hardware is the catch. Real-time Stable Diffusion eats GPU, and minimising latency without losing image quality usually means a high-end rig per output. That limits scale and rules out certain venues. Pair it with a live AV brain like TouchDesigner or Notch, though, and the pipe gets a lot more flexible. Generated content can be projected onto surfaces, fed into LED screens, or sequenced into a live show. The constraints are real, but so is the room they leave for design.
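The handoff between the generator and the AV brain can be surprisingly plain. The sketch below shares the latest frame through a block of shared memory that a TouchDesigner Script TOP polls; it assumes both processes run on the same machine and agree on a 512x512 RGBA frame, and in practice a Spout or NDI bridge often does the same job across machines.

```python
# Generator side: publish the most recent frame into shared memory.
import numpy as np
from multiprocessing import shared_memory

WIDTH, HEIGHT = 512, 512
shm = shared_memory.SharedMemory(create=True, size=WIDTH * HEIGHT * 4, name="gen_frame")
shared_view = np.ndarray((HEIGHT, WIDTH, 4), dtype=np.uint8, buffer=shm.buf)

def publish(frame: np.ndarray) -> None:
    """Copy the latest generated RGBA frame where the AV host can see it."""
    shared_view[:] = frame
```

On the TouchDesigner side, a Script TOP can pick the frame up on every cook:

```python
# Callbacks DAT of a Script TOP inside TouchDesigner.
import numpy as np
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(name="gen_frame")
shared_view = np.ndarray((512, 512, 4), dtype=np.uint8, buffer=shm.buf)

def onCook(scriptOp):
    scriptOp.copyNumpyArray(shared_view)  # push the newest frame into the TOP
```

From there the frame is just another texture: project it, send it to the LED processor, or cut it into the show like any other layer.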

For a closer look at how this stacks up in a real pipeline, our Future Sounds Good experiment uses Stable Diffusion and TouchDesigner to drive audio-reactive visuals end to end.

Where Solarflare is taking it

We're adapting our pipelines for both brand activations and gallery work, with real-time AI sitting alongside the more traditional computer-graphics layers we already lean on. The early trials have told us a lot about where the seams are: where the latency budget gets tight, which prompts behave on stage, what kinds of input the audience actually reaches for. Get in touch if you have a brief where this might earn its keep.

