That awkward moment when you realise the Artificial Intelligence you just programmed has more rhythm than you.
Multi-sensory experiences have longer-lasting effects on memory and association. Yet audio is often treated as an afterthought in many virtual experiences. We consider 'the sound' of an experience a crucial part of the immersion we want audiences to feel.
Internally, we've been playing around with the idea of Artificial Intelligence (AI) developing human skills, like listening and genuinely reacting to music. How would it react emotionally? Would it dance? If so, how would it move? What shapes would it choose to reflect the mood? How could we give AI visual tools to express itself?
We created a set of geometric shapes and lines (dance moves) for the code to draw on when reacting to a beat.
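The idea can be sketched in code. This is a minimal, hypothetical illustration (not the actual implementation): it detects beats in a signal by flagging frames whose energy jumps well above a running average, then picks a "dance move" from a small set of shapes for each beat. The shape names, thresholds, and synthetic test signal are all assumptions for the sake of the sketch.

```python
import math
import random

SHAPES = ["circle", "triangle", "square", "line"]  # hypothetical move set

def frame_energy(samples):
    """Mean squared amplitude of one frame."""
    return sum(s * s for s in samples) / len(samples)

def detect_beats(signal, frame_size=512, threshold=1.5):
    """Flag frames whose energy exceeds `threshold` x a running average."""
    beats = []
    avg = None
    for i in range(0, len(signal) - frame_size, frame_size):
        e = frame_energy(signal[i:i + frame_size])
        if avg is not None and e > threshold * avg:
            beats.append(i)
        # Exponential moving average tracks the recent loudness baseline.
        avg = e if avg is None else 0.9 * avg + 0.1 * e
    return beats

def pick_move(beat_index):
    """Deterministic per beat, but varied across beats."""
    random.seed(beat_index)
    return random.choice(SHAPES)

# Synthetic track: a quiet sine with loud bursts every 4096 samples.
signal = [0.05 * math.sin(0.1 * n) for n in range(16384)]
for start in range(4096, 16384, 4096):
    for n in range(start, start + 256):
        signal[n] += math.sin(0.5 * n)

beats = detect_beats(signal)
moves = [pick_move(b) for b in beats]
```

In a real audio-reactive piece, the energy threshold would typically be replaced by a proper onset-detection function over a spectral representation, and the move selection would be driven by learned features rather than a seeded random choice.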
Unsurprisingly (and a little annoyingly), the AI moves better than we do.