Virtual reality and immersive technologies have the potential to place people inside a story. Spatial or 3D audio provides much of the sensory information crucial for enveloping a person in a VR environment, as shown by the recent ‘live cinema’ production, Cosmos Within Us. Described by its director as “75 per cent sound”, this critically acclaimed presentation combines film with live performance as it explores the nature of memory and loss.
First presented at the Venice Film Festival in August, Cosmos Within Us was next seen at the Raindance Festival in London during September. This has become a major showcase for immersive storytelling, and the piece won the competition’s top prize, the ‘Spirit of Raindance’ award. Last month the 360-degree experience was staged at Eye Filmmuseum in Amsterdam, on a larger scale than before.
Described as a multi-sensory film shown in a live environment with actors and musicians, Cosmos Within Us tells its story from the perspective of Aiken, a 60-year-old man with Alzheimer’s disease. The audience, which numbered just four people at the show’s first performance and grew to over 100 for the Amsterdam staging, sees on a big screen the visuals as experienced by an ‘inter-actor’ wearing a VR headset.
Differing elements of the sound are fed through a loudspeaker system and binaural headphones to create an immersive whole. Cosmos Within Us was created by director Tupac Martir, founder of creative studio and technology developer Satore, in collaboration with the documentary and VR production company a_BAHN. “What we’re interested in with VR and augmented reality is to make it a performative art,” comments Martir. “From the beginning we wanted Cosmos to have as many live aspects as we could as a way of extending it as an experience.”

Martir says the sound carries a large proportion of this and is divided into two sides: the audience and the inter-actor hear the voice-over and many sound effects through open headphones, while an ‘environmental layer’, comprising music and additional sounds, plays through a surround sound system in the auditorium. “Some sounds can start on the environmental layer, say to the left, and then move to the left headphone,” Martir explains. “For example, the opening scene has about 24 different sounds but it isn’t until people approach where they are supposed to be happening that they are triggered.”
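The trigger-and-migrate behaviour Martir describes can be sketched in a few lines of code. Everything below is a hypothetical illustration: the positions, radii, and linear gain law are invented for the example, not taken from the production’s actual system.

```python
import math

def distance(a, b):
    """Euclidean distance between two (x, y, z) positions."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def layer_gains(listener_pos, sound_pos, trigger_radius=5.0, near_radius=1.0):
    """Return (room_gain, headphone_gain) for one sound source.

    Outside trigger_radius the sound stays untriggered (both gains 0).
    Between the radii it plays in the room ('environmental') layer;
    inside near_radius it has fully migrated to the headphone layer.
    Gains are linear here for clarity; a real system would more likely
    use an equal-power crossfade.
    """
    d = distance(listener_pos, sound_pos)
    if d >= trigger_radius:
        return (0.0, 0.0)            # not yet triggered
    if d <= near_radius:
        return (0.0, 1.0)            # fully in the headphone layer
    # Linear crossfade between the two layers
    t = (d - near_radius) / (trigger_radius - near_radius)
    return (t, 1.0 - t)
```

With these assumed radii, a sound 10 m away is silent, one within 1 m plays only in the headphones, and one in between is split across both layers.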
The sound design for the VR experience was created by Gareth Llewellyn and Jon Olive of immersive audio specialist Magic Beans. Co-founder and chief executive Llewellyn started his career in film sound post-production, after which he joined Galaxy Studios in Belgium, where he was involved with the Auro-3D spatial audio format. Olive, also a co-founder as well as chief technology officer, started out in classical music recording before moving into the film sector, where he worked in sales and product development.
Satore and Magic Beans have been working together for around two and a half years. Olive says the brief for Cosmos Within Us, which he describes as “one of the nicest” Magic Beans has received, was to make “something that sounds cool” that was also great audio. “It was about much more than cinema and making a realistic space,” adds Llewellyn.
The sound for the experience is based on the combination of a 3D loudspeaker array, usually in 11.1, and binaural headphones worn by the audience. In this way, explains Llewellyn, it is possible to create an overall sonic environment while at the same time keeping people in a “bubble of sound”. The system for the Raindance show included 12 Genelec 8010A studio monitors with 7050B subs. The presentation at the Eye Filmmuseum relied on the venue’s in-house rig; for the first time it was not possible to include a height element in the loudspeaker configuration. “We had to fold the periphonic mix down into 7.1,” says Llewellyn. “We missed the height slightly but it worked for the audience at large.”
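Folding a mix with height channels down into a horizontal-only bed is, in essence, summing each height channel into its nearest ground-level channel at a reduced level. The sketch below illustrates the idea; the channel names, the 11.1-to-7.1 mapping, and the -3 dB fold-down gain are common conventions assumed for the example, not the production’s actual coefficients.

```python
# -3 dB is a common fold-down level; assumed here, not confirmed.
FOLD_GAIN = 10 ** (-3 / 20)

# Assumed mapping: each height channel sums into its nearest bed channel.
HEIGHT_TO_BED = {
    "top_front_left":  "front_left",
    "top_front_right": "front_right",
    "top_rear_left":   "surround_back_left",
    "top_rear_right":  "surround_back_right",
}

def fold_down(frame):
    """Fold one frame of samples (dict: channel name -> float) from a
    layout with height channels into a horizontal-only bed by summing
    each height channel, attenuated, into its mapped bed channel."""
    bed = {ch: s for ch, s in frame.items() if ch not in HEIGHT_TO_BED}
    for height_ch, bed_ch in HEIGHT_TO_BED.items():
        bed[bed_ch] = bed.get(bed_ch, 0.0) + FOLD_GAIN * frame.get(height_ch, 0.0)
    return bed
```

A production fold-down would apply such a matrix per sample across whole buffers, but the per-frame arithmetic is the same.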
Audience members wore Sennheiser binaural headphones, which Llewellyn says provided the right balance between cost and quality. “We wanted to make sure we didn’t lose anything, particularly on the low end,” he comments. “We also want people to hear the rest of the room, including the person next to them. The aim is to make the audience feel they are in the room with the character, hearing the rain pouring down and thunder outside.”
The headphones were fed from Sennheiser G3 wireless transmitters and “broadcast” to over 100 G3 receiver packs. “We did explore the best way to feed the audience headphones and realised that to cable everything, particularly in the time available, wouldn’t work,” says Olive.
Several aspects are involved in producing the overall sound for both the speaker array and the headphones. The Unreal Engine, used to generate the VR visuals, sends triggers to the musicians, who are playing live but also have pre-recorded instruments and a click track on an Ableton workstation. Llewellyn and Olive designed their own audio engine to work in conjunction with a Unity 3D sound system running custom tools, with the two components communicating over the OSC (Open Sound Control) protocol.
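OSC, the protocol linking the two components, is simple enough that a message can be built by hand: a NUL-padded address pattern, a type tag string, then big-endian arguments, per the published OSC 1.0 specification. The encoder below follows that spec; the example address and argument are invented, as the production’s actual message scheme is not described.

```python
import struct

def _osc_string(s):
    """OSC strings are NUL-terminated and padded to a 4-byte boundary."""
    data = s.encode("ascii") + b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address, *args):
    """Encode an OSC message with int32, float32, or string arguments."""
    type_tags = ","                      # type tag string starts with ','
    payload = b""
    for arg in args:
        if isinstance(arg, int):
            type_tags += "i"
            payload += struct.pack(">i", arg)   # big-endian int32
        elif isinstance(arg, float):
            type_tags += "f"
            payload += struct.pack(">f", arg)   # big-endian float32
        elif isinstance(arg, str):
            type_tags += "s"
            payload += _osc_string(arg)
        else:
            raise TypeError(f"unsupported OSC argument type: {type(arg)}")
    return _osc_string(address) + _osc_string(type_tags) + payload

# A trigger like this could then be sent over UDP with socket.sendto();
# the address "/scene/1/trigger" is purely illustrative:
# osc_message("/scene/1/trigger", 24)
```

In practice a library such as python-osc would handle this, but the wire format really is this compact, which is part of why OSC suits low-latency show control.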
All audio outputs were then sent to an Avid Pro Tools system, which was used for mixing and for distributing the various sounds to either the loudspeakers or the headphones. Monitor mixes and communications were handled by Hugh Fielding on a Dante-enabled Yamaha console. The whole audio set-up ran over a Dante network, which Olive explains avoided a large amount of cabling.
Tupac Martir is enthusiastic about the role sound plays in the realisation of his story: “We could almost make a podcast out of it and still get the chills because you don’t have to see the visuals to understand what is happening. You just hear the voice and the music and sound effects.”