
All fun and game audio

Epic soundtracks, massive file counts, middleware and loudness. PSNEurope grabs the controls and explores the audio coming out of another kind of console…

For the complete game audio feature, check out the February digital edition of PSNEurope.

Robin Rimbaud aka Scanner is an artist, musician and composer working in London who was the keynote speaker at the Audio Engineering Society’s 49th International Conference on Audio for Games.

Rimbaud’s musical works include nearly every genre imaginable: sound design, film scores, large-scale multimedia performances and product design, to name just a few. But no video games: which is just what the AES wanted. The appeal of having Rimbaud deliver the keynote lay in how he has worked with sound on past projects.
Reflecting on the current state of audio for games, he says: “It’s a world that embraces sound in a really encouraging way. It meets the criterion that interests me in creating work, which is about audiences, experimenting and creating imaginary spaces for people in sound. Games do that perfectly.”
But how, and why, could that be?

Sounds and music

Today’s game audio has come a long way from the simple beeps and boops of Pong and other early video games. It wasn’t until 1978, when Taito released Space Invaders, that music became more than just an accompaniment to games: as the invaders drew closer and closer the tempo would increase, heightening the player’s sense of urgency and thus becoming a part of the actual gaming experience. That music was a simple four-note bass line on repeat throughout.
Fast-forward to November 2012: the Halo 4 Original Soundtrack, released on Northern Ireland’s 7Hz Productions, debuted on the Billboard charts in America at number 50, making it the highest-charting video game soundtrack in history.

The music was composed by Neil Davidge, known for his production work with Massive Attack, and recorded at London’s Abbey Road Studios with a hand-picked 16-person male tenor/bass choir, 10 female Bulgarian vocalists and a full 50-piece orchestra, among other performers. The accompanying remix collection features some of the biggest names in dance music, including Caspa, Gui Boratto and Sander Van Doorn.
Rather than being the exception, celebrity composers and full orchestras are becoming the norm. At London-based music supervision and production company Air Edel, two very big ‘Project X’ titles are already underway for 2013; so dubbed because the company can’t yet talk about them, says manager of commercials and new media Trevor Best.

“I’ve been asked to pitch on another game and this is only January now. To open up with three games in January is brilliant.”
Best noticed game music budgets opening up when Air Edel was asked to provide music for the Electronic Arts (EA) title Harry Potter and the Goblet of Fire (2005), having already provided music for the film. “With Harry Potter, they made the film, it’s flying, it’s fantastic, everybody loved it. Then EA managed to get the rights to make the game. Right from that point they recognised how big it was and wanted to be as close as they could be to the film, so they found the money [for the music].”
The other half of the gaming experience is the in-game audio, roughly consisting of SFX, foley, dialogue and atmos. According to Garry Taylor, audio director at SCEE, game audio has “changed out of all recognition. When I started in games I was the audio guy, and I did everything. That’s just not viable these days. Budgets have grown and now there’s a huge amount of people involved in the production.”
Actually creating the audio differs slightly from the sound design for a film or advert, requiring a bit more consideration of the final product because “it’s not linear. The way a game is built, the player might end up triggering several sounds at once that might not be used in the way you intended. You have to be very aware of where things are going, and also where things might clash,” says Anthony Matchett, head of gaming at Wave Studios in London.
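One common defence against the clashes Matchett describes is to cap how many instances of a sound can play at once and steal the oldest voice when the cap is hit. The sketch below is purely illustrative (the class and its names are invented for this article, not taken from any engine or studio quoted here):

```python
class SoundVoicePool:
    """Caps simultaneous instances of one sound effect.

    In a non-linear game the player may trigger the same effect many
    times in quick succession; unchecked, the overlapping copies clash.
    Illustrative sketch only -- not code from any quoted studio.
    """

    def __init__(self, max_voices=3):
        self.max_voices = max_voices
        self.active = []  # (start_time, voice_id), oldest first

    def trigger(self, now, voice_id):
        """Request playback; returns the id of a voice to stop, or None."""
        stolen = None
        if len(self.active) >= self.max_voices:
            # Steal the oldest voice to make room for the new one.
            stolen = self.active.pop(0)[1]
        self.active.append((now, voice_id))
        return stolen


pool = SoundVoicePool(max_voices=2)
assert pool.trigger(0.0, "footstep_1") is None   # plays freely
assert pool.trigger(0.1, "footstep_2") is None   # plays freely
assert pool.trigger(0.2, "footstep_3") == "footstep_1"  # oldest stolen
```

Real middleware exposes the same idea as a per-sound playback limit; the point is that the designer, not the player's trigger pattern, decides how much overlap is audible.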
As an example, during a voiceover session Matchett will use a close-mic’ed Neumann U87 in the vocal booth, but add a Sennheiser 416 a few feet away “to avoid it sounding too isolated and give it more of a performance feel”. 

Editing captured audio can involve everything from Adobe Audition, Sound Forge or WaveLab to Pro Tools, depending on the developer and the scope of the game, with the majority of the processing happening in the box (“Waves is a staple for us, especially from an engineering side. They get used in every session.”).
Bigger differences are found in a game’s production cycle, and Taylor is quick to point out that there is no ‘typical’: audio budgets can range from none whatsoever to millions of pounds, and production can take anywhere from six months to four years depending on the developer, how the audio is implemented and, of course, the platform it will be played on.
Taylor recalls one particularly heavy sound effects recording assignment: “For Gran Turismo 4 I recorded all the European cars, and we had to record every car if possible. I think for that game I recorded 300 or 400 cars.” The number of dialogue files can become even more unwieldy as a script can call for up to 10,000 lines, in English, though any territory can request a translation if it thinks the game will be a success there. Taylor’s record number of translations currently stands at 24.

Next level

In Europe, third-party software developers like AudioGaming are creating the next generation of those game audio tools, such as the company’s series of procedural audio plug-ins including AudioWind and AudioRain. “In the video game world, procedural audio refers to the computational process of generating audio from nothing, or almost nothing. The goal is to use almost no .wav data but rather models that generate in real time the equivalent audio data that would be contained in pre-recorded files,” says AudioGaming CEO Amaury La Burthe.
Procedural audio saves memory, relying on code rather than audio files, and since it links directly with a game’s audio engine, changes in game parameters automatically modify the generated audio, like an interactive synthesizer.
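AudioGaming has not published AudioWind’s internals, but the general idea La Burthe describes — synthesising a signal from parameters instead of playing back a .wav — can be sketched as filtered noise whose loudness and brightness follow a game parameter. Everything here (the function, the parameter mapping) is an illustrative assumption, not the plug-in’s actual model:

```python
import random

def wind_samples(wind_speed, n_samples, sample_rate=22050, seed=0):
    """Crude procedural 'wind': white noise through a one-pole low-pass.

    wind_speed in [0, 1] drives both loudness and brightness, so a
    change in the game parameter immediately changes the audio --
    no pre-recorded file involved. (Illustrative, not AudioWind.)
    """
    rng = random.Random(seed)
    # Higher wind speed -> higher cutoff -> brighter, hissier noise.
    cutoff = 100.0 + 4000.0 * wind_speed          # Hz, arbitrary mapping
    alpha = min(1.0, cutoff / (sample_rate / 2))  # smoothing coefficient
    out, y = [], 0.0
    for _ in range(n_samples):
        x = rng.uniform(-1.0, 1.0)
        y += alpha * (x - y)        # one-pole low-pass filter
        out.append(wind_speed * y)  # amplitude also tracks the parameter
    return out

calm = wind_samples(0.1, 1000)
gale = wind_samples(0.9, 1000)
# A few bytes of code replace what would otherwise be minutes of
# recorded wind at every intensity the game might need.
assert max(abs(s) for s in calm) < max(abs(s) for s in gale)
```

Because the generator runs per sample frame, the engine can sweep `wind_speed` continuously and the audio responds instantly, which is exactly the “interactive synthesizer” behaviour described above.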
Interactive and dynamic audio processing was just one of the many forward-thinking topics of discussion at AES’s Audio for Gaming conference. Other possibilities facing the gaming world are 3D audio and the emerging Web Audio API.
Michael Kelly, vice chair of the AES Technical Council, has been spearheading the organisation’s involvement in audio for video games since the first technical papers came out in 2006. He recalls the changes in issues and challenges since the first Audio for Games conference in 2009: “When we set up the first one it was all about horsepower and DSP. You had these consoles with big, heavy chips in them that could do a lot. Things like convolution reverb were what people were talking about.

“We’ve moved on from that now in two ways technically: now it’s about people knowing how to use the technology we have in a game-related way, rather than bringing ‘studio’ DSP into a game, and understanding the application of the technology rather than the ability to do it.

“The second one is the way people play games. We’re seeing huge advances in just what we can do even in a web browser now. That underlines the message that horsepower isn’t the issue it used to be. It’s also about understanding the different ways audio needs to be done for the different ways people play a game. We’re trying to focus a lot on that at the conference this time.”
These are challenges keynote speaker Rimbaud has already faced in past projects. He recalls one project in particular, which involved designing the sound for the Philips Wake-Up Light (released in 2009): “I very much had to look at the technical details of how the physical sound actually works as opposed to just writing a tune,” says Rimbaud.

“I thought that kind of experience would lend itself quite well to games. [The keynote invitation] was really an invitation to present some ideas and shake people up a bit. The only stipulation I had was to not mention Steely Dan!”

We’ve barely scratched the surface when it comes to the challenges that this market represents, though Taylor neatly summarises the general feeling of where it’s heading: “We have to think on our feet and we have to find new ways to solve problems because in the world of console game development at least, the goalposts change every 5-10 years with the advent of new platforms. The tools are always getting better but they’re also getting more complex because designers always want to try new things. That has a knock-on effect on team sizes and development costs. It’s an exciting time for all game audio people at the moment, it’s a very exciting industry.”