
PhD student creates soundscape app at York University’s AudioLab

York PhD student Marc Green is working on an innovative soundscape project in the form of an app called Soundscap AR, which utilises Sennheiser’s AMBEO Smart Headset.

A Sennheiser AMBEO Smart Headset is in use at York University’s AudioLab, enabling spatial recordings for an innovative PhD soundscape project in the form of an app designed by student Marc Green. Simon Duff delves into the project, speaking to the Professor of soundscapes and virtual acoustics, and to the designer himself…

York University’s AudioLab specialises in high-quality Ambisonics recording and playback, and is part of the Department of Electronic Engineering’s Communication Technologies Research Group. Based around a sphere of 50 loudspeakers, the rig in the Lab comprises 40 Genelec 8030s and 10 Genelec 8040s. An mh acoustics Eigenmike, Soundfield mics, and the notable Sennheiser AMBEO Smart Headset, paired with a powerful Max/MSP system and an anechoic chamber, make up the rest of the AudioLab.

Work at the Lab centres on research into audio signal processing, acoustic modelling and machine learning, alongside experimental projects focused on psychoacoustics and perception. It has a strong track record of working with the pro audio industry – Google, Huawei, the BBC, York Theatre Royal, Meridian Audio, ARUP and AECOM – using its expertise to present creative solutions for audio and bioacoustic applications. In addition, the Lab has been part of Sennheiser’s AMBEO Developers Programme since 2018.

The AudioLab has been in existence in one form or another for the past 25 years. At present, the key members of its faculty are Professor Damian Murphy (soundscapes and virtual acoustics), Dr Gavin Kearney (immersive audio), Dr Helena Daffern (voice science, acoustics and performance), Dr Jude Brereton (interactive sonification and performance), Professor Andy Hunt (interactive sonification) and Dr Frank Stevens (acoustic environments). Professor Murphy’s own research work over the years has focused on virtual acoustics, spatial audio, physical modelling, and audio signal processing. An active sound artist, in 2004 he was appointed as one of the UK’s first AHRC/ACE Arts and Science Research Fellows, investigating the compositional and aesthetic aspects of sound spatialisation, acoustic modelling techniques and the acoustics of heritage spaces.

AMBEO

AMBEO is a series of systems designed to enhance the Virtual and Augmented Reality experience, aiming to push the boundaries of spatial audio with a mission to create compelling audible AR experiences. The manufacturer’s claim is that by blending virtual 3D sound with the user’s real acoustic world, and with the help of Sennheiser’s software and hardware tools, users will be able to take full control of their AR and MR experiences.

Professor Murphy said of the benefits of AMBEO: “It is a novel technology that has real potential as a tool for developing binaural immersive experiences that do not close the subject off from the wider world. There is significant opportunity to develop new augmented audio experiences with better interaction between individuals sharing in the same experience. The AMBEO headset is also a really interesting, compact and creative device for making immersive binaural recordings.”

PHD SOUNDSCAP AR

Marc Green is a current York University PhD student researching at the AudioLab, who has made extensive use of AMBEO while working under the guidance of Professor Murphy. One of Green’s recent publications is ‘EigenScape: A Database of Spatial Acoustic Scene Recordings’. Originally trained as a classical pianist, his career has progressed through music production courses and studio work to his current high-level sonic research. Much of his work is based around ‘Environmental Soundscapes’, both in practice and research, and he is currently working on measurement systems, which involves looking into the sonic content of a landscape and how people react to it. He is also investigating how machine learning can be deployed to create new ideas. For his degree, a few years back, Green created a music, sound, and visual art work featuring content around human speech, based on hyperlexia, a condition often associated with autism, and for his Masters research, Green worked on Acoustic Feedback Processing in conjunction with Allen & Heath.

As part of his current PhD studies, Green has developed a new app called Soundscap AR, now available on the Apple App Store. Intended to analyse the sounds of local environments, the app uses machine learning to go beyond traditional decibel level measurement techniques and instead give users readings for how much natural, mechanical, or human sound makes up a given sound scene. Green has long argued that current noise level measurements give no information on the actual content of a sound scene – only on how loud an environment is. His intention has therefore been to use machine learning to provide more informative readings, and ones that can be taken remotely.
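To make that kind of reading concrete, a minimal sketch of the general idea is shown below: classify short analysis windows of a recording with a pre-trained model, then report what fraction of the scene each class occupies. This is purely illustrative and not the app’s actual code; the label set, the MFCC features via librosa, and the saved scikit-learn model file are all assumptions.

```python
# Illustrative sketch only (not Soundscap AR's code): estimate how much of a
# recording is "natural", "mechanical" or "human" sound by classifying short
# windows and reporting the proportion of each class.
import numpy as np
import librosa              # assumed available for audio loading and features
from joblib import load     # hypothetical pre-trained scikit-learn classifier

LABELS = ["natural", "mechanical", "human"]   # illustrative label set

def scene_composition(wav_path, model_path="scene_classifier.joblib"):
    y, sr = librosa.load(wav_path, sr=16000, mono=True)
    # MFCCs averaged over roughly one-second windows, as a simple stand-in
    # for the features a real acoustic scene classifier might use.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    win = sr // 512  # ~1 s of MFCC frames (default hop_length is 512 samples)
    windows = [mfcc[:, i:i + win].mean(axis=1)
               for i in range(0, mfcc.shape[1] - win, win)]
    clf = load(model_path)   # assumed: model trained on labelled scene clips
    preds = clf.predict(np.array(windows))
    return {lab: float(np.mean(preds == lab)) for lab in LABELS}

# e.g. scene_composition("park_recording.wav")
# might return {'natural': 0.62, 'mechanical': 0.25, 'human': 0.13}
```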

Soundscap AR uses AR to add a selection of virtual sound sources, or a virtual sound barrier, to a given scene, monitoring how these affect the readings and perception. The app works best with Sennheiser’s AMBEO Smart Headset, whose built-in microphones let users hear their environment, real and virtual, as though they are not wearing earphones. Green has been more than happy with AMBEO: “The best thing about the headset is that the microphones are really good quality and very easy to use.”
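Conceptually, overlaying a virtual sound object and then re-measuring the scene could look something like the sketch below. Again, this is only an offline illustration under assumed file names and levels, not how Soundscap AR works: the app does this live, in AR, through the headset’s hear-through microphones.

```python
# Illustrative sketch only: mix a virtual sound object (e.g. a fountain loop)
# into a scene recording at a chosen level, so a composition reading can be
# re-run on the augmented scene. File names and gain are assumptions.
import numpy as np
import librosa
import soundfile as sf

def overlay(scene_wav, object_wav, gain_db=-6.0, out_wav="augmented.wav", sr=16000):
    scene, _ = librosa.load(scene_wav, sr=sr, mono=True)
    obj, _ = librosa.load(object_wav, sr=sr, mono=True)
    # Loop or trim the virtual object so it spans the whole scene.
    obj = np.resize(obj, scene.shape)
    mix = scene + (10.0 ** (gain_db / 20.0)) * obj
    mix /= max(1.0, np.abs(mix).max())   # simple peak normalisation
    sf.write(out_wav, mix, sr)
    return out_wav

# e.g. overlay("street.wav", "fountain.wav"), then re-run the reading on the result
```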

Green explains further about his Soundscap workflow: “The first thing I did was use an em32 Eigenmike, a surround sound microphone, recording in multiple locations in the north of England, from cities to nature. The em32 is an incredible mic made by mh acoustics, and is composed of multiple professional mics positioned on the surface of a rigid sphere. Based on those recordings, I worked to create a computer-based machine learning system, hoping that the computer would learn what those spaces are and how they behave. The machine learning is based on the idea of creating a device that would react to sound without having to take people to a location, and not necessarily based on a decibel reading alone.” The second part of the app consists of ‘Positive Sound’, which overlays sounds made by Green to create a Virtual Sound Objects Soundscape. Four layers are currently available as virtual objects: a Virtual Water Fountain, Virtual Bird Song, an Acoustic Barrier and a Car. Impressively, users can also create their own virtual sound objects.
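The training side of such a system might, in broad strokes, look like the following sketch: summarise each labelled field recording with simple spectral features and fit a standard classifier. The directory layout, the MFCC summary features and the random forest model are assumptions made for illustration; this is not the EigenScape or Soundscap AR pipeline itself.

```python
# Illustrative training sketch, under the same assumptions as the reading sketch:
# learn a scene classifier from a set of labelled field recordings.
import numpy as np
import librosa
from pathlib import Path
from sklearn.ensemble import RandomForestClassifier
from joblib import dump

def clip_features(wav_path, sr=16000, n_mfcc=20):
    y, _ = librosa.load(wav_path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Summarise each clip by the mean and standard deviation of its MFCCs.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train(dataset_dir="scenes/"):
    # Hypothetical layout: scenes/<label>/<clip>.wav, e.g. scenes/natural/park01.wav
    X, y = [], []
    for wav in Path(dataset_dir).glob("*/*.wav"):
        X.append(clip_features(wav))
        y.append(wav.parent.name)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(np.array(X), np.array(y))
    dump(clf, "scene_classifier.joblib")   # reused by the reading sketch above
    return clf
```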

Looking forward, Green is developing various new ideas and projects: “Future ideas will involve using the Eigenmike and new work on source tracking within recording, using more detailed sound measurement. I am also thinking about new ways of creating original audio and music scene generation for the gaming industry that will react to a user’s location.”
