Assessing the automated world

Automation has become an integral part of broadcasting in the last 20 years but, as Mark Errington of IT automation specialist OASYS tells Kevin Hilton, the last five years have seen the biggest changes in how TV is played out, with audio a key factor in recent developments.

Broadcast automation is now an accepted way of life in both TV and radio. The first generation of what we would think of as automated play-out equipment was computer controlled but still very much based on mechanical principles, with huge videotape carousels holding a station’s output. The shift to a real IT infrastructure has happened only in the last few years, as broadcasters have moved wholesale to digitisation, with programmes held as files rather than on tape.

UK automation system developer OASYS (the trading name of On-Air Systems) is celebrating its 20th year in the business but, says chief executive Mark Errington, the real changes have come only in the last five years, and many have had a direct influence on how audio is handled in the broadcast chain. “Seven to eight years ago people were typically using just one audio output, they didn’t use multiple feeds,” Errington comments. “The significant change started to happen only about five years ago through a combination of things. There’s been a proliferation in file formats and a move away from doing the main control using hardware and computer boards.”

Errington explains that at one time very little control and processing for automation was done using external software. This, he says, placed a major limitation on the number of tracks that could be used on a system. “Things moved on with the use of MXF [Material eXchange Format] as a general wrapper, but most of what was going on was still at the scheduling end. That meant it all had to be programmed into the main system; there was nothing happening through metadata on the files themselves. Today, and in only a few years, the move has been to more control from metadata, with the schedule not dictating everything that happens.” In other words, for roughly the first 13 years of OASYS’s existence, little changed in the automation market.
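The shift Errington describes, from schedule-driven control to control from file metadata, can be pictured with a small sketch. This is a simplified illustration, not OASYS’s actual schema: the field names (`audio_layout`, `subtitle_lang`) and the override rule are assumptions made for the example. A playout event starts from the schedule’s defaults, and any values carried in the file’s own metadata take precedence.

```python
# Hypothetical sketch of metadata-over-schedule precedence.
# Field names are invented for illustration; they are not an
# actual OASYS or MXF schema.

def resolve_event(schedule_entry: dict, file_metadata: dict) -> dict:
    """Start from the schedule's defaults, then let any value
    carried in the file's own metadata override them."""
    resolved = dict(schedule_entry)   # schedule supplies defaults
    resolved.update(file_metadata)    # file metadata overrides
    return resolved

schedule_entry = {"clip_id": "news-0900", "audio_layout": "stereo",
                  "subtitle_lang": "eng"}
file_metadata = {"audio_layout": "5.1"}  # the wrapper says 5.1

event = resolve_event(schedule_entry, file_metadata)
print(event["audio_layout"])   # 5.1 -- metadata wins
print(event["subtitle_lang"])  # eng -- schedule default survives
```

The point of the precedence rule is the one Errington makes: the schedule no longer dictates everything, it only supplies defaults that the file itself can refine.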
Errington does not think broadcasters and developers have been taken by surprise by the rapid rate of change in a relatively short space of time, although he does observe that many are still “playing catch-up with the format wars”. This is despite hardware/software manufacturers like Matrox, whose technology OASYS uses in its systems, declaring themselves “format agnostic” and moving away from doing most of the hard work of processing and control on computer boards.

During this year’s IBC, OASYS will introduce new features for its automation systems that reflect recent technological advances and changing attitudes to how this critical area of broadcasting is run. Errington says the company has been talking to its clients to get feedback on how they work today and what they need to do the job. The enhancements being launched in Amsterdam are part of an ongoing programme to “make clients’ operations smoother and more efficient”. Because OASYS systems are modular, the new features can be integrated into both new and established installations.

The three new products are: TimePlay, which combines automatic event refresh on an item that is playing with time-delay event synchronisation, allowing different material to be inserted on many streams, such as local news programmes; DVB subtitling, using the Digital Video Broadcasting standard and streaming outputs, as well as Teletext and Open Subtitles; and Audio Shuffling. This last feature enables audio tracks to be moved between outputs so that different languages can be assigned to specific outputs, controlled from the main schedule, from metadata or from a QuickTime reference file.

Errington says being able to handle multiple languages automatically is “an absolute necessity”. Previously, he says, this function would have been handled only by the scheduling software, but now there are different options available to users: “There can be a combination of things.
“There are the QuickTime reference files and XML-based secondary metadata, as well as the schedule. With override capability it means that a system can cater for a range of scenarios and the system is more flexible.”

There are, however, still small traps to be aware of, Errington warns. Clear labelling of files in metadata is very important, he says, giving the example of a broadcaster that programmed its system to recognise PCM audio files and then wondered why the video played but there was no sound. It was later discovered that the files had been tagged as “broadcast audio”, not PCM.

As for the next five years, Errington sees some specific changes ahead in automated audio processing. “One of the big areas is handling all the relevant Dolby formats, including Dolby E and Dolby Digital,” he says. “There are also questions about whether to do audio ducking within the audio processing engine and using third-party equipment for loudness control. These are all challenges because what we will be looking at is more than just standard PCM audio.”
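Both the audio-shuffling behaviour and the mislabelled-PCM trap Errington mentions can be sketched together. The track and output structures below are invented for illustration, not OASYS’s API: the idea is that language-tagged tracks are routed to configured outputs, but only after each track’s codec label is checked against the expected value, so a file tagged “broadcast audio” instead of “PCM” fails loudly rather than playing picture with no sound.

```python
# Simplified sketch of language-based audio shuffling with a codec
# label check. All data structures are invented for illustration.

EXPECTED_CODEC = "PCM"

def shuffle_audio(tracks: list, output_map: dict) -> dict:
    """Assign each language-tagged track to its configured output,
    rejecting tracks whose codec label does not match."""
    routing = {}
    for track in tracks:
        if track["codec"] != EXPECTED_CODEC:
            # The trap from the article: a track tagged
            # "broadcast audio" instead of "PCM" would otherwise
            # be silently dropped at playout.
            raise ValueError(
                f"track {track['id']}: codec tagged "
                f"{track['codec']!r}, expected {EXPECTED_CODEC!r}")
        output = output_map.get(track["language"])
        if output is not None:
            routing[output] = track["id"]
    return routing

tracks = [{"id": "a1", "language": "eng", "codec": "PCM"},
          {"id": "a2", "language": "fra", "codec": "PCM"}]
output_map = {"eng": "out-1", "fra": "out-2"}
print(shuffle_audio(tracks, output_map))
# {'out-1': 'a1', 'out-2': 'a2'}
```

Validating the codec tag before routing, rather than at play-out time, is one way to surface the labelling problem Errington describes while the schedule can still be corrected.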