
Testing audio ADCs and DACs

Jon Chapple 8 May 2015

By David Mathew, technical publications manager, Audio Precision

In 2015, there’s not much question about audio storage, transmission or streaming: it’s digital. Apart from rare sightings of vinyl or open-reel tape in boutique sales or creative enclaves, audio is digital. Done right, digital audio is flexible, robust and of very high quality. PCM recording, lossless surround formats and even lossy compression (at least at high bitrates) provide the soundtrack for our lives.

But, of course, sound in air is not digital. The pressure waves created by a human voice or a musical instrument are recorded after exciting a transducer of some sort, and the transducer responds with an electrical voltage that is an analog of the pressure wave. Likewise, at the end of the chain the digitised audio signal must eventually move air, using a voltage that is the analog of the original sound wave to drive a transducer that creates a pressure wave.

Near the beginning of a digital chain, then, we must use an ADC (analogue to digital converter) to transform the analogue electrical signal to a digital representation of that signal. Near the end of the chain, we must use a DAC (digital to analogue converter) to transform the digital audio signal back into an analogue electrical signal. Along with the transducers, these two links in the chain (the ADC and the DAC) are key in determining the overall quality of the sound presented to the listener.

Testing audio converters
The conventional measurements used in audio test can also be used to evaluate ADCs and DACs. These measurements include frequency response, signal-to-noise ratio, interchannel phase, crosstalk, distortion, group delay, polarity and others. But conversion between the continuous and sampled domains brings a number of new mechanisms for non-linearity, particularly for low-level signals. This article looks at problems seen in audio A-to-D and D-to-A conversion, and some methods that have evolved to address these issues.

Of course, ADCs and DACs are used in a great number of non-audio applications, often operating at much higher sampling rates than audio converters. Very good oscilloscopes might have bandwidths of 33GHz and sampling rates up to 100gsps (gigasamples per second), with prices comparable to a Lamborghini. Although audio converters don’t sample at anywhere near that rate, they are required to cover a much larger dynamic range, with high-performance ADCs digitising at 24 bits and having SNRs over 120 dB. Even a high-end oscilloscope typically uses only an 8bit digitiser. 24bit conversion pushes the measurement of noise and other small-signal performance characteristics to the bleeding edge; consequently, measurements of such converters require an analyser of extraordinary analog performance.
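The gap between a scope's 8bit digitiser and a 24bit audio converter can be put in numbers using the standard rule of thumb for ideal quantisation noise, SNR ≈ 6.02N + 1.76 dB (a textbook figure, not quoted in the article); a minimal sketch:

```python
# Ideal quantization SNR for an N-bit converter driven by a full-scale sine:
# SNR = 6.02*N + 1.76 dB (textbook rule of thumb; real converters fall short).
def ideal_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

print(ideal_snr_db(8))   # ~49.9 dB: an oscilloscope's 8-bit digitiser
print(ideal_snr_db(24))  # ~146.2 dB: the theoretical 24-bit ceiling
```

A real 24bit ADC with an SNR over 120 dB is thus still some 25 dB short of the theoretical limit, yet far beyond what any 8bit instrument digitiser can resolve.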

Test set-ups
The typical test setups are straightforward.

For ADC testing, the analyser must provide extremely pure stimulus signals at the drive levels appropriate for the converter input. For converter ICs, the analyser must have a digital input in a format and protocol to match the IC output, such as I2S, DSP or a custom format. For a commercial converter device, the digital format is typically an AES3-S/PDIF-compatible stream. For devices that can sync to an external clock, the analyser should provide a clock sync output.

Figure 1: ADC test block diagram

For DAC testing, the analyser must have a digital output in the appropriate format, and analog inputs of very high performance.

The graphs in this article were created by testing commercial converters using the AES3 digital transport. The analyser is the Audio Precision APx555, launched in September (see EXCLUSIVE: Audio Precision APx555 analyser sets “new standard for testing”).

Figure 2: DAC test block diagram

As mentioned previously, ADCs and DACs exhibit behaviours unique to converters. The Audio Engineering Society has recommended methods to measure many converter behaviours in the AES17 standard. The following examples examine and compare a number of characteristic converter issues.

Figure 3: FFT idle channel noise, DAC 'A'

Idle tones
Common audio converter architectures, such as delta-sigma devices, are prone to have an idling behavior that produces low-level tones. These “idle tones” can be modulated in frequency by the applied signal and by DC offset, which means they are difficult to identify if a signal is present. An FFT of the idle channel test output can be used to identify these tones.
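The idea can be sketched numerically: window the idle-channel record, take its spectrum, and flag any bins standing well above the median noise floor. The record here is synthetic (an assumed –150 dBFS noise floor plus one –130 dBFS tone), and the 20 dB threshold is an illustrative choice, not an AES17 figure:

```python
import cmath, math, random

def spectrum_db(x):
    """Magnitude spectrum in dBFS of a Hann-windowed record (naive DFT)."""
    n = len(x)
    w = [0.5 - 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]
    xw = [xi * wi for xi, wi in zip(x, w)]
    spec = []
    for k in range(n // 2):
        s = sum(xw[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
        # scale so a full-scale sine on an exact bin reads 0 dBFS
        spec.append(20 * math.log10(abs(s) / (n * 0.25) + 1e-20))
    return spec

# Synthetic "idle channel" record: converter noise plus one -130 dBFS idle
# tone at bin 40 (all levels are illustrative assumptions, not measured data)
random.seed(1)
n, tone_bin = 512, 40
noise_rms = 10 ** (-150 / 20)
tone_amp = 10 ** (-130 / 20)
x = [random.gauss(0, noise_rms)
     + tone_amp * math.sin(2 * math.pi * tone_bin * i / n) for i in range(n)]

spec = spectrum_db(x)
floor = sorted(spec)[len(spec) // 2]      # median bin level as a crude floor
idle_tones = [k for k, v in enumerate(spec) if v > floor + 20]
print("suspect idle-tone bins:", idle_tones)
```

In practice an analyser's FFT with many averages does the same job at far higher resolution; the point is simply that idle tones are identified as narrow bins protruding from the noise floor of an otherwise silent channel.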

The DAC in Figure 3 (above) shows a number of idle tones, some with levels as high as –130 dB. The idle tones (and the noise floor) in Figure 4 (below) are much lower.

Figure 4: FFT idle channel noise, DAC 'B'

Signal-to-noise ratio (dynamic range)
For analogue audio devices, a signal-to-noise ratio measurement involves finding the device maximum output and the bandwidth-limited rms noise floor and reporting the difference between the two in decibels.

With audio converters, the maximum level is usually defined as that level where the peaks of a sine wave just touch the maximum and minimum sample values. This is called “full scale” (1FS), which can be expressed logarithmically as 0dBFS. The rms noise floor is a little tricky to measure because of low-level idle tones and, in some converters, muting that is applied when the signal input is zero. AES17 recommends that a –60 dB tone be applied to defeat any muting and to allow the converter to operate linearly. The distortion products of this tone are so low they fall below the noise floor, and the tone itself is notched out during the noise measurement. IEC61606 recommends a similar method, but calls the measurement dynamic range.

Figure 5: SNR for DAC 'B'
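The procedure can be sketched in simulation. This sketch substitutes a least-squares sine fit for the standard's notch filter, and uses a 1kHz tone and made-up noise levels for convenience (AES17 itself specifies a 997Hz tone):

```python
import math, random

def dynamic_range_db(x, f0, fs):
    """AES17-style sketch: remove the low-level test tone by a least-squares
    sine fit (standing in for the standard's notch filter), then report the
    residual rms noise relative to a full-scale sine."""
    n = len(x)
    c = [math.cos(2 * math.pi * f0 * i / fs) for i in range(n)]
    s = [math.sin(2 * math.pi * f0 * i / fs) for i in range(n)]
    a = 2 * sum(xi * ci for xi, ci in zip(x, c)) / n
    b = 2 * sum(xi * si for xi, si in zip(x, s)) / n
    resid = [xi - a * ci - b * si for xi, ci, si in zip(x, c, s)]
    noise_rms = math.sqrt(sum(r * r for r in resid) / n)
    full_scale_rms = 1 / math.sqrt(2)      # sine of amplitude 1.0
    return 20 * math.log10(full_scale_rms / noise_rms)

# Simulated DAC output: a -60 dBFS tone (to defeat muting) plus noise 115 dB
# below full scale; both figures are assumptions for the sketch
random.seed(0)
fs, f0, n = 48000, 1000, 4800
x = [10 ** (-60 / 20) * math.sin(2 * math.pi * f0 * i / fs)
     + random.gauss(0, 10 ** (-115 / 20) / math.sqrt(2)) for i in range(n)]
print("dynamic range ≈ %.1f dB" % dynamic_range_db(x, f0, fs))
```

Because the –60 dB tone is removed before the rms computation, only the noise (and any idle tones) remains, which is exactly why the method copes with converters that mute on digital silence.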

Figure 6: SNR for DAC 'C'

Pictured is a comparison of the signal-to-noise measurements of two 24bit DACs operating at 96ksps (kilosamples per second) using this method. As can be seen, some converter designs are much more effective than others.

Jitter
For ADCs, clock jitter can occur within the converter, and synchronisation jitter can be contributed through an external clock sync input. For DACs receiving a signal with an embedded clock (such as AES3 or S/PDIF), interface jitter on the incoming signal must be attenuated.

Figure 7: Jitter sidebands of 10kHz, DAC 'B'

Sinusoidal jitter primarily affects the audio signal by creating modulation sidebands: frequencies above and below the original audio signal. More complex or broadband jitter will raise the converter noise floor. A common measurement that reveals jitter susceptibility is to use a high-frequency sinusoidal stimulus and examine an FFT of the converter output for jitter sidebands, which are symmetrical around the stimulus tone. DAC 'C' (below) shows strong sidebands, while DAC 'B' (above) shows none. Note that the strong tones at 20kHz and 30kHz are products of harmonic distortion and are not jitter sidebands.
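The sideband mechanism is easy to reproduce numerically: sampling a tone with a sinusoidally jittered clock is phase modulation, and for small jitter the sidebands sit roughly 20·log10(π·f0·τpk) below the carrier. A sketch with assumed numbers (10kHz tone, 1ns peak jitter at 2kHz):

```python
import cmath, math

def level_db(x, k):
    """Level of DFT bin k, in dB relative to a full-scale sine
    (rectangular window, tone assumed on an exact bin)."""
    n = len(x)
    s = sum(x[i] * cmath.exp(-2j * math.pi * k * i / n) for i in range(n))
    return 20 * math.log10(abs(s) / (n / 2))

# A 10 kHz tone sampled at 96 ksps by a clock with 1 ns peak sinusoidal
# jitter at 2 kHz (all values assumed for illustration). Jittered sampling
# is phase modulation, so sidebands appear at f0 +/- fj, at roughly
# 20*log10(pi * f0 * tau_pk) relative to the carrier for small jitter.
fs, n = 96000, 9600                      # 0.1 s record -> 10 Hz bins
f0, fj, tau = 10000, 2000, 1e-9
x = [math.sin(2 * math.pi * f0 * (i / fs
     + tau * math.sin(2 * math.pi * fj * i / fs))) for i in range(n)]

carrier = level_db(x, f0 // 10)
lower, upper = level_db(x, (f0 - fj) // 10), level_db(x, (f0 + fj) // 10)
print("carrier %.1f dB, sidebands %.1f / %.1f dB" % (carrier, lower, upper))
print("predicted sidebands: %.1f dB" % (20 * math.log10(math.pi * f0 * tau)))
```

Note how even a 1ns peak jitter puts sidebands around –90 dB: small timing errors are very visible against a 24bit noise floor, which is why the FFT sideband test is so sensitive.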

Figure 8: Jitter sidebands of 10kHz, DAC 'C'

Jitter tolerance template
AES3 describes a jitter tolerance test, where the capability of a receiver to tolerate defined levels of interface jitter on its input is examined. A digital audio signal is applied to the input. The signal is jittered with sinusoidal jitter, swept from 100Hz to 100kHz. As the jitter is swept, its level is varied according to the AES3 jitter tolerance template. Jitter is set at a high level up to 200Hz, then reduced to a lower level by 8kHz, where it is maintained until the end of the sweep.

Figure 9: DAC 'B' THD+N during jitter tolerance sweep

An interface data receiver should correctly decode an incoming data stream with any sinusoidal jitter defined by the jitter tolerance template of Figure 9. The template requires a jitter tolerance of 0.25UI peak-to-peak at high frequencies, increasing with the inverse of frequency below 8kHz to level off at 10UI peak-to-peak below 200Hz.
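The template described above reduces to a simple piecewise function of jitter frequency; a sketch, using the segment boundaries quoted in the text:

```python
def aes3_jitter_tolerance_ui(f_hz):
    """Peak-to-peak sinusoidal jitter (in UI) a receiver should tolerate,
    per the template described above: 10 UI up to 200 Hz, falling as the
    inverse of frequency, levelling off at 0.25 UI from 8 kHz upward."""
    if f_hz <= 200:
        return 10.0
    if f_hz < 8000:
        return 0.25 * 8000 / f_hz      # inverse-of-frequency segment
    return 0.25

for f in (100, 200, 1000, 8000, 50000):
    print("%6d Hz -> %5.2f UI" % (f, aes3_jitter_tolerance_ui(f)))
```

The two corner frequencies are consistent: 0.25UI × 8000/200 gives exactly the 10UI plateau, so the 1/f segment joins both flat regions without a step.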

In this case, jitter is set to about 9.775UI at the lower jitter frequencies, and drops to about 0.25UI at the higher frequencies. The blue trace is the THD+N ratio (distortion products of the 3kHz audio tone), which remains constant across the jitter sweep, indicating good jitter tolerance in this DUT. As jitter level rises, poor tolerance will cause a receiver first to decode the signal incorrectly and eventually to fail to decode it at all, occasionally muting or losing lock altogether.

Figure 10: ADC 'C' anti-alias filter OOB rejection

Filter effects
Figure 10 (above) shows the response of the anti-alias filter in ADC “C”. A tone at the input of the ADC is swept across the out-of-band (OOB) range of interest (in this case, from 40kHz to 200kHz) and the level of the signal reflected into the passband is plotted against the stimulus frequency. A second trace shows the converter noise floor as a reference.

For Figure 11 (below), spectrally flat random noise is presented to the DAC input. The analog output is plotted (with many averages) to show the response of the DAC’s anti-image filter. In this case, a second trace showing a 1kHz tone and the DAC noise floor is plotted, scaled so that the sine peak corresponds to the noise peak.

Figure 11: DAC anti-image filter response

Polarity
Audio circuits (including converters) often use differential (balanced) architectures. This opens the door for polarity faults.

An impulse response stimulus provides a clear observation of normal or reversed polarity, as shown in Figure 12.
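The check itself is trivial to automate: with a positive-going impulse as stimulus, the sign of the largest-magnitude sample of the measured response gives the polarity. A sketch with made-up response data standing in for captured analyser samples:

```python
def polarity_from_impulse(response):
    """Infer DUT polarity from a measured impulse response: the sign of the
    largest-magnitude sample is positive for normal polarity, negative for
    reversed polarity."""
    peak = max(response, key=abs)
    return 1 if peak > 0 else -1

# Made-up impulse responses standing in for captured analyser data
normal = [0.0, 0.02, 0.9, 0.3, -0.1, 0.02]
inverted = [-v for v in normal]
print(polarity_from_impulse(normal), polarity_from_impulse(inverted))
```

Using the largest-magnitude sample (rather than the first nonzero one) keeps the test robust against pre-ringing from the converter's linear-phase reconstruction filter.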

Figure 12: Using an impulse response to check polarity

Summary
Tests for the high-level non-linear behavior of an ADC are similar to those for non-linearities in analogue electronics, using standardised tests for harmonic distortion and intermodulation distortion. But audio converters bring new mechanisms for non-linearity, particularly for low-level signals. AES17 and Audio Precision’s Technote 124 describe effective testing methods for audio converter measurements.

www.ap.com


Glossary
AES. The Audio Engineering Society, with headquarters in New York City.

AES3, S/PDIF. In the consumer and professional audio field, digital audio is typically carried from point to point as a bi-phase coded signal, commonly referred to as AES3, AES/EBU or S/PDIF. There are electrical and bitstream protocol differences among the variations of bi-phase coded digital audio, but the various signals are largely compatible. Variations are defined in the standards AES3, IEC60958, and SMPTE276M.

Anti-alias filter. In sampled systems the bandwidth of the input has to be limited to the folding frequency to avoid aliasing. Modern audio ADCs normally have this anti-alias filter implemented with a combination of a sharp-cutoff finite impulse response (FIR) digital filter and a simple low order analog filter. The digital filter operates on a version of the signal after conversion at an oversampled rate, and the analogue filter is required to attenuate signals that are close to the oversampling frequency. This analogue filter can have a relaxed response, since the oversampling frequency is often many octaves above the passband.

Anti-image filter (reconstruction filter). Digital audio signals can only represent a selected bandwidth. When constructing an analog signal from a digital audio data stream, a direct conversion of sample data values to analog voltages will produce images of the audio band spectrum at multiples of the sampling frequency. Normally, these images are removed by an anti-imaging filter. This filter has a stopband that starts at half of the sampling frequency – the folding frequency.

Modern audio DACs usually have this anti-imaging filter implemented with a combination of two filters: a sharp cut-off digital finite impulse response (FIR) filter, followed by a relatively simple low-order analog filter. The digital filter is operating on an oversampled version of the input signal, and the analog filter is required to attenuate signals that are close to the oversampling frequency.
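The images themselves are easy to demonstrate: zero-stuffing a sample stream (direct conversion with no reconstruction filtering) leaves copies of the tone around multiples of the original sampling frequency, at the same level as the wanted tone. A numerical sketch with illustrative values (1kHz tone, 8kHz rate, 4× oversampling):

```python
import cmath, math

def level_db(x, f, fs):
    """Level at frequency f (dB re a full-scale sine), via a single-bin DFT;
    assumes f falls on an exact bin of the record."""
    n = len(x)
    s = sum(x[i] * cmath.exp(-2j * math.pi * f * i / fs) for i in range(n))
    return 20 * math.log10(abs(s) / (n / 2) + 1e-12)

# A full-scale 1 kHz sine at fs = 8 kHz, upsampled x4 by zero-stuffing with
# no reconstruction filter: images of the tone appear around multiples of
# the original 8 kHz rate (7, 9, 15 kHz ...), at full level
fs, f0, ratio = 8000, 1000, 4
x = [math.sin(2 * math.pi * f0 * i / fs) for i in range(160)]
up = []
for v in x:
    up.extend([v * ratio] + [0.0] * (ratio - 1))   # zero-stuff, gain-corrected

for f in (1000, 7000, 9000, 15000):
    print("%5d Hz: %5.1f dB" % (f, level_db(up, f, fs * ratio)))
```

Every image sits at the same level as the 1kHz tone itself, which is why the sharp digital FIR stage is indispensable: the analog filter alone only needs to deal with the residue near the oversampling frequency.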

PCM. Pulse code modulation, a form of data transmission in which amplitude samples of an analog signal are represented by digital numbers.

UI. Unit interval. The unit interval (UI) is a measure of time that scales with the interface data rate, and is often a convenient term for interface jitter discussions. The UI is defined as the shortest nominal time interval in the coding scheme. For an AES3 signal, there are 32 bits per subframe and 64 bits per frame, giving a nominal 128 pulses per frame in the channel after bi-phase mark encoding is applied. So, in our case of a sampling rate of 96 kHz,

1 UI = 1 / (128 x 96000) = 81.4 ns

The UI is used for several of the jitter specifications in AES3.
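The arithmetic above generalises to any AES3 sampling rate; a one-line sketch:

```python
# The unit interval for biphase-coded AES3: 128 pulses per frame at the
# sampling rate, so 1 UI = 1 / (128 * Fs)
def aes3_ui_seconds(sample_rate_hz):
    return 1.0 / (128 * sample_rate_hz)

print("%.1f ns" % (aes3_ui_seconds(96000) * 1e9))   # 81.4 ns, as above
print("%.1f ns" % (aes3_ui_seconds(48000) * 1e9))   # 162.8 ns at 48 kHz
```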
