Elettronica Plus

I tutorial di EO-WEB (capitolo 1): AdcERT

Signal Chain Processing
A number of industrial applications require the measurement of analog signals from the field (sensors) with a high degree of accuracy.

In this kind of application, designers have to cope with two main problems:

1. the desired accuracy of the whole measurement system
2. the input signal range, which can in general be very wide.

In digital systems, the sampling frequency is usually not an issue, since a few tens or hundreds of hertz are very often more than enough; stand-alone Analog to Digital Converters, or converters integrated as microcontroller peripherals, nowadays reach very high sampling frequencies should this be needed. Achieving the desired accuracy involves computing the performance of the entire system and considering the specifications and errors introduced by each component in the signal flow (for instance resolution and gain/offset errors in converters, offset voltage in op amps, derating with temperature).

The input range problem stems from the need to present at the input of the Analog to Digital Converter a signal with the maximum possible voltage swing (i.e. its input range), in order to exploit the converter at its best. The problem can be solved quite easily when the input signal is stable and its dynamic range is known a priori; it becomes challenging if, for instance, the signals come from different sensors with different sensitivities and full-scale ranges, or if the user must be allowed to change the input range at will. Further difficulty is added if these changes must be made in real time.

Considering a generic system that reads a signal from the field (acquired by a sensor), performs some kind of processing on it, and feeds the result back to the field, two different approaches can be used in processing the data from the field:

1. an analog approach, where the signal front end interfacing and processing are performed by analog circuits (op amps are normally the fundamental building blocks) (Fig. 1a)
2. a digital approach, where, after some analog signal preprocessing, an Analog to Digital Converter is used to convert the signal amplitude to a number, followed by some kind of processing on these numbers; usually, at the end of the processing, these values are converted back to an analog signal and fed to the field (Fig. 1b).

Figures 1a and 1b

The main focus of this and the following sections will be on the system described in Figure 1b, although some of the considerations are valid for the purely analog approach as well. A general overview of the two chains will be followed by a much more detailed description of the digital system. Great emphasis will be placed on the practical information required to design a "working" digital system; tools will be suggested and investigated to enable readers to quickly apply the content of this article to their real designs.

Some preliminary considerations will help designers envision ways to increase the performance of the system. The first block to be analyzed is the Analog to Digital Converter, together with the different techniques that can be used to extend its dynamic range.

Analog to Digital Converter: higher than needed resolution
The first problem to solve is how to manage a very wide dynamic range at the input of the Analog to Digital Converter. The most obvious solution is to:

1. make use of an A/D converter with a resolution higher than necessary
2. scale the input signal so that its widest amplitude is exactly the same as the converter input range.

This way, wide-amplitude signals are converted optimally but, as the amplitude diminishes, some output code words are never generated. If the A/D converter is followed by a digital processor, it is easy to amplify the converted value in the digital domain, but the fine variations are irreparably lost.
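A quick numerical sketch (in Python, with hypothetical figures: a 16-bit converter and a 5 V input range) shows how few output codes a small signal actually exercises, and hence how much resolution a purely digital gain cannot recover:

```python
import math

# Hypothetical figures: a 16-bit ADC with a 5 V full-scale input range.
N_BITS = 16
V_FS = 5.0
LSB = V_FS / 2**N_BITS  # one code step, about 76.3 uV

def adc(v):
    """Ideal quantizer: map an input voltage to its output code."""
    code = int(v / LSB)
    return max(0, min(2**N_BITS - 1, code))

# A signal swinging over only 50 mV exercises a small slice of the code space.
codes_used = adc(0.050) - adc(0.0)
print(codes_used)                       # 655 of 65536 codes
print(round(math.log2(codes_used), 1))  # ~9.4 effective bits left
```

Multiplying the 655 available codes by any digital gain spreads them apart without creating intermediate values.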
A key concept, widely used in defining the performance of an Analog to Digital Converter, is the Signal to Noise Ratio (SNR). See Appendix 1 for details.

From the point of view of cost alone, many high resolution A/D converters are very appealing. However, the use of such precision components requires that the whole circuitry around them (input buffer op amps, reference voltage generators and so on) have the same level of precision, otherwise the added noise and reduced performance will degrade the overall accuracy. The total cost (also in terms of design and development time) can then be too high.
A much more detailed analysis of the Analog to Digital Converter is postponed to a later section, where the practical use of such a component will be presented and analyzed in detail.

Analog Front End
The Analog Front End block, possibly combined with a low-pass filter, is the circuit that normally precedes the Analog to Digital Converter. Analog Front End is, admittedly, a very generic name. Its purpose is to extract, from the signal acquired from the field, a reasonably clean waveform that contains all the relevant information the system is supposed to operate on. Typical processing performed by the Analog Front End includes filtering, level shifting, amplification, offset removal, current-to-voltage conversion, and so on.

Analog Front End: non-linear compression
One possible approach to increasing the input dynamic range of the Analog to Digital Converter is to nonlinearly compress the input signal by means of a logarithmic amplifier. This technique makes the converter appear to provide more bits of resolution than it actually has. It is the same process widely used, for instance, in the telephone network to compress the wide dynamics of speech into an 8-bit Analog to Digital Converter (Fig. 2a and 2b).

Figures 2a and 2b

The converter partitions its input range into 2^N equal intervals, but the log amplifier arranges things so that these equal voltage divisions correspond to increasingly wider input voltage intervals. In telephone applications, however, the use of such a technique is justified by the specific statistics of the signal at hand. In general, if these statistics are not known, the use of a log amplifier can lead to unexpected results.
Of course there are some drawbacks in using non-linear amplifiers:

1. first of all, the compression process non-linearly distorts the value of the Least Significant Bit, so the amplitude of the LSB differs depending on whether the signal is near full scale or near the minimum input value. Consequently this technique should not be used if high resolution is necessary over the whole input range
2. the converted value is on a log scale: if it has to be used in digital processing, some form of antilog conversion back to a linear value must be performed. Even if a lookup table is used, this kind of computation can be difficult to implement in microcontrollers with limited program memory or when very high speed computation is required
3. the argument of the log function can only be positive (to give a real value): some tricks must be implemented if the input signal is bipolar.
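As an illustration of the compression and antilog expansion steps, here is a minimal Python sketch of mu-law companding, the scheme used in North American telephony (mu = 255); samples are assumed normalized to [-1, 1] and the 8-bit quantization itself is omitted:

```python
import math

MU = 255.0  # mu-law parameter used in North American telephony

def compress(x):
    """Logarithmic compression of a normalized sample in [-1, 1]."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Antilog expansion back to the linear domain (drawback 2 above)."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# A small sample is boosted toward full scale, so a uniform quantizer
# spends proportionally more codes on small amplitudes.
print(round(compress(0.01), 2))          # ~0.23: 1% of full scale maps to ~23%
print(round(expand(compress(0.01)), 4))  # 0.01: the round trip recovers it
```

Note that taking `abs()` and restoring the sign with `copysign` is one of the "tricks" mentioned in point 3 for handling bipolar signals.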

Analog Front End: Programmable Gain Amplifiers
An alternative solution, possibly the most common one, is to place in front of the A/D converter a circuit whose gain can easily be changed by a microcontroller. A very compact and elegant option is a programmable gain amplifier (PGA), whose gain can be changed in finite steps by driving some digital input pins. A wide variety of such components exists, with different topologies (single amplifier or instrumentation amplifier; single-ended or differential input; external offset adjustment pins), but some features are common to them all:

1. the gain can vary only in a limited number of finite steps
2. these steps are fixed and usually either decadic (gains of 1, 10, 100, 1000) or binary (1, 2, 4, 8)
3. designers often have to cascade two PGAs with different gain steps to obtain the required amplification values.
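The gain selection implied by points 1 to 3 can be sketched as a hypothetical autoranging routine in Python (assumptions: a 5 V ADC input range and the decadic and binary gain steps mentioned above, cascaded):

```python
# Hypothetical autoranging: pick the largest composite PGA gain that keeps
# the amplified signal inside the ADC input range (5 V assumed here).
# Cascading a decadic PGA (1, 10, 100, 1000) with a binary one (1, 2, 4, 8)
# yields the composite gain set below.
ADC_RANGE = 5.0
DECADIC = (1, 10, 100, 1000)
BINARY = (1, 2, 4, 8)
GAINS = sorted({d * b for d in DECADIC for b in BINARY})

def select_gain(peak_input):
    """Return the highest composite gain that does not overrange the ADC."""
    usable = [g for g in GAINS if g * peak_input <= ADC_RANGE]
    return max(usable) if usable else min(GAINS)

print(select_gain(0.004))  # 4 mV peak -> gain 1000 (4 V at the ADC)
print(select_gain(0.3))    # 300 mV peak -> gain 10 (3 V at the ADC)
```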

A major problem, which designers must be well aware of, comes from the offset voltage and its drift with temperature. Typical offset specifications for a PGA are: offset, referred to input, of ±(10 + 20/G) µV; drift of ±(0.1 + 0.5/G) µV/°C.

It is interesting to compare the total offset output voltage in the two extreme cases of gain 1 and gain 1000 (Table 1).


Table 1

As shown, the offset can reach 10 mV, and the greatest value occurs at the maximum gain, that is, when the input signal is smallest. In a 16-bit A/D converter with a 5 V full-scale input, this means that 10 mV / 76.3 µV ≈ 131 LSBs are affected.
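The arithmetic can be checked with a few lines of Python (same assumptions as in the text: a 16-bit converter, 5 V full scale, about 10 mV of total output offset at maximum gain):

```python
# Checking the figures in the text: 16-bit converter, 5 V full scale.
N_BITS = 16
V_FS = 5.0
LSB = V_FS / 2**N_BITS

print(round(LSB * 1e6, 1))      # 76.3 uV per LSB
offset_out = 10e-3              # ~10 mV output offset at maximum gain
print(round(offset_out / LSB))  # 131 LSBs swallowed by the offset alone
```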
Again, it is interesting to compare two different situations in which, starting from different input values, the signal is amplified to obtain a 5 V peak-to-peak range (Table 2).

Table 2

This forces designers to correct in some way the offset of the PGA, or possibly of the whole system, either with resistive trimmers (which require calibrating each unit individually) or with more sophisticated techniques involving a DAC (which in general must be a precision DAC, since it must output voltages on the order of millivolts or even tens or hundreds of microvolts). Although possible, these techniques are sometimes difficult to implement and expensive, both in terms of component count and in terms of hardware and firmware complexity.

= = = =
Appendix 1

Signal to Noise Ratio in an Analog to Digital Converter
Let us consider a sine wave signal. Let the sine wave function be:

eq. A1.1: \( v(t) = \frac{V_{pp}}{2}\,\sin(2\pi f_0 t) \)

where Vpp is the peak-to-peak amplitude and f0 is the sine frequency.
The rms value of such a signal is:

eq. A1.2: \( V_{rms} = \frac{V_{pp}}{2\sqrt{2}} \)

Considering an N bit Analog to Digital Converter, the quantization noise rms is:

eq. A1.3: \( e_{rms} = \frac{\Delta}{\sqrt{12}} \)

where Δ is the converter LSB.
The signal to noise ratio (SNR) is defined as:

eq. A1.4: \( \mathrm{SNR} = \frac{V_{rms}}{e_{rms}} = \frac{V_{pp}/(2\sqrt{2})}{\Delta/\sqrt{12}} = 2^N \sqrt{\tfrac{3}{2}} \) (using \( V_{pp} = 2^N \Delta \))

The reason for qualifying this value as “theoretical” will become clear soon. The derivation of the quantization noise rms value can be found in Appendix 2.

If we express the SNR in dB we have:

eq. A1.5: \( \mathrm{SNR_{dB}} = 20 \log_{10}\!\left(2^N \sqrt{\tfrac{3}{2}}\right) = 6.02\,N + 1.76 \; \mathrm{dB} \)

This is the theoretical signal to noise ratio.
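The formula can be verified numerically; this short sketch compares the exact expression with the familiar 6.02N + 1.76 dB approximation for a few word lengths:

```python
import math

def theoretical_snr_db(n_bits):
    """Full-scale sine SNR of an ideal N-bit quantizer.

    SNR = (Vpp / (2*sqrt(2))) / (LSB / sqrt(12)) with Vpp = 2**N * LSB,
    i.e. 2**N * sqrt(3/2), which in dB is close to 6.02*N + 1.76.
    """
    return 20 * math.log10(2**n_bits * math.sqrt(1.5))

for n in (8, 12, 16):
    print(n, round(theoretical_snr_db(n), 2))  # 49.93, 74.01, 98.09 dB
```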

A “real” ADC will have some additional noise sources, due to static and dynamic nonlinearities, which lead to a somewhat degraded SNR. A different approach is normally used to “measure” the quality of an ADC: the concept of ENOB (Effective Number of Bits) is introduced, where:

eq. A1.6: \( \mathrm{ENOB} = \frac{\mathrm{SNR_{measured}(dB)} - 1.76}{6.02} \)

The actual value of the SNR is measured using digital signal processing techniques: a sine wave of predetermined frequency and amplitude is fed into the ADC and a set of converted values is recorded. An FFT (Fast Fourier Transform) is performed on these samples. Some care must be taken in choosing the number of samples and the number of sine wave periods used in the computation, but this is beyond the scope of this article and won't be addressed here.
The actual SNR is then computed as the ratio between the rms energy contained in the fundamental sine wave (computed by taking the square root of the sum of the squares of the appropriate number of samples around the fundamental peak) and the rms energy of the remaining samples, which represent the quantization noise (that is, the square root of the sum of the squares of these samples, excluding the dc component).
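A simplified simulation of this measurement can be sketched in pure Python. It works in the time domain rather than via an FFT, and the converter is ideal, so only the quantization noise appears; the result should land near the theoretical value for 12 bits:

```python
import math

# Quantize a full-scale sine with an ideal 12-bit converter and compare
# the rms of the signal with the rms of the quantization error.
# (A real bench measurement would run an FFT on the recorded samples.)
N_BITS = 12
SAMPLES = 4096
LSB = 2.0 / 2**N_BITS   # input range assumed to be -1 .. +1

signal, error = [], []
for k in range(SAMPLES):
    v = math.sin(2 * math.pi * 17 * k / SAMPLES)  # 17 whole periods (coherent)
    q = round(v / LSB) * LSB                      # ideal mid-tread quantizer
    signal.append(v)
    error.append(q - v)

rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
snr_db = 20 * math.log10(rms(signal) / rms(error))
print(round(snr_db, 1))  # close to the theoretical 6.02*12 + 1.76 = 74.0 dB
```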

Note that the theoretical SNR should be corrected, as shown below, in a system where we can change the sampling frequency:

eq. A1.7: \( \mathrm{SNR_{dB}} = 6.02\,N + 1.76 + 10 \log_{10}\frac{f_s}{2 f_m} \)

where fs is the sampling frequency and fm is the maximum frequency of the signal spectrum.
A consequence of this equation is that if we double the sampling frequency:

eq. A1.8: \( f_s' = 2 f_s \)

one gets

eq. A1.9: \( \mathrm{SNR_{dB}'} = \mathrm{SNR_{dB}} + 10 \log_{10} 2 \approx \mathrm{SNR_{dB}} + 3 \; \mathrm{dB} \)

Consequently the ENOB will become:

eq. A1.10: \( \mathrm{ENOB}' = \mathrm{ENOB} + \frac{3.01}{6.02} \approx \mathrm{ENOB} + 0.5 \)
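A small numerical check of the processing-gain term (hypothetical figures: a 12-bit converter with a signal bandwidth fm of 4 kHz):

```python
import math

def snr_db(n_bits, fs, fm):
    """Theoretical SNR including the oversampling term 10*log10(fs/(2*fm))."""
    return 6.02 * n_bits + 1.76 + 10 * math.log10(fs / (2 * fm))

base = snr_db(12, fs=8_000, fm=4_000)      # Nyquist-rate sampling: no gain
doubled = snr_db(12, fs=16_000, fm=4_000)  # sampling frequency doubled
print(round(doubled - base, 2))            # 3.01 dB, i.e. ~0.5 bit of ENOB
```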