This is a continuation of a discussion about quantization and analog-to-digital converters. In that discussion, the normalized quantization step through an N-bit ADC was denoted *q*, where *q* = 1/2^{N}. The ADC encoder transfer function yielded a quantization error range over the interval [-*q*/2,+*q*/2].

Quantization is a highly nonlinear process. Denoting the input and output of a quantizer as *u[n]* and *u _{q}[n]*, respectively, the quantization error

*e[n]* = *u _{q}[n]* − *u[n]*

can be re-arranged to yield the additive noise model of quantization error:

*u _{q}[n]* = *u[n]* + *e[n]*,

where *e[n]* is the quantization error.
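The additive noise model can be sketched numerically. The sketch below (assuming NumPy and a mid-tread uniform quantizer; `quantize` is a hypothetical helper, not from the original text) quantizes a full-scale sine and recovers the error term:

```python
import numpy as np

def quantize(u, n_bits):
    """Mid-tread uniform quantizer with step q = 1/2^N (hypothetical helper)."""
    q = 1.0 / 2**n_bits
    return q * np.round(u / q)

# Full-scale sine, peak-to-peak swing normalized to unity
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
u = 0.5 * np.sin(2 * np.pi * t)
u_q = quantize(u, 4)
e = u_q - u   # additive noise model: u_q[n] = u[n] + e[n], with |e| <= q/2
```

With a 4-bit quantizer the error never exceeds half a step, *q*/2 = 1/32, consistent with the encoder transfer function described earlier.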

The figure below shows the quantization error for a full-scale sine wave over a single period. Also shown is the quantization error for a full-scale sawtooth ramp signal.

Although the quantization error from the sinusoid is signal-dependent and nonlinear, the commonly used additive noise model assumes a stochastic process in order to simplify the analysis. In particular, the error is treated as an independent and identically distributed (i.i.d.) random variable.

If the quantization error is modeled as a random variable with a uniform distribution, the probability density function is given by:

*p(e)* = 1/*q* for −*q*/2 ≤ *e* ≤ +*q*/2, and *p(e)* = 0 otherwise.

The root mean-square (RMS) quantization error with such a distribution can thus be derived:

*e _{RMS}* = √(E[*e*^{2}]) = √(*q*^{2}/12) = *q*/√12
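As a quick numerical check of this result (a sketch assuming NumPy, not part of the original derivation), drawing uniform samples over [−*q*/2, +*q*/2] gives a sample RMS close to *q*/√12:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits = 8
q = 1.0 / 2**n_bits

# Draw i.i.d. uniform errors over [-q/2, +q/2], as the model assumes
e = rng.uniform(-q / 2, q / 2, size=1_000_000)
e_rms = np.sqrt(np.mean(e**2))   # should approach q / sqrt(12)
```

With a million samples the measured RMS agrees with *q*/√12 to well under one percent.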

The RMS value of a full-scale sinusoid whose peak-to-peak swing has been normalized to unity is given by:

*u _{RMS}* = (1/2)·(1/√2) = 1/(2√2)

The signal-to-quantization-noise ratio (SQNR) through the ADC can then be computed and expressed in decibels (dB) as:

SQNR = 20·log _{10}(*u _{RMS}*/*e _{RMS}*) = 20·log _{10}( (1/(2√2)) · (√12/*q*) ) = 20·log _{10}( √(3/2)/*q* )

Substituting *q* = 1/2^{N} gives:

SQNR = 20·log _{10}(√(3/2)) + 20·*N*·log _{10}(2) ≈ 6.02·*N* + 1.76 dB

This is the well-known equation for the SNR, or dynamic range, through an N-bit ADC under the additive noise model of quantization error, in the absence of all other noise sources such as thermal noise in the analog circuitry, dither, and sampling jitter. Note that no oversampling is assumed here.
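The prediction can be checked empirically. The sketch below (assuming NumPy; `sqnr_db` is a hypothetical helper) quantizes a full-scale sine at an irrational normalized frequency, so the error pattern does not repeat, and measures the resulting SQNR:

```python
import numpy as np

def sqnr_db(n_bits, n_samples=100_000):
    """Measured SQNR of a quantized full-scale sine (hypothetical helper)."""
    q = 1.0 / 2**n_bits
    n = np.arange(n_samples)
    # An irrational normalized frequency keeps the error from repeating
    u = 0.5 * np.sin(2 * np.pi * (np.sqrt(2) / 100) * n)
    e = q * np.round(u / q) - u
    return 10 * np.log10(np.mean(u**2) / np.mean(e**2))

measured = sqnr_db(12)
predicted = 6.02 * 12 + 1.76   # about 74 dB for a 12-bit ADC
```

At 12 bits the measured value tracks 6.02·N + 1.76 dB closely, consistent with the model improving as precision increases.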

This analysis assumes that quantization errors are uniformly distributed over the quantization interval. In reality, the errors are not uniformly distributed for a sinusoidal input. For example, referring back to the time-domain quantization error from a sinusoid and a sawtooth ramp shown in the figure above, the respective error distributions are shown in the figure below.

The quantization error of the sawtooth wave appears to be uniformly distributed, but that of the sinusoid is clearly not. This is due to the signal-dependence of the sinusoid’s quantization error. Since the sawtooth actually produces uniformly distributed quantization errors, it is instructive to compute the SQNR from quantizing such a signal.

The RMS value of a full-scale sawtooth whose peak-to-peak swing has been normalized to unity is given by:

*u _{RMS}* = 1/√12

Using the RMS quantization error derived above for a uniformly distributed quantization error, the SQNR of a sawtooth wave applied to an ADC can be expressed as:

SQNR = 20·log _{10}( (1/√12)·(√12/*q*) ) = 20·log _{10}(1/*q*) = 6.02·*N* dB
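Because the sawtooth's quantization errors really are uniformly distributed, this result can be verified directly. The sketch below (assuming NumPy; the sample count is an arbitrary choice, coprime with the number of codes so the errors sweep the full interval) quantizes a single full-scale ramp at 10 bits:

```python
import numpy as np

n_bits = 10
q = 1.0 / 2**n_bits

# Full-scale sawtooth ramp, peak-to-peak swing normalized to unity
u = np.arange(100_003) / 100_003 - 0.5
e = q * np.round(u / q) - u            # uniformly distributed over [-q/2, q/2]
sqnr = 10 * np.log10(np.mean(u**2) / np.mean(e**2))
```

The measured value lands on 6.02·N ≈ 60.2 dB, without the 1.76 dB term that the sinusoid's higher RMS contributes.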

In general, the computed SQNR depends on the signal source and the model used for the quantization error. For sinusoidal inputs, the approximation of uniformly distributed quantization error improves as the ADC precision increases.

The figure below compares the error distribution of the sawtooth with that of the sinusoid at four ADC resolutions (3 bits, 6 bits, 9 bits, and 12 bits). Clearly, the sinusoid's error distribution approaches the uniform distribution produced by the sawtooth as the ADC resolution is increased.
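This convergence can be quantified rather than just eyeballed. The sketch below (assuming NumPy; `error_nonuniformity` is a hypothetical helper) measures the maximum deviation of the sinusoid's empirical error CDF from the uniform CDF, at the same four resolutions as the figure:

```python
import numpy as np

def error_nonuniformity(n_bits, n_samples=200_000):
    """Max deviation of the empirical error CDF from the uniform CDF
    (a Kolmogorov-style distance; hypothetical helper)."""
    q = 1.0 / 2**n_bits
    n = np.arange(n_samples)
    u = 0.5 * np.sin(2 * np.pi * (np.sqrt(2) / 100) * n)
    e = np.sort(q * np.round(u / q) - u)
    uniform_cdf = (e + q / 2) / q                    # CDF of U(-q/2, +q/2)
    empirical_cdf = np.arange(1, e.size + 1) / e.size
    return np.max(np.abs(empirical_cdf - uniform_cdf))

# Deviation shrinks as resolution grows (3, 6, 9, 12 bits, as in the figure)
devs = [error_nonuniformity(n) for n in (3, 6, 9, 12)]
```

The deviation at 3 bits is far larger than at 12 bits, mirroring the histograms in the figure.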

Modeling the SQNR as roughly 6 dB per bit of ADC precision is a good approximation, especially as the ADC precision increases. For many signal processing applications, the usefulness of approximating the quantization error as an i.i.d. noise source far exceeds the inaccuracy of the model.

**Copyright © 2008 – 2012 Waqas Akram. All Rights Reserved.**

**Comment** by Georg Stadler (@georgst), January 31, 2012 at 12:11 pm: Would it really be so much more expensive to use a different pdf for the error?

From a Bayesian perspective (yeah, I am one of those Bayes fans) you are making the argument that the influence of the distribution choice (that’s the prior in the Bayesian context) decreases as more data through the higher sampling rate is available. Do I get that right?

I work on problems where getting those data points is very expensive (solution of systems of differential equations) so the prior has a significant influence on the error distribution.

**Reply** by dwellangle, February 4, 2012 at 11:08 pm: This is not really a problem of sample rate, but of modeling the quantization error, which is always signal-dependent. Modeling it as an additive source of independent errors facilitates the use of classical linear techniques for circuit analysis.

What is discussed here is the observation that the distribution of the quantization error looks the most uniform when the signal amplitude occupies the full conversion range, but becomes less uniform (or more obviously signal dependent) as the signal amplitude gets smaller.