What is discussed here is the observation that the distribution of the quantization error looks most uniform when the signal amplitude occupies the full conversion range, but becomes less uniform (or more obviously signal-dependent) as the signal amplitude decreases.
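
A quick way to see this effect (a minimal sketch, assuming a uniform mid-tread quantizer; the 8-bit resolution and the sine test signals are illustrative, not from the original post): at full scale the error is close to uniform and essentially uncorrelated with the signal, while a signal smaller than one quantization step never leaves the zero cell, so the "error" is just the negated signal itself.

```python
import numpy as np

def quantize(x, n_bits=8, full_scale=1.0):
    """Uniform mid-tread quantizer over [-full_scale, +full_scale]."""
    step = 2 * full_scale / 2 ** n_bits
    return step * np.round(x / step)

step = 2 / 2 ** 8
t = np.linspace(0, 1, 100_000, endpoint=False)

# Full-scale sine: the error sweeps many quantization cells and looks
# uniform, with std close to the classic step/sqrt(12) model.
full = np.sin(2 * np.pi * 12.7 * t)   # non-integer frequency avoids repeats
err_full = quantize(full) - full
corr_full = np.corrcoef(full, err_full)[0, 1]

# Sub-step sine: the quantizer outputs a constant zero, so the error is
# exactly the negated signal -- completely signal-dependent.
small = 0.003 * np.sin(2 * np.pi * 12.7 * t)
err_small = quantize(small) - small
corr_small = np.corrcoef(small, err_small)[0, 1]

print(f"full scale: error std {err_full.std():.5f} "
      f"(uniform model predicts {step / 12 ** 0.5:.5f}), "
      f"signal/error correlation {corr_full:+.3f}")
print(f"sub-step:   signal/error correlation {corr_small:+.3f}")
```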

From a Bayesian perspective (yeah, I am one of those Bayes fans), you are arguing that the influence of the chosen distribution (the prior, in Bayesian terms) decreases as more data becomes available through the higher sampling rate. Do I get that right?

I work on problems where obtaining those data points is very expensive (solving systems of differential equations), so the prior has a significant influence on the error distribution.
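
The shrinking pull of the prior can be sketched with a conjugate normal-normal model (all numbers here are made up for illustration): the posterior mean is a precision-weighted blend of the prior mean and the sample mean, and the prior's share of that weight fades as n grows.

```python
import numpy as np

# Prior N(mu0, tau0^2) on the mean theta of N(theta, sigma^2) data,
# with sigma known; the values are purely illustrative.
mu0, tau0, sigma, theta_true = 5.0, 1.0, 2.0, 0.0

rng = np.random.default_rng(42)
pulls = []  # how far the prior drags the posterior mean off the sample mean
for n in (1, 10, 100, 10_000):
    data = rng.normal(theta_true, sigma, n)
    # Posterior precision = prior precision + n * per-observation precision.
    post_var = 1 / (1 / tau0 ** 2 + n / sigma ** 2)
    post_mean = post_var * (mu0 / tau0 ** 2 + data.sum() / sigma ** 2)
    pulls.append(abs(post_mean - data.mean()))
    print(f"n={n:>6}: posterior mean {post_mean:+.3f}, "
          f"prior pull {pulls[-1]:.4f}")
```

With plenty of cheap samples the pull is negligible; with a handful of expensive solves it dominates, which is the situation described above.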

The Fink Project will always have a warm place in my heart, for it blazed the trail of bringing reliably packaged open-source software to the once-nascent Unix/OS X community.
