
Archive for April, 2012

Google interviews

From an interesting blog post about the level of depth during a Google technical interview:

They also sent me an email with advice. It can be summed up as “You should know everything. If it’s to do with computers, you should know it. Here are 5 books and 4 fancy algorithms you should read. You must also be intimately familiar with all these basic-ish algorithms. This is your two week notice. Good luck. Oh and take a look at these videos too!”

I have a few friends who work at Google, and they are all top-notch engineers and thinkers; that is no coincidence. Any good organization has a technical screening process that covers more than just the basics, and it looks like Google is no exception. In fact, the note above makes it seem like they go out of their way to ensure the candidate comes prepared to show their best, something that not all companies do. The important thing to note is that Google is looking for smart software engineers rather than simply web developers.

Edit: For whatever reason, the linked blog is down. However, the cached version of the blog can be found here (cached by Google… so meta).


During a recent interview, the great musician Neil Young expressed his desire for high-quality formats for music downloads. In the context of popular music, “high-quality” refers to file formats that preserve high-resolution samples (e.g. 24-bit) at rates much higher than necessary (e.g. 192 kHz). While recording the original sources at higher sampling rates may provide some benefits with respect to the particular equipment being used, preserving these rates for the distribution of the final mixed music makes no sense. An excellent discussion of why such formats are unnecessary can be found here (xiph.org).

Over the years, many “audiophiles” have insisted on creating new file formats that distribute audio at absurd sampling rates, for the sake of “remaining faithful to the original audio waveform.” While many people know about the sampling theorem, a common misconception arises when a lay-person looks at an image of a sampled waveform and tries to apply intuition: they see the output of the sampling process as a disjointed and distorted-looking stair-step response.

It has become common practice to represent the waveform sampled by an analog-to-digital converter (ADC) as a stair-step response (including on this blog). This representation is not strictly correct, because it presumes that signals produced at the output of an ADC have a continuous-time representation. What actually emerges from an ADC is a signal in the discrete-time domain, where the waveform is discontinuous and exists only at the sampling instants. This may seem like a trivial point, but there are ramifications for the untrained eye. When someone who is not well-versed in signal processing theory views an image showing the classic stair-step sampled waveform, their mind intuitively interprets it as a grossly degraded version of the original waveform. This leads scores of “audiophiles” to incorrectly assume that an audio signal sampled at 192 kHz is inherently “more accurate” than the traditional (and sufficient) rates of 44.1 kHz (compact disc) or 48 kHz.

In reality, the output of an ADC looks more like a discontinuous sequence of points (“dots”) which, when interpolated, recreate the original signal. When such an image is shown to the human eye, the sampled waveform does not appear as distorted as the stair-step representation. The digital (discrete-time) circuitry that follows the ADC has no concept of what the signal might look like in-between the samples. The signal only exists at the active clock edges, and, as long as Nyquist is satisfied, the samples accurately represent the input waveform (assuming all setup-time and hold-time constraints are also satisfied).
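To make the picture of the “dots” concrete, here is a minimal numerical sketch (written in Python with numpy purely for illustration; the 48 kHz rate, 1 kHz tone, and buffer length are arbitrary choices). It treats the ADC output as nothing more than a sequence of sample values and then recreates the waveform in-between them by ideal (sinc) interpolation, which is exact for a band-limited input apart from edge effects:

    import numpy as np

    fs = 48_000.0                 # sampling rate in Hz (arbitrary choice)
    f0 = 1_000.0                  # tone frequency, comfortably below fs/2
    n = np.arange(64)             # sample indices: the "dots" an ADC hands us
    x = np.sin(2 * np.pi * f0 * n / fs)

    # Dense time grid in-between the samples, for evaluating the reconstruction.
    t = np.linspace(0, (len(n) - 1) / fs, 4000)

    # Ideal reconstruction: a sinc pulse centered on every sample instant,
    # scaled by that sample's value, all summed together.
    x_hat = np.array([np.sum(x * np.sinc((ti - n / fs) * fs)) for ti in t])

    # Away from the edges, the interpolated curve closely tracks the underlying
    # tone; the residual comes from truncating the ideally infinite sinc sum.
    x_true = np.sin(2 * np.pi * f0 * t)
    mid = slice(1000, 3000)
    print("max mid-span error:", np.max(np.abs(x_hat[mid] - x_true[mid])))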

In order to understand how the stair-step response comes about, we need to consider the operation of a digital-to-analog converter (DAC). When converting a discrete-time signal to a continuous-time waveform, something known as a reconstruction filter is required. This reconstruction filter is specially designed to produce a continuous-time output when provided with a discrete-time input. A common type of DAC reconstruction filter is the zero-order hold, which is implemented by simply holding constant the previous sample until the next sample is encountered. The zero-order hold reconstruction filter is what leads to the aforementioned stair-step representation of the input signal. The sight of this repulsive-looking waveform leads to further questions. What do those “stair-steps” represent? Are they harmful? How do we remove these effects to recreate the original smooth signal? In order to answer these questions, we must dig deeper.
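The zero-order hold itself is trivial to mimic. In the sketch below (again just an illustration with arbitrary parameters), each sample is simply repeated until the next one arrives, which is precisely what produces the stair-step waveform:

    import numpy as np

    fs = 8_000.0                             # sampling rate (arbitrary)
    n = np.arange(32)
    x = np.sin(2 * np.pi * 500.0 * n / fs)   # the discrete-time samples ("dots")

    upsample = 16                            # dense points per sampling period
    stairs = np.repeat(x, upsample)          # hold each value for a full period
    t_dense = np.arange(stairs.size) / (fs * upsample)

    # 'x' exists only at the instants n/fs; 'stairs' approximates the
    # continuous-looking waveform a ZOH DAC puts out before any smoothing.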

The filtering operation is basically a time-domain convolution of the input signal with the filter’s impulse response. This corresponds to a multiplication in the frequency domain. The impulse response of a zero-order hold reconstruction filter is a single square pulse, with a width equal to the sampling period. Its frequency-domain representation looks like a sinc function, which continues forever in both positive and negative frequency directions, with nulls at multiples of the sampling rate. Any discrete-time signal has a frequency-domain representation which contains an infinite number of copies of the input signal band, spaced at multiples of the sampling rate. The time-domain convolution of this signal with the reconstruction filter is equivalent to the multiplication of their frequency-domain representations.
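These properties are easy to check numerically. Using numpy’s normalized sinc convention (sinc(x) = sin(πx)/(πx)), the zero-order hold’s magnitude response is |sinc(f/fs)|, with nulls exactly at multiples of the sampling rate and a slow droop across the pass-band. A quick sketch (the 48 kHz figure is just an example):

    import numpy as np

    fs = 48_000.0                        # example sampling rate
    f = np.linspace(0.0, 4 * fs, 2001)   # frequency axis out to 4*fs
    H = np.abs(np.sinc(f / fs))          # ZOH magnitude response

    # Nulls land exactly on multiples of the sampling rate...
    for k in (1, 2, 3):
        idx = np.argmin(np.abs(f - k * fs))
        print(f"|H| at {k}*fs: {H[idx]:.2e}")

    # ...and the pass-band droops: at fs/2 the hold already costs about 3.9 dB.
    print("droop at fs/2:", 20 * np.log10(np.abs(np.sinc(0.5))), "dB")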

As a result, the reconstructed signal still contains an infinite number of copies of the original waveform, albeit attenuated as we move further and further away from the origin in the frequency domain. The presence of these higher-frequency copies is what leads to the stair-step shape of the signal waveform. As long as the repeated copies can be removed without harming the primary signal band, the original signal can be perfectly reproduced without any loss. These copies need to be filtered out in order to leave us with a clean single spectral copy of the original waveform. This is usually achieved using a low-pass filter at the output. Throughout this signal chain, there are practical issues that need to be dealt with, such as correcting the pass-band droop in both the reconstruction and the low-pass filters, as well as compensating for any phase non-linearities.
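As a rough numerical sanity check (again only a sketch with arbitrary rates, tone, and filter length; a real DAC would perform this smoothing in the analog domain or with a dedicated interpolation filter), the ZOH output can be modeled on a dense grid, the level of its first spectral image measured, and a garden-variety FIR low-pass applied to strip the images away:

    import numpy as np
    from scipy import signal

    fs, f0, upsample = 8_000.0, 500.0, 16
    fs_dense = fs * upsample              # dense grid standing in for "analog"

    n = np.arange(4096)
    x = np.sin(2 * np.pi * f0 * n / fs)
    stairs = np.repeat(x, upsample)       # zero-order-hold (stair-step) waveform

    def level_db(sig, freq):
        # Spectral magnitude at 'freq' relative to the baseband tone at f0.
        spec = np.abs(np.fft.rfft(sig * np.hanning(sig.size)))
        fgrid = np.fft.rfftfreq(sig.size, d=1.0 / fs_dense)
        pick = lambda fr: spec[np.argmin(np.abs(fgrid - fr))]
        return 20 * np.log10(pick(freq) / pick(f0))

    print("first image (fs - f0) before filtering:", level_db(stairs, fs - f0), "dB")

    # Linear-phase FIR low-pass on the dense grid: keep the baseband,
    # reject the images clustered around fs, 2*fs, and so on.
    taps = signal.firwin(801, cutoff=fs / 2, fs=fs_dense)
    smooth = signal.lfilter(taps, [1.0], stairs)

    print("first image (fs - f0) after filtering: ", level_db(smooth, fs - f0), "dB")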

The point of all this is that what actually emerges at the output of an ADC is a series of instantaneous sample dots, floating in time and space, ready to be consumed by the next discrete-time (digital) processing circuit. The human brain finds it much easier to spatially interpolate these points and imagine them to be a reasonably accurate representation of the original waveform. However, a stair-step depiction is not only rejected by our intuition; strictly speaking, it is also not what actually emerges from the ADC as digital samples. The stair-step waveform more closely represents an intermediate signal within a DAC that happens to use a zero-order hold reconstruction filter, and this is the wrong waveform to which the lay-person’s intuition should be applied.

Copyright © 2012 Waqas Akram. All Rights Reserved.
