The Fibonacci sequence is the sequence of numbers formed when each number in the sequence is equal to the sum of the two Fibonacci numbers immediately preceding it. The sequence is initialized by the numbers 0 and 1, and looks something like this: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, etc. It is trivial to design a digital filter that produces the Fibonacci sequence as its impulse response. The impulse response of a discrete-time or digital filter is the response produced when the filter is stimulated by the Kronecker delta function:

δ[n] = 1 for n = 0, and δ[n] = 0 otherwise

If the initial state of such a filter is such that all internal elements are at zero, the filter described by the following difference equation can produce an impulse response equal to the Fibonacci sequence. The input and output sequences are denoted *x[n]* and *y[n]*, respectively:

y[n] = x[n] + y[n-1] + y[n-2]

Taking the *Z*-transform of this difference equation yields the following transfer function *H(z)* through the filter:

H(z) = Y(z)/X(z) = 1/(1 - z^{-1} - z^{-2}) = z^2/(z^2 - z - 1)

This transfer function has two *zeros* at the origin, and a pair of real-valued *poles* at *z*=-0.618 and *z*=1.618. A pole-zero plot can quickly be produced using the *roots* and *zplane* commands in Matlab:

zplane(roots([1,0,0]), roots([1,-1,-1]))

For the transfer function *H(z)*, the coefficient vectors of the numerator and denominator polynomials in *z*^{-1} are given by *B*=[1] and *A*=[1,-1,-1], respectively. The frequency response of the filter can easily be computed using the *freqz* command in Matlab. In the following snippet of code, *H* is the complex-valued frequency response and *W* is the vector of radian frequencies at which *H* has been computed. The following plot shows the magnitude of the frequency response.

% numerator polynomial
B = [1];
% denominator polynomial
A = [1,-1,-1];
% compute frequency response
[H,W] = freqz(B,A);
% plot the magnitude response (normalize W to pi)
plot(W/pi,abs(H));

The frequency response is computed along the unit circle in the *Z*-domain (*z*=e^{jw}). Since the response is symmetric, the above plot only shows *W*/pi going from 0 to 1. As expected, the magnitude response shows peaks near 0 and 1, due to the relative proximity of the poles to these two points.

In the time domain (or sample domain), the response of such a system to a particular input sequence can easily be computed using the *filter* command in Matlab. In the following fragment of code, the above filter polynomials are used, along with initial states of zero. The input stimulus sequence *x* is ten samples of a Kronecker delta function, starting at *n*=0 and ending at *n*=9. The filter output *y* skips the leading 0 of the sequence and proceeds through the next ten Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55.

% Kronecker delta function starting from n=0 to n=9
x = [1,zeros(1,9)];
% initial conditions of filter
init = [0,0];
% compute the filter response
y = filter(B,A,x,init);

A quicker way to compute the impulse response is through the *impz* command. The following line of code not only gives the first 10 values of the impulse response, it also produces a stem-line plot of the impulse response.

% compute impulse response
impz(B,A,10)

A simpler way to implement a Fibonacci sequence generator would be to discard the input *x* and initialize the filter states to [1,0] instead of [0,0]. In any case, the Fibonacci filter example is not really practical, as the numbers keep growing until the internal states exceed the numerical limits of the machine on which the filter is implemented. An unbounded response indicates that the filter is unstable, and this is confirmed by the location of a pole outside the unit circle in the *Z*-domain.
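For readers without Matlab, the same difference equation, y[n] = x[n] + y[n-1] + y[n-2], can be sketched in a few lines of Python. This is an illustrative translation, not code from the original post:

```python
def fib_filter(x):
    """Apply y[n] = x[n] + y[n-1] + y[n-2] with all-zero initial state."""
    y = []
    for n, xn in enumerate(x):
        y1 = y[n - 1] if n >= 1 else 0  # y[n-1], zero before the start
        y2 = y[n - 2] if n >= 2 else 0  # y[n-2], zero before the start
        y.append(xn + y1 + y2)
    return y

# Kronecker delta: ten samples, n = 0..9
x = [1] + [0] * 9
print(fib_filter(x))  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```

Stimulating the filter with the delta function reproduces the Fibonacci numbers after the initial 0, just as the *filter* command does above.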

**Copyright © 2012 Waqas Akram. All Rights Reserved.**


They also sent me an email with advice. It can be summed up as “You should know everything. If it’s to do with computers, you should know it. Here are 5 books and 4 fancy algorithms you should read. You must also be intimately familiar with all these basic-ish algorithms. This is your two week notice. Good luck. Oh and take a look at these videos too!”

I have a few friends who work at Google, and they are all top-notch engineers and thinkers, so it’s not a coincidence. Any good organization has a technical screening process that covers more than just the basics, and it looks like Google is no exception. In fact, the above note makes it seem like they go out of their way to ensure the candidate comes prepared to show their best, something that not all companies do. In this case, the important thing to note is that Google is more interested in smart software engineers than simply web developers.

**Edit:** For whatever reason, the linked blog is down. However, the cached version of the blog can be found here (cached by Google… so meta).


Over the years, many “audiophiles” have insisted on creating new file formats and distributing audio files at absurd sampling rates, for the sake of “remaining faithful to the original audio waveform.” While many people know about the sampling theorem, a common misconception arises when the lay-person looks at the image of a sampled waveform and tries to apply their intuition: they see the output of the sampling process as a disjointed, distorted-looking stair-step response.

It has become common practice to represent the waveform sampled by an analog-to-digital converter (ADC) as a stair-step response (including on this blog). This representation is not strictly correct, because it presumes that signals produced at the output of an ADC have a continuous-time representation. What actually emerges from an ADC is a signal in the discrete-time domain, where the waveform is discontinuous and exists only at the sampling instants. This may seem like a trivial point, but there are ramifications for the untrained eye. When someone who is not well-versed in signal processing theory views an image showing the classic stair-step sampled waveform, their mind intuitively sees a grossly degraded version of the original waveform. This leads scores of “audiophiles” to incorrectly assume that an audio signal sampled at 192 kHz is inherently “more accurate” than the more traditional (and sufficient) rates of 44.1 kHz (compact disc) or 48 kHz.

In reality, the output of an ADC looks more like a discontinuous sequence of points (“dots”) which, when interpolated, recreate the original signal. When such an image is shown to the human eye, the sampled waveform does not appear as distorted as the stair-step representation. The digital (discrete-time) circuitry that follows the ADC has no concept of what the signal might look like in between the samples. The signal only exists at the active clock edges, and as long as the Nyquist criterion is satisfied, the samples accurately represent the input waveform (assuming all setup-time and hold-time constraints also remain satisfied).
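The claim that the dots alone suffice can be checked with ideal bandlimited (sinc) interpolation. The following Python sketch is an illustration using an assumed test sinusoid, not anything from the original post; it reconstructs the waveform's value midway between two sampling instants from the samples alone:

```python
import math

def sinc_interp(samples, t, T=1.0):
    """Ideal bandlimited interpolation: x(t) = sum_n x[n] * sinc((t - n*T)/T)."""
    total = 0.0
    for n, xn in enumerate(samples):
        u = (t - n * T) / T
        total += xn * (1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u))
    return total

# Sample a sinusoid well below Nyquist (f = 0.05 cycles per sample)
f = 0.05
samples = [math.sin(2 * math.pi * f * n) for n in range(200)]

# Evaluate midway between two samples, far from the record edges
t = 100.5
err = abs(sinc_interp(samples, t) - math.sin(2 * math.pi * f * t))
print(err)  # small residual, due only to truncating the sinc tails
```

With an infinitely long record the reconstruction would be exact; the small residual here comes from truncating the interpolation to a finite number of samples.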

In order to understand how the stair-step response comes about, we need to consider the operation of a digital-to-analog converter (DAC). When converting a discrete-time signal to a continuous-time waveform, something known as a reconstruction filter is required. This reconstruction filter is specially designed to produce a continuous-time output when provided with a discrete-time input. A common type of DAC reconstruction filter is the zero-order hold, which is implemented by simply holding constant the previous sample until the next sample is encountered. The zero-order hold reconstruction filter is what leads to the aforementioned stair-step representation of the input signal. The sight of this repulsive-looking waveform leads to further questions. What do those “stair-steps” represent? Are they harmful? How do we remove these effects to recreate the original *smooth* signal? In order to answer these questions, we must dig deeper.
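As a toy model (not from the post), the zero-order hold can be mimicked by repeating each sample for the duration of one sampling period, which is exactly what produces the stair-step shape:

```python
def zero_order_hold(samples, oversample=4):
    """Hold each sample constant for `oversample` output points (the stair-step)."""
    out = []
    for s in samples:
        out.extend([s] * oversample)
    return out

print(zero_order_hold([0, 1, 2], oversample=3))  # [0, 0, 0, 1, 1, 1, 2, 2, 2]
```

Plotting such an output against a finer time axis yields the familiar stair-step picture, even though the underlying discrete-time signal is still just the original sample values.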

The filtering operation is basically a time-domain convolution of the input signal with the filter’s impulse response. This corresponds to a multiplication in the frequency domain. The impulse response of a zero-order hold reconstruction filter is a single square pulse, with a width equal to the sampling period. Its frequency-domain representation looks like a sinc function, which continues forever in both positive and negative frequency directions, with nulls at multiples of the sampling rate. Any discrete-time signal has a frequency-domain representation which contains an infinite number of copies of the input signal band, spaced at multiples of the sampling rate. The time-domain convolution of this signal with the reconstruction filter is equivalent to the multiplication of their frequency-domain representations.
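The sinc-shaped magnitude response described above can be evaluated directly. This Python sketch (illustrative, with an assumed sampling rate of 48 kHz) normalizes the zero-order hold response to unity at DC and shows the nulls at multiples of the sampling rate, as well as the droop already present at the Nyquist frequency:

```python
import math

def zoh_mag(f, fs=48000.0):
    """Magnitude response of a zero-order hold, normalized to unity at DC."""
    u = f / fs
    return 1.0 if u == 0 else abs(math.sin(math.pi * u) / (math.pi * u))

print(zoh_mag(0.0))      # 1.0 (no attenuation at DC)
print(zoh_mag(48000.0))  # ~0 (null at the sampling rate)
print(zoh_mag(24000.0))  # ~0.637 (2/pi: droop at the Nyquist frequency)
```

The droop within the pass-band is one of the practical issues mentioned later that must be compensated elsewhere in the signal chain.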

As a result, the reconstructed signal still contains an infinite number of copies of the original waveform, albeit attenuated as we move further and further away from the origin in the frequency domain. The presence of these higher-frequency copies is what leads to the stair-step shape of the signal waveform. As long as the repeated copies can be removed without harming the primary signal band, the original signal can be perfectly reproduced without any loss. These copies need to be filtered out in order to leave us with a clean *single* spectral copy of the original waveform. This is usually achieved using a low-pass filter at the output. Throughout this signal-chain, there are practical issues that need to be dealt with, such as correcting the pass-band droop in both the reconstruction and the low-pass filters, as well as compensating for any phase non-linearities.

The point of all this is that what *actually* emerges at the output of an ADC is a series of instantaneous sample *dots*, floating in time and space, and ready to be consumed by the next *discrete-time* (digital) processing circuit. The human brain finds it much easier to spatially interpolate these points and imagine these to be a reasonably accurate representation. However, a stair-step depiction of the waveform is not only rejected by our intuition, but strictly speaking, it is also not what actually emerges from the ADC as digital samples. The stair-step waveform more closely represents an intermediate signal within a DAC that happens to use a zero-order hold reconstruction filter, and this is the wrong waveform to which the lay-person’s intuition should be applied.



*Free Range VHDL*, by Brian Mealy and Fabrizio Tappero, is the latest in a long line of books written about VHDL. What makes this a unique offering is its simplicity as an introductory text, and the fact that it is absolutely free. The book is offered under the Creative Commons Attribution-ShareAlike Unported License, and can be downloaded in PDF and LaTeX format from the Free Range Factory website.

The book is far from a complete reference on VHDL, missing coverage of several important topics. These omissions are actually the basis for its strength. As anyone trying to learn a new language can attest, there is a fine line between introducing basic concepts and overloading the reader with unnecessary information. If you want to learn VHDL, this book may not be the final word, but can most certainly serve as the first.


When creating technical documents using the LaTeX document markup language, mathematical equations can get complicated enough to require frequent visits to the corresponding reference manuals. This tool is apparently written using JavaScript and simplifies the process of creating equations through an intuitive interface and a simple cut-and-paste operation.

It does pretty well with my track-pad written Discrete Fourier Transform:
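For reference, the equation in question, the Discrete Fourier Transform, is conventionally written in LaTeX as:

```latex
X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j 2 \pi k n / N}, \qquad k = 0, 1, \ldots, N-1
```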


In order for an educational institution to improve affordability and value for students, it needs to either drastically cut costs, improve enrollment using the same resources, or both. Online education has the potential to reach whole new markets around the world, as shown by the recent example of Sebastian Thrun’s class *Introduction to Artificial Intelligence*. The Stanford professor allowed anyone to enroll in the class for free, taught the exact same course material, and handed out the same homework assignments as the paying students received. The response was overwhelming, to the tune of more than 160,000 students world-wide. Some students created an online study group on the aggregation website Reddit. Here is NPR’s story about Thrun’s class on All Things Considered.

That this is more affordable to an institution is unquestionable. Once the lectures are recorded, universities need only provide the server capacity to handle student clients from all over the world. Online tests and homework assignments make the job of grading that much easier. Humans need not be involved once everything is in place.

The benefits of online education are numerous, but two points stand out above the rest: the lowering of educational costs, and the ease with which quality education will reach parts of the world (and the country) where this was not possible before. One of the greatest causes for concern with a young family is the kids’ college fund. Wages have to keep up not only with inflation, but with already-astronomical and still rising educational costs.

Thrun’s experiment in reaching students around the world made him realize the importance of reaching such a wide and socio-economically disadvantaged audience. He has now started an online university called Udacity, offering free online coursework to anyone around the world. Other initiatives have been in place for longer, such as the Khan Academy, MIT’s OpenCourseWare initiative and the OpenCourseWare Consortium.

Initiatives like these indicate that the established providers of quality education are about to see a rapid rise in competition for students and their tuition fees. As the competition heats up, there is a push for more conventional universities to shift to a more technology-driven curriculum in order to lower their cost-per-student. It may be easy to dismiss faculty concerns as threats to entrenched interests, but there are some genuine concerns.

Once the material has been recorded and uploaded, universities have little incentive to update the material as often, given the relatively higher cost to re-record lectures and keep the material fresh. There would be a disturbance in the environment that fosters bidirectional exchanges of ideas and the organic atmosphere between students and instructors that tends to improve lecture styles year-over-year. The material and focus of the lectures may not be as dynamic and responsive to student needs as the semester progresses. Finally, there would be a threat to the teaching profession as fewer professors are hired, reducing the amount of fundamental research done at the university.

Indeed, at Stanford University itself, there is a renewed sense of urgency to provide a more diverse portfolio of online coursework, through programs such as the Center for Professional Development and Stanford Engineering Everywhere. It remains to be seen whether the online classroom will ultimately prove to be a great boon to education, but it is clear that the upside in the short-term far outweighs the downside.


From their abstract:

We consider the problem of computing the k-sparse approximation to the discrete Fourier transform of an n-dimensional signal. We show:

• An O(k log n)-time algorithm for the case where the input signal has at most k non-zero Fourier coefficients, and

• An O(k log n log(n/k))-time algorithm for general input signals.

Both algorithms achieve o(n log n) time, and thus improve over the Fast Fourier Transform, for any k = o(n). Further, they are the first known algorithms that satisfy this property. Also, if one assumes that the Fast Fourier Transform is optimal, the algorithm for the exactly k-sparse case is optimal for any k = n^{Ω(1)}.

We complement our algorithmic results by showing that any algorithm for computing the sparse Fourier transform of a general signal must use at least Ω(k log(n/k)/ log log n) signal samples, even if it is allowed to perform adaptive sampling.


What do Bertrand Russell (philosopher, logician and mathematician), Gottlob Frege (the great logician), Georg Cantor (inventor of set theory), David Hilbert (inventor of Hilbert spaces), Kurt Gödel (guru of incompleteness theory), Ludwig Wittgenstein (god of 20th century philosophy), the Greek city of Athens, the Oresteian trilogy and a dog named Manga all have in common?

They all star in the 2009 graphic novel *Logicomix: An Epic Search for Truth*, written by Apostolos Doxiadis and Christos Papadimitriou, drawn by Alecos Papadatos and colored by Annie Di Donna. It’s a rip-roaring tale of passion and madness, mixing fiction and reality, with frequent breaches of the fourth wall, and all presented through bright and colorful art and word balloons. For anyone interested in Logic, Mathematics or Philosophy, it presents Russell’s maddening journey during his quest for establishing the foundations of mathematics through logical certainty. It was during this journey that Russell, along with Alfred North Whitehead, wrote their seminal three-volume work, *Principia Mathematica*.

I first read about the book in the *New York Times’* Sunday Books Review way back in the fall of 2009, but was only able to read it during a recent vacation. The story is engaging and entertaining, with just enough provocation for those who are more theoretically inclined to continue learning about these pioneer logicians beyond the pages of the book itself. The art is very reminiscent of Hergé’s *ligne claire* drawing style. Highly recommended.


Unfortunately, one of the greatest challenges during integration and IP reuse of legacy Verilog code is the potential for non-deterministic behavior during execution. These issues typically arise when there is an execution race condition within the Verilog code. Although a wider discussion of the relative strengths of each language is beyond the scope of this essay, these race conditions are not possible in legacy VHDL code, due to the determinism inherently guaranteed by the language definition.

Unlike VHDL, the Verilog language was not standardized by the IEEE until after it had achieved widespread popularity. As a result, the language started off with a *de facto* standard as defined by the behavior of the Verilog-XL compiler/simulator (which is currently owned by Cadence). Unfortunately, flaws in this *de facto* language definition allowed users the latitude to create code that exhibited simulator-dependent behavior during execution.

One such example of non-deterministic behavior can occur when blocking assignments are used without care. In any HDL, there are two types of signal assignments: blocking and non-blocking. A blocking assignment is executed immediately, just as in any sequential programming language such as C. A non-blocking assignment is added to an event-queue and not executed until a subsequent iteration, in keeping with the semantics of the particular event-driven programming language being used. This behavior gets further complicated when time delays are added.

The following snippet of Verilog code (from Clifford Cummings, *Nonblocking Assignments in Verilog Synthesis, Coding Styles That Kill!*, SNUG-2000) illustrates an example of simulator-dependent behavior:

module fbosc1 (y1, y2, clk, rst);
   output y1, y2;
   input  clk, rst;
   reg    y1, y2;

   always @(posedge clk or posedge rst)
      if (rst) y1 = 0;  // reset
      else     y1 = y2;

   always @(posedge clk or posedge rst)
      if (rst) y2 = 1;  // preset
      else     y2 = y1;
endmodule  // fbosc1

The Verilog standard does not require the two “always” blocks to be executed by the simulator in any particular order, and as a result, the final values of the variables y1 and y2 become dependent on the order in which the simulator decides to execute the blocks. Fortunately, this problem can be entirely avoided by using non-blocking statements instead, as shown below.
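The scheduling hazard can be mimicked in plain Python (a toy model of the simulator, not real Verilog semantics): with blocking-style updates, the second block sees the first block's freshly written value, so the result depends on execution order; with non-blocking-style updates, every right-hand side is sampled before anything is committed, so order no longer matters:

```python
def b1(state):
    # always block 1: y1 = y2 (blocking update, takes effect immediately)
    state["y1"] = state["y2"]

def b2(state):
    # always block 2: y2 = y1 (blocking update, takes effect immediately)
    state["y2"] = state["y1"]

def clock_edge_blocking(y1, y2, order):
    """Blocking semantics: each block sees the earlier blocks' new values."""
    state = {"y1": y1, "y2": y2}
    for block in order:
        block(state)
    return state["y1"], state["y2"]

def clock_edge_nonblocking(y1, y2):
    """Non-blocking semantics: sample all old values, then commit together."""
    return y2, y1  # y1 <= y2 and y2 <= y1, both reading the *old* values

# State just after reset/preset: y1 = 0, y2 = 1
print(clock_edge_blocking(0, 1, [b1, b2]))  # (1, 1)
print(clock_edge_blocking(0, 1, [b2, b1]))  # (0, 0) -- order-dependent!
print(clock_edge_nonblocking(0, 1))         # (1, 0), regardless of order
```

The blocking version settles to two different states depending on which block the simulator runs first, while the non-blocking version swaps the values deterministically on every clock edge, which is the behavior fbosc2 below exhibits.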

module fbosc2 (y1, y2, clk, rst);
   output y1, y2;
   input  clk, rst;
   reg    y1, y2;

   always @(posedge clk or posedge rst)
      if (rst) y1 <= 0;  // reset
      else     y1 <= y2;

   always @(posedge clk or posedge rst)
      if (rst) y2 <= 1;  // preset
      else     y2 <= y1;
endmodule  // fbosc2

In fact, there are many published guidelines on good Verilog design practices, and these are largely followed by the design community nowadays. However, it is a significant shortcoming of a language when “best practices” are required to ensure deterministic behavior of the executed code.

This kind of behavior is avoided when using VHDL, which purposefully limits the scope of variables that can accept blocking assignments. Where Verilog uses the *reg* and *wire* data types, VHDL uses only *signal*. According to the VHDL language definition, a *signal* can only accept non-blocking assignments, which in turn are handled via the event queue. Blocking assignments can be made to the *variable* type, but by definition the scope of a *variable* is restricted to within a *process* block (a *process* block in VHDL is equivalent to an *always* block in Verilog). If the value of a computed *variable* is to be made visible outside a *process* block, it must first be assigned to a *signal* (whose scope, by definition, extends to the hierarchy above). As a result, the language definition preserves determinism by preventing race conditions created by careless use of blocking assignments.

As mentioned above, this post is not meant to pass judgement on the suitability of either VHDL or Verilog for a particular purpose. Rather, it is intended as a warning and an encouragement to heed established guidelines for achieving the best results. Nowadays, robust design methodologies require certain Verilog compiler flags and lint options to be enabled in order to detect such cases so they may be corrected before simulation. However, older code bases of unknown origin and questionable quality can still be a source of problems and project delays. Seasoned industry veterans advocate the use of extensive simulation verification test suites in order to provide confidence, but a “deterministic by compilation” approach will always be preferred.

**Copyright © 2008 – 2012 Waqas Akram. All Rights Reserved.**


No one ever gets talker’s block. No one wakes up in the morning, discovers he has nothing to say and sits quietly, for days or weeks, until the muse hits, until the moment is right, until all the craziness in his life has died down.

Why then, is writer’s block endemic?

From an interesting blog post, complete with bonus link to Ira Glass interview.
