Just like certain clothes never seem to go out of style, there are algorithms that prove their mettle year after year, decade after decade. Chief among these is the Fourier Transform – a piece of mathematical brilliance that is as resilient as it is useful.

The basic problem solved by the Fourier Transform is this – given a stream of data captured over time, how can we find the patterns that lie within that data? One basic example is a sound file, such as an MP3. An MP3 file is composed of thousands of “volume” samplings every second. A sound that our ears perceive as high-pitched is actually a sound that goes from high to low volume very quickly. A low-pitched sound goes from high to low volume more slowly.

Before I am attacked for a scientific indiscretion, the volume I’m referring to is technically called “amplitude.” The actual volume of the sound depends on the difference between the maximum and minimum amplitudes, while the pitch depends on how quickly the amplitudes go from high to low.
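To make the oscillation idea concrete, here is a minimal sketch in plain Python (the 8 kHz sample rate, tone frequencies, and function names are invented for illustration, not from the article): it samples a low tone and a high tone and counts zero crossings, showing that a high-pitched sound swings between high and low amplitude far more often.

```python
import math

def sample_tone(freq_hz, sample_rate=8000, duration_s=0.1):
    """Sample a pure tone (sine wave) at the given frequency."""
    n = int(sample_rate * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate) for i in range(n)]

def zero_crossings(samples):
    """Count how often the amplitude crosses zero -- a rough proxy for pitch."""
    return sum(1 for a, b in zip(samples, samples[1:]) if a * b < 0)

low  = sample_tone(220)   # A3, a low-pitched tone
high = sample_tone(880)   # A5, a high-pitched tone

# The 880 Hz tone crosses zero about four times as often as the 220 Hz tone.
print(zero_crossings(low), zero_crossings(high))
```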

The beauty of the Fourier Transform is that the algorithm can take a stream of this data, and tell us the actual sounds that come from all of those volume samplings. For instance, the Fourier Transform will tell us whether the sounds are high-pitched or low-pitched, or a combination of both. In mathematical terms, the Fourier Transform will tell us the “frequencies” behind a data series – or the repeating patterns that occur most often. By analyzing the frequencies, we can even perform more advanced tasks – such as speech recognition, or music recognition.
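A minimal sketch of how this works, using the direct definition of the Discrete Fourier Transform in plain Python (the test signal, its two component frequencies, and the window size are all invented for illustration): two tones are mixed together, and the transform picks both back out.

```python
import cmath
import math

def dft(samples):
    """Discrete Fourier Transform by the direct O(n^2) definition."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A signal containing two hidden tones: 3 cycles and 7 cycles per window,
# the second at half the amplitude.
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n) + 0.5 * math.sin(2 * math.pi * 7 * t / n)
          for t in range(n)]

spectrum = [abs(c) for c in dft(signal)]
# The two strongest bins in the first half of the spectrum are the hidden frequencies.
peaks = sorted(range(n // 2), key=lambda k: spectrum[k], reverse=True)[:2]
print(sorted(peaks))  # -> [3, 7]
```

Even though the raw samples look like a single jumbled wave, the spectrum cleanly separates the two underlying pitches.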

Fourier Transforms can also be applied to more than one dimension. For instance, they are exceptionally useful in image compression. Research has found that our brains are largely insensitive to the high-frequency details of an image – the fine, rapidly varying texture. By analyzing a photo and discarding those high-frequency data points, the file size can be reduced significantly.
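The compression idea can be sketched in one dimension (real image codecs work on 2-D blocks, and the example signal and threshold below are invented for illustration): transform the data, zero out the coefficients too small to notice, and transform back.

```python
import cmath
import math

def dft(x):
    """Direct Discrete Fourier Transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse transform, returning real-valued samples."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

# A smooth "scanline": mostly low-frequency content plus a faint ripple.
n = 32
line = [5 + 3 * math.cos(2 * math.pi * t / n) + 0.05 * math.cos(2 * math.pi * 13 * t / n)
        for t in range(n)]

coeffs = dft(line)
# "Compress": zero every coefficient whose magnitude falls below a threshold.
kept = [c if abs(c) > 1.0 else 0 for c in coeffs]
survivors = sum(1 for c in kept if c != 0)
restored = idft(kept)
max_error = max(abs(a - b) for a, b in zip(line, restored))
# Only 3 of 32 coefficients survive, and the worst-case error is just the
# faint ripple we chose to drop (~0.05).
print(survivors, round(max_error, 3))
```

Storing 3 coefficients instead of 32 samples is the compression; the barely perceptible ripple is the price paid.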

Using the direct definition, the Fourier Transform takes a long time to compute – on the order of n² operations for n samples. To be useful in practice, the algorithm needs to run quickly enough to keep up with sound and data samples that are constantly changing. This is exactly what the FFT (Fast Fourier Transform) accomplishes: it computes the very same set of underlying frequencies as the direct method – the FFT is exact, not an approximation – but in only about n log n operations, fast enough for real-time use.
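A minimal sketch of the recursive radix-2 divide-and-conquer FFT in plain Python (input length assumed to be a power of two), checked against the direct definition to show that the fast algorithm produces identical results:

```python
import cmath
import math

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two.
    Splits the input into even and odd samples and recurses, giving
    O(n log n) work instead of the direct method's O(n^2)."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    twiddled = [cmath.exp(-2j * math.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddled[k] for k in range(n // 2)] +
            [even[k] - twiddled[k] for k in range(n // 2)])

def dft(x):
    """Direct O(n^2) definition, for comparison."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

signal = [math.sin(t) for t in range(16)]
# Both methods agree to rounding error: the speedup costs no accuracy.
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(signal), dft(signal)))
```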

The most popular FFT algorithm, the Cooley-Tukey, was published in 1965 by James Cooley of IBM and John Tukey of Princeton. Although nobody noticed at the time, the technique had actually been devised by Gauss around 1805, as part of his work in analyzing asteroid trajectories. Thus continues the lamentation of inventors too far ahead of their time – his name still has not been appended to the algorithm.

While FFTs have been behind much of the technology we take for granted each day, their prominence as a fountain of innovation in new technology and science is far from over. Fields from speech recognition to artificial intelligence lean heavily on the Fourier Transform, and interacting with our harmonious environment requires both analyzing and reproducing the natural wavelengths that make up our world. As far as the Fourier Transform has taken us, the truly exciting advancements are yet to come.

Written by Andrew Palczewski

*About the Author*

Andrew Palczewski is CEO of apHarmony, a Chicago software development company. He holds a Master's degree in Computer Engineering from the University of Illinois at Urbana-Champaign and has over ten years' experience in managing development of software projects.
