Autotune 411

Most people in this generation are familiar with autotune and what it's used for. But for those who haven't gotten the scoop yet, the most accurate definition would be pitch-correction software: a tool that corrects the pitch of a vocal melody or any other musical line.

Let's break it down this way: the Western music system is built from twelve notes, each separated from the next by an interval of a half step, also known as a semitone. Most of the songs we hear use only a small subset of these twelve notes, and those subsets are called scales.

Imagine the track is in the key of D major. For any given key, only seven of those twelve notes count as valid. In D major, those seven notes are D, E, F#, G, A, B and C#. Here's a brief rundown of the function each note serves in the D major scale:

D (the tonic, our tonal center).

E (Supertonic).

F# (very important: this is the mediant, the note that determines whether the key is major or minor).

G (Subdominant).

A (Dominant).

B (Submediant).

C# (Leading tone).
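As a sketch of how those seven notes fall out of the twelve, here's a minimal Python snippet (standard music theory, nothing specific to the autotune software) that builds a major scale from the chromatic notes using the familiar whole/half-step pattern:

```python
# The twelve chromatic notes, each one semitone apart.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# A major scale follows this semitone pattern from its root:
# whole, whole, half, whole, whole, whole, half (2-2-1-2-2-2-1).
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]

def major_scale(root):
    """Return the seven notes of the major scale starting on `root`."""
    i = CHROMATIC.index(root)
    scale = []
    for step in MAJOR_STEPS:
        scale.append(CHROMATIC[i % 12])  # wrap around past B back to C
        i += step
    return scale

print(major_scale("D"))  # ['D', 'E', 'F#', 'G', 'A', 'B', 'C#']
```

Running it on D reproduces exactly the seven notes listed above, C# leading tone included.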

Even if you have no musical training, your brain has heard the same scales so many times, in so many musical contexts, that it is effectively programmed to detect when something is out of place. When singers go out of tune, they are simply producing a note that doesn't belong to that specific scale. That note might be a semitone too low, a semitone too high, or anywhere in between, since between two adjacent semitones there is a theoretically infinite continuum of frequencies.

What autotune does is let us visualize graphically where each analyzed note sits, and gives us the ability to drag that note to where it belongs.
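The core idea is: measure the frequency the singer actually produced, then move it to the nearest frequency that belongs to the scale. Here's a rough sketch of that "snap to scale" step in Python; this is an illustrative model, not the actual autotune algorithm, and the 440 Hz reference and the D major pitch classes are assumptions for the example:

```python
import math

# Pitch classes of D major as semitone offsets from C:
# D=2, E=4, F#=6, G=7, A=9, B=11, C#=1.
D_MAJOR = {2, 4, 6, 7, 9, 11, 1}

def snap_to_scale(freq_hz, scale=D_MAJOR, a4=440.0):
    """Snap a frequency to the nearest note that belongs to the scale."""
    # Convert the frequency to a fractional MIDI note number.
    midi = 69 + 12 * math.log2(freq_hz / a4)
    # Pick the closest integer note whose pitch class is in the scale.
    best = min(
        (n for n in range(int(midi) - 6, int(midi) + 7) if n % 12 in scale),
        key=lambda n: abs(n - midi),
    )
    # Convert the corrected note number back to a frequency.
    return a4 * 2 ** ((best - 69) / 12)

print(snap_to_scale(430.0))  # 440.0 -- a flat note gets pulled up to A
```

A singer landing on 430 Hz (a flat A) gets pulled up to 440 Hz, the nearest valid note of the scale.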


Believe it or not, autotune grew out of technology used to find oil. The person responsible is Andy Hildebrand, the software's creator, who worked for Exxon interpreting seismic data: sound waves are sent into ground where oil is believed to lie, and the reflected signal is analyzed to work out the best place to drill.

One of the difficulties in that process, however, was separating the fundamental frequency of a sound from its harmonics. For this, a mathematical technique called autocorrelation is used to detect the presence of a fundamental frequency even when it is masked by noise or harmonics.
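To illustrate the idea (a toy sketch, not Hildebrand's actual implementation), here is a simple autocorrelation pitch detector in Python. It compares the signal against delayed copies of itself; the delay with the strongest match corresponds to the fundamental period, even when a loud harmonic is mixed in:

```python
import math

def autocorr_pitch(samples, sample_rate):
    """Estimate the fundamental frequency of a signal via autocorrelation.

    Correlates the signal with delayed copies of itself; the lag with
    the strongest correlation reveals the fundamental period.
    """
    n = len(samples)
    best_lag, best_score = 1, float("-inf")
    for lag in range(20, n // 2):  # skip very small lags (very high pitches)
        score = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if score > best_score:
            best_score, best_lag = score, lag
    return sample_rate / best_lag

# A 100 Hz fundamental plus a loud 200 Hz harmonic: the autocorrelation
# still peaks at the 100 Hz period, not at the harmonic's.
sr = 8000
sig = [math.sin(2 * math.pi * 100 * t / sr)
       + 0.5 * math.sin(2 * math.pi * 200 * t / sr)
       for t in range(1024)]
print(autocorr_pitch(sig, sr))  # close to 100 Hz
```

Despite the strong 200 Hz component, the detector reports roughly 100 Hz, which is exactly the property that made autocorrelation useful for both seismic data and singing voices.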

The funny part of the whole story is how Hildebrand got the idea of applying autocorrelation to audio: after retiring from the oil business, he was at a dinner where a woman challenged him to build a "box" that would let her sing in tune. Drawing on the knowledge from his previous work, plus his own musical background (he is actually a trained flutist), he set out to develop what we now know as the autotune software.


Before talking about who was the very first person to use autotune, we have to understand that many people use it as an effect rather than as a tool for fixing a deficient vocal performance.

To understand how the software works as an effect, it helps to know that, unlike an instrument such as a piano, human beings don't attack notes instantly. When singers have to produce a C, they don't land on the C immediately; they slide into the note until they reach it. This happens within a very short span of time, and it is part of what makes a voice sound human.

Autotune as an effect makes that note attack as fast as possible, emulating the instant attack of a piano or a synthesizer.
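A crude way to model that "retune speed" idea in code (purely illustrative; the frame values below are made up, and real pitch correction works on detected pitch frame by frame): with a speed of 1 every frame jumps straight to the target note, erasing the natural slide and producing the robotic effect, while lower speeds preserve some of the human glide:

```python
def retune(detected_hz, target_hz, speed):
    """Correct a stream of detected pitches toward a target note.

    speed = 1.0 snaps every frame straight to the target (the hard,
    robotic effect); lower values leave part of the natural slide
    into the note intact.
    """
    return [p + speed * (target_hz - p) for p in detected_hz]

# A singer sliding up into an A (440 Hz) over a few frames:
glide = [400.0, 420.0, 435.0, 440.0]
print(retune(glide, 440.0, 1.0))  # [440.0, 440.0, 440.0, 440.0]
print(retune(glide, 440.0, 0.5))  # [420.0, 430.0, 437.5, 440.0]
```

At full speed the glide disappears entirely, which is precisely the unnatural, instrument-like attack the effect is famous for.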

Contrary to popular opinion, Cher wasn't the first to use autotune. The first were the duo "Roy Vegas," whom Cher contacted to help her produce the single "Believe," which relaunched her career and also put autotune on pop culture's map.


The use of autotune has been the target of plenty of debate over whether it is ethical. But let's be honest: the music industry didn't discover trickery and deception last week. It's common to think everything was better in the past, but the truth is that artists back then also shaped the perception of their music with whatever contraptions and gadgets were available, to seem better than they really were.

Enhancing and fixing vocal performances has also always existed to a greater or lesser extent. Among the methods used back then were vocal harmonizers and the Fairlight synthesizer, among others.

Where you land in the debate over enhancing software mostly comes down to where you stand and what philosophies or biases you hold about recorded music.

The hard truth is that a recording is a product meant for mass consumption, destined for entertainment and, in some cases, to stand as a record for posterity. It doesn't have to be 100% faithful to a live performance; it just has to sound the best it can.
