The human hearing system is an extraordinary thing.  We are told it is the first of our senses to develop while we are still in our mother’s womb, and there are stories that it is the last of our senses to stop when we die.  (Not sure how we could ever prove that…)  We hear sounds ranging from the proverbial pin dropping to an Airbus A380 roaring over our heads as it takes off and flies.  To do anything at all in sound it’s crucial to understand how hearing works.  I didn’t expect to be studying this, but it makes absolute sense that it is a core part of the Acoustics syllabus.

How does this work?  The diagram at the top of the page is a cross-sectional illustration of the ear and the hearing system.

The short, simple version of how the ear works is this: the outer ear helps us to place the original location of a sound, be it ahead or behind, above or below us. It also funnels and focuses sound waves into the auditory canal on their way to the middle ear. The auditory canal directs these sound waves onto the eardrum (tympanum). The drum resonates with the movement of the sound waves and is linked to the three smallest bones in the human body – the hammer, the anvil and the stirrup (malleus, incus and stapes) – which in turn attach to a fluid-filled structure of the inner ear, called the cochlea, at a point called the oval window. The fulcrum/lever design of the ossicles actually amplifies the movement transmitted from the eardrum to the oval window of the cochlea by a factor of 15 or so. It is in the cochlea that the vibrations transmitted from the eardrum through the tiny bones are converted into electrical impulses sent along the auditory nerve to the brain. The inner ear, which is surrounded by bone, also contains the semicircular canals, which serve balance and equilibrium rather than hearing.
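To put that mechanical gain in perspective, amplitude ratios like this are usually expressed in decibels. A quick back-of-the-envelope calculation in Python, using the approximate ×15 figure quoted above:

```python
import math

# Mechanical gain of the ossicle lever system, taken from the ~15x
# figure quoted in the text above (it is an approximation).
mechanical_gain = 15

# For amplitude/pressure ratios, decibels are 20 * log10(ratio).
gain_db = 20 * math.log10(mechanical_gain)
print(f"x{mechanical_gain} amplification is roughly {gain_db:.1f} dB")  # ~23.5 dB
```

So the middle ear alone contributes something in the region of 23 dB of gain before the vibration even reaches the cochlea.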

The most fascinating aspect of perception takes place in an area of the cochlea called the basilar membrane. The cochlea is a tapered tube, which curls around itself like the scroll on a violin. The basilar membrane divides the tube lengthwise into two fluid-filled canals, which are joined at the tapered end. The ossicles transmit the vibration to the cochlea where they attach at the oval window. The resultant waves travel down the basilar membrane, where they are “sensed” by the approximately 16,000–20,000 hair cells (cilia) attached to it. These hair cells poke up from a third canal called the organ of Corti, and it is the organ of Corti that transforms the stimulation of the hair cells into nerve impulses. Because of the tapered design of the cochlea, waveforms traveling down the basilar membrane peak in amplitude at different spots along the way according to their frequency: higher frequencies peak at a shorter distance down the tube than lower frequencies. The hair cells at that peak point give us a sense of that particular frequency. It is thought that a single musical pitch is perceived by 10–12 hair cells. Due to the tapered shape of the cochlea, the distance between frequencies follows the same logarithmic spacing as our perception of pitch (e.g., octaves are placed at equal distances). This arrangement is responsible for the fact that a lower frequency at an equal or higher amplitude can mask a higher frequency, but under most circumstances a higher frequency of equal or lower amplitude can’t mask a lower one (masking is actually a very complex phenomenon, influenced by the pitch and amplitude relationships of the tones inside or outside the critical band).
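A widely used approximation of this place-to-frequency map is the Greenwood function. Here is a minimal Python sketch using the commonly cited human parameter values (treat the exact numbers as approximate):

```python
import math

# Greenwood's place-to-frequency function for the human cochlea.
# x runs from 0 at the apex (the far, tapered end) to 1 at the base
# (the oval-window end). Parameter values are the commonly cited
# human fit (Greenwood, 1990) and should be treated as approximate.
A, a, k = 165.4, 2.1, 0.88

def greenwood_frequency(x):
    """Characteristic frequency (Hz) at fractional position x."""
    return A * (10 ** (a * x) - k)

# Equal steps along the membrane give roughly equal pitch intervals:
# each 10% step multiplies the frequency by roughly the same factor.
for i in range(11):
    x = i / 10
    print(f"x = {x:.1f}  ->  {greenwood_frequency(x):8.1f} Hz")
```

Running this gives roughly 20 Hz at the apex and roughly 20 kHz at the base, with equal distances along the membrane corresponding to (nearly) equal pitch intervals – exactly the logarithmic, octave-equidistant spacing described above.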

(The masking phenomenon is used to advantage in reducing the bandwidth and digital memory necessary to reproduce near-high-fidelity music in compressed, perceptually coded audio formats such as MP3. Frequencies that would normally be masked are left out of the encoding altogether, since in theory they would not be heard or missed.)
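For a feel of the idea – and only the idea, since real codecs use far more sophisticated psychoacoustic models – here is a toy Python sketch that discards spectral components falling below a crude masking threshold (the 40 dB figure and the test tones are arbitrary choices):

```python
import numpy as np

# One second of audio: a loud 1 kHz tone plus a very quiet 1.2 kHz tone
# that a listener would plausibly never notice.
fs = 44100
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 1000 * t) + 0.001 * np.sin(2 * np.pi * 1200 * t)

# Magnitude spectrum in dB.
spectrum = np.fft.rfft(signal)
mag_db = 20 * np.log10(np.abs(spectrum) + 1e-12)

# Toy "masking threshold": keep only bins within 40 dB of the loudest bin.
keep = mag_db > (mag_db.max() - 40)
print(f"bins kept: {keep.sum()} of {keep.size}")

# "Encode" by zeroing the masked bins, then reconstruct the signal.
coded = np.where(keep, spectrum, 0)
decoded = np.fft.irfft(coded, n=len(signal))
```

The quiet 1.2 kHz component is thrown away entirely, yet the decoded signal would sound essentially the same – that, in caricature, is what a perceptual coder does.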

As we age, the sensory ability of the hearing system deteriorates.  An analogy is that the hair cells on the basilar membrane are like blades of grass: eventually, after decades of springing back to their normal position, they wear out and lose their spring.  This results in a deterioration of the frequency range you can hear.  Typically, by age 50 the upper limit of hearing has dropped from 20 kHz to around 10 kHz.
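If you want a rough, informal sense of your own upper limit, you can generate a series of test tones and listen back at a modest, safe volume. A minimal sketch using only the Python standard library (the filenames, step size and duration are arbitrary choices):

```python
import math
import struct
import wave

def write_test_tone(freq_hz, filename, seconds=2, fs=44100, amplitude=0.3):
    """Write a mono 16-bit WAV sine tone at the given frequency."""
    with wave.open(filename, "w") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(fs)
        frames = b"".join(
            struct.pack("<h", int(amplitude * 32767 *
                                  math.sin(2 * math.pi * freq_hz * n / fs)))
            for n in range(seconds * fs)
        )
        w.writeframes(frames)

# Tones from 8 kHz up to 20 kHz in 2 kHz steps.
for f in range(8000, 20001, 2000):
    write_test_tone(f, f"tone_{f}Hz.wav")
```

The point at which the tones seem to vanish gives a crude indication of where your hearing currently tops out.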

Unsurprisingly, the human hearing system has evolved to be at its most effective over the same frequencies at which speech occurs.  A female voice covers a range of roughly 350 Hz to 17 kHz, with fundamentals from 350 Hz to 3 kHz and harmonics from 3 kHz to 17 kHz, while a male voice covers roughly 100 Hz to 8 kHz, with fundamentals from 100 Hz to 900 Hz and harmonics from 900 Hz to 8 kHz.

 

Hearing Frequency Range

The main frequency range of human hearing is the one responsible for the perception of speech: roughly 300 Hz to 3,000 Hz. It is within this band that speech intelligibility and the recognition of vocal character largely take place. The same range is used for voice communication in telephony, and it is the range to which the human ear is most sensitive.
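To hear what that band sounds like in practice, you can band-limit any signal to 300–3,000 Hz. A minimal sketch using SciPy (the filter order and sample rate are arbitrary choices for illustration):

```python
import numpy as np
from scipy.signal import butter, sosfilt

# 4th-order Butterworth bandpass over the "telephone band" (300-3000 Hz),
# built as second-order sections for numerical stability.
fs = 16000
sos = butter(4, [300, 3000], btype="bandpass", fs=fs, output="sos")

# A test signal with one component below, inside and above the band.
t = np.arange(fs) / fs
wideband = (np.sin(2 * np.pi * 100 * t)      # below the band: removed
            + np.sin(2 * np.pi * 1000 * t)   # inside the band: kept
            + np.sin(2 * np.pi * 6000 * t))  # above the band: removed

telephone = sosfilt(sos, wideband)           # the familiar "phone" sound
```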

But not all frequencies are heard in the same way at different volumes or levels of loudness.

The Fletcher–Munson curves show that human hearing does not respond equally to all frequencies.  This matters when:

  • You’re mixing or mastering music. The loudness at which you perform this task will affect the balance of frequencies you hear across the spectrum. Someone listening back at a lower loudness will not have the same auditory response.
  • You’re in a noisy environment, where your hearing will be less sensitive to certain frequencies.

 

Within the first year of this course, it’s the first of these that we learned the most about.

When you listen to music through your studio monitors or headphones and the actual loudness changes, the perceived loudness our brains register changes at a different rate, depending on the frequency.  Here’s what this means:

    At low listening volumes – mid-range frequencies sound more prominent, while the low and high frequency ranges seem to fall into the background.

    At high listening volumes – the lows and highs sound more prominent, while the mid-range seems comparatively softer.

Yet in reality, the overall tonal balance of the sound remains the same, no matter what the listening volume.
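There is no simple closed-form version of the full Fletcher–Munson family, but the standard A-weighting curve – which was derived from an equal-loudness contour at around 40 phon – captures the same idea numerically: it estimates how much quieter a given frequency sounds than its physical level suggests. A minimal Python sketch of the IEC 61672 formula:

```python
import math

def a_weighting_db(f):
    """A-weighting (IEC 61672) in dB at frequency f in Hz.
    Derived from a ~40-phon equal-loudness contour, so it is a crude
    stand-in for the Fletcher-Munson idea; 1 kHz maps to 0 dB."""
    ra = (12194**2 * f**4) / (
        (f**2 + 20.6**2)
        * math.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2)
    )
    return 20 * math.log10(ra) + 2.00

for f in (50, 100, 500, 1000, 4000, 10000):
    print(f"{f:>5} Hz: {a_weighting_db(f):+6.1f} dB")
```

The large negative values at low frequencies (around −30 dB at 50 Hz) are the curve’s way of saying that bass needs far more physical level to sound as loud as the mids – which is precisely why a quiet mix seems to lose its lows and highs.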

The graph of the Fletcher–Munson curves simply illustrates this concept with measured data. The challenge this phenomenon presents to us as mixing engineers is that our objective is a good mix: the ideal balance of frequencies, most pleasing to the listener. But how can we achieve this when the perceived balance of frequencies changes as the volume changes?

Let’s say you are working on the EQ of a mix, and as you listen back at a low volume, you think the lows and highs could use a boost. So you boost them… and it sounds great.

The next day…

You listen back at a high volume.  Now the lows and highs are too much, so you cut them.  And you are right back where you started.  That’s the Fletcher–Munson curves and our hearing system in action.  Because these are fundamentals of acoustics, there is no single ‘right’ answer to the volume at which music should be mixed.