Noise and music: what is the difference? In this article we will look at some of the fundamental characteristics of sound, which will help us understand and appreciate what makes a sound musical and distinguishes it from plain noise. This knowledge is especially relevant for computer musicians: synthesizing entirely new musical sounds and manipulating existing ones is a fundamental part of composing electronic music.
Sound is made up of sound waves that propagate into structures in your ear, where physical vibrations are translated into signals that travel to your brain. Got it? But why are sounds from different sources and objects perceived so differently? To understand this, we need to look at the fundamental characteristics of sound.
The sounds we hear are actually sound waves: vibratory disturbances in the atmosphere and objects around us. A good way to illustrate this is to throw a stone into a calm pond and watch circular waves emanate from where the stone hit the water. Throw a bigger stone and the waves are higher and longer too. Sound waves are similar and travel in a similar fashion, but do not go up and down like the waves in the pond. Sound waves are regions in the air where molecules are compressed and regions where molecules are spread apart. We call these regions compressions and rarefactions respectively.
Have a look at the illustration below:
Do you see it says 'oscillating molecules'? The thing is that the molecules oscillate back and forth. They do not travel from left to right into your ear. That would be wind.
Now, the above illustration is excellent, but the behavior can be illustrated more scientifically using a sine wave. See below:
As you can see, the highs correspond to the compression regions and the lows to the rarefaction regions. From now on, we'll use waveforms to describe sound waves, but remember that in real life sound waves are actually oscillating molecules.
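To make the waveform idea concrete, here is a minimal sketch in Python (with an illustrative `sine_wave` helper, not a real audio API) that samples a sine wave the way digital audio does: positive samples correspond to compressions, negative samples to rarefactions.

```python
import math

def sine_wave(freq_hz, duration_s, sample_rate=44100, amplitude=1.0):
    """Sample a sine wave; positive samples map to compressions,
    negative samples to rarefactions of the air."""
    n_samples = int(sample_rate * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_samples)]

# 10 ms of a 440 Hz tone at CD sample rate
wave = sine_wave(440, 0.01)
```

Each value in `wave` is the instantaneous air pressure deviation at one sample instant, which is exactly what a digital audio file stores.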
Music versus Noise
Now that we know what sound is, we can have a closer look at noise and music. Noise and music are both sounds, but when do we characterize a sound as being noise and when do we say it's music?
One way to approach this is to examine the waveforms of noise and music. The waveform of noise is erratic with no order and no pattern while the waveform of musical sound is regular, ordered and periodic. See the waveform below:
It does not look smooth or regular, right? If such a waveform hits your eardrums, you'll most likely call it noise and not music. Now have a look at the following waveform:
This is the waveform of a plucked guitar string. It is more regular, ordered and periodic. You will call it music and not noise.
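The difference can even be checked numerically. The sketch below (Python, with illustrative names) compares a periodic tone with random noise: the tone repeats itself almost exactly after one period, while the noise shows no such pattern.

```python
import math
import random

period = 100  # the tone repeats every 100 samples (e.g. 10 Hz sampled at 1000 Hz)

tone = [math.sin(2 * math.pi * n / period) for n in range(1000)]

random.seed(1)  # reproducible "noise" for the sake of the example
noise = [random.uniform(-1.0, 1.0) for _ in range(1000)]

# How much does each signal differ from itself one period later?
tone_drift = max(abs(tone[n] - tone[n + period]) for n in range(900))
noise_drift = max(abs(noise[n] - noise[n + period]) for n in range(900))
```

`tone_drift` comes out essentially zero (the waveform is periodic), while `noise_drift` is large: the noise never lines up with itself.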
Having said that, whether something is considered noise or music is also a matter of taste (and culture). And noises are definitely used in music and can be perceived as musical when, for example, used in a rhythmic framework.
But the message is not that noise is bad and music is good. We're merely examining the sound characteristics of what most people call noise and what they call music. Somehow, there is an aesthetic appeal to regular, ordered waveforms and most instruments are designed to produce such sound waves.
Parameters of a musical tone
To understand why one musical tone differs from another, there are a few fundamental characteristics you should know about: pitch, intensity and tone quality.
Pitch refers to how high or low a note sounds. More precisely, pitch is our perception of a sound wave's frequency: a tone with a high pitch has a shorter wavelength (and a higher frequency) than a tone with a lower pitch. How does that translate to our waveform? See below:
The first waveform has fewer cycles per timeframe than the second waveform; it has a longer wavelength. Therefore, the first sound has a lower pitch than the second sound.
Pitch is usually expressed as frequency, which is the number of cycles (or sound waves) per second. The unit for this is hertz, usually abbreviated to Hz. A 440 Hz tone means that its sound wave completes 440 cycles per second.
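Frequency, period and wavelength are tied together by simple arithmetic, as this short Python sketch shows (the 343 m/s figure is the approximate speed of sound in air at room temperature):

```python
SPEED_OF_SOUND = 343.0  # m/s, approximate value for air at ~20 degrees C

def period_s(freq_hz):
    """Duration of one cycle, in seconds."""
    return 1.0 / freq_hz

def wavelength_m(freq_hz, speed=SPEED_OF_SOUND):
    """Physical length of one cycle travelling through air, in meters."""
    return speed / freq_hz

# For a 440 Hz tone: each cycle lasts about 2.27 ms and spans about 0.78 m
```

Note how doubling the frequency halves the wavelength, which matches the waveform comparison above.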
Different instruments typically operate in different frequency ranges. A sub bass has a lower pitch than an acoustic guitar for example. The acoustic guitar in turn has a lower pitch than a flute (typically).
Even though different instruments may operate in different frequency ranges, a note, let's say A, on one instrument should preferably sound like note A on another instrument, regardless of pitch. This is not a given, which is why symphony orchestras tune up before a concert.
Intensity is the volume of a musical tone: how loud or soft the sound is. Remember that pitch depends on the length of the sound waves? Intensity depends on the height of the sound waves, which is also referred to as the wave amplitude. See the illustration below:
Sound waves with a high amplitude are perceived as louder than sound waves with a low amplitude.
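Loudness differences between two amplitudes are usually expressed in decibels. Here is a minimal sketch (Python; `gain_db` is an illustrative name, not a standard library function):

```python
import math

def gain_db(amplitude, reference):
    """Level difference in decibels between two wave amplitudes."""
    return 20.0 * math.log10(amplitude / reference)

# Doubling the amplitude raises the level by about 6 dB
```

This logarithmic scale matches how our ears work: each doubling of amplitude is heard as roughly the same step up in loudness.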
Tone Quality (Waveform)
Imagine two tones with the same pitch and the same intensity coming from two different instruments, let's say a piano and a violin. As you know, they sound completely different. Therefore, there must be something more than pitch and intensity, right? Good thinking, because there is: it is referred to as tone quality, tone color, or timbre.
Now, to understand this it is important to realize that a single musical tone is usually more complex than our single sine wave. Even though we perceive a musical tone as a singular event, it is rarely a single sound wave with one pitch and one intensity. In fact, it is a highly complex blend of multiple sound waves. See below:
As you can see, one musical tone can consist of multiple sound waves with varying pitch. These are called modes of vibration. The first mode of vibration is also referred to as the fundamental frequency, first partial or first harmonic. It is very important as it determines the pitch of the note.
Now, the other partials do not change our perception of pitch, but they do affect the tone quality or timbre. You could also say that all these partials change the shape of the overall waveform. Remember that we started out with a nice, smooth-looking sine wave. In reality, however, the different partials give sound waves quite different shapes. See the illustration below:
It is a simplification, as many more partials/harmonics are needed, but with the right blend of vibrations a whole new waveform arises. In the illustration you can see that the resulting waveform is actually a square wave, and a square wave sounds different than a sine wave. Do you get the picture?
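This blending can be reproduced in code. The Python sketch below sums odd harmonics of a fundamental with amplitudes 1/k, which is the classic Fourier-series recipe for a square wave; the more harmonics you add, the squarer the result becomes.

```python
import math

def square_approx(time_s, fundamental_hz, n_harmonics):
    """Approximate a square wave by summing the odd harmonics
    (1st, 3rd, 5th, ...) of the fundamental, each with amplitude 1/k."""
    total = sum(math.sin(2 * math.pi * k * fundamental_hz * time_s) / k
                for k in range(1, 2 * n_harmonics, 2))
    return (4.0 / math.pi) * total

# At a quarter of the period a square wave should sit near +1
peak = square_approx(0.0025, 100, 1000)  # 100 Hz -> quarter period is 2.5 ms
```

With only a few harmonics the result is a wobbly approximation; with a thousand it is flat-topped and very close to a true square wave.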
Most instruments are designed to produce sounds that are rich in partials. And not just random partials, but partials with frequencies that represent whole number multiples of the fundamental frequency. Such partials are called harmonics.
To illustrate: if the fundamental frequency (first harmonic) is 110 Hz, then the second harmonic is 220 Hz, the third 330 Hz, and so on.
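In code, building this harmonic series is just multiplication:

```python
fundamental_hz = 110.0
# The first five harmonics: whole-number multiples of the fundamental
harmonics = [fundamental_hz * k for k in range(1, 6)]
```

Note that the second harmonic (220 Hz) is exactly one octave above the fundamental, which is why harmonics sound so consonant together.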
This really is a fundamental characteristic of musical tones, and working with harmonics is daily routine for computer musicians, though you may not be aware of it. Through an EQ, for example, you can attenuate or boost harmonics. That is exactly what happens when you make a tone sound brighter by boosting high frequencies: what you are actually doing is emphasizing the tone's upper harmonics.
Also, through the use of oscillators and filters, computer musicians can build waveforms (like squares, triangles and sawtooth waves) full of harmonics and thereby either emulate existing instruments or produce completely new sounds. This is what we call synthesis. The possibilities are endless!
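As a toy version of that idea, the sketch below (Python; illustrative names, not a real synth engine) builds a harmonics-rich sawtooth additively and then darkens it with a simple one-pole low-pass filter, which is the basic move of subtractive synthesis.

```python
import math

def sawtooth(time_s, freq_hz, n_harmonics=20):
    """Additive sawtooth: every harmonic k = 1..n with amplitude 1/k."""
    return (2.0 / math.pi) * sum(
        math.sin(2 * math.pi * k * freq_hz * time_s) / k
        for k in range(1, n_harmonics + 1))

def lowpass(samples, alpha=0.1):
    """One-pole low-pass filter: smooths the signal, which attenuates
    the upper harmonics and dulls the timbre."""
    out, state = [], 0.0
    for s in samples:
        state += alpha * (s - state)
        out.append(state)
    return out

sample_rate = 8000
raw = [sawtooth(n / sample_rate, 110.0) for n in range(sample_rate // 10)]
dark = lowpass(raw)  # same pitch, duller tone color
```

The filtered signal keeps the 110 Hz fundamental (so the pitch is unchanged) but its upper harmonics are reduced, so the tone sounds darker: same note, different timbre.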
That's it for now. I hope this brief walk-through has given you a basic understanding of what sound is and in what way noise and music are different. Obviously, we have only scratched the surface, but even a basic understanding is very useful in my opinion.