Timbre
Sound is nothing more than a vibration of air pressure in our ears. The ancient Greeks, notably Pythagoras, already discovered that there is some system in those vibrations. They found that if you have two strings of equal length and equal tension, you can play two-note chords by holding a small piece of wood at a certain point on each string, a different point for each string. They also discovered that at certain combinations of points the sound of the two strings melted together into one sound with a different timbre. They felt this was a very beautiful phenomenon and an example of how two things can become one greater thing if they are in harmony, so they called these intervals harmonics. The timbre of a sound is largely defined by the harmonics which are present in the sound. Some prefer to call harmonics overtones, which refers to the same phenomenon. In this workshop they are called harmonics, as the ancient Greeks did.
When considering the timbre of a sound, the so-called spectrum of the sound is important. This spectrum is nothing more than a graph with the frequency range on the horizontal axis and the relative volume of each frequency component or harmonic on the vertical axis, at a given moment in time. The horizontal axis has a logarithmic scale: the distance between 100 Hz and 200 Hz is equal to the distance between 200 Hz and 400 Hz, or the distance between 400 Hz and 800 Hz. If we have a single pitched sound of 150 Hz, there is a vertical line at the 150 Hz position. The height of the line corresponds to the volume of the 150 Hz sinewave that forms the fundamental of the sound. If the waveform is not a sinewave, there will also be lines at 300 Hz, 450 Hz, 600 Hz, 750 Hz, etc. These lines stand for the harmonics in the sound: 300 Hz is the second harmonic, 450 Hz the third harmonic, and so on. A scientist can comfortably read the volume level of each harmonic from such a spectrum graph.
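This is not part of the original workshop text, but as a rough illustration such a spectrum can be computed in a few lines of Python; the 150 Hz tone and the harmonic volumes used below (1, 0.5 and 0.33) are just example values.

    # Sketch: the spectrum of a 150 Hz tone built from a fundamental and two harmonics.
    import numpy as np

    sample_rate = 48000                                  # samples per second
    t = np.arange(sample_rate) / sample_rate             # one second of time values
    fundamental = 150.0                                  # Hz

    # A non-sine waveform: fundamental plus weaker second and third harmonics.
    sound = (1.00 * np.sin(2 * np.pi * 1 * fundamental * t)
             + 0.50 * np.sin(2 * np.pi * 2 * fundamental * t)
             + 0.33 * np.sin(2 * np.pi * 3 * fundamental * t))

    volume = np.abs(np.fft.rfft(sound)) / (len(sound) / 2)    # relative volume per frequency
    freqs = np.fft.rfftfreq(len(sound), d=1.0 / sample_rate)  # the horizontal axis in Hz

    # The three strongest lines appear at 150, 300 and 450 Hz.
    for i in np.argsort(volume)[-3:][::-1]:
        print(f"{freqs[i]:6.0f} Hz  volume {volume[i]:.2f}")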
Natural sounds change their timbre over time, so a graph that shows how the frequency components change over time can be handy. In such a graph, which is called a sonogram, the horizontal axis is used for time, the vertical axis for frequency, and the colour or darkness of a point represents the volume. This kind of graph can be used in additive synthesis: a scientist can read, for every frequency component, which is basically a sinewave of a given frequency, how loud it should be at a given moment in time.
An example of a sonogram of a sawtooth waveform with a slowly rising frequency:
The dark curve at the bottom of the sonogram is the fundamental, and the curve shows how its frequency rises over time. The second line is the second harmonic, and so on. The darkness of the lines shows the relative volumes of the frequency components.
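If you like to experiment on the computer, a sonogram like the one described here can be approximated with a short Python sketch; the rising sawtooth below is generated with scipy as a stand-in for the sound in the picture, not the original recording.

    # Sketch: sonogram (spectrogram) of a sawtooth with a slowly rising frequency.
    import numpy as np
    from scipy import signal
    import matplotlib.pyplot as plt

    sample_rate = 22050
    t = np.arange(0, 4.0, 1.0 / sample_rate)               # four seconds of time values
    freq = np.linspace(100.0, 400.0, len(t))               # pitch rises from 100 Hz to 400 Hz
    phase = 2 * np.pi * np.cumsum(freq) / sample_rate      # integrate frequency to get phase
    saw = signal.sawtooth(phase)                           # a sawtooth contains every harmonic

    f, time, power = signal.spectrogram(saw, fs=sample_rate, nperseg=1024)
    plt.pcolormesh(time, f, 10 * np.log10(power + 1e-12))  # darkness/colour stands for volume
    plt.xlabel("time (s)")
    plt.ylabel("frequency (Hz)")
    plt.show()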
You can download the freeware program used to make these sonograms, along with a very handy freeware oscilloscope program for your computer's soundcard, at
http://www.mda-vst.com/; click on the Wavetools link. Especially if you want to deepen your knowledge of synthesizing sounds, the oscilloscope program, written by Paul Kellett, comes in very handy. Monitoring your Modular's output with the oscilloscope program through your PC's soundcard input reveals a lot about the inner workings of the modules.
Synthesis methods
The Nord Modular is capable of three basic types of synthesis: additive synthesis, subtractive synthesis and synthesis through waveshaping. What follows is a brief explanation of the basics of each type.
Additive synthesis
The 'simplest' sound is the so-called sinewave or sinetone. It is derived from a circular movement. For example, if you take a wheel, paint a dot somewhere on its rim and let it rotate around its axis, the vertical position of the dot, measured from the horizontal line through the axis of the wheel, traces out a sinewave. But wheels don't occur in nature, humanity had to invent the wheel, and accordingly pure sinetones are not found in nature either. So humanity had to invent the sinewave generator as well, more or less an electronic wheel.
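For the technically inclined, the wheel picture translates almost literally into code; a minimal sketch, where the rotation speed and radius are arbitrary example values:

    # Sketch: the vertical position of a dot on a rotating wheel traces a sinewave.
    import numpy as np

    rotations_per_second = 2.0              # how fast the wheel turns (example value)
    radius = 1.0                            # distance of the dot from the axis
    t = np.linspace(0.0, 1.0, 1000)         # one second, sampled 1000 times

    angle = 2 * np.pi * rotations_per_second * t
    vertical_position = radius * np.sin(angle)   # height above the line through the axis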
In the nineteenth century people already discovered ways to do calculations on waves, and those calculations haven't changed much since. Two important people in those days were Helmholtz and Fourier. Helmholtz showed that every pitched sound consists of a fundamental sinewave and a number of sinewaves at harmonic intervals. Fourier discovered the math needed to extract the energy, or percentage of presence, of each possible harmonic in a sound. The result of this calculation can be drawn as a graph representing the so-called sound spectrum. He also figured out how to do the reverse: calculate a wave from an arbitrarily drawn sound spectrum graph. This makes it possible to analyze a sound and artificially reproduce it. However, it involves controlling a lot of parameters and a great many calculations, so only when fast and powerful computers became available did this become a practical method for synthesizing musical sounds. And still it is a cumbersome method due to the many parameters: maybe a hundred values a second for each harmonic, and there might be a hundred harmonics for a low note, which means feeding the program a hundred times a hundred, or ten thousand, values every second as input to the calculations.
So an obvious method to synthesize sounds is to add 'pre-generated', harmonically related sinewaves to a fundamental sinewave. This method is called additive synthesis. The first attempt to do this was made at the end of the nineteenth century by Thaddeus Cahill. This American inventor designed and built the first electronic musical instrument. It consisted of an array of big alternating-current dynamos. Such a dynamo generates a sinewave and its pitch is defined by the speed at which it rotates. By adding the sinewaves from different dynamos the system was able to generate a small variety of static timbres.
Now those dynamos were huge, and the assembly looked more like a big factory than a musical instrument. Cahill's aim was to connect speakers at different locations to the "music factory" and have the customers pay for the music played by the musician who operated the system. It was in operation for a couple of years, but eventually had to close.
A later, much more successful attempt to use additive synthesis was the Hammond tonewheel organ. In 1934 Hammond replaced the dynamos with tonewheels, the rim of each wheel shaped so that it represents the waveform. Pickups were placed at the rim, and their signals were switched by a keyboard and mixed with variable resistors to get some control over the timbre. The Hammond organ has a specific sound of its own, mainly due to the physical limits on the number of harmonics the organ can handle.
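Stripped of all the machinery, the additive method described above boils down to summing sinewaves; a minimal Python sketch, where the choice of 1/n harmonic volumes (which approximates a sawtooth) is just an example:

    # Sketch: additive synthesis by summing harmonically related sinewaves.
    import numpy as np

    sample_rate = 48000
    fundamental = 110.0                                   # Hz
    t = np.arange(sample_rate) / sample_rate              # one second of time values

    sound = np.zeros_like(t)
    for n in range(1, 17):                                # fundamental plus 15 harmonics
        volume = 1.0 / n                                  # example choice of harmonic volumes
        sound += volume * np.sin(2 * np.pi * n * fundamental * t)

    sound /= np.max(np.abs(sound))                        # normalise before playing it back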
Subtractive synthesis
Instead of adding all the harmonics you need for a specific sound, one can also start with a sound that contains all harmonics of a certain pitch and filter away the harmonics you don't need. A waveform that contains all those harmonics is the so-called sawtooth wave. It's a signal that relatively slowly rises from a minimum value to a maximum value, and when it reaches the maximum value it immediately falls back to the minimum value. To remove the unwanted harmonics we use a device called a filter. There are many types of filters, and their names suggest which part of the sound spectrum they let through and which part they remove.
Subtractive synthesis is very simple to implement and fits very well in a synthesis model where only three basic parameters control the sound: a sawtooth generator defines the pitch, a filter defines the timbre and a controllable amplifier defines the loudness.
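As a rough sketch of this three-stage model in Python (the 1 kHz filter cutoff and the decaying envelope are arbitrary example settings, not Nord Modular presets):

    # Sketch: subtractive synthesis - sawtooth oscillator -> filter -> amplifier.
    import numpy as np
    from scipy import signal

    sample_rate = 48000
    t = np.arange(sample_rate) / sample_rate              # one second of time values

    # 1. Oscillator: a 110 Hz sawtooth defines the pitch and supplies all harmonics.
    saw = signal.sawtooth(2 * np.pi * 110.0 * t)

    # 2. Filter: a low-pass filter at 1 kHz removes the unwanted upper harmonics.
    b, a = signal.butter(4, 1000.0, btype="lowpass", fs=sample_rate)
    filtered = signal.lfilter(b, a, saw)

    # 3. Amplifier: a simple decaying envelope defines the loudness over time.
    envelope = np.exp(-3.0 * t)
    output = filtered * envelope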
A very important moment in the evolution of subtractive synthesis was the introduction of the first analog modular synthesizer by Robert Moog around 1966. Moog standardized all signal levels and control voltage ranges so that any output could more or less predictably control any input, by introducing the so-called 1 Volt/Octave standard. Raising the voltage on a control input by 1 Volt increases the working frequency of the particular module by an octave. Output voltage levels were chosen so that they span the possible control range of the inputs. This meant in essence that you could modulate anything by anything in a very convenient way. You will find that the Nord Modular has adopted this principle, but with the added stability and precision of a digital system.
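The 1 Volt/Octave rule itself is a simple doubling relationship; a small sketch, where the 55 Hz base frequency at 0 Volt is only an example reference:

    # Sketch: the 1 Volt/Octave standard - every extra volt doubles the frequency.
    def volts_to_frequency(control_voltage, base_frequency=55.0):
        """base_frequency is the pitch at 0 Volt (an example reference value)."""
        return base_frequency * 2.0 ** control_voltage

    for volts in range(5):
        print(f"{volts} Volt -> {volts_to_frequency(volts):7.1f} Hz")   # 55, 110, 220, 440, 880 Hz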
Synthesis through waveshaping
In this type of synthesis we change the timbre of a sound by distorting a basic waveform. The type of distortion and the amount applied define the resulting timbre. A well-known example of this type of synthesis goes by the simple two-letter abbreviation FM, which stands for frequency modulation.
Waveshaping distortions can work on two different aspects of a waveform. They can change the momentary value of the signal by applying some non-linear function to it, for example by routing the signal through an overdrive circuit. Or they can compress and expand the waveform in time, in a rhythm that is harmonically related to the frequency of the waveform, as is done with FM.
One could imagine the difference between these two basic forms as one working in the vertical direction (value) and the other in the horizontal direction (time displacement).
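Both flavours can be sketched in a few lines of Python; the tanh drive amount and the FM ratio and index below are arbitrary example settings, not a recipe from the Nord Modular:

    # Sketch: the two basic waveshaping flavours applied to a 220 Hz sinewave.
    import numpy as np

    sample_rate = 48000
    t = np.arange(sample_rate) / sample_rate               # one second of time values
    carrier_freq = 220.0                                   # Hz

    # 1. Vertical (value) distortion: push the momentary value through a non-linear
    #    function, here tanh, which behaves much like a simple overdrive circuit.
    clean = np.sin(2 * np.pi * carrier_freq * t)
    overdriven = np.tanh(4.0 * clean)                      # the drive amount 4.0 is an example

    # 2. Horizontal (time) distortion: frequency modulation, where the phase is
    #    compressed and expanded by a harmonically related modulator.
    mod_ratio, mod_index = 2.0, 3.0                        # example FM settings
    modulator = np.sin(2 * np.pi * mod_ratio * carrier_freq * t)
    fm_sound = np.sin(2 * np.pi * carrier_freq * t + mod_index * modulator)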
This all sounds very complex, but in practice it is very easy to use. One either routes the signal from an oscillator through a module with the desired effect, or one connects a signal to the appropriate control input on a suitable oscillator. By playing with the knobs on the modules one can simply tweak the resulting sound to one's liking.