
AN INTRODUCTION TO DIGITAL AUDIO

I think the time has come for me to start this tutorial, which will be similar to the Synthesis tutorials, both in terms of how I approach a tutorial (see 'tagging') and in terms of a part every month. Being a site dedicated to SoundFont (SF) development and sound design, it seems only apt that I offer these tutorials. Normally I reserve them for my one-on-one sessions or for groups that I tutor. However, what has prompted me to start now is the introduction of the Emulator X software from Emu/Ensoniq. Since I contract to Emu for SF development projects and will have libraries released for this product shortly, I felt the timing was right to start a general sampling tutorial that then branches into product-specific sampling. These tutorials will be useful to anyone who wants to start sampling and, although I will be concentrating on Emu hardware samplers, I will cover the terminology used by most of the major sampler manufacturers, what terms they use and how they approach sampling. I will also be conducting this tutorial with software samplers in mind; the technology has advanced so much, from tape-based samplers to the digital domain and virtual environment, that if I ignored the role of the software sampler I would be negligent as a tutor. Once we have covered the basics of sampling, you will be able to apply what you have learned to the sampler of your choice and start sampling straight away. We will then delve deeper into all the functions of a sampler, how to use them, and what all the terminology means in today's confusing world of endless product releases on multiple platforms (Macs and PCs).

Through the course of this tutorial, I will be adopting my usual dialogue technique, whereby I have a conversation with you about the topic in question. I find this to be the best method of tutoring. Sure, there are some excellent essays and resource documents on this subject, but I find that dialoguing the tutorial not only makes it more personal, but also works. Instead of the usual approach adopted by some tutors, of listing a whole book of definitions and expecting you to keep referring back to them to understand what is actually being conveyed, I will be providing information that is relevant at that point in the tutorial as we move along smoothly, at your pace. Expect screenshots, graphs, figures, tables and, most importantly, working examples to accompany the text. I strongly advise that you follow the Synthesis tutorials in tandem with this one, as sound design is relevant to sampling and most of the terminology here has already been explained there. When you bear in mind that today's samplers have

built-in synthesizers, it becomes more apparent that a working knowledge of synthesis is crucial if you want to create your own sounds, or create and programme the sounds that you have sampled. This we will cover extensively later. Ok, enough of the rant. Let's kick off. You will find that at the end of this part of the tutorial I will list, in order, the terms and definitions I have used. These are not for reference purposes but for the technique I use in all my tutorials: tagging. This makes it much easier to remember the content of these tutorials. After I have finished this part of the tutorial, you can just look at the list and it will remind you of the content, instead of having to first read a bunch of terms and then try to fit them in, as some tutorials do. Maybe that works for others, but this is how I like to remember things. I like a good story, like anyone else, and isn't it easier when you hear a story and the next day someone just mentions a couple of parts of it and you suddenly recall the whole thing? Well, that is how I remember something that has been told to me, so that is how I teach others.

INTRODUCTION

I am not going to go into the history of sampling, as there are other very resourceful sites that cover this with more passion. I feel that what you want from this tutorial is exactly that, a tutorial: one that teaches you how to sample and what to do to then sculpt the samples into working sounds. However, I do feel it is important that you understand the difference between analogue sampling and digital sampling, and the processes that take place when you sample. Let me first explain what sampling is. Sampling is simply recording a sound. What is important is how we sample, where we sample to, and how we use the data/sound once it has been sampled. I am glad that you had this tutorial and best of luck with the future. No, just kidding. There are times when you will have to put up with my little character traits. I am receiving intense treatment, however. Actually, it is that simple. When you sample, you are simply recording audio, a sound. This sound we call analogue, because it is a continuous waveform. This term, analogue, will become apparent in a minute. Now let me explain what a sampler does. A sampler, be it a hardware sampler or your computer with sampling software, simply records the audio,

then plays it back. That's the most simplistic explanation. However, today's samplers do far more than that. They record, edit, store, assign and play back. That is again simplistic, but all you need to know for now. For me to list all the functions of a sampler and the processes it carries out would be boring and take page after page of script. Also, different manufacturers and different samplers have different functions, so it is better to explain the general theory of sampling and the general functions and terms used. Before we launch into sampling there are some things that you do need to know. I will explain these as we go along, and I do not expect you to fully understand or remember each and every thing I explain. However, what is crucial is that you understand the processes that take place when you sample. Even more important is that, for me to move to the next section of this tutorial, Preparing to Sample, you will need to know certain things to be able to prepare to sample. This is very important, as the settings on the sampler, the way you set up your equipment and the functions you need to use are crucial; otherwise, you will have no idea what I am talking about. So, let us begin with a brief explanation of the processes involved when you actually sample into a sampler.

DIGITAL AUDIO

In the old days, sampling consisted of recording the audio onto magnetic tape. The audio (analogue) was represented by the movement of the magnetic particles on the tape. In fact, a good example is cutting vinyl. This is actually sampling, because you are recording the audio onto the actual acetate or disc by forming the grooves. So, the audio is a continuous waveform. That is analogue sampling. Simple, huh? Now let us look at how we sample using today's technologies. Whether we are using a hardware sampler, like the Akais, Rolands, Yamahas, Emus etc., or software samplers on our computers, like Kontakt, EXS24, NN-19 etc., there is a process that takes place between you recording the analogue waveform (audio) into the sampler and the way the sampler interprets the audio and stores it. This process is the conversion of the analogue signal (the audio you are recording) into a digital signal. For this to happen, we need what we call an analogue-to-digital converter (ADC), and for the sampler to play back what you have recorded and for you to hear it, the process is reversed, but with a slightly different structure, and for that to happen we need a digital-to-analogue converter (DAC). That is simple and makes complete sense. Between all of that, there are a few

other things happening, and with this diagram (Fig 1) you will at least see what I am talking about.

Fig 1

The sampler records and stores the audio as a stream of numbers: binary, 0s and 1s, on and off. As the audio (sound wave) moves along, the ADC records snapshots (samples) of the sound wave, much like the frames of a movie. These snapshots (samples) are then converted into numbers. Each one of these samples (snapshots) is expressed as a number of bits. This process is called quantising and must not be confused with the quantising we have on sequencers, although the process is similar. The number of times a sample is taken or measured per second is called the sampling rate. The sampling rate is measured as a frequency (please see Synthesizer Tutorial part 1) and is expressed in kHz, where k = 1000 and Hz = cycles per second. These samples are measured at discrete intervals of time. The length of these intervals is governed by the Nyquist theory. The theory states that the sampling frequency must be greater than twice the highest frequency of the input signal in order to be able to reconstruct the original perfectly from the sampled version. Another way of putting it is that the maximum frequency that can be recorded at a given sample rate is half that sample rate. A good example at this point is the industry-standard CD: 44.1 kHz means that the number of samples (snapshots) taken per second equates to 44,100. I know this all seems a bit deep, but don't worry, I will give you the layman's definition at the end.
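To make those numbers concrete, here is a minimal sketch in Python (my choice of language for the examples in this tutorial); the CD figure is the standard one, the other rates are just extra illustrations:

```python
# Nyquist: the highest frequency a given sample rate can represent is half that rate.
def nyquist_limit(sample_rate_hz: float) -> float:
    """Return the maximum representable frequency for a sample rate."""
    return sample_rate_hz / 2.0

for rate in (22_050, 44_100, 96_000):  # samples per second (Hz)
    print(f"{rate} samples/second -> frequencies up to {nyquist_limit(rate):.0f} Hz")

# CD audio: 44,100 snapshots are taken every second,
# so anything up to 22,050 Hz can be reconstructed.
```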

Ok, now let's look at bits. We have talked about the samples (snapshots) and the numbers, and we know that these numbers are expressed as a number of bits. The number of bits is crucial: it determines the dynamic range (the difference between the lowest value of the signal and the highest value of the signal) and, most importantly, the signal-to-noise ratio (S/N). For this you need to understand how we measure loudness. The level or loudness of a sound is measured in decibels (dB), the unit of measure of the replay strength (loudness) of an audio signal, named after this dude Bell. Please read Synthesis part 1 for a deeper explanation. The other measurement you might come across is dBu or dBv, which describes the relationship between decibels and voltage: decibels referenced to 0.775 volts. You don't even need to think about this, but you do need to know that we measure the loudness (level) or volume of a sound in decibels, dB.

Back to bits. The most important aspect of bits is resolution. Let me explain this in simpler terms. You often come across samplers that are 8-bit (Fairlight CMI or Emulator II), 12-bit (Akai S950 or Emu SP1200), or 16-bit (Akai S1000 or Emulator III), and you also come across sound cards that are 16-bit or 24-bit. The bit depth refers to how accurately a sound can be recorded and represented. The more bits you have (resolution), the better the representation of the sound. I could go into the 'electrical pressure measured at an instant' definition, but that won't help you at this early stage of the tutorial. So, I will give a little simple info about bit resolution. There is a rule of thumb you can use which, albeit not clear-cut, works for our purposes: for every bit you get 6 dB of accurate representation. So, an 8-bit sampler will give you 48 dB of dynamic range. Bearing in mind that we can, on average, hear up to 120 dB, that figure of 48 dB looks a bit poor. So we invented 16-bit CD quality, which gives us 96 dB of dynamic range. Now we have 24-bit (and even 32-bit) sound cards and 24-bit samplers, which give us an even higher dynamic range. Even though we will never use that range, as our ears would implode, it is good to have in hand. Why? Well, use the Ferrari analogy. You have a 160 mph car there and, even though you know you are not going to stretch it to that limit (I would), you do know that getting to 60 mph takes very little time and does not stress the car. The same analogy can be applied to monitors (speakers): the more dynamic range you have, the better the sound representation at lower levels. To take this resolution issue a step further: 8 bits allows for 256 different levels of loudness in a sample, 16 bits allows for 65,536. So, now you can see that 16 bits gives a much better representation. The other way of looking at it is: if I gave you 10 colours to paint a painting (copy a Picasso) and then gave you 1000 colours to paint the same painting, which one would be better in terms of definition, colour, depth etc.? We have the same situation with computer screens, scanners and printers.
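Here is the same rule of thumb as a quick sketch (the 6 dB-per-bit figure is the approximation used above; the more precise textbook value is roughly 6.02 dB per bit):

```python
# Quantisation levels and approximate dynamic range for common bit depths.
for bits in (8, 12, 16, 24):
    levels = 2 ** bits              # number of distinct loudness values available
    dynamic_range_db = 6.02 * bits  # roughly 6 dB of range per bit
    print(f"{bits:2d} bits -> {levels:>8} levels, ~{dynamic_range_db:.0f} dB dynamic range")

#  8 bits ->      256 levels, ~48 dB
# 16 bits ->    65536 levels, ~96 dB
# 24 bits -> 16777216 levels, ~144 dB
```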

The higher the resolution, the clearer and better defined the images on your computer, the better the quality of the scanned picture, or the better the resolution of the print. As you can see from the figure below (Fig 2), the lowest bit resolution shown is 1 and the highest is 4. The shape of the highest bit resolution is the closest to the shape of the audio signal above it. So the higher the bit resolution, the better the representation. However, remember that because we are dealing with digital processing and not a continuous signal, there will always be steps in our signal in the digital domain.

Fig 2
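You can see those steps for yourself with a small sketch along the lines of Fig 2: the same waveform rounded to the handful of levels that a low bit depth allows (plain Python, no audio hardware assumed):

```python
import math

def quantise(value: float, bits: int) -> float:
    """Round a value in the range -1..1 to the nearest of the 2**bits available steps."""
    levels = 2 ** bits                                   # e.g. 2 bits -> 4 steps, 4 bits -> 16 steps
    index = round((value + 1.0) / 2.0 * (levels - 1))    # nearest step index
    return index / (levels - 1) * 2.0 - 1.0              # back to the -1..1 range

# One cycle of a sine wave, sampled 16 times, at 2-bit and 4-bit resolution.
for n in range(16):
    x = math.sin(2 * math.pi * n / 16)
    print(f"{x:+.3f}  2-bit: {quantise(x, 2):+.3f}  4-bit: {quantise(x, 4):+.3f}")
# The 4-bit column follows the original much more closely; the 2-bit column is a coarse staircase.
```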

Now let's look at signal-to-noise ratio (S/N). This is the level difference between the signal and the noise floor. The best way to describe it is with an example that always works for me. Imagine you are singing with just a drummer. You are the signal and the drummer is the noise (ha, ha). The louder you sing, or the quieter the drummer plays, the greater the signal-to-noise ratio. This is actually very important in all areas of sound technology and music. It is also very relevant when we talk about bit resolution and dynamic range. Imagine using 24 bits. That would allow a dynamic range of 144 dB. Bearing in mind we have a limit of 120 dB hearing range (theoretical), the audio signal would be so much greater than the noise floor that it would be almost noiseless. A good little example is when people resample their drums, originally at 16 bits, at 8 bits. The drums become dirty and grungy. This is why the Emu SP1200 is still so highly prized: the drum sampler beat box that gave us fat and dirty drum sounds. Lovely.

Now, let's go back to sample rates. This is very important, so wake up. I dropped in a nice little theorem by Nyquist to cheer you up. I know, I know, I was a bit cold there, but it is a tad relevant. If the sampling rate is too low for the frequency we are trying to record, and does not conform to the Nyquist rule, then we lose some of the cycles due to the quantisation process we mentioned earlier. Whereas this quantisation is related to the input voltage, or the analogue waveform, for the sake of simplicity it is important to bear in mind its relationship with bits and bit resolution. Remember that the ADC needs to quantise 256 levels for an 8-bit system. These quantisations show up as steps, the jagged shape you get on the waveform. This creates noise, or alias. The process, or cock-up, is called aliasing. Check Fig 3.

Fig 3

To be honest, that is a very scant figure, but what it shows is that the analogue-to-digital conversion, when not following the Nyquist rule, leaves us with added noise or distortion, because cycles will be omitted from the conversion and the result is a waveform that doesn't look much like the original waveform being recorded. To be even more honest, even at high sample rates the processed signal will still be in steps, as we discussed earlier regarding quantisation and the way the digital process converts analogue to digital. So how do we get past this problem of aliasing? Easy. We use anti-aliasing filters. In Fig 1, you can see that there are two filters, one before the ADC and one after the DAC. Without going back into the Nyquist dude's issues, just

accept the fact that we get a great deal of high-frequency content, in the way of harmonics or aliasing, with the sample rate processing, so we run a low-pass filter that only lets through the lower frequencies and gets rid of the higher frequencies (above our hearing range) that came in on the signal. The filter is also anti-aliasing, so it smooths out the signal. What is obvious is that if we are using lower sampling rates, we will need a filter with a steeply sloped frequency band (aggressive). So, it makes sense to use higher sampling rates to reduce the steepness of the filter. Most manufacturers use an even higher sample rate at the output stage so the filter does not need to be so aggressive (please refer to upsampling further on in this tutorial). The other process that takes place is called interpolation. This is an error-correction circuit that guesses the value of a missing bit by using the data that came before and after it. A bit crude. The output stage has now been improved with better DACs that oversample, plus a low-order analogue filter just after the DAC at the output stage. The DAC incorporates the use of a low-pass filter (anti-imaging filter) at the output stage. Now let's have a look at an aggressive form of alias called foldover. Using Nyquist again (man, that dude gets around): a sampling rate of 44.1 kHz can reproduce frequencies up to 22.05 kHz (half). If lower sampling rates are used that do not conform to the Nyquist rule, then we get more extreme forms of alias. Let us put that in simple terms, take a lower sampling rate and, for the sake of this argument, halve the usual 44.1 kHz. So, we have a sampling rate of 22.05 kHz. We know, using Nyquist, that your sampler or sound card cannot sample frequencies above half of that, 11.025 kHz. Without the filter we have already discussed, the sampler or sound card would still try to record those higher frequencies (above 11.025 kHz), and the result would be terrible, as the recorded frequencies would now be remarkably different from the frequencies you were trying to record. So, to solve this extreme form of alias, manufacturers decided to use a brick-wall filter. This is a very severe form of the low-pass filter and, as the name suggests, only allows frequencies up to a set point through; the rest it completely omits. However, it tries to compensate for this aggressive filtering by boosting the tail end of the frequencies, set by the manufacturer, to allow it to completely remove the higher frequencies. However, we have now come to a new, improved form of DAC called upsampling. You hate me by now, don't you?
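Here is a small sketch of where an out-of-band frequency ends up (assuming an ideal converter with no anti-aliasing filter in front of it, which is exactly the situation the filter is there to prevent):

```python
def alias_frequency(input_hz: float, sample_rate_hz: float) -> float:
    """Frequency an unfiltered input would appear at after sampling (foldover)."""
    nyquist = sample_rate_hz / 2.0
    folded = input_hz % sample_rate_hz
    # Anything above the Nyquist limit folds back down into the 0..Nyquist range.
    return sample_rate_hz - folded if folded > nyquist else folded

# Sampling at 22.05 kHz: anything above 11.025 kHz folds back down.
for f in (5_000, 11_025, 15_000, 20_000):
    print(f"{f} Hz in -> appears at {alias_frequency(f, 22_050):.0f} Hz")
# 15,000 Hz would come back as 7,050 Hz -- nothing like the frequency you recorded.
```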

An upsampling digital filter is simply an oversampled digital reconstruction filter with a slow roll-off rate. Nowadays, DAC manufacturers claim that these DACs improve the quality of sound and, when used instead of the brick-wall filters, the claim is genuine. Basically, at the DAC stage the output is oversampled, usually 8 times; this creates higher frequencies than we had at the ADC stage, so to compensate and remove these very high frequencies, a low-order analogue filter is added after the DAC and just before the output. So we could have an anti-aliasing filter at the input stage and an upsampling DAC with a low-order analogue filter at the output stage. This technology is predominantly used in CD players and, of course, sound cards, and any device that incorporates DACs. I really don't want to get into this topic too much, as it really will ruin your day. At any rate, we will come back to this and the above at a later date, when we examine digital audio in more detail. All I am trying to achieve in this introduction is to show you the process that takes place to convert an analogue signal into digital information and back to analogue at the output (so we can hear it: playback), and the components and processes used.

The clock. Digital audio devices have clocks that set the timing of the signals; they are a series of pulses that run at the sampling rate. Right now you don't need to worry too much about this, as we will come to it later. Clocks can have a definite impact in the digital domain, but they are more to do with syncing than the actual digital processes we are talking about in terms of sampling. They will influence certain aspects of the process but are not relevant in the context of this introduction. So we will tackle the debate on clocks later, when it will become more apparent how important the role of a good-quality clock is in the digital domain.

Dither. Dither is used when you need to reduce the number of bits. The best example, and one that is commonly used, is dithering down from 24 bits to 16 bits, or from 16 bits down to 8, etc. A very basic explanation is that we add random noise to the waveform when we dither. We talked about quantisation earlier in this tutorial; when we truncate the bits (lowering the bit resolution), i.e. cut off the least significant bits, we are always left with the stepped waveforms of the digital process, and by adding noise we create a more evenly flowing waveform instead of the stepped one. It sounds crazy, but the noise we add results in the dithered waveform having a lower effective noise floor. This waveform, with the noise, is then filtered at the output stage, as outlined earlier. I could go into this in a much deeper context, using graphs and diagrams and talking about probability density functions (PDF), resultant square waves and the bias of quantisation towards one bit over another, but you don't need to know that now. What you do need to know is that dither is used when lowering the bit resolution and that this is an algorithmic process, i.e. it uses a predetermined set of mathematical formulae.
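As an illustration of that idea, here is a minimal sketch of reducing bit depth with and without dither. Triangular (TPDF) dither is one common choice; this is a toy example, not any particular sampler's algorithm:

```python
import random

def reduce_bits(value: float, bits: int, dither: bool) -> float:
    """Requantise a -1..1 value to a lower bit depth, optionally adding dither first."""
    step = 2.0 / (2 ** bits - 1)          # size of one quantisation step at the new depth
    if dither:
        # Triangular (TPDF) dither: two uniform random values, peaking at +/- one step.
        value += (random.random() - random.random()) * step
    index = round((max(-1.0, min(1.0, value)) + 1.0) / step)
    return index * step - 1.0

# Requantise a quiet, slowly rising signal to 8 bits.
quiet_signal = [0.001 * n for n in range(10)]
print([round(reduce_bits(v, 8, dither=False), 4) for v in quiet_signal])  # hard steps
print([round(reduce_bits(v, 8, dither=True), 4) for v in quiet_signal])   # steps broken up by noise
```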

Jitter. Jitter is the timing variation in the sample rate clock of the digital process. It would be wonderful to believe that a sample rate of 44.1 kHz is an exact science, whereby the process samples at exactly 44,100 cycles per second. Unfortunately, this isn't always the case. The speed at which this process takes place usually falters and varies, and we get the wobbling of the clock trying to keep up with the speeds of this process at these frequencies. This is called jitter. Jitter can cause all sorts of problems and is best explained, for now, as: the lower the jitter, the better the audio representation. This is sometimes why we use better clocks and slave our sound cards to them, to eradicate or diminish jitter and the effects it causes. I will not go into a deep explanation of this as, again, we will come to it later in these tutorials.

For now, I think you have more than enough to cope with. This introduction is simply there to explain the processes that take place when we sample digitally. As promised, the layman's explanation: for us to sample, we need to take an analogue signal (the audio being sampled), filter it and convert it into digital information, process it, then convert it back into analogue, filter it and output it. The reason I have tried to give you a brief outline of what this process entails is that it is a good way to understand what is involved, and that is not only important for you to know but invaluable when it comes to choosing the right sound card, clock or converters, or understanding what happens when you sample at a chosen sample rate or change the bit resolution. The list of benefits is endless and, knowing at the very least the processes covered here, you will have an invaluable source of information to help you in the ensuing chapters.

The list, as promised:

Sampling
Analogue
Continuous waveform
Analogue-to-digital converter (ADC)

Digital-to-analogue converter (DAC)
Nyquist theory
Bits
Dynamic range
Signal-to-noise ratio (S/N)
dB
Resolution
Steps
Quantisation
Alias
Anti-aliasing filter
Steeply sloped frequency band
Upsampling
Oversampling
Low-order analogue filter
Low-pass filter
Anti-imaging filter
Foldover
Brick-wall filter
Slow roll-off rate
Clock
Dither
Noise
Truncate

Probability density functions (PDF)
Jitter

Preparation and Process

Last month we touched on the digital process. This month we are going to talk about the preparation, the signal path, dos and don'ts, and what some of the terminology means. If you have arrived here, at these tutorials, then it means that you have a very basic understanding of what sampling is; if not, go back and read Part 1. What you probably do not want is a history of sampling and how it came to be. There are some excellent sites that take care of those topics and what I want to do, as I know how keen you are, is to get you sampling. But first, you have to understand that there is a process involved, and if you adhere to that process and the factors involved then you will have an easier time understanding how to sample and how to get the best results from your sampling techniques. The most important part of the sampling process is the preparation. If you prepare properly, then the whole sampling experience is more enjoyable and will yield the optimum results. Throughout this tutorial, I will try to incorporate as many sampler technologies as possible, and also present the material side by side, using both hardware and software samplers. This is very important, as certain criteria differ between the two methods. Hardware sampling has its own restrictions and advantages, and software samplers also have their own restrictions and advantages. So let us start with the signal path: signal being the audio you are recording, and path being the route it takes from the source to the destination. The signal path is the path that the audio takes from its source, be it a turntable, a synthesizer etc., to its final destination, the computer or the hardware sampler. Nothing is more important than this path and the signal itself. The following is a list of guidelines. Although it is a general guide, it is not scripture. We all know that the fun of sampling is actually in breaking the so-called rules and coming up with innovative ways and results. However, the guide is important, as it gives you an idea of what can cause a sample to be less than satisfactory when recorded. I will list some pointers and then go into more detail about each one.

1. The more devices you have in the signal path, the more the sample is degraded and coloured. The more devices in the path, the more noise is introduced into it, and the headroom is compromised depending on what devices are in the path.
2. You must strive to obtain the best possible S/N (signal-to-noise ratio) throughout the signal path, maintaining a hot and clean signal.
3. You must decide whether to sample in mono or stereo.
4. You must decide what bit resolution and sample rate you want to sample at.
5. You need to understand the limitations of both the source and the destination.
6. You need to understand how to set up your sampler (destination) or sound card (destination) to obtain the best results.
7. You need to understand what it is that you are sampling (source) and how to prepare the source for the best sampling result.
8. If you have to introduce another device into the path, say a compressor, then you must understand what effect this device will have on the signal you are sampling.
9. You must understand the best way to connect the source and destination together, what cables are needed and why.
10. You need to calibrate the source and destination, and any devices in the path, to obtain the same gain readout throughout the path.
11. You need to understand the tools you have in the destination.
12. Use headphones for clarity of detail.

Now, all that may look like a lot but it's not; the points are all relevant to each other. Basically, the whole process of sampling is about getting the audio from the source to the destination, keeping the audio signal strong and clean, and being able to listen to the audio in detail so you can pick out any noise or other artefacts in the signal. I expect you to know about S/N ratio, bit resolution, sample rates, headroom and noise floor; I have covered these in Part 1 of these tutorials. In most cases you can record directly from the source to the destination without having to use another device in the path. Some soundcards have preamps built into their inputs, along with line inputs, so that you can connect the source directly to these. Hardware samplers usually have line inputs, so you would need a dedicated preamp to use with your microphone to get your signal into the sampler. The same is true for turntables. Most turntables need an amp to boost the signal. In this instance you simply take the output from the amp into your sampler or soundcard

(assuming the soundcard has no preamp input). Synthesizers can be connected directly, via their outputs, to the inputs of the hardware sampler or the line inputs of the soundcard. As pointed out above, try to minimise the use of additional devices in the path. The reason is quite simple: most hardware devices have an element of noise, particularly those that have built-in amps or power supplies. Introducing these into the signal path adds noise to the signal. So, the fewer devices in the path, the less noise you have. There are, as always, exceptions to the rule. For some of my products, I have resampled my samples through some of my vintage compressors. And I have done it for exactly the reasons I just gave as to why you should try not to do this. Confused? Don't be. I am using the character of the compressors to add to the sample's character. If noise is part of the compressor's character, then I will record that as well. That way, people who want that particular sound, influenced by the compressor, will get exactly that. I have, however, come across people who sample with a compressor in the path just so they can have as strong and pumping a signal as possible. This is not advised. You should sample the audio with as much dynamic range as possible. You need to keep the signal hot, i.e. as strong and as loud as possible without clipping the soundcard's input meters, or distorting in the case of hardware samplers. Generally, I always sample at a level 2 dB below the maximum input level of the sampler or soundcard, i.e. 2 dB below 0. This allows for enough headroom should I then choose to apply dynamics to the sample, as in compression etc. Part 1 of these tutorials explains dynamic range and dB, so I expect you to know this. I am a vicious tutor, aren't I? He, he. My setup is quite simple and one that most sampling enthusiasts use. I have all my sources routed through to a decent-quality mixer, then to the sampler or my computer's soundcard. This gives me great routing control, many ways to sample and, most important of all, I can control the signal better with a mixer. The huge bonus of using a mixer in the path, and as the heart of the sampling path, is that I can apply equalisation (EQ) to the same source sample and record multiple takes of the same sample, but with different EQ settings. This way, using the same sample, I get masses of variety. The other advantage of using a mixer is that you can insert an effect or dynamics processor into the path and have more control over the signal than just plugging the source into an effects unit or a compressor. Headphones are a must when sampling. If you use your monitors (speakers) for referencing when you are sampling, then a great deal of the frequencies get absorbed into the environment. So, it is always hard to hear the lower or higher noise frequencies, as they get absorbed by the environment.

Using headphones, either on the soundcard or the sampler, you only hear the signal, and not the environment's representation of the signal. This makes finding noise or other artefacts much easier. The decision to sample in mono or stereo is governed by a number of factors, the primary one being memory. All hardware samplers have memory restrictions, the amount of memory being governed by the make and model of the sampler. Computer sampling is another story entirely, as you are only restricted by how much RAM you have in your computer. A general rule of thumb is: one minute of audio at a 44.1 kHz sample rate (an audio bandwidth of around 20 kHz, using the Nyquist theorem I covered in Part 1), in stereo, equates to about 10 megabytes of memory. Sampling the same audio in mono gives you double the time, i.e. 2 minutes, or takes up 5 megabytes of memory (a quick check of this arithmetic follows at the end of this section). So, always bear your sampler's memory restriction in mind. Another factor that governs the use of mono over stereo is whether you actually need to sample that particular sound in stereo. The only time you sample in stereo is if there is an added sonic advantage in doing so, particularly if a sound sounds fuller and has varying sonic qualities on the left and right sides of the stereo field, and you need to capture both sides. When using microphones on certain sounds, like strings, it is often best to sample in stereo. You might be using 3 or 4 microphones to record the strings, but then route these through your mixer's stereo outputs or subgroups to your sampler or soundcard. In this case stereo sampling will capture the whole tonal and dynamic range of the strings. For those of you on samplers with stringent memory, sample in mono and, if you can tolerate it, at a lower sampling rate. But make sure that the audio is not compromised. At this point, it is important to always look at what it is that you are sampling and whether you are using microphones or direct sampling, using the outputs of a device into the inputs of the sampler or soundcard. For sounds like drum hits, or any sound that is short and not based on a key or pitch (as instrument or synthesizer sounds are), keep it simple and clean. But what happens when you want to sample a sound from a particular synthesizer? This is where the sampler needs to be set up properly, and where the synthesizer has to be set up to deliver the best possible signal, one that is not only clean and strong, but that can be easily looped, placed on a key and then spanned. In this case, where we are trying to sample and create a whole instrument, we need to look at multisampling and looping.
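As promised, here is the quick check of the memory arithmetic above (a sketch assuming uncompressed 16-bit samples, which is what the 10 MB-per-stereo-minute rule of thumb is based on):

```python
def sample_memory_mb(seconds: float, sample_rate: int = 44_100,
                     bit_depth: int = 16, channels: int = 2) -> float:
    """Approximate memory needed for uncompressed audio, in megabytes."""
    bytes_total = seconds * sample_rate * (bit_depth // 8) * channels
    return bytes_total / 1_000_000

print(f"1 minute, stereo, 16-bit, 44.1 kHz: ~{sample_memory_mb(60):.1f} MB")
print(f"1 minute, mono,   16-bit, 44.1 kHz: ~{sample_memory_mb(60, channels=1):.1f} MB")
# Roughly 10 MB for a stereo minute and 5 MB for a mono minute, as quoted above.
```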

But before we do that, we need to understand the nature of what we are sampling and the tonal qualities of the sound. Invariably, most synthesizer sounds will have a huge amount of dynamics programmed into them. Modulation, panning, oscillator detunes etc. are all in the sound that you are trying to sample. In the case of analogue synthesizers, it becomes even harder, as there is so much movement and tonal variance that it makes sampling a nightmare. So, what do we do? Well, we strip away all these dynamics so that we are left with the original sound, uncoloured by programming. In the case of analogue synthesizers, we will often sample each and every oscillator and filter. By doing this, we make the sampling process a lot easier and more accurate. Remember that we can always programme the final sampled instrument to sound like the original. By taking away all the dynamics, we are left with simpler, constant waveforms that are easier to sample and, more importantly, easier to loop. The other consideration is pitch/frequency. Sampling one note is fine, but to then try to create a 5-octave preset from that one sample would be a nightmare, even after looping the sample perfectly. There comes a point where a looped sample stretched too far from its original key will begin to fall out of pitch and result in a terrible sound, full of artefacts and out-of-key frequencies. For each octave, the frequency doubles. A way around this problem is multisampling. This means we sample more than one note of the sound, usually every third or fifth semitone. By sampling a collection of these notes, we have a much better chance of recreating the original sound accurately. We then place these samples in their respective slots in the instrument patch of the sampler or software sampler, so a sampled C3 note would be put into the C3 slot on the instrument keyboard layout. Remember, we do not need to sample each and every note, just a few; that way we can span the samples, i.e. we can use a C3 sample and know that it will still be accurate from a few semitones down to a few semitones up, so we spread that one sample down a few semitones and up a few semitones. These spreads, or zones, are called keygroups: Emu call them zones and Akai call them keygroups. Where one sample's span ends, we place the next sample, and so on, until the keyboard layout is complete with all the samples. This saves us a lot of hard work, in that we don't have to sample every single note, but it also gives us a more accurate representation of the sound being sampled. However, multisampling takes up memory. It is a compromise between memory and accurate representation that you need to decide on. There are further advantages to multisampling, but we will come to those later. For sounds that are more detailed or complex in character, more samples are required.
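To put some numbers on the "frequency doubles every octave" point, here is a small sketch of how a single sample drifts when it is spanned across a keygroup (equal-tempered tuning; the note numbering convention is just the one I assume here for illustration):

```python
def note_frequency(midi_note: int) -> float:
    """Equal-tempered frequency of a MIDI note (A above middle C = note 69 = 440 Hz)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

root = 60  # middle C, the note I am calling C3 in this example
print(f"Root sample:   {note_frequency(root):.1f} Hz")
print(f"One octave up: {note_frequency(root + 12):.1f} Hz (double)")

# Spanning the root sample +/- 3 semitones: each semitone is a pitch shift of about 5.9%.
for offset in range(-3, 4):
    ratio = 2 ** (offset / 12)
    print(f"{offset:+d} semitones -> playback rate x{ratio:.3f}")
```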

In the case of a piano, it is not uncommon to sample every second or third semitone and also to sample the same notes at varying velocities, so we can emulate the playing velocities of the piano. We will sample hard, mid and soft velocities of the same note, then layer these and apply all sorts of dynamic tools to try to capture the original character of the piano being played. Like I said, we will come to this later. An area that is crucial is calibration. You want to make sure that the sound you are trying to sample shows the same level on the mixer's meters as on the sampler's or soundcard's meters. If there is a mixer in the path, then you can easily use the gain trims on the mixer channel the source is connected to, to match the level of the sound you want to sample to the readout on the input meters of the sampler or soundcard. If there is no mixer in the path, then you need to have your source sound at maximum, assuming there is no distortion or clipping, and your sampler's or soundcard's input gain at just below 0 dB. This is a good, hot signal. If you had it the other way around, whereby the source level was too low and you had to raise the input gain of the sampler or soundcard, you would be raising the noise floor, and this would result in a noisy signal. The right cabling is also crucial. If your sampler's line inputs are balanced, then use balanced cables; don't use phono cables with jack converters. Try to keep a reasonable distance between the source and destination and, if you have an environment with RF interference, caused by amps, radios, antennae etc., use shielded cables. I am not saying use expensive brands, just use correctly matched cables. Finally, we are left with the tools that you have in your sampler or software sampler. In the virtual domain, you have far more choice in terms of audio processing and editing tools, and they are far cheaper than their hardware counterparts. So, sampling into your computer will afford you many more audio editing tools and options. In a hardware sampler, the tools are predefined. Next month we will discuss these tools and how to use them, the functions they provide and where and when to use them. Until then, set up different sampling paths and listen to how the sound sounds. Remember, this is what you will be sampling. So, make sure that you find the optimum configuration and path for your signal, because, when the time comes, I want you to be ready to enter the wonderful world of sampling.

The Tools

Welcome back! This month we are going to get deep into the subject of tools, the tools that your sampler comes housed with. I will also be touching on the tools that come with audio editors. In essence, most of these tools are shared between the hardware world and the software world, so it is important to try to cover all the variables here. However, most hardware samplers will not have the variety or complexity of the tools available in the digital domain. In the software world, not only do the audio editors come with an arsenal of editing tools, but the net allows us to download a variety of tools, in the form of plugins, and incorporate them either standalone or within the audio editing software. This month's tutorial is going to concentrate on the basic and general tools available for the sampling process and will not focus on the more detailed or esoteric tools that are used to further hone the sample. So, let's start right at the input stage of the sampler or sound card. We have already covered the topic of attaining a clean and hot signal. Now we need to cover the tools available to actually sample a sound, and the tools available after you have sampled it. Most samplers will allow you to sample in a number of ways. But first, it is important, and sensible, to create a location for the samples. On computers, it is always good practice to create a section of your hard drive for audio. You can then create folders for your samples and keep them in categories: for example, if you are sampling bass sounds, have a folder named Basses; for drums, have a category named Drums, and then assign sub-categories and name them relative to what you are sampling. So, for Drums, you could have sub-categories for kicks, snares, hi-hats etc. This makes filing (archiving) the samples and, even more importantly, searching for a sample much easier. On hardware samplers, it is pretty much the same: you create a bank, name it, and within that bank you create presets, which house the samples. On Emu samplers, the sampler creates a default preset on startup, which makes life easier. Most samplers have this facility.
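If you like, you can script that folder layout rather than clicking it together; here is a tiny sketch (the folder names are just the examples used in this tutorial, so change them to suit your own library):

```python
from pathlib import Path

# Base location for the sample library -- adjust to wherever your audio drive lives.
library = Path("SampleLibrary")

categories = {
    "Drums": ["Kicks", "Snares", "HiHats"],
    "Basses": [],
    "Loops": ["HipHop 60-85 BPM"],
}

for category, sub_folders in categories.items():
    for sub in sub_folders or [""]:
        (library / category / sub).mkdir(parents=True, exist_ok=True)

print("Created:", [str(p) for p in sorted(library.rglob("*")) if p.is_dir()])
```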

Now let us look at the different ways of sampling that certain samplers provide. I am going to concentrate on Emu Ultra samplers for this tutorial and, later, I will focus more on the Emulator X soft sampler from Emu. For the sampler to begin sampling, it needs to know a few things:

1. Source: analogue or digital, 44.1 kHz or more. Pretty self-explanatory, as it is asking you to choose the source and the sample rate. Some samplers have an option that allows for digital recording as well as analogue. There are advantages to using digital recording modes, but there are also disadvantages; we will come to these later. Sample rate, however, is important. If you own a Sound Blaster card and it only operates at 48 kHz, then sampling at 44.1 kHz is not helpful at all. The other advantage of sampling at a higher rate is precision and clarity in the representation of the sound you are sampling. Please read Part 1 if any of this is confusing you. The disadvantage of higher sample rates is that they eat up memory. In the virtual world (computer), it is now more common to sample at 24 bits and 96 kHz (24/96) or 44.1 kHz (24/44.1). However, these parameters are dependent on the sound card you are using. If 24/96 is not supported, then you cannot sample at those values.
2. Input: this is for selecting mono or stereo for the sampling process.
3. Length: you can predetermine the length of the sample you want to record. Maybe you only need to sample 3 seconds of a sound. Setting 3 seconds as the length automatically stops the sampler recording after 3 seconds of sampling.
4. Dither: used when recording digitally. I have explained dither in detail in Part 1, so read it again if you are still unsure what it means.
5. Monitor: gives you the option of having it on or off. Setting it to on allows you to listen to the sound being sampled while it is being sampled.
6. Gain: here you can adjust the input gain (volume/level) of the sound (signal) being sampled. If the signal is too loud and is distorting or clipping, you can adjust the level using this function.
7. Trigger Key: this is one of the methods of sampling that I mentioned earlier. You can set the trigger key to any key on the keyboard, say C4, and when you hit C4 on your keyboard the sampler activates (gets triggered) and starts to sample.
8. Arm: this puts the sampler into standby mode and, when it hears a signal, it starts to sample. This is usually used in conjunction with threshold. The threshold sets the level at which you want the sampler to be triggered when in arm mode. The real advantage of this is to eliminate noise. If you set the threshold above the noise level and then play the sound, the sampler will only start to record at the threshold level setting, in this case above the noise, as the noise is below the threshold level. The threshold/arm combination is also useful when you want to sample a sound that is above the general level of the piece of audio being sampled; an example would be sampling a loud snare that sits above the rest of the audio. If you set the threshold to just below the level of the snare, the sampler ignores everything below that level and automatically records the snare.
9. Force or Manual: this simply means that you press a button to start the sampler recording.

Those are the general functions available on most samplers for doing the actual sampling/recording. Now we need to look at the tools available when you press stop or complete the recording of the sound. When you press stop, a new page appears and you are given a bunch of options. Here are the general options offered to you:

1. Dispose or Keep: this just means you can either dump the sample, if it was no good, or keep it.
2. Place: this allows you to place the sample anywhere you want on the keyboard, and within this option you have a range you can set. The range is displayed as Low and High. Let's say I sample a C3 bass note off a synthesizer; I can then place it at C3 on my keyboard and set the low to A2 and the high to D#3. I have now placed the note and set a range for it on my keyboard. This saves me loads of time and effort in having to do it later. This placing and range setting is stored in the preset, so, in effect, I am creating and building my preset as I am sampling, instead of having to sample all the notes and then go back into the preset and start placing and setting ranges. Much easier. With a drum loop, you can do the same thing and, by setting the range, you get different pitch choices of the drum loop, as the sample pitches down when you set the low range value and pitches up when you set the high range value. For single drum shots, I would place and set the ranges at the placed note. So, a kick would be placed on C1, the low range value would be set to C1, and the same for the high range value. I now have a kick on C1.
3. Truncate: some samplers have auto-truncate and manual truncate. Truncate, also called trim or crop, is a function used to cut data before and after the sampled data. This can cut/delete space or sound before or after the sample, or can be used to cut/delete any portion of the sample. Auto-truncate simply removes everything before and after the sample.
4. Normalize (or Normalise): this is a topic that has sparked some fiery debates and I doubt it will ever be resolved.


Basically, normalising a sample means that you raise the volume of the sample to the peak of the headroom. If, for example, you were normalising a sample to 0 dB, the process takes the highest peak(s) in the sample data and raises them to 0 dB; in this way, the loudest peak hits 0 dB. This is called peak (or absolute) normalising. By raising the highest peak you also raise the entire sample data, which has the disadvantage of raising the noise floor as well, since all the data is raised until the peak hits 0 dB. To normalise an audio file to ensure a certain level of perceived loudness, you need to normalise to an RMS (or relative) dB value, rather than peak. RMS is, roughly, the average volume over a given time, rather than just the highest peak(s). It calculates the average peaks and raises those to 0 dB. The disadvantage of RMS normalising is that by raising the average data peaks you incur clipping; not always, but usually. So, in this instance, a good normalising plugin will compress or limit at the same time as normalising, so the levels do not exceed 0 dB, and thus prevent clipping. I have always maintained that if you have a strong signal, with good dynamic movement, that does not clip and stays just under 0 dB, then you are far better off than normalising to 0 dB. Of course, there are instances where normalisation can be your friend, but in most cases it can cause additional side effects that are not needed. These include killing any headroom that was there, raising the noise floor so noise is now more pronounced and evident and, to top it all off, you can get rounding of the shape of the peaks and even slight distortion or phasing. So, use this function sensibly.
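Here is a minimal sketch of the difference between peak and RMS normalising, working on a list of samples in the -1..1 range (a real plugin would also limit the output, as mentioned above):

```python
import math

def peak_normalise(samples: list[float], target: float = 1.0) -> list[float]:
    """Scale so the single loudest peak hits the target level."""
    peak = max(abs(s) for s in samples)
    return [s * (target / peak) for s in samples]

def rms_normalise(samples: list[float], target_rms: float = 0.25) -> list[float]:
    """Scale so the average (RMS) level hits the target; peaks may clip and need limiting."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return [s * (target_rms / rms) for s in samples]

quiet = [0.05, -0.1, 0.2, -0.05, 0.1]                 # a quiet sample with one larger peak
print([round(s, 2) for s in peak_normalise(quiet)])   # loudest peak now at 1.0
print([round(s, 2) for s in rms_normalise(quiet)])    # average level raised instead
```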


Now that you have your sample recorded, placed, truncated, normalised etc., you need to look at the tools available to edit and process the sample. By selecting edit sample, you are presented with the sample and a host of tools you can use to edit and process it. Let us look at these briefly and then, when we come to the bigger topics, we will get a little more in-depth.

1. Zoom +/-: this is like a magnifying tool that allows you to zoom in on, or magnify, a portion of a sample.
2. Start, End, Size: here you have the start and end of the sample represented in cycles and, in some samplers, in time. The size tells you the size of the sample. This might not seem important now, but the size of the sample matters when working out and retuning the sample or changing the sample rate. Don't worry, we will tackle that later.
3. Loop: this is a crucial function and is the essence of what a sampler really does. The whole concept of looping is actually a simple one; whether it's for memory saving or for creating sustained instrument sounds, the process is invaluable. What is difficult is finding good loop points, and there are a number of reasons why this can seem complex. Firstly, unless the shape of the sample at the beginning, during and end of the loop matches up in level, shape and phase, you will have problems finding a clean loop point. The most common enemy here is the click. The best way to avoid clicks is to find what we call the zero-crossing point. This is where the sample's shape crosses over from the positive axis to the negative axis. At the point where the shape crosses the axis, we have a zero point. Looping at zero points eradicates the problem of clicks. But, if the shape and level don't match up well, you will still get a click, so we are still left with a problem. What does this tell us? It tells us that the portion of the sample being looped must be consistent, in terms of shape and level, but also in terms of length. Too long a loop length and you encounter modulation. Why? Because the sample has an attack and a decay. If you start your loop point too close to the attack and your end point too near the decay, you are left with a shape that starts high and drops to a lower level; this causes the loop to modulate, or wobble up and down. The opposite is also true. All sounds have a harmonic structure, and if your loop length is too small then the harmonics of the sound are compromised: since you are looping a very small slice of the sample, you are, in effect, cutting the harmonics up. This will give you an unnatural loop, in that it will sound very synthesized. That's fine if you are sampling synthetic sounds, but not if you are trying to loop a natural instrument sound. The final problem you are faced with is pitch. If you loop the wrong area of the sample, it might not be at the right pitch of the original signal being sampled. A C3 string note will not stay at exactly C3, but will move through its harmonics, so if you looped the wrong portion, the sample might show up as C3 +3 or worse, i.e. 3 cents off the right pitch. You need to select the most consistent part of the sample to attain the right loop points and loop length. This, unfortunately, takes practice and experience. Which leads me subtly to the next function.
4. Auto Correlation: some samplers provide this function when you are looping. Basically, after you have set your loop points, it searches for the next best loop point that it thinks will give you the best loop. Not always accurate, but useful if you are completely off target. However, we do have another weapon at our disposal if the loop still throws up a click.
5. Crossfade Looping: this technique involves fading out the end of the loop and overlapping it with a fade-in of the start of the loop, and it's a facility provided by virtually all samplers. By fading these points into each other, you get a smoother transition at the loop start and end. I only recommend using this when you have got really close to finding the right loop point and length, as it is a nice little tool and is just a polisher, not a tool to remedy bad loop points and lengths. If you had a very bad loop and it was glitching heavily, then using this tool would only make the sound unnaturally modulated, without any consistent shape. So, it's not for error correction, but for polishing off the tiny click that might be barely audible.
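Here is a small sketch of both ideas: finding zero-crossings as candidate loop points, and crossfading the end of a loop into the material just before its start (a toy linear crossfade over a list of samples, not any particular sampler's implementation):

```python
def zero_crossings(samples: list[float]) -> list[int]:
    """Indices where the waveform crosses from one side of the zero axis to the other."""
    return [i for i in range(1, len(samples))
            if (samples[i - 1] < 0.0) != (samples[i] < 0.0)]

def crossfade_loop(samples: list[float], loop_start: int, fade_len: int) -> list[float]:
    """Blend the last fade_len samples of the loop with the samples just before loop_start."""
    out = samples[:]
    for n in range(fade_len):
        mix = n / fade_len                       # 0.0 -> 1.0 across the fade
        tail = len(samples) - fade_len + n       # position near the loop end
        head = loop_start - fade_len + n         # matching position before the loop start
        out[tail] = samples[tail] * (1.0 - mix) + samples[head] * mix
    return out

wave = [0.0, 0.5, 0.3, -0.2, -0.6, -0.1, 0.4, 0.2, -0.3, -0.1]
print(zero_crossings(wave))                          # candidate loop points: [3, 6, 8]
smoothed = crossfade_loop(wave, loop_start=3, fade_len=2)  # loop end eased into the loop start
```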

6. DC offset: any waveform that isn't symmetrical around the zero axis has a DC offset. DC offset is when there is too large a DC (direct current) component in the signal, sometimes visible as the signal not being visually centred around the zero-level axis. DC offsets do not affect what you actually hear, but they affect zero-crossing detection and certain processing, and it is recommended that you remove them. That's the technical, but short, definition. Basically, always remove the DC offset on a sample. This will help you find zero-crossing points. This is a whole debate in itself; there are arguments raging on both sides of the fence, and arguments about the algorithms used for DC offset removal. You don't need to even think about getting involved in this debate. What you do need to know, and do, is remove the DC offset on a sample, and you are usually given a tool in the menu options to do this. The DC offset removal is actually called the DC filter, for those who want to know. Try experimenting, as always.

I think that is enough for this month. I don't want to mess your heads up too much. Use the tagging method and, as always, experiment. Next month, we will go into deeper editing tools and sample processing techniques. But for you to fully understand and enjoy these tutorials, you must practice.

The Process

Good to hook up with you all again. I trust that you have read all the parts leading up to this and that you have understood the content. This month we are going to look at the overall process of sampling and break it down into simple examples, with advice on the signal path and techniques for recording the audio signal. I find that the most common hurdles beginners face are understanding how to use their samplers, how to hook all the devices up to each other, and how to then manage the samples. The best way of tackling these sub-topics is to give you important pointers and guides, and from

there, you should be able to perform the task of sampling in a coherent and ordered fashion. Sampling is not just about recording a piece of audio; it is about organisation, management and following a protocol that ensures the best results. If these criteria are not adhered to, then you will always struggle and, more often than not, be totally disheartened by the process and the results. Practice is the answer but, to be effective, one needs to follow procedure, otherwise bad habits will develop and breaking those habits becomes harder and harder with time. Whether you are sampling in a hardware environment or a software environment, the methodology is the same. You need a temporary location for your samples, for editing and processing, and a final destination for the samples you want to keep. For this we have to create directories, and within those directories we need to create sub-directories. This ensures a simple way of locating samples and makes for a neater and more logical layout. So, in the case of soft sampling, i.e. in a computer, we need to create folders with sensible names. In the case of percussion, it makes sense to name the main folder Drums. We can then create sub-folders within the main folder and name those; for example, we could create folders with names like Kicks, Snares, Hi Hats and so on. We can then create another main folder and name that Loops, with sub-folders named according to BPM (beats per minute), or genre, or both. An example would be Hip Hop, with a sub-folder 60-85 BPM, etc. This makes life so much easier. We can continue this method and create more folders for instrument samples or loops. You get the picture? Organisation is crucial and order is paramount. The same applies to hardware samplers: all hardware samplers have naming and filing options. This method of archiving should be done prior to any sampling, to ensure that you have a trouble-free way of following the process and retrieving the data at any time. We now come to the path. As discussed in earlier parts of this tutorial, the signal path is the most important aspect of sampling. Keeping the signal clean and strong minimises the noise element and ensures the best dynamic range. But this is always the area that beginners struggle with. The reason for this is a lack of understanding of gain structures and the devices in the chain. Let me make that simpler to understand. Most beginners make rudimentary errors when sampling because they do not understand the nature of the sound they are sampling or the equipment being used in the signal path. The most common errors are recording a distorted signal, due to too high a gain; recording too low a signal, which results in added noise when the sample is then normalised or the gain increased; or

Of course, there are more errors than that, but these are the most basic and yet the most common, so maybe we should tackle these problems before we continue. So, to help you understand and set up your devices a bit better, I would like you to take note of the following as scripture. These hints and definitions will help you enormously, so please try not to just follow them, but to understand them.

1. Using a turntable

Most stand-alone turntables will require a preamp to boost the signal so that you can record an acceptable level. Some turntables, particularly those housed in hi-fi units, will have an amp built in, but for the more pro decks, or DJ turntables, a preamp is required. The choice of preamp is crucial. I could go into some very deep explanation about capacitance, hum, LF noise, impedance and so on, but that would confuse you at this stage. What I will say is that the following will save you great heartache and make life a great deal easier. Years back, the RIAA (Recording Industry Association of America) established what are known as compensation standards. The resulting RIAA preamp has been built into every hi-fi and stereo amp with phono or turntable inputs since then. In the event that you are using a turntable, connected to a mixer or stand-alone, that does not have a built-in RIAA preamp, you will need to get one. Now, this is where the technical heads sometimes have a fiery debate: do you apply RIAA equalisation at the preamp stage or afterwards, using software? Take my word for it, always apply the RIAA equalisation at the analogue stage, at the preamp, and not later. This will ensure a good, strong, dynamic signal with ample headroom. I could explain why, but apart from serious boredom, you might not yet understand why this is the case.

2. Cables

If I had a penny for every time the question of cables comes up, I would be one rich dude. There are a few things that are crucial about cables, and let us also put to bed the ridiculous myth that expensive cables are better than cheaper cables. This is simply not true, and if you actually took the time to make your own cables from component parts, you would realise how cheap it actually is to make your own quality cables.

In fact, I will write a tutorial on this soon, along with how to build your own pop-shield. Both are crucial DIY projects that will save you money, and they are fun.

Balanced

A balanced line requires three separate conductors, two of which carry signal (+ and -) and one of which is the shield/earth. You can usually identify these by looking at the connector: the plug will have two black rings and is referred to as TRS (tip, ring and sleeve). These are sometimes, and not always correctly, referred to as stereo jacks.

Unbalanced

An unbalanced cable runs two conductors, a hot (+) and an earth. By the way, I am being very simplistic here, as there are many variations on balanced, unbalanced, TRS, coax and so on. What is important is that if your equipment is balanced, use balanced cables throughout the path; if the path has both balanced and unbalanced equipment, then simply use balanced-to-unbalanced cables and vice versa. The advantage of using balanced cables is noise reduction. Finally, if you connect balanced outputs/inputs with unbalanced cables, you can end up with signal levels that are 6dB lower than they should be. This is essentially because only half the signal is being transferred, so it always pays to match your cables. You will find that a lot of cables are unbalanced, guitar jack cables and speaker cables being the most common examples; most microphone (XLR) cables, on the other hand, are balanced. Shielded cables can also afford better protection against RF (radio frequency) noise. Match your cables. Say that to yourself a hundred times. Even better, switch to balanced cables throughout the path if possible; that way you reduce noise and cable length does not become such an issue. This has subtly led me onto the debate about length. This is, again, dependent on the type of cable and connectors. Generally, as a rule, you can use unbalanced cables with no worries at all up to 5 metres; balanced can go even further, to 10 metres. However, these figures are not gospel. Now we will deal with connectors. This is another area that is rife with preferences and arguments, so I will sum up both the cables and the connectors in one statement. I make my own cables, but if I have to buy, I buy Van Damme, and for connectors I always use Neutrik, with Cannon and Switchcraft following.

My recommendation is: build your own cables. This saves money and teaches you a thing or two.

3. Ground loops, hums, power surges and other nasty artefacts

Without going into too much detail about what causes the above, I would rather propose a solution. You now have a little more insight into why certain cables can reject noise better than others, along with connectors, cable lengths and cable matching. What we now need to look at is how to prevent earth loops, surges and even hums. Most equipment needs to be earthed in some fashion, and the very nature of our planet and the national grid system means we will have power surges and spikes in our mains. Add to that mains hum, or equipment hum from non-earthed equipment, and you are confronted with a multitude of problems that can all be resolved with a simple and inexpensive solution. Nowadays, there are a number of companies that build surge protectors in the form of mains switches, isolators for maintaining a constant, pre-defined current, power distributors for maintaining and distributing current to a number of devices, and UPS systems (uninterruptible power supplies) for protection against power downs, cuts and outages. Simply put, you want to protect your equipment against power surges, spikes, shutdowns and so on. So, the simplest answer is to buy a power distributor that connects to all your equipment by way of kettle plugs and sockets, plus a surge protector in the form of a simple mains breaker switch, found at any shop that sells plugs and the like, and that's pretty much it. Emo and Furman make good power distributors and protectors and they are cost effective. Many companies make UPS systems, and they can start at a very cheap bracket and go into a hefty price range, the latter being for serious users like hospitals and the like. A simple UPS system can not only protect your system against power cuts, surges and spikes but also act as a distributor for your equipment, and not break the bank either. They are most commonly used when you have a computer, and a number of other devices, running in your studio that rely on a constant feed. This way, if there is a power cut in your area, the UPS will have a battery backup and will continue to function, allowing you to back up your data on the computer instead of having it all wiped out by the power cut. Personally, I have an Emo power distributor that affords me 12 kettle sockets connecting to my gear, which cost me £70, and a surge protector plug set that cost me £8 from my local Maplin. If you have serious mains issues, then seek the correct help and, if possible, have an isolator specifically for your studio. If you require a UPS system, there are a number of affordable manufacturers on the net, APC being one of the most noted.

Make sure to match the power rating and get a true online type. Seek them out and be happy. Bear in mind that your turntable may cause ground hum, so some type of grounding is required. With the latest Emu sound cards, notably the 1820M, there is a dedicated turntable input with a ground lug. That, to me, is one seriously cost-effective way of having a sound card and a preamp with grounding, all in one unit.

4. The sound card

Probably the most confusing subject, and the one most fraught with obstacles, is the sound card. Which one to buy, how to hook it all up, what connections, how to assign the ins and outs, analogue or digital, ADAT or optical, what sample rate? All of the above can be daunting for the beginner, but it can be made easy if you understand a few very basic concepts about what the sound card is and how it functions. I am not going to go into an explanation of how a sound card works, or what happens to the audio when it enters the computer via the sound card. In fact, I am not going to tell you much about this at all. Why? Because if you don't know, then you haven't read part 1 of this tutorial, so go back and read it. I do not want to waste time here covering the same ground and slowing everything down because some people like to skip the technical babble. Read part 1 and understand digital audio and the processes involved. What I will guide you on is how best to optimise your sound card for sampling, and what you need to know with regard to connectivity and the preparation within your computer's settings. As always, the goal here is to get as hot a signal as possible into the computer without noise or distortion and without compromising the headroom. I expect you to know all these terms. Some people like to sample digitally as opposed to sampling an analogue source. Remember that we are in the computer's domain here and not external hardware sampler territory. This is all about connection, so it makes sense to set your sound card's inputs to match the incoming signal. If you are using any of the digital inputs, ADAT, SPDIF and so on, then you need to select those as your inputs from the sound card's control panel or software on the computer. If you are using the analogue inputs, then you need to select those instead. I always recommend a hot signal at source, for example from the turntable's preamp.

After selecting the highest gain value without any distortion, you then need to match the input signal by adjusting the sound card's input gains, either from the sound card's control panel or physically, by adjusting the trims or knobs on the sound card itself, assuming it has any. Check your meters in the software application that you are recording into. Remember that in the digital domain anything above 0dB is clipping; it is not the same in the analogue world, where you have some play, or headroom, in the signal boost. Try to keep your signal a couple of dB below 0; that way you have left enough headroom should you wish to process the sample. If your recording peaks dead on 0dB and you then apply compression or any dynamics processing that boosts the gain, the sample will clip. Keep it sensible. The other area we need to touch on is the operating level. Most pro gear operates at a nominal +4dBu, often with balanced interfaces. Most consumer or semi-pro gear uses a -10dBV operating level, often with unbalanced interfaces. But the two are not interlinked or dependent on each other: you can have +4 unbalanced, or -10 balanced. The levels are referenced differently, dBu to 0.775V and dBV to 1V, so you can see that the two scales are not the same; I will put some quick numbers on this just before the calibration steps. I do not expect you to fully understand this yet, but if you want to delve into it a bit deeper, read my Synthesis tutorials. However, you might come across products that are set to different nominal operating levels; in that instance the gain staging is important.

5. Matching levels

It is imperative to understand how to calibrate the signal path for the optimum signal-to-noise ratio (S/N) and to get a true reading, so that your meters show the same reading across the path. Basically, what all this means is that you need to be able to see the same level readouts on your hardware and software, so that you are dealing with a known quantity. It is pointless having different gain readouts across your signal path, so what we need to do here is calibrate the system. In fact, it is essential to do this anyway, so that when you are mixing or producing, your levels are true. By calibrating your system and showing a true value across the path, you are in a stronger position to apply dynamics that might depend on numerical data as opposed to the ear, that is, on hearing alone. So, let us start at the source and finish at the destination. In this instance, the source will be the turntable, microphone or synthesizer and the destination will be the software application that you are using to sample with. For the sake of explanation, I will assume that you are using a mixer. Without a mixer, the calibration is much simpler, so I prefer to take the harder example and work off that.
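As promised, here is a quick worked example of those two operating levels, using nothing more than the reference voltages quoted above. It is only an illustration of the arithmetic, not something you need to run while sampling.

import math

def dbu_to_volts(dbu):
    # dBu is referenced to 0.775 V.
    return 0.775 * 10 ** (dbu / 20)

def dbv_to_volts(dbv):
    # dBV is referenced to 1 V.
    return 1.0 * 10 ** (dbv / 20)

pro = dbu_to_volts(4)          # roughly 1.23 V
consumer = dbv_to_volts(-10)   # roughly 0.316 V
difference_db = 20 * math.log10(pro / consumer)
print(round(pro, 3), round(consumer, 3), round(difference_db, 1))  # about 1.228, 0.316 and 11.8 dB

So a nominal -10dBV device sits roughly 12dB below a nominal +4dBu device, which is exactly the sort of mismatch that the calibration steps below are designed to expose.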

The steps to follow are quite simple, and they make total sense.

1. Connect the source to your mixer and attain unity gain. Unity gain is a subject that is, yet again, hotly debated by tech heads. Basically, it means aligning your sound to a fader and meter readout of 0. That is very simplistic and probably means nothing to you, so I will explain in more practical terms. Let us assume that you are connecting a synthesizer to channel 1 on your mixer. You first turn the volume knob on the synthesizer to 75%; some say crank it all the way to 100%, but I prefer to leave a little room in the event that I might need to boost the signal. Now, you set your mixer's fader on channel 1 to 0 and the trim or gain pots to 0. All you now need to concentrate on is the trim/gain knob. Turn this clockwise until the meter peaks at 0dB. If you do not have VU meters on your mixer, then check the LED for that channel and make sure it does not peak beyond 0dB. If you do not have an LED for individual channels, then use the master LED for the main outs, BUT make sure that every channel except channel 1 is muted. The reason for this is that live channels generate a certain amount of gain or noise, even when no signal is present, and when you sum all the channels together you might get a small amount of gain or noise at the resultant master outs. Actually, as a general rule, when you are not using a channel, mute it; this makes for a quieter mixer. Purists will say that peaking just past 0dB is better, but that is not the case. The reason is that mixers sum the channels to a stereo master, and even if all your faders are at 0dB, the master fader could exceed the 0dB peak. For analogue mixers, that is not a problem, as there is ample headroom to play with. For digital mixers, that equates to clipping. You have now achieved unity gain: your fader is set to 0dB and your channel's gain/trim knob controls the gain. On some mixers, you will actually see the letter U on the gain/trim knobs, helping you to identify the unity location. In essence the knob should be at U, but that is not always the case. The second method of attaining unity gain is as follows: Mackie mixers have a U on their trim knobs, so if you set this knob to U and your fader to 0dB, then adjust the synthesizer volume until the meter peaks at 0dB, you have attained unity gain. I have a Mackie mixer and I always end up a couple of dB past the U setting on the trim knobs. Don't let this worry you. What you must try to achieve is unity gain. Ok, so we have now set unity gain for the source and the channel input on the mixer, cool. Now we need to calibrate the mixer to the sound card.

2. Now check your master outs on your mixer. I am not talking about the control room outs that feed your monitors but the master out faders. These will be a stereo pair. A point to make here, before we carry on, is that most people will use subgroups as the outs to the sound card's inputs. What I have done so far is avoid the issue of subgroups or ADAT connections, because I want you to understand the straightforward signal path and because most users have a simple mixer with limited, if any, subgroups. However, treat the explanation for the master outs as if it were for the subgroup outs. At the end of the day, they are just outputs, but the beauty of subgroups is that they can be routed to different devices and, even more importantly, they can have different processors, such as gates or compressors, on each subgroup; by assigning a channel to a subgroup, you are able to have variety in your signal path. I have 8 subgroups on my mixer and a different compressor inserted on each one, but I have all 8 subgroups going out and into the 8 ins on my sound card. I can then assign any number of channels to any subgroup and use any of the compressors on them, or just have 8 outs nice and clean. The other advantage of having subgroups is that you have additional EQs you can use. Remember that the example I am giving here, of my setup, is purely for sampling purposes, as I am not sampling 8 outs at the same time. I am sampling either a mono channel or a stereo channel, and the subgroups afford me further editing and processing options. For recording purposes, I would assign my subgroups differently, but we will come to that in my new tutorial about mixing and production. For now, we are only concerned with sampling. Back on topic: make sure your master outs are set to 0dB. We now have unity gain from source all the way to the destination. What you should now be seeing on your meters is 0dB at channel 1 and 0dB on the master outs.

3. The sound card settings are the one area where most people have problems. They set their sound card faders, or gain/trim knobs, at 0dB and wonder why their levels are coming in either too low or too high. If you read part 1 of this tutorial, you will understand a little more about the processes that take place within the digital domain and the A/D input stage. All you need to concern yourself with is having unity gain right through the signal path. So, quite simply, adjust the sound card's faders until your meters read 0dB. Open up the software application that will be doing the recording, pass a signal through from the source to the destination (the application) and check the meters within the software application.

There should be no, or very little, difference in the readout. I cannot tell you how many home studios, and even pro studios, I have been to where the signal path is not calibrated and the levels are all over the place. Not attaining a calibrated path results in bad mixes, confused recordings and total frustration at not being able to understand why, or what is wrong with your setup. It is also important to mention that the minute you introduce any new device into this path, you will need to recalibrate to compensate for the new intruder. Compressors are the real culprits here. I will end this month's tutorial with a little information on the subject of noise. Almost all devices produce noise, to varying degrees. Whether it is hiss, hum or just general unwanted noise, you are left with a situation whereby you want that clean signal, noise free. The more devices you introduce into the path, the more noise is generated. Even mixers have an element of noise, generated by their circuitry. The tried and tested trick is to use noise gates or noise filters to cut out the unwanted frequencies. Some high-end mixers have gates built into the channels for this very purpose. You can insert a noise gate on the master outs and adjust the parameters until you eliminate the unwanted noise. A gate is exactly that: a gate that opens at a specified level (threshold) and shuts (release) when set to shut. You need to set the threshold to just above the noise and set the gate to stay open indefinitely or for a decay time that suits you. The gate will only let signals above the threshold pass through. You have parameters such as hold, release, ratio and attack. I do not want to go into this subject in detail, as I will be covering it more fully in my other tutorial, Production and Mixing; this is purely a tip to help you maintain a clean and strong signal path. (For those who like to see the parameters written out, there is a tiny illustrative gate sketch at the very end of this part.) Until next month, I want you to try to calibrate your system and think a little more about your cabling, but most of all, and as always, enjoy yourself. After all, this is music and music technology, and that is why you are reading this.
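And here is that gate sketch. It is a bare-bones illustration in Python, assuming floating-point samples; the threshold, attack and release values are just example figures, and a real gate adds hold times, ratios and smoother level detection, so treat this as nothing more than the threshold idea written out.

import numpy as np

def simple_gate(samples, sample_rate, threshold_db=-50.0, attack_ms=1.0, release_ms=100.0):
    # Bare-bones noise gate: the gain glides towards 1.0 (open) while the signal
    # is above the threshold and towards 0.0 (closed) while it is below.
    threshold = 10 ** (threshold_db / 20)
    attack_step = 1.0 / max(1.0, sample_rate * attack_ms / 1000.0)
    release_step = 1.0 / max(1.0, sample_rate * release_ms / 1000.0)
    gain = 0.0
    out = np.zeros(len(samples))
    for i, x in enumerate(samples):
        if abs(x) > threshold:
            gain = min(1.0, gain + attack_step)   # open quickly (attack)
        else:
            gain = max(0.0, gain - release_step)  # close slowly (release)
        out[i] = x * gain
    return out

Everything above the threshold passes; everything below it is faded out at the release rate, which is exactly the behaviour described above. In your studio, of course, the mixer or plug-in does this for you.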

CREATIVE SAMPLING

Well, we have now reached the end of the beginners' section on Sampling. It has been an interesting experience, but it does not end there, as I will be completing e-books for the intermediate and advanced stages. This month I would like to end with a few hints and tips for creative sampling. Most sampling enthusiasts usually sample a beat, a piece of audio or a riff when they sample. Your sampler is so much more than that, and offers a wealth of tools that you probably never knew existed, as they are kept quiet, away from the in-your-face tools. This tutorial aims to open your eyes to what you can actually achieve with a sampler, and how to utilise what you sample. This final tutorial is the real fun finale. I will be nudging you to sample everything you can and trying to show you what you can then do to the sample to make it usable in your music. First off, let us look at the method. Most people have a nightmare when it comes to multisampling. The one obstacle everyone seems to be faced with is how to attain the exact volume, length of note (duration) and number of notes to sample. The easy way to solve these questions in one hit is to create a sequence template in your sequencer. This entails having a series of notes drawn into the piano roll or grid edit of your sequencer. You can assign each and every note to be played at a velocity of 127 (maximum volume), have each note be exactly the same length (duration), and have the sequencer play each and every note, or any number of notes you want. The beauty of this method is that you will always be triggering samples that are at the same level and duration. This makes the task of looping and sample placement much easier. You can save this sequence and call it up every time you want to sample. Of course, this only works if you have a sequencer and if you are multisampling a source you can trigger directly, such as a synth keyboard, in which case it is extremely useful.

Creative Sampling

The first weapon in creative sampling is the change pitch tool. Changing the pitch of a sample is not just about slowing down a Drum and Bass loop till it becomes a Hip Hop loop, a little tip there that some people are unaware of.

It is about taking a normal sound, sampling it, then pitching it right down, or up, to achieve a specific effect. Let us take a little trip down the pitch lane. Sample the following and pitch the sample right down. You can achieve the pitch-down effect by using the change pitch tool in your sampler, by assigning the sample to C4 and then playing the C1 note as the pitched-down note, or by using time stretch/compress to maintain the pitch while slowing or speeding the sample. There is a crucial distinction here. Slowing down a sample has a dramatic effect on the pitch and works great for slowing fast tempo beats down to achieve a slower beat, but there comes a point where the audio quality starts to suffer, and you have to be aware of this when slowing a sample down. The same is true for speeding a sample up: speed up a vocal sample and you end up with squeaky vocals. Time stretching/compressing is a function that allows the length of a sample to be changed without affecting the original pitch. This is great for vocals. Vocals sung for a track at 90 BPM can then be used in a track at 120 BPM without having to change the pitch. Of course, this function is only as good as the software or hardware driving it; the better the stretching/compressing software or hardware, the better the result. Too much stretching/compressing can lead to side effects, and in some cases that is exactly what is required. A flanging, robotic type of effect can be achieved with extreme stretching/compressing, very funky. A crucial task to bear in mind, and always perform, is that when you pitch a sample down, you then need to adjust the sample start time. Actually, this is a secret weapon that programmers and sound designers use to find the exact start point of a sample: they pitch the sample right down, which makes it much easier to locate the start point. You will often find that a sample pitched down a lot will need to have the start time cropped, as there will be dead air present. This is normal, so don't let it worry you. Simply check your sample start times every time you perform a pitch down. Here are a few funky things to sample.

Crunching, flicking, hitting paper

Slowly crunch a piece of paper, preferably a thicker, crispier type of paper, and sample it. Once you have sampled it, slow it right down and listen to the sample. It will sound like thunder claps. If you are really clever you can listen to the sample as you slow it down, in stages, until you hear what sounds like a scratch effect, before it starts to sound like thunder claps.

SCSI dump the samples into your computer, use Recycle or similar, and dump the end result back into your sampler as chopped segments of the original sample (please read the chopping samples and Recycle tutorial). Big sheets of paper being shaken or flicked from behind can be turned into thunderous noises by pitching down, turning up and routing through big reverbs.

Spoon on glass

There are two funky ways to do this. The first is with the glass empty. Use an empty glass, preferably a wine glass, and gently hit it with a spoon. Hit different areas of the glass, as this generates different tones. You can then slow these samples down until you have bell sounds, or keep them as they are and add reverb and EQ to give tine-type sounds. The second way of doing this is to add water to the glass. This will deaden the sound and the sample will sound a lot more percussive. These samples make for great effects.

Lighting a match

Very cool. Light a match, sample it and slow it down. You will get a burst effect or, being clever, use just the attack of the match being lit and you will get a great snare sound, dirty and snappy.

Tennis ball against wood

Man, this is a very cool one. Pitch these samples down for kick and tom effects. You can get some really heavy kicks out of this sample. Actually, a ball hitting woody types of surfaces makes for great percussive sounds.

Finger clicking

Trim the tail off the sample and use the attack and body of the sample. You now have a stick or snare sound. Pitch it down and you will have a deep tom-burst type of effect. Or take the sample of the finger click and cut it into two segments, the first being the attack and the body, the second being the tail end. Layer them together and you have a snare with a reverse type of effect.

Hitting a radiator with a knife

Great for percussive sounds. Pitched down, you get percussive bells, as opposed to bells with long sustains and releases. Also, if you take only the attack of this sample, you will have a great snare sound.

Kitchen utensils

These are the foundation for your industrial sounds. Use everything. First, drop them all on a hard surface, together. Sample that, slow it down a bit, and you will have factory types of sounds. Second, drop each utensil on a hard surface and sample them individually. They make for great bell and percussive sounds. Scrape them together and sample them. Slowed down, they will give you great eerie industrial sounds and film sound effects. Metallic sounds, once pitched down, give more interesting undertones, so experiment.

Hitting a mattress with a piece of wood

This will give a deep, muffled sound that has a strong attack. This makes for a great kick or snare. Slowed right down, you will achieve that Trancey type of deep kick.

Blowing into bottles

This gives a nice flute type of sound. Pitched down, you will get a type of foghorn sound. Blow into the bottle short and hard and use the attack and body, and you will achieve a crazy deep effect when pitched down.

Slamming doors

Slam away and sample. Thunderous sounds when pitched down. The attacks of the samples make for some great kicks and snares.

Aerosol cans

Great for wind and hi-hats. Slowed down, you will achieve wind type sounds. Pitched up, you get cabasa type sounds. Run through an effect and pitched higher, you will achieve a hi-hat type of sound.

Golf ball being thrown at a wall

A snare sample that is great in every respect. Kept as is, you get a cool snare. Pitched up, you get a snappier snare. Pitched down, you get a deep tom, kick or ethnic drum sound.

Toys

Sample toys, preferably the mechanical and robotic ones. The number of sample variations you will get is staggering. These mechanical samples, once pitched down, make for great industrial sounds. Pitched up, they can make some great Star Wars types of sounds. Simply chopped up as they are, they make for great hits, slams and so on.

Factories and railway stations

Take your recorder and sample these types of locations. It is quite amazing what you will find, and once manipulated, the samples can be so inspiring.

Toilets, sinks and bathtubs

Such fun. Water coming out of a tap, pitched down, can be white water. Water dripping can be used in so many ways. Splashing sounds can be amazing when pitched up or down. Dropping the soap in a full bath, or hitting the side walls of the bathtub when it is empty or even full, can create some of the best percussive sounds imaginable.

Radio

Sample your radio, assuming it has a dial. The sounds of searching for stations can give you an arsenal of crazy sounds. Pitched down, you will get factory drones, swirling electric effects and weird electro tom sounds. The sound palette is endless. I think you get the picture by now. Sample everything and store it away. Create a library of your samples. Categorise them, so that they are easy to locate in the future. Now let us look at what you can do to samples to make them interesting. Reverse is the most obvious and potent tool. Take a piece of conversation between a man and a woman, sample it, reverse it and, hey presto, you have the Exorcist. Layer a drum loop with the reversed version of the loop and check it out. Cool. Pitch the reversed segment down a semitone or two to create a pseudo Doppler effect. With stereo samples of ambient or melodic sounds, try reversing one channel for a more unusual stereo image. You can also play around with panning here, alternating and crossfading one channel against the other.
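If you like to experiment in software as well as in the sampler, the reverse-and-layer trick, and the pitch-down trick from earlier in this part, are easy to try on a raw file. Here is a minimal sketch assuming a 16-bit mono WAV; the file names are just placeholders, and the octave-down at the end is the crude speed-and-pitch change discussed earlier, not proper time stretching.

import numpy as np
from scipy.io import wavfile

rate, data = wavfile.read("drum_loop.wav")        # placeholder file name
data = data.astype(np.float64) / 32768.0          # assuming a 16-bit mono file

# Reverse-and-layer: mix the loop with its own reversed copy at a lower level.
layered = data + 0.5 * data[::-1]
layered /= np.max(np.abs(layered))                # keep the mix out of clipping

# Crude octave-down: write the same samples at half the rate, so the file
# plays back twice as long and an octave lower (pitch and speed change together).
wavfile.write("layered.wav", rate, (layered * 32767).astype(np.int16))
wavfile.write("octave_down.wav", rate // 2, (data * 32767).astype(np.int16))

A sampler does all of this for you, of course, but it is worth seeing that reversing, layering and repitching are very simple operations on the raw data, which is why they are such cheap sources of new sounds.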

Try sampling at the lowest bandwidth your sampler offers for that crunchy, filthy loop. This is lo-fi land. Saves you buying an SP1200... he he. Try deliberately sampling at too low a level, then using the normalising function repeatedly to pump the volume back up again. This will add so much noise and rubbish to your sample that it becomes dirty in a funky way. You can take a drum loop and normalise it continually until it clips heavily. Now Recycle the segments, dump them back into your sampler, and you have dirty, filthy, crispy Hip Hop cuts. A sample doubles its speed when it is transposed up an octave, so try triggering two versions of a sampled loop an octave apart, at the same time. With a percussive loop you'll get a double-speed percussion loop running over the top of the original. Use effects on a loop, record it to cassette for that hissy flavour, then resample it. Recycle the whole lot, drop the segments back into your sampler, and you have instant effects that you can play in any order. Layer and crossfade pad samples so that one evolves or morphs into another. Take a loop and reverse it, then add the reversed loop to the end of the original loop for some weirdness. Multi-triggering a loop at close intervals will give you a chorus or flange type of effect. Try it: have the same loop on three notes of your keyboard and hit each note a split second after the other. There you go. I could go on for pages, but I will leave the more advanced stuff for later. Experiment and enjoy yourself. Sampling is more than just recording a sound. It is a very powerful tool and methodology for letting your creative juices run riot. I hope these tutorials have helped you. Take care
