Shockwave-Sound Blog and Articles

Depth and space in the mix, Part 2

by Piotr Pacyna

< Go to part 1 of this article

So, how to start?

With a plan. First off, I imagine, in my head or on a sheet of paper, the placement of the individual instruments/musicians on a virtual stage, and then think about how to “re-create” this space in my mix. Typically I’d have three areas: foreground, mid-ground and background. Of course, this is not a rule. If we make a raw rock mix with a sparse arrangement and an in-ya-face feel, we don’t need much space; in a dense, multi-layered electronic production, on the other hand, depth is crucial.

So, I have divided all my instruments into, say, three spatial groups. Then, in my DAW, I give every instrument belonging to a given group the same colour, which is wonderfully handy – I immediately see everything at a glance.

The tracks that I usually want to have close are drums, bass and vocals. A bit deeper and further back I’d have guitars, piano and strings. And then, in the distant background, I’d have synth textures or perhaps some special vocal effects. If there are string or brass sections in our song, we first need to learn how the instruments of an orchestra are traditionally placed in order to reproduce that layout. Of course, this only matters if we are aiming for realism.

But sometimes we don’t necessarily need the realism, especially in electronic music. Here almost anything goes!

Going back to our plan…

Whether we are striving for realism or not, I suggest starting the plan by pinning down which element will be the furthest away – you need to identify the “back wall” of the mix. Let’s assume that in our case it is a synth pad. From this point on, any decision about placing instruments closer or farther away has to be made relative to that back wall.

At this point we have to decide what reverb we will use. There are basically two schools of thought. Traditionalists claim that we should use only one reverb in the mix, so as not to give the brain misleading information. In this case we have the same reverb on each bus (in terms of the algorithm), changing only the settings – especially pre-delay, dry/wet ratio and EQ. Those of a more pragmatic nature believe that realism is not always what matters, especially in electronic music, and that only the end result counts. Who is right? Well, both are.

I’d usually use two, maybe three different reverb algorithms. The first would be a short room-type reverb; the second, longer one would be a plate; and the third and farthest would be a hall or church. By using sends from the individual tracks I can easily decide how far away or how close an instrument will sit on our virtual stage.

Do not add reverb to every track; the contrast will let you enhance dimension and imaging even more. If you leave some tracks dry, the wet ones will stand out.

Filtering the highs out of our reverb returns not only sinks things back in the stereo field, but also helps to reduce sibilants – reverb tends to spread them out in the space, which is very irritating. An alternative way of getting rid of sibilance in reverb is to use a de-esser on the sends.

Compression and its role in creating depth

Can you use compression to help create depth? Not only can you – you must!

As we all know, one basic use of a compressor is to “stabilize” an instrument in the mix. It means the compressor helps keep the instrument the same distance away from the listener throughout the whole track. To put it in other words – its relative volume stays stable. But of course we don’t always need it to be. This is particularly important for instruments placed at the back of the sound stage, because otherwise they will not sound clear. Now, how can compression help us here? The unwritten rule says that gain reduction should not exceed 6 dB. This rule works, for instance, for solo vocals, where bigger reduction can indeed “flatten” the sound. Yet this is not necessarily the case when it comes to backing vocals or, generally, instruments playing in the background. Sometimes these get reduced by 10 dB or even more. In a word – everything that is further away from the listener can be compressed more heavily. The results may surprise you!

There is one more thing I advise you to pay attention to – the two basic detection modes: RMS and Peak. Peak mode “looks” at signal peaks and reduces the gain according to them. What sound does it give? In general – more squeezed and soft, sometimes even pumping. It’s useful when we want the instrument to pulse softly rather than dazzle the listener with vivid dynamics. RMS mode makes the compressor act more like the human ear, not focusing on signal peaks that often have little to do with perceived loudness. This gives a more vibrant, dynamic and natural sound. It works best if our aim is to preserve the natural character of the source (and that’s often the case, for example, with vocals). RMS mode gives a lively, more open sound, good for pushing things to the front of our sound stage.

The interesting fact is that built-in channel compressors in SSL consoles are instantly switchable between Peak and RMS modes. You can find something similar in the free TDR Feedback Compressor from Tokyo Dawn Records.
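
To make the Peak/RMS difference concrete, here is a minimal Python sketch (assuming NumPy; the windowed detectors and the test signal are illustrative choices, not any particular plugin’s algorithm):

    import numpy as np

    # Two simplified level detectors over short windows of a mono signal.
    def peak_level(x, sr=44100, window_ms=1.0):
        """Peak detector: reacts to the single highest sample per window."""
        n = max(1, int(sr * window_ms / 1000))
        pad = (-len(x)) % n
        return np.abs(np.pad(x, (0, pad))).reshape(-1, n).max(axis=1)

    def rms_level(x, sr=44100, window_ms=50.0):
        """RMS detector: averages energy, closer to perceived loudness."""
        n = max(1, int(sr * window_ms / 1000))
        pad = (-len(x)) % n
        frames = np.pad(x, (0, pad)).reshape(-1, n)
        return np.sqrt((frames ** 2).mean(axis=1))

    # A single loud click followed by a quiet tone: the peak detector
    # slams on the click, while the RMS detector barely notices it -
    # mirroring how the two compressor modes react to transients.
    sr = 44100
    sig = np.concatenate([np.zeros(100), [1.0], np.zeros(899),
                          0.1 * np.sin(2 * np.pi * 440 * np.arange(sr // 10) / sr)])
    print("peak:", peak_level(sig, sr).max())  # ~1.0 (the click)
    print("rms: ", rms_level(sig, sr).max())   # ~0.07 (the tone)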

Delay

Another very popular effect is delay. It is, one might say, a very primitive form of reverb (as reverb is nothing more than a series of very quick reflections).

As you may remember from the earlier part of this article, I mentioned the pre-delay parameter in reverb. You can use a delay plugin in pretty much the same way to create a sense of depth in the mix. Shorter delay times will make instruments sound further away from the listener; longer times will do the opposite. But you can of course use delay in many different ways. For instance – very short delay times with no feedback can also thicken and fatten the sound nicely. Try it!

The thing I like most about delay is that it gives the mix a certain spatial context. Music played in an anechoic chamber would sound really odd to us, as we have heard all sounds in some context from birth (and the situation is of course no different with music). No matter whether you listen to a garage band, a concert at a stadium or in a club – the context of the place is essential to appreciating the space in which the music is playing.

Now, how to use all this knowledge in practice

And now I will show you how I use all of this information in practice, step-by-step.

1. The choice of reverbs.

As I said before, the first thing we have to consider is whether we are aiming for realism or not.

I always struggle when it comes to reverb. What are the best settings for a given instrument or sample? Should I use a hall or a plate? Should I use an aux send or an insert? Should I EQ before or after the reverb, and so on? I don’t know why, but reverb seems to be the hardest thing for me to understand, and I wish it were not.

And then comes another big question: how much reverb should be applied to certain tracks? All decisions made during the mixing process are based on what makes me feel good. One good piece of advice is to monitor your mix in mono while setting reverb levels. Usually, if I can hear a touch of it in mono, it will be about right in stereo; if I get it too light, the mix will sound too dry in mono. Also – concentrate on how close or distant the reverberated track sounds in the context of the mix, not on how soft or loud the reverb is (a different perspective).

2. Creating the aux tracks with different reverb types.

3. Organizing the tracks into different coloured groups.

 

At the top of the session I have a grey coloured group – these are the instruments that I want to have really close and more or less dry: kick, bass, hihats, snare, various percussion loops. I have Room reverb going on here, but it is to be felt, not heard.

Then I have the blue group. These are the “second front” instruments with Hall or Plate type reverb on them.

And then I have the background instruments, the back wall of my mix. Everything that is here is meant to be very distant: synth texture, vocal samples and occasional piano notes.

4. Pre-delays, rolling off the top, the bottom, 300Hz and 4500 Hz.

My example configuration would look like this:

  • Room: 1/64-note or 1/128-note pre-delay, HPF rolling off from 200 Hz, LPF from 9 kHz.
  • Plate: 1/32-note or 1/64-note pre-delay, HPF rolling off from 300 Hz, LPF from 7 kHz.
  • Hall: no pre-delay, HPF rolling off from 350 Hz; the lowpass is usually quite low, in the 4-5 kHz zone (remember that air absorbs high frequencies much more than it absorbs lower ones).
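
Since those pre-delays are given as note values, their length in milliseconds depends on the tempo. A quick helper in plain Python (the names and the 120 BPM example are my own):

    # A quarter note lasts 60000/BPM ms, so a 1/64 note is 1/16 of that.
    def note_to_ms(bpm, division):
        """division: 64 means a 1/64 note, 32 a 1/32 note, and so on."""
        return (60000.0 / bpm) * 4.0 / division

    bpm = 120  # example tempo
    for name, div in [("Room 1/64", 64), ("Room 1/128", 128),
                      ("Plate 1/32", 32), ("Plate 1/64", 64)]:
        print(name, note_to_ms(bpm, div), "ms")
    # At 120 BPM: 31.25 ms, 15.625 ms, 62.5 ms, 31.25 ms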

5. Transients


Distance eats transients. It attenuates the direct sound – the first arrival of the initial transient – while the reverberation picks up and amplifies the steady, tonal part of the sound. Distant sound is much less transient-laden: far smoother, far more legato, far less staccato, less “bangy” and “crunchy” than close-up sound. It is also harder to understand words at a distance. That’s why I often compress the longest reverb to flatten or get rid of transients. Set a fast attack if you want less of a transient at the start and the parts to be squashed more. I also use a transient designer (such as the freeware FLUX BitterSweet) and turn the knob anticlockwise to soften the attack a little.

[Example.mp3]

  • Foreground: drums, percussion, bass and saxophone.
  • Mid-ground: piano, acoustic guitar.
  • Background: synth pad, female voice.

Summary


For a long time I had a tendency to put way too much reverb on everything. You know, I thought I would get a sense of depth and space this way, but I was so wrong… Now I know that if we want one track to sound distant, another must be very close. The same goes for volume and every other aspect of the mix – to make one track sound loud, others need to be soft, and so on.

There are some more sophisticated methods that I haven’t tried myself yet. Like a smart use of compression for instance. Michael Brauer once said: “I’m using a lot of different sounding compressors to give the record depth and to bring out the natural room reverbs of the instruments”.

Some people also get nice results by playing around with the Early Reflections parameter of a reverb. The closer a sound source is to boundaries or large reflective objects within an acoustic space, the stronger the early reflections become.

Contrast and moderation – I want you to leave with these two words, and I wish you all successful experimenting!



About the author: Piotr “JazzCat” Pacyna
is a Poland based producer, who specializes in video game sound effects
and music. He has scored a number of Java games for mobile phones and, most
recently, iPhone/iPad platforms. You can license some of his tracks here.


Depth and space in the mix, Part 1

by Piotr Pacyna

“When some things are harder to hear and others very clearly, it gives you an idea of depth.” – Mouse On Mars

There are a few things that immediately allow one to distinguish an amateur mix from a professional one. One of them is depth. Depth relates to the perceived distance of each instrument from the listener. In amateur mixes there is often no depth at all; you can hardly say which instruments are in the foreground and which are in the background, simply because all of them seem to be the same distance from the listener. Everything is flat. Professional tracks, in turn, reveal an attention to precisely positioning individual instruments in a virtual space: some of them appear close to the listener’s ear, whilst others hide more in the background.

For a long time people kept telling me that there was no space in my mixes. I was like: guys, what are you talking about, I use reverbs and delays, can’t you hear that?! However, they were right. At the time I had problems understanding the difference between using reverb and creating a space. Serious problems. The truth is – everyone uses reverbs and delays, but only the best can make a mix sound as if it were three-dimensional. The first dimension is of course the panorama – the left/right spread, that is. The second is the up/down spread, achieved by proper frequency distribution and EQ. The third dimension is depth. And this is what this text is going to be about.

There are three main elements that help to build the depth.

1. Volume level

The first, most obvious and pretty much self-explanatory one is the volume level of each instrument track. The way it relates to the others allows us to judge the distance of the sound source. When a sound comes from far away, its intensity is necessarily smaller. It is widely accepted that every time you double the distance, the signal level drops by about 6 dB. Similarly, the closer the sound gets, the louder it appears.
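
A quick sanity check of that rule of thumb in Python (the 1 m reference distance is just an arbitrary choice for the example):

    import math

    # Inverse-square law: level drop relative to a 1 m reference distance.
    for d in (2, 4, 8):
        print(f"{d} m: {-20 * math.log10(d):.1f} dB")
    # 2 m: -6.0 dB, 4 m: -12.0 dB, 8 m: -18.1 dB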

It is a very important issue that often gets forgotten…

2. Time the reflected signal needs to reach our ears

The second is the time taken by the reflected signal to reach our ears. As you all know, in every room we hear a direct signal and one or more reflected signals. If the time between these two signals is less than 25-30 ms, the first arrival gives us a clue as to the direction of the sound source. If the difference increases to about 35 ms or more, the second signal gets recognized by our ears (and brain) as a separate echo.

So, how to use it in practice?

Because PAN knobs are two-dimensional and only move from left to right, it’s easy to fall into the trap of habit and set everything in the same dull, obvious way – drums here, piano there, the keys here… as if the music were played in a straight line from one side to the other. And we all know that is not the case. When we are at a concert we can hear a certain depth, a “multidimensionality”, quite brilliantly. It is not hard for us to say, even without looking at the stage, that the drummer is located at the back, the guitarist slightly closer on the left side, and the singer in the middle, at the front.

And although the relative loudness of the instruments is of great importance for creating a realistic scene, it’s the time the signal needs to reach our ears that really matters here. These tiny delays between certain elements of the mix get translated by our brain into meaningful information about the position of a sound in space. Sound travels at a speed of approximately 30 cm per millisecond, so if we assume that in our band the snare drum is positioned 1.5 m behind the guitar amps, the snare sound reaches us about 5 ms later than the signal from the amplifier.
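
That arithmetic is easy to wrap in a helper (a sketch; note the article’s round figure of 30 cm/ms is used here, while the speed of sound in air is actually closer to 34 cm/ms):

    # Extra arrival time, in ms, for a source 'distance_m' metres further
    # away, using the round figure of 30 cm per millisecond.
    def delay_ms(distance_m, cm_per_ms=30.0):
        return distance_m * 100.0 / cm_per_ms

    print(delay_ms(1.5))  # snare 1.5 m behind the amps -> 5.0 ms later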

Let’s say we want to make the drums sound as if they were standing at the back of the stage, near the rear wall. How do we do that? When setting the reverb parameters, pay attention to ‘pre-delay’. This parameter adds a short delay between the direct signal and the reflected signal. It separates the two, letting us control how long after the direct sound we hear the reflections. It’s an extremely powerful tool for creating a scene. A short pre-delay means the reflected signal is heard almost immediately after the direct signal appears; the two hit our ears at nearly the same time. A longer pre-delay, by contrast, moves the sound source away from the reflective surface (in this case the rear wall). If we set a short pre-delay of a few milliseconds on the snare, a longer one on the guitar and an even longer one on the vocals, it is fairly easy to hear the differences. Vocals with a long pre-delay sound a lot closer than the snare drum.

We can also play around with pre-delay when we want to get a real, natural piano sound. Let’s say we place the piano on the left side of our imaginary stage. When sending it to a stereo reverb, let’s try setting a shorter pre-delay for the left channel of the reverb, because in reality the signal would bounce back from the left side of the stage (from the side wall) first.

[Pre-delays.mp3]

First we have a dry signal. Then we are in a (quite big) room, close to the drummer. And then we are in the same room again, but this time the drummer is located by the wall, far away from us.

3. High frequency content

The third is the high-frequency content of the signal. Imagine that you are walking towards an open-air concert or a pub with live music. Which frequencies do you hear first? The lowest ones, of course. The closer we get to the source of the music, the less dominant the bass becomes. This lets us conclude that the fewer high frequencies we hear, the further away the sound source is; hence a fairly common practice for moving an instrument to the background is gently rolling off the high frequencies with an LPF (low-pass filter), rather than boosting the bass.

I often like to additionally filter the returns of reverbs or delays – the reflections seem to be more distant this way, deepening the mix even more.

Speaking of frequency bands, we should also pay attention to the region somewhere around 4-5 kHz. Boosting it can “bring the signal up” to the listener. Rolling it off will of course have the opposite effect.

“It is totally important when producing music that certain elements are a bit more in the background, a bit disguised. It is easiest to do that with a reverb or something similar. In doing that, other elements are more in focus. When everything is dry, in the foreground, it all has the same weight. When some things are harder to hear and others very clearly, it gives you a sense of depth. And you can vary that. That is what makes producing music interesting for us. To create this depth, images or spaces repeatedly. Where, when hearing it, you think wow, that is opening up so much and the next moment it is so close again. And sometimes both at the same time. It is like watching… you get the feeling you need to readjust the lens. What is foreground and background, what is the melody, what is the rhythm, what is noise and what is pleasant. And we try to juxtapose that over and over again.” (Mouse on Mars)

Problematic Band

All modern pop music has one thing in common: it is recorded at close range, using directional microphones – typical near-field recording. This is how most instruments are recorded, even those you don’t normally put your ear right up to: bass drum, toms, snare, hi-hat, piano (does anyone put their head inside a piano to listen to music?), trumpet, vocals… And yet the musicians playing these instruments (and certainly the listeners!) normally hear them from a certain distance. That’s the first thing. The second: the majority of studio and stage microphones are cardioid, directional, close-up mics. Okay, these two things are quite obvious, but what is the result? It turns out that we record everything with the proximity effect printed onto the tracks! Literally everything. The idea behind directional microphones is that they pick up a lot of the wanted sound and are much less sensitive to background noise, so the microphone must handle loud sounds without misbehaving, but doesn’t need exceptional sensitivity or very low self-noise. If a directional microphone gets very close to the sound source – within ten or so microphone diameters – it tends to over-emphasise bass frequencies; between this limit and about 100 cm the frequency response is usually excellent.

Tracks with the proximity effect printed on them sound anything but natural.

Everyone has got used to it, and even to the musicians their instruments recorded from a close distance sound okay. What does this mean? That almost all of our music has a redundant frequency hump around 300 Hz. Some say it’s nearer 250 Hz, others 400 Hz – but it’s more or less there, and it is fair to conclude that almost every mix would benefit from taking a few dB off (with a rather broad Q) in the low mids.

Rolling off these frequencies makes the track sound more “real” in some way, and it’s actually a common move on the mix bus, too. The mix gets cleaned up immediately, loses its muddiness, and despite the lower level it sounds louder. The low mids rarely carry much important musical information.

And this problem affects not only music recorded live – samples and sounds are produced the way customers expect, which means they are “compatible” with the sound of the microphone. So it is worth becoming familiar with this issue even if you produce electronic music only.

The bottom line is: if you want to move an instrument to the back, roll off the frequencies around 300 Hz. If you want to bring it closer, simply add some extra energy in this range.

Continue to part 2 of this article >



About the author: Piotr “JazzCat” Pacyna
is a Poland based producer, who specializes in video game sound effects
and music. He has scored a number of Java games for mobile phones and, most
recently, iPhone/iPad platforms. You can license some of his tracks here.


Drum tips for music producers

by Piotr Pacyna

Here are a few tips for anyone thinking about spicing up their drum parts. Some of them are for more advanced producers, while others will suit less advanced readers. This article is not for complete beginners, however. You need to know at least the basics of MIDI programming, EQ and compression.

1. Humanization and de-quantization

For most of us, drum humanization means only two things – “dequantization” of the notes and randomization of velocity values (within a specified range, of course). Usually we just take the various drum notes and offset them by a few milliseconds. We think about it in realistic terms, and our reasoning goes like this: when a drummer sits down behind the kit and starts playing, does he hit each drum in perfect time? Of course not. The liveliness is what gives drums their flavor. And so on, and so on…

This is all true, but it’s not enough.

We can’t really make a drum track sound “real” this way. All we get is the impression of a clumsy, sloppy drummer who has no control over what he plays. That’s actually not so bad for punk music, but imagine programming a jazz drum beat – such cheap humanization tricks simply won’t work.

There is a workaround of sorts. When you’re recording your track, consider playing the hi-hats or snares directly from your MIDI keyboard without quantizing them. You have to be careful doing this, as you still need to stay close to the correct rhythm, but sometimes a bit of variation gives your drums life.

But it’s still not enough…

The thing that can really change your drum programming is drawing attention to the so-called natural rhythmic tendencies of a live musician. It is worth remembering that every musician:

  • naturally tends to slow the tempo down when he plays quietly,
  • speeds up when playing louder,
  • tends to slow down when playing more sparse rhythms,
  • speeds up when playing busy grooves.

Most musicians try to eliminate these tendencies while learning to play with a metronome, but it’s impossible to get rid of them completely. They are always present (and that’s a good thing!). The best musicians are simply able to control them – and they are also perfectly aware that total elimination of these natural tendencies is not necessary.

Keep all of this in mind when programming drums – you can really benefit from it. For example, if the song contains a quiet section or a part where the drum beat stops, keeping the tempo perfectly steady may even sound unnatural.

If your audio sequencer allows tempo changes within a project, I encourage you to experiment – in the softer part try slowing the tempo down by anything from 1-2 ticks (ballads) to as much as 10 BPM (fast songs) and then, when the groove kicks in again, going back to the original tempo. If you don’t overdo it, the result will sound much more natural than keeping exactly the same tempo throughout the whole song.

 

2. Overheads


Electronic music producers often underestimate the role of overhead microphones.

Those mics are used in sound recording and live sound reproduction to pick up ambient sounds, transients and the overall blend of instruments. In drum recording they are used to achieve a stereo image of the full kit, and in orchestral recording to create a balanced stereo picture of a full orchestra.

In the real world, drummers often record their tracks in a special drum room, where the spill between microphones creates a certain atmosphere. Our brain interprets it as a “real” sound.

Live acoustic drums sound impressive. Sound engineers have a lot of trouble isolating each instrument from the others: they use separate mics for each drum, try various microphone types and placements, and play around with special plexiglass walls/partitions. Full separation is not possible, though, and thanks to the fact that each instrument bleeds into the others’ mics, your brain tells you that you are listening to live drums. It’s not easy to get such an effect using samplers and programmed drums.

So, how can you emulate overhead drum mics in your DAW?

You can try to run the drums through three carefully tweaked reverbs.

I send the kick, snare, hats and loops to the AUX 1 bus and highpass it to keep only the frequencies above 3 kHz. Then I apply a drum-room reverb – this way the high band sounds as it might if recorded through an overhead mic.

On AUX 2 I use high- and lowpass filters to remove the frequencies below 400 Hz and above 2.5 kHz, then apply the same drum-ambience reverb. This gives me the sound of the mid band typical of snare-mic bleed.

On the AUX 3 bus I deal with the bass. This time I use an LPF to remove the frequencies above 300 Hz, and then apply the same drum-ambience reverb.
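
For readers who like to see the routing spelled out, here is a minimal offline sketch of those three sends in Python (assuming NumPy and SciPy; the reverb is stood in for by convolution with an impulse response you would supply yourself, and the function names and blend value are illustrative):

    import numpy as np
    from scipy.signal import butter, sosfilt, fftconvolve

    def band(x, sr, low=None, high=None, order=4):
        """Butterworth helper: keep only the [low, high] Hz region of x."""
        if low and high:
            sos = butter(order, [low, high], btype="bandpass", fs=sr, output="sos")
        elif low:
            sos = butter(order, low, btype="highpass", fs=sr, output="sos")
        else:
            sos = butter(order, high, btype="lowpass", fs=sr, output="sos")
        return sosfilt(sos, x)

    def fake_overheads(drum_bus, room_ir, sr=44100):
        """Three 'sends' as described above: >3 kHz, 400 Hz-2.5 kHz, <300 Hz,
        each through the same room reverb (here just a convolution with an
        impulse response, room_ir)."""
        aux1 = band(drum_bus, sr, low=3000)            # overhead-style top
        aux2 = band(drum_bus, sr, low=400, high=2500)  # snare-mic style mids
        aux3 = band(drum_bus, sr, high=300)            # low-end ambience
        wet = sum(fftconvolve(a, room_ir)[:len(drum_bus)]
                  for a in (aux1, aux2, aux3))
        return drum_bus + 0.3 * wet  # blend to taste, as with real sends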

[OverheadsON.mp3]
[OverheadsOFF.mp3]

In the first example there is just a normal reverb applied to each track, which is pretty much what everybody does. It’s OK, but something is missing. The second example is the same drum track with the “overhead” tracks blended in with the original signal.

Remember that you can always come back to each channel and set a different send level for every AUX.

What else can one do with the overheads track to make it more realistic? You can use tips from my previous article about creating space and depth in the mix. For example, using the various tricks I wrote about there, you can make the “overhead mics” sound a bit more distant from the listener than the “close” mics.

http://www.shockwave-sound.com/Articles/G04_Depth_and_space_in_the_mix_part_2.html

You will learn from it, among other things, how the following help to make an instrument sound close and in-your-face, or “deep” in the mix:

  • pre-delay parameter in reverb,
  • high frequency band,
  • proximity effect (around 300 Hz),
  • PEAK and RMS modes of the compressor.

What else?

 

Sidechain the overheads


A range of cool effects can be created by putting a compressor or gate on the overheads channel and keying it from the kick, snare or hi-hats. Key a frequency-conscious gate from the snare to emphasize the airy overhead signal with each snare hit; or compress the overheads via the kick to draw the sides in with each kick hit.
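
As a rough illustration of the second idea (ducking the overheads from the kick), here is a simplified NumPy sketch – an envelope follower plus a gain computer, not a faithful model of any particular compressor:

    import numpy as np

    def envelope(x, sr, attack_ms=2.0, release_ms=120.0):
        """One-pole envelope follower over the key signal (e.g. the kick)."""
        a = np.exp(-1.0 / (sr * attack_ms / 1000.0))
        r = np.exp(-1.0 / (sr * release_ms / 1000.0))
        env = np.zeros(len(x))
        prev = 0.0
        for i, s in enumerate(np.abs(x)):
            coef = a if s > prev else r
            prev = coef * prev + (1.0 - coef) * s
            env[i] = prev
        return env

    def duck_overheads(overheads, kick, sr, threshold=0.2, ratio=4.0):
        """Turn the overheads down whenever the kick exceeds the threshold
        (a simplified gain computer, not a textbook dB-domain design)."""
        over = np.maximum(envelope(kick, sr) - threshold, 0.0)
        gain = 1.0 / (1.0 + (ratio - 1.0) * over / (1.0 - threshold))
        return overheads * gain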

What else?

 

Overhead processing


Applying a stereo-widening plug-in to the overheads channel can work wonders for the sense of breadth. Two things to avoid with overheads, though, are overly heavy EQ boosts above 10 kHz and compression (although a touch of compression can be effective for a vintage-style sound). And if your overheads unexpectedly sound weird in any way alongside the close-mic channels, don’t forget to try flipping the phase.

 

3. Tracker groove!


Today you can find a swing parameter in almost every DAW and on most drum-related electronic instruments. Swing is a function that applies most easily to a quantized beat. The percentage of swing you apply moves certain hits of your rhythm “off the grid” just enough to create a swinging feel in the drums. Most devices offer anything from very subtle to very extreme settings. It’s worth noting that swing functions behave differently on different instruments and programs. For MPC-style swing, Akai’s hardware is hard to beat, but Propellerhead’s Reason comes loaded with groove templates that emulate the Akai MPC 60 (as well as numerous other machines). Ableton Live also offers groove quantization that can read imported audio, MIDI and groove template files. Native Instruments’ Maschine platform offers extensive swing settings that can be applied to groups in your project as well as to individual sounds.

However, there is another, less known yet very interesting way to create a swinging, funky groove – using a tracker-type program. Unfortunately, you need to learn something about trackers first. It may be difficult for those who have never had any experience with them, but on the other hand many of today’s producers took their first musical steps playing around with Amiga ProTracker in the early 90s. And those who are not familiar with the topic can easily find suitable tutorials.

Trackers use a peculiar speed system based on ‘ticks’ rather than BPM. A ‘tick’ is a subdivision of a pattern row. A speed of 6 makes each row last for 6 ticks’ worth of time, while 7 is slower, with each row lasting 7 ticks. To make things even more confusing, trackers also have a BPM-based setting, but let’s ignore that for now and focus only on speed. The “F” command alters the speed of the song, and by alternating it quickly you get a kind of “swing” feel. Try ratios such as 8 and 4, or 3 and 5 for a more pronounced swing.
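
To see why alternating F08/F04 swings, it helps to do the arithmetic. In the classic Amiga timing scheme a tick lasts 2.5/BPM seconds (20 ms at ProTracker’s default 125 BPM), so row lengths work out like this (a small sketch under that assumption):

    # Row length in ms: a row lasts 'speed' ticks, a tick 2.5/BPM seconds.
    def row_ms(speed, bpm=125.0):
        return speed * 2.5 / bpm * 1000.0

    print(row_ms(6))             # 120.0 - straight rows at speed 6
    print(row_ms(8), row_ms(4))  # 160.0 80.0 - a 2:1 long-short swing pair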

The question is how to incorporate a tracker-made groove into our DAW.

It’s pretty simple.

Some trackers (e.g. Renoise or ModPlug) can host VST instruments and effects, so you can actually produce the whole song in the tracker.

Another way is to export a MIDI file, e.g. from ModPlug Tracker. Yet another is to use XM2Midi, which converts the tracker file to MIDI format.


 

4. Some EQ Tips


Some producers share the opinion that if there’s a need to reach for an EQ while mixing drums, something has already gone wrong: the microphones were badly chosen or placed, the drums were not tuned properly, and so on. With the huge variety of samples available today, it’s really difficult to point to methods that will work in every situation. However, by carefully analyzing the sound of a typical drum kit, you can easily identify the key problem areas for each instrument.

Let’s start with the kick drum. Although most of its energy lies in the low frequencies, it often covers almost the entire audible spectrum. The frequencies responsible for the deepest bass and a powerful sound are located in the 30-100 Hz range – and if there is no really low bass in your mix, it is usually the kick that covers this area. Be careful with high-pass filters here – setting the cut-off frequency even a few Hz too high can make the kick lose its punch. The actual “hit” of the kick drum is most noticeable between 100 Hz and 200 Hz – these frequencies are responsible for the “thump”. Use a narrow bandpass filter or… a tuner (read below) to find its most distinctive frequency, identify the kick’s note value and then check that it does not conflict with the bass line. Fix the problem if needed. Then have a look at the 200-1000 Hz range: too much energy here muddies up the mix fairly quickly. But if your bass drum occupies mostly 250-300 Hz, this doesn’t necessarily have to be a reason to worry – that is how the warm, soft kick drums on 70s soul records sound. It is also worth checking whether your bass drum has a “click” somewhere around 1000 Hz, because that is what makes it more present on small laptop or portable-radio speakers. Between 1000-4000 Hz sits the attack, which really determines the character of the kick drum. One good trick is to boost around 2.5 kHz a little; this adds presence without changing the overall character of the sound. If the sound is too dark, try adding a few dB between 4-8 kHz. Frequencies above 8 kHz usually bring very little or nothing to the sound of a bass drum – in most cases all we have there is noise, and the best we can do is apply a low-pass filter to make more room for, say, the hi-hats.

The snare sounds best – rich and full, that is – when the 120-250 Hz range is somewhat emphasized. Everything below this range can be EQ’ed out with a high-pass filter. If, however, we think the snare sounds too powerful, we can always set the cut-off frequency above 120 Hz. Or even higher. In club music there is often a clap instead of a typical snare drum, and it hits together with the kick, so lighter, brighter sounds work best. A general rule: the slower the pace, the deeper and longer the tail of the snare should be. Beware of the 300-400 Hz range, which is responsible for the so-called “boxy” sound. Unlike the kick drum, the snare attack is located slightly higher, typically in the 2-5 kHz range, and this is where one should look for that resonant, crisp snare sound, a bit like a branch cracking. The 5-10 kHz range is responsible for the brightness – I’d recommend checking it if the snare is overpowering the hats. Also pay attention to frequencies above 10 or 12 kHz – too much energy here makes the drums sound messy.

It’s worth noting that toms and congas are often treated similarly to snares – it all depends, however, on their pitch and character. Just remember that the fundamental of a tom is usually located slightly lower than that of a snare. Try making some cuts around 300 Hz first and boosting the 5 kHz range. For fullness of sound, look around 100 Hz.

Hats usually don’t need the low end at all. Boosting the lower frequencies at around 150-300 Hz makes sense only if you need to emphasize the sound of the stick. If the hi-hats sound sharp, annoying or simply unpleasant, the easiest fix is to find the problem frequencies (usually somewhere between 1-5 kHz) and simply remove them with a narrow EQ cut. The real challenge is shaping the hi-hat sound in the top end – that is always a matter of taste and artistic vision. If you are aiming for a bright, airy sound, play around with 8-12 kHz and above; just remember that this range is also extremely important for vocal tracks! Sometimes it’s also worth playing with 15-16 kHz. But do not go insane with the top end – too much 10 kHz+ ends up sounding very amateur.

Many pop producers start the mixing process with the drums and work on them until they get a full, dynamic sound. It’s hard to imagine a club banger without a solid rhythm base, right?

And what if, despite our best attempts, we can’t get satisfactory results? Well, perhaps we should look for another set of samples then…

 

5. Tuning


When I listen to my old mixes, I’m sometimes genuinely shocked at how detuned the snares or kicks are. They often don’t fit the key of the song at all. Figuring out drum tuning a few years later was like discovering a new world for me.

I usually just use a spectrum analyzer with a high resolution – with most drums you can fairly easily see where the fundamental resonant frequency is. I often use all sorts of miscellaneous sounds, detuned and low-passed, to layer under the kick, and in certain circumstances it can be crucial to tune those sub hits to whatever other sound they sit next to, frequency-wise, so they aren’t atonal or causing dissonance through phase interaction. It’s something you don’t notice on average nearfield monitors, but that dissonance becomes very apparent when listening on a subwoofer.

Here’s C-Tuner from C-Plugs. Simple, good and free.

Using an EQ, you can then bring out the frequency corresponding to a note. You can use a calculator such as “ToneCalc” – or just do an online search for a frequency/Hz/tone calculator.
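
The math behind those calculators is one line of arithmetic: with A4 = 440 Hz as MIDI note 69, here is a small Python sketch of the conversion (the 60 Hz kick is just an example):

    import math

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
                  "F#", "G", "G#", "A", "A#", "B"]

    def freq_to_note(freq, a4=440.0):
        """Nearest equal-tempered note for a frequency (A4 = MIDI note 69)."""
        midi = round(69 + 12 * math.log2(freq / a4))
        name = NOTE_NAMES[midi % 12] + str(midi // 12 - 1)
        exact_hz = a4 * 2 ** ((midi - 69) / 12)
        return name, exact_hz

    print(freq_to_note(55.0))  # ('A1', 55.0)
    print(freq_to_note(60.0))  # ('B1', ~61.7) - nearest note to a 60 Hz kick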

This is NOT a golden rule – not all kicks or drums need tuning, only the ones that display a drone note; usually it’s longer or more resonant kicks that have this quality. A simple test: if you are having difficulty putting a bass line to your track because things sound out of tune, it might be because the resonant note of your kick is clashing with the bass notes. In this case, tune your kick to the root key of your song; it will be low enough not to get in the way.

A really cool effect I learned when pitching toms is to duplicate the tom, shift the copy up an octave (plus a 5th or 7th if you like) and use a pitch envelope to modulate it down (usually over a 1/16- or 1/8-note interval). It gives the sense of the head “tightening” as you “strike” the tom. The more prevalent you make the effect, the more 80s it sounds; the less prevalent, the more “real” it sounds. The prevalence factor is how much you modulate the pitch of the 5th and/or 7th layer and, of course, how loud it is compared to the original.

Another thing worth considering is an autotuner. It can work well for more than just vocals. Try it!

6. Pultec trick


It’s an EQ trick based on a common usage of the overlapping bands on Pultecs, where the cut is narrower than the boost but both bands are centered on the same frequency. So what you do is boost a little at 100 Hz, then cut at 100 Hz as well. The result is a wide boost with a notch in the middle that de-emphasizes the center frequency, so you end up with two bumps, above and below 100 Hz, in a shape that can’t be replicated by boosting two different frequencies.

Why cut and boost at the same time? Either you want more or you want less – so what is the purpose of using both at once? Or is it that the ATTEN control lowers everything above the selected frequency, and vice versa on the low-boost EQ?

In theory, yes, the cut and boost would tend to cancel each other out; in the case of the Pultec, however, the reality is a bit different. But in a cool way.

The venerable Pultec EQP-1A Program Equalizer and its sibling, the MEQ-5 Mid Band Equalizer, used together (“Pultec Pro”), provide a well-rounded EQ palette. This combination is still standard fare in recording studios and was once widely used in mastering sessions. The first Pultec (EQP-1) was introduced in 1951, and through many iterations the basic design survived into the late 70s/early 80s. Every Pultec was hand-made to order, and the build quality and design of all the Pultec products was unparalleled.

Unique simultaneous boost and cut. Dial in dangerous amounts of boost with incredibly musical results. Smooth, sweet top-end character. Artifact-free EQ even at high boost settings. The Pultecs are known as magical tools that improve the sound of audio simply by passing signal through them – but who wants to leave it at that?

 

Cool trick:

In the documentation supplied with the hardware version of the EQP-1A, it is recommended that Boost and Attenuation not be applied simultaneously to the low frequencies because, in theory, they would cancel each other out. In actual use, however, the Boost control has slightly higher gain than the Attenuation has cut, and the frequencies they affect are slightly different. The EQ curve that results when boost and attenuation are applied simultaneously to the low shelf is difficult to describe, but very cool: perhaps the sonic equivalent of a subtle low-midrange scoop, which can add clarity. A great trick for kick drums and bass instruments.
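
You can approximate the shape of that curve yourself. The sketch below (assuming NumPy/SciPy; the gains, Qs and RBJ-cookbook peaking filters are my stand-ins, not the Pultec’s actual shelving circuit) overlays a wide boost and a slightly weaker, narrower cut at the same centre frequency and prints the combined response:

    import numpy as np
    from scipy.signal import freqz

    def peaking(f0, gain_db, q, fs=44100):
        """RBJ-cookbook peaking-EQ biquad coefficients."""
        A = 10 ** (gain_db / 40)
        w0 = 2 * np.pi * f0 / fs
        alpha = np.sin(w0) / (2 * q)
        b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
        a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
        return b / a[0], a / a[0]

    fs = 44100
    freqs = np.logspace(1, 3, 9)  # 10 Hz to 1 kHz
    # a wide boost plus a slightly weaker, narrower cut, both at 100 Hz
    bb, ab = peaking(100, +4.0, 0.5, fs)
    bc, ac = peaking(100, -3.0, 2.0, fs)
    _, hb = freqz(bb, ab, worN=freqs, fs=fs)
    _, hc = freqz(bc, ac, worN=freqs, fs=fs)
    for f, h in zip(freqs, 20 * np.log10(np.abs(hb * hc))):
        print(f"{f:7.1f} Hz  {h:+.2f} dB")
    # result: gentle bumps below and above 100 Hz, with a dip at the centre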

I’m using NastyLF from Bootsy which, again, is good and free. Unfortunately it’s no longer developed and you have to spend some time digging around the Internet to find it. Try setting the cut and boost frequency to 40-50 Hz – it gives a nice “oomph” to the kick drum. Hear the examples below.

[PultecON.mp3]
[PultecOFF.mp3]



About the author: Piotr “JazzCat” Pacyna
is a Poland based producer, who specializes in video game sound effects
and music. He has scored a number of Java games for mobile phones and, most
recently, iPhone/iPad platforms. You can license some of his tracks here.



Producing MIDI music for mobile phones / cellphones Part 2

by Piotr Pacyna

Go back to part 1 of this article

6. Controllers

According to the most common opinion, one should use only those controllers that are absolutely necessary. Well, that is true, but not quite the whole story. Yes, some old devices are unable to read anything besides Patch Change and the Volume controller, but all the newer ones give us more possibilities. So, which controllers do I use?

Well, the top of each of my MIDI tracks looks basically the same:

Program Change – Patch number
Controller 7 – Main Volume
Controller 10 – Panning
Controller 11 – Expression

The first two need no explanation. And when it comes to Panning… well, I’m not even sure this controller affects the sound in any way, but I always use it. Some years ago I had the opportunity to work for one of the biggest players in the ringtone business, and one of the requirements was to pan all the instruments centrally (Pan = 64). I believe there must be some secret reason behind it, but they never revealed it to me.

And the next controller – Expression. Now, this is fun. Once I had a problem with some Motorola phones (e.g. the E398): no matter how high the Volume controller was set, the sound was quiet. Way too quiet. I had no idea how to fix it, so I started fooling around with the controllers, and it turned out, surprise surprise, that unless you set Expression to its maximum value (127), you get abnormally quiet sound.

Pitchbend. Use it deliberately and only on the better devices (see section 10 to find out exactly what I mean by that). The older ones either do not support it or start behaving in an uncontrolled way. Also keep in mind that Pitchbend messages increase the file size, and size is something that definitely matters here.

A little tip: remember to reset the Pitchbend right after each part that uses it. It’s important to do this diligently, as you may use it in many parts of the song; otherwise the track will gradually become more and more detuned, especially if the song is played in a loop.

Occasionally I play with CC 1 (Modulation) and CC 64 (Damper Pedal). This can lead to very interesting effects, but beware of overdoing it – especially with the pedal, because it has an unrestrained appetite for polyphony. So make sure to reset it frequently throughout the song.

Recording from a Sony Ericsson K300i. You’ll clearly hear the pedal on the melody, but pay attention to the background pad – it’s one of those bugged sounds; it’s subtle, but it adds zest to the track.

7. Quantization, note lengths and looping

There is one funny thing about quantization on mobile devices. Everyone knows that they should quantize, but no one knows why. I didn’t know either, and my first songs were not quantized at all – why not give the music a more human feel, I thought to myself. And yeah, the level of emotion the clients expressed was indeed very human. They were pissed at me!

Unquantized notes tend to overlap each other, which causes trouble with the polyphony, increasing it to ridiculously high values. Sometimes this manifests itself as note stealing; at other times the song slows down and then speeds up again until it reaches the original tempo. It’s horrible! That’s very much the case on some LG U-series and Sharp GX-series devices.

For this reason it’s good to shorten all the notes a bit after quantizing them. Shortening them by a 64th note is enough – the point is to prevent them from being “glued” together. I set all drum notes to 32nds and, if there are fast parts with lots of short notes, even to 64ths.
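
If you’d rather not trim every note by hand, this kind of batch edit is easy to script. A hypothetical sketch using the mido Python library (the file names and pairing logic are mine; edge cases such as overlapping identical notes are ignored):

    # Trim every note so quantized notes can no longer touch and "glue".
    import mido

    def shorten_notes(mid, trim_division=64):
        trim = mid.ticks_per_beat * 4 // trim_division  # one 64th in ticks
        for track in mid.tracks:
            t, timed = 0, []
            for msg in track:                 # delta -> absolute times
                t += msg.time
                timed.append([t, msg])
            open_notes = {}
            for item in timed:
                t, msg = item
                if msg.type == "note_on" and msg.velocity > 0:
                    open_notes[(msg.channel, msg.note)] = t
                elif msg.type in ("note_off", "note_on"):  # vel 0 = off
                    start = open_notes.pop((msg.channel, msg.note), None)
                    if start is not None and t - start > trim:
                        item[0] = t - trim    # pull the note-off earlier
            timed.sort(key=lambda it: it[0])
            prev = 0                          # absolute -> delta times
            for when, msg in timed:
                msg.time = when - prev
                prev = when

    mid = mido.MidiFile("song.mid")           # illustrative file name
    shorten_notes(mid)
    mid.save("song_trimmed.mid")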

And looping. “The song does not loop properly” – this is another comment that I kept hearing from clients almost as often as swear words! To get a decent loop you should keep the outro of the song as simple as possible. Avoid any instruments with long release times, such as strings, pads or cymbals. Very simple endings with, for instance, just kick drum and bass work best. The easiest and most effective way to check a track’s looping is to take just the last few bars of the song, set it as a ringtone and see how it behaves.

8. Chorus, reverb and delay.

Of course I don’t mean the actual effects here, as unfortunately there are no DSP effects to enhance the inherently weak GM patches built into cell-phone synthesizers. You’ll need workarounds of sorts to achieve reverb, chorus or delay.

Many of our readers began their musical careers in the early 90s with tracker programs such as Noise- or ProTracker. They will probably smile with sympathy now, as we use basically the same old tricks. For instance, you can copy the melody track to a different channel and detune the two against each other using Pitchbend to create a chorus effect. You’ll get a pretty neat chorus by tuning one of the tracks up a few cents and the other down by exactly the same amount. Expand those values a little if you want a more intense chorus. Luckily, you don’t have to bother with mono compatibility here (in contrast to real stereo sound), so you’re free to play around. Practically all patches, perhaps aside from acoustic pianos, can benefit from this technique.

If you want reverb, you simply copy the melody line to a new channel, time-shift the second line by, say, a 32nd note and reduce the second channel’s volume to about a quarter of the original (remember to use CC#7 instead of velocity!). Experiment with a 64th note for a short, room-like reverb. Repeat the whole process with additional, further-shifted copies for a longer reverb time. Using a breathy patch like a flute or ocarina for the time-shifted channels can add the airiness typical of reverb tails. And remember the general rule, which also applies here: reverb is best when you don’t notice it when it’s on, but you miss it when it’s off.

Delay can be created in quite a similar manner. Again, copy the original track to a blank channel, then time-shift the second line by a quarter note, an eighth note or a triplet, and reduce the second channel’s volume to taste. If you want to increase the “feedback” of the delay, simply repeat this process several times. You can use the same patch for all the delayed copies, but it’s more interesting to pick a different one and try to emulate low-cut or high-cut filtering.
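
Here is a hypothetical sketch of that trick using the mido Python library (the channel numbers, file names and volume value are illustrative; the same pattern, with a pitchbend offset instead of a time shift, would give the chorus described above):

    # Copy every note from one channel to another, shifted later in time
    # and set to a low channel volume - a "fake delay" tap.
    import mido

    def add_fake_delay(mid, src_channel=3, dst_channel=5,
                       shift_division=8, volume=32):
        shift = mid.ticks_per_beat * 4 // shift_division  # e.g. an 8th note
        t, events = 0, []
        for msg in mido.merge_tracks(mid.tracks):
            t += msg.time
            if msg.type in ("note_on", "note_off") and msg.channel == src_channel:
                events.append((t + shift, msg.copy(channel=dst_channel)))
        echo = mido.MidiTrack()
        # drop the echo's level with CC#7, not velocity, as the text advises
        echo.append(mido.Message("control_change", channel=dst_channel,
                                 control=7, value=volume, time=0))
        prev = 0
        for when, msg in sorted(events, key=lambda e: e[0]):
            msg.time = when - prev
            prev = when
            echo.append(msg)
        mid.tracks.append(echo)

    mid = mido.MidiFile("song.mid")  # illustrative file name
    add_fake_delay(mid)              # repeat with bigger shifts for "feedback"
    mid.save("song_with_delay.mid")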

[Reverb_delay_and_chorus.mp3]


Recording from Nokia 6300.
00:00 – the melody with no effects,
00:17 – chorus effect,
00:34 – reverb and delays.

9. Useful tools.

There are many tools for mobile music producers. I will describe only my favorites, without which I cannot imagine my work.

Beatnik (commercial)

I’ve already mentioned it in section 1 of this article, when I was writing about realtime polyphony. I use it mostly for editing SP-MIDI (Scalable Polyphony) information – I believe anyone who has even had a brush with ringtone production knows that format. The subject is vast, complicated and far beyond the scope of this article, but if there is demand for it, I’ll consider writing another text devoted to the matter. With Beatnik we also have access to the LED and the vibrator, so we can use, for instance, kick drum or bass notes to control the vibrating motor of the device. You can easily create the illusion of bass that blows the pants off! One interesting tip: I always put the basic instruments on the same channels – bass on channel 2, melody on channel 4 and the main background on channel 3. When we know which channels the key instruments are on, we can set the SP-MIDI priorities much faster.

ATS-MA2 (freeware)

A simple, very useful program for converting MIDI files to SMAF MA-2 (MMF extension). Preparing a MIDI file for conversion, and the nuances of the SMAF format itself, is another broad topic that could be covered in a future article.

ATS-SMAFPhraseL1 (freeware)

A small app for converting MIDI files to 4-channel SMAF Phrase (SPF extension). Making a 16-channel MIDI file sound good with only 4 channels is a challenge… One might ask: what’s so complicated about that? Just scoop out the essential channels, right? Haha, no! It’s like walking a tightrope, really. Another subject that deserves more attention.

PSM Player (freeware)

A very nice tool that I use mainly for global volume changes. It saves a lot of time when making the different volume versions I described in section 5. Thanks to PSM Player I don’t have to fiddle with editing every channel of every MIDI file; I just go to Setting -> Volume -> Volume and set it to, say, 50%. And voila, that’s all!

XM 2 MIDI (commercial)

A great program for converting XM (FastTracker) modules to MIDI. If making a tracker module from scratch is easier for you than making a MIDI file, this little tool is an absolute must! Of course you need to put some extra work into such a converted MIDI file, but all in all it’s an amazing utility.

Alcatel Multimedia Conversion Studio (freeware)

This one has many options; among other things, it allows one to convert MIDI files to a very rare Alcatel format called SEQ (MSEQ extension). Although I produce SEQ files only occasionally, I use AMCS on a regular basis. Why? Well, for me it plays a similar role to Avantone Mix Cubes in the real music world: every little mistake gets magnified and amplified. The old Alcatel devices for which AMCS was designed offered an extremely limited selection of patches and poor polyphony. I believe that if something sounds good there, it will sound good everywhere. Even on washing machines.

10. Real hardware testing.

And the last point, which I believe is the most important of all. Testing the files on real devices is crucial and absolutely necessary, as emulators are far too often unreliable. My suggestion is therefore to get a list of currently popular phones and buy them – an investment that quickly pays for itself. Of course, you don’t need to have them all; I keep just the 10-15 most popular ones at any given moment. Furthermore, all major companies employ QA testers, who check every game on hundreds of devices with the patience of Benedictine monks, and if there is a problem with the sound somewhere, they immediately get back to you. They send over a recording of the problematic portion of the song and you have to figure out what’s wrong. Usually things get fixed fairly quickly. But it has happened once or twice that I couldn’t help, time was running out, and we ended up shipping a game without sound on that particular phone.

I deliver each polyphonic composition in the following formats:

SP-Midi

This is the most important and most extensive format, designed for the best phones. This is where I allow myself to go a little bit crazy with reverb and delay, use additional percussion instruments or go low with the bass. In short – here I do everything that I’ve described as “use only on better phones”.

Alcatel + LG + Sagem

Prepared for the oldest, weakest mobile phones and also used in case of memory problems on all others. Features:

  • Files significantly smaller in size than SP-MIDI
  • Less than 16 voices of polyphony
  • Only the absolute basic instruments
  • Only the most necessary controllers
  • Drum part simplified right down to its essentials, with fewer instruments than the SP-MIDI version
  • Chorus, reverb and delay are absolute no-nos
  • Tempo rounded to the nearest whole value and no tempo changes during the song
  • No patch changes on the track – only one instrument per channel

Sony Ericsson

Sometimes people ask me which mobile phone is my favourite sound-wise. I’m not sure. But if I were pushed into a corner, I guess I would have to say Sony Ericsson. Sometimes I get carried away by its sound. MIDI files need some tweaking before they sound good on these phones, though.

Siemens

MIDI files tuned for the best possible playback on Siemens phones. These are something of a beast to bridle…

MMF

SMAF MA-2 files.

SPF

4-voice SMAF Phrase files.

OTT

Monophonic (1-voice) files for the old Nokia Series 30 phones. This article deals only with polyphonic music, so I’ve decided not to get into 1-channel music production – especially as it’s almost as complicated as the polyphonic stuff!

Closing words

Rob Hubbard, the famous C64 composer, once said that programming music in assembly code was like writing music with boxing gloves on, with both hands tied behind your back, trying to use your big toe. I think something similar can be said about making music for cell phones. In both cases there are plenty of restrictions, and we have no choice but to be creative. But hey, being creative is always a joy!

I hope this article helps you save some time and serves as a starting point for making your own discoveries – because there is still a lot to discover. I absolutely do not consider myself an all-knowing expert on the subject; phones surprise me all the time and I constantly learn something new. Besides, remember that new models are still being released, which means exciting new opportunities on the one hand, and new problems and bugs on the other. Watch this space!

About the author: Piotr “JazzCat” Pacyna is a
Poland based producer, who specializes in video game sound effects and
music. He has scored a number of Java games for mobile phones and, most
recently, iPhone/iPad platforms. You can license some of his tracks here.


Producing MIDI music for mobile phones / cellphones, Part 1

By Piotr Pacyna

I’ve been producing game soundtracks for mobile phones for over 9 years now. When I was starting out, I thought that all I had to do was simply make a MIDI file. A piece of cake! But I was so wrong…

 

The feedback I was receiving from clients showed me just how wrong. They often said that the music sounded good on the computer, while on the mobile device it sounded terrible or didn’t play at all. And I’m not sure which is worse. So I was forced to experiment – laboriously trying every single instrument, controller and effect, combined with trial-and-error problem solving. Why does a certain effect work while another doesn’t? Why do some devices refuse to play a given instrument while on others it works fine? Years of work. Of course, I didn’t learn everything from my own research alone. Looking back, I am so happy that I was given the opportunity to meet people who taught me many things and shared their experience with me (thanks to all of them!).

After several years, through a focused effort, I managed to develop a system that apparently works, as requests for adjustments are rare now. I thought it would be worthwhile sharing some tips and knowledge that I have picked up along the way.

Just a little word of caution. It is impossible to prepare a MIDI file that plays perfectly on all phones, although some time ago I thought it was possible. Now I know it’s a utopia. There are plenty of models, and the differences relate not only to the GM patches but also to the speaker. It is possible, however, to avoid the most common mistakes and make the MIDI sound acceptable on most phones.

And one last thing. This text requires basic knowledge of the MIDI format and sequencers. It is by no means a guide for beginners who want to start their mobile career; it’s rather for those who are already familiar with the topic and have some experience.

1. Polyphony

It would seem that these days, when the majority of mobile phones are capable of playing a truly large number of MIDI voices at the same time, you are allowed to go a little bit crazy and don’t have to care about polyphony at all. Well, nothing could be more wrong.

Bear in mind that you are working with typically small speakers that have limited bandwidth, and if the arrangement is too busy you’ll hear nothing but one big noise. At the beginning of my career… I’m not a careerist person, so I’m hesitant to call it a career, but, say, in my past, I had a tendency to make rather complex arrangements – piano, organs and guitars, trumpet and synth lines, not to mention sweeping pads in the background. I was proud, and kept playing such MIDI tracks on the computer, enjoying the massive wall of sound and imagining the overwhelming enthusiasm of the game players! But the very next morning the producers would say that my track was so great that after 10 seconds everyone wanted to turn the sound off immediately. Oh, how I hated them then! But hey, they were damn right.

I had to face the painful truth that on cell phones, tracks with a simple, three-element arrangement work best. These elements are: 1) rhythm, 2) background, 3) melody, with each instrument occupying its own frequency range. You can of course have several instruments in the background, but do not use the same octaves the bass and melody are using, and do not let all of them play at the same time.

So – what polyphony value is safe, how many voices can a MIDI track use and still sound decent on cell phones? Well, there are no strict rules. In any case, I try to limit the realtime polyphony to 16 voices. What is realtime polyphony and how do you check it? This needs a brief explanation. There are different tools for this purpose – such as the Nokia Suite – but the problem is that most of them do not show the actual polyphony; they simply show the maximum voice usage per channel, in other words, they count how many notes are triggered at the exact same time. So if we have a part with very fast, short notes of, for example, String Ensemble 1 (Program Change 49), we’ll be told that only 1 voice has been used. And that’s not right – the releases of all those string notes eat up many, many more voices.
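If you like to double-check this outside of any tool, here is a minimal sketch of such an estimator in Python, assuming the mido library. The 0.5-second release tail is purely my assumption – real GM patches (strings especially) ring out for different lengths – and the file name is just a placeholder.

```python
import mido

RELEASE_TAIL = 0.5  # assumed seconds a voice keeps sounding after note-off

def estimate_peak_polyphony(path):
    """Count overlapping notes plus an assumed release tail."""
    events = []  # (time in seconds, +1 voice starts / -1 voice ends)
    now = 0.0
    for msg in mido.MidiFile(path):  # iteration yields delta times in seconds
        now += msg.time
        if msg.type == 'note_on' and msg.velocity > 0:
            events.append((now, 1))
        elif msg.type == 'note_off' or msg.type == 'note_on':
            # a note_on with velocity 0 counts as a note-off in MIDI
            events.append((now + RELEASE_TAIL, -1))
    peak = count = 0
    for _time, delta in sorted(events):
        count += delta
        peak = max(peak, count)
    return peak

print(estimate_peak_polyphony('song.mid'))  # placeholder file name
```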

That’s why we need a secret superweapon: the Beatnik Mobile Sound Builder.

1 – Here we see the maximum voice usage per channel, determined via file analysis (this is the “fake” polyphony described above).

2 – And here we have a realtime display of the voices used by the renderer per channel during audition (and this is the realtime polyphony, yikes!).

3 – This is another extremely handy feature – it shows the maximum realtime voice usage per channel. And then, at the bottom, you see the cumulative polyphony value.

4 – In this row you can define the maximum voice usage.

5 – And here we have the values used to create the SP-MIDI information.

Do I have to say that I love Beatnik?

Unfortunately, the program is no longer developed and the developer’s website no longer exists, so you have to ask Google nicely for help in getting it.

2. Choice of instruments

As we all know, the General MIDI bank has 128 instruments plus a set of percussion samples, but how many of them can we actually use? Through years of experiments and mistakes I have found that many phones, especially older ones, have a very limited set of sounds, and some instruments are replaced by different, similar ones. I’ve also discovered that there is a group of patches that sound too quiet, too loud, or just poor on most devices. But let’s take it in order:

RHYTHM

When it comes to drums on channel 10 – above all, remember to use only the Standard Kit.

Other kits, such as the Power Kit or Electronic Kit, sometimes do not play at all. As for the sounds, I suggest sticking with the basic ones: (C1) Bass Drum, (C#1) Side Stick, (D1) Acoustic Snare, (D#1) Hand Clap, (E1) Electric Snare, (F#1) Closed Hi-Hat, (A#1) Open Hi-Hat, (C#2) Crash Cymbal 1. Toms are something I use rarely, just every now and then. Not that they sound bad or anything; I simply don’t want to waste the precious polyphony on something that appears once or twice in the whole song. I also suggest being careful with the Tambourine (F#2) – on certain LG models it tends to sound too loud and aggressive, no matter how low the velocity is. The same goes for hand percussion like congas and bongos: on a few Samsung phones there is a strange, long and bassy reverb tail attached to them. Very unpleasant. And it’s better not to touch the other percussion instruments. Yes, they may sound fine on newer devices, but older ones won’t play them or, in the worst case, we get a piercing screech instead. The safe sounds are summarized below.
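For reference, here is that shortlist as GM key numbers on channel 10 – a plain Python mapping of my own, with note names following the Cubase-style convention used in this article (middle C = C3 = key 60):

```python
# Safe GM percussion sounds (channel 10); names assume middle C = C3 = 60.
SAFE_DRUMS = {
    36: 'Bass Drum (C1)',
    37: 'Side Stick (C#1)',
    38: 'Acoustic Snare (D1)',
    39: 'Hand Clap (D#1)',
    40: 'Electric Snare (E1)',
    42: 'Closed Hi-Hat (F#1)',
    46: 'Open Hi-Hat (A#1)',
    49: 'Crash Cymbal 1 (C#2)',
}
```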

And what about the bass? We have quite a nice selection of bass patches in the General MIDI bank, but the one that works best on mobile phones is Synth Bass 2 (Program Change 40) – unlike the rest of the bass patches, it’s perfectly audible on all the devices I have had the opportunity to check. However, it is a relatively rough, dominant sound. Therefore, if we need a delicate, more subtle bass, we can experiment with Acoustic Bass (PCH 33).
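One gotcha worth a sketch: GM charts number patches from 1 to 128, but the Program Change byte inside the file is 0–127, so “PCH 40” becomes program=39 in code. A minimal example, assuming the Python mido library (channel numbers are 0-based there, too):

```python
import mido

track = mido.MidiTrack()
# Synth Bass 2 is GM patch 40, i.e. program byte 39 (0-based)
track.append(mido.Message('program_change', channel=1, program=39, time=0))
# For the subtler option, Acoustic Bass (GM 33) would be program=32
```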

BACKGROUND

As for the piano, the safest patch is Acoustic Grand Piano (PCH 1), especially on old phones (see section 10). With other patches there is always a risk. For example, the Rhodes Piano sounds way too faint on some Siemens devices.

Organ sounds. Well, sometimes they play okay, while at other times they stand out from the tune because of their volume and distinct timbre. (I would point here to the old Samsung phones, and especially the SMAF format mentioned later in the article.)

Acoustic Guitar. Again – the safest is Acoustic Guitar Steel (PCH 26).

Rhythm guitar – Electric Guitar Clean (PCH 28). Distortion Guitar (PCH 31) is better for riff playing than Overdrive (PCH 30). Sometimes I use two layers of both patches and blend them to taste – when it works well, it’s a nice complex sound that adds depth and thickness.

Strings and pads. I have not noticed any major problems here. One thing you should keep in mind is that you can’t use EQ or reverb to move instruments into the background; all you can do is play with the volume. Well, almost. Older Sony Ericssons (such as the K300 or K700) had an interesting bug – some instruments, for example orchestral strings, were blurred in a strange way, as if they were filtered. Of course, it’s impossible to make an orchestral tune sound good with these strings and you had to look for alternative patches, but with some creativity you can make good use of that bug. The bugged instruments sound as if they were EQ’ed fairly brutally with a lowpass filter and drowned in a long Hall-type reverb. So you can combine a dry melody with them to create an illusion of depth. It sounds pretty neat. See section 6 for a suitable audio clip.

Unfortunately, the bug has been removed from newer models.

3. Octaves

There is one general rule here: avoid sounds that are pitched too low or too high. The safe threshold for bass is the C1 note – lower tones may be (but don’t have to be) inaudible. The upper limit is around C6 – high notes tend to sound really fatiguing on cell phones, even at low volume and velocity settings, so it’s better just to avoid them.

One important thing for SONAR users: it counts MIDI Note 0 as C0. You can change the “Base Octave For Pitches” in Options -> Global. I have mine at -2 to match Cubase, where Middle C is C3 (which is the most common convention, by the way). The little helper below illustrates the two conventions.
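If you ever need to translate key numbers by hand, a tiny Python helper like this (my own illustration, not part of any DAW) shows how the same key gets different names under different base-octave settings:

```python
NOTE_NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def note_name(key, middle_c_octave=3):
    """Name a MIDI key number; middle C (key 60) defaults to C3 (Cubase)."""
    octave = key // 12 - (5 - middle_c_octave)
    return f'{NOTE_NAMES[key % 12]}{octave}'

print(note_name(60))     # C3 -- the convention used in this article
print(note_name(60, 4))  # C4 -- the other common convention
print(note_name(36))     # C1 -- the safe bass floor from this section
print(note_name(96))     # C6 -- the suggested upper limit
```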

4. Time-shifting the notes

I once made an embarrassing discovery. On some phones (e.g. the Samsung E700) you can hear a loud crack at the beginning of the MIDI file. Short, yet very annoying. What is it? Well, at the top of every MIDI channel there are controllers (see section 6), with Control Change #7 (volume) being the most important in this case. The problem is that the controllers need a bit of time to take effect, so you have to time-shift all the notes at the top of the track a little. Shifting by a 64th note will be fine. This way we give the controllers the necessary time to kick in, and the crack disappears.

(Recording from a Samsung E700. You can hear the click at the beginning quite clearly.)
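Here is a minimal sketch of the fix, assuming the Python mido library and placeholder file names: push each track’s first note back by a 64th note, so the controllers at tick 0 get their head start.

```python
import mido

mid = mido.MidiFile('song.mid')    # placeholder input file
shift = mid.ticks_per_beat // 16   # a 64th note is 1/16 of a quarter note

for track in mid.tracks:
    for msg in track:
        if msg.type == 'note_on':
            # times are deltas, so delaying the first note delays all that follow
            msg.time += shift
            break
mid.save('song_shifted.mid')
```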

5. Velocity and Volume

Another extremely important issue.

VELOCITY

Some general advice: it is much better to set the channel volume using the controller (CC #7) than with velocity. First, some phones (e.g. the old Sagem models) simply ignore the velocity value. Second, when the velocity is low, some other devices play so quietly that you can barely hear anything. And third – more important than anything mentioned above – you often need to change the volume of the whole MIDI song, and that is much easier and faster to do with the controller.

I always set the velocity to the maximum value (127, that is) on practically everything. The exceptions are some elements of the drum kit – cymbals and crashes – which I set to 50-75, otherwise they are way too loud and drown out all the other instruments (as on Siemens phones). Sometimes I play with the velocity to achieve a sort of echo effect, although more often I use controller #7 for this purpose. A sketch of this policy follows.
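In Python it might look like this (again assuming mido and a placeholder file name; the set of “quieter” drum keys is my own shortlist, not an official one):

```python
import mido

QUIETER = {49, 51, 57}  # crash 1, ride, crash 2 -- my reading of "cymbals"

mid = mido.MidiFile('song.mid')  # placeholder input file
for track in mid.tracks:
    for msg in track:
        if msg.type == 'note_on' and msg.velocity > 0:
            if msg.channel == 9 and msg.note in QUIETER:  # channel 10, 0-based
                msg.velocity = 64   # somewhere in the 50-75 range
            else:
                msg.velocity = 127  # maximum everywhere else
mid.save('song_velocities.mid')
```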

VOLUME

There was a time when the most frequent customer requests were: “please make the MIDI file lower in volume by 50%” and “please make it louder by 50%”. When it became a plague, I came to the conclusion that I had to do something about it. It turned out that some cell phones were simply louder than the “standard”, while others, for a change, were a lot quieter. So at some point I came up with the idea of providing each MIDI file in 3 volume versions: standard (100%), quieter by half (50%) and louder by half (150%).

Usually I set the volume controller to 85 for the melody and drums, 70 for the bass, and 50-70 for the background instruments. In the 50% version it is, respectively, 42, 35 and 25-35, and in the 150% version – 127, 105 and 75-105. Of course, these values may change slightly when testing the file on a real phone (see section 10). Generating the extra versions boils down to scaling every CC #7 message, as sketched below.
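A minimal sketch with mido and placeholder file names – note how scaling and clamping to the legal 0–127 range reproduces the numbers above (85 × 1.5 tops out at 127):

```python
import mido

def make_volume_version(src, dst, factor):
    mid = mido.MidiFile(src)
    for track in mid.tracks:
        for msg in track:
            if msg.type == 'control_change' and msg.control == 7:
                # scale and clamp to the legal MIDI value range
                msg.value = max(0, min(127, round(msg.value * factor)))
    mid.save(dst)

make_volume_version('song.mid', 'song_50.mid', 0.5)   # quieter version
make_volume_version('song.mid', 'song_150.mid', 1.5)  # louder version
```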

Anyway, ever since I started to prepare the 3 volume versions, all the volume requests have stopped. As if by magic!

Continue to Part 2 of this article >


About the author: Piotr “JazzCat” Pacyna is a Poland based producer, who specializes in video game sound effects and music. He has scored a number of Java games for mobile phones and, most recently, iPhone/iPad platforms. You can license some of his tracks here.