A sound design tutorial by Paul Virostek
Why Record for Perspective?
I remember when I first began editing, struggling to make a car door slam match the picture on film. I shifted the sound earlier and later, added and removed elements, and it still didn't fit. The editor who was mentoring me said:
If you’re trying too hard to make a sound fit, then you’re using the wrong sound.
He told me why: the car door sound effect should have been correct (it was the proper model and year), but it had been recorded inches away from the car. In the scene the camera was a few meters away from the car. This difference made the sound jarringly wrong.
In other words, no matter how you synchronize the sound with the picture, if the actual nature of the sound is wrong, it will never work. This taught me how important it is to use the proper sound:
• correct volume
• correct timbre
• correct perspective or apparent distance
Cheating the Effect
Of course, volume is easy to adjust. If the timbre is wrong, you can choose another sound from the same family. However, if the sound's perspective or apparent distance doesn't match the picture, no matter what you try, it will never completely fit.
Simply raising or lowering the volume of a sound may seem to bring it closer or push it further away, but this is only a 'cheat'. The match will be close, but will invariably seem subtly, disturbingly off.
What About Using Reverb?
One common trick is to apply reverb to closely-recorded sounds to make them seem further away. Even the best reverb plug-ins cannot replicate perspective perfectly, however, and the result will sound slightly odd. How do we solve this problem? Read on.
What is Perspective?
Perspective describes how close a sound appears. Typically a sound's perspective is described in one of five ways:
- Close (anything under 10 feet/3-4 meters)
- Medium Distant (roughly 10 feet/3-4 meters away)
- Distant (anything more than 10 feet/3-4 meters away)
- Background or BG (quiet and muted)
- MCU or Major Close Up (inches away from the microphone, although this term is being phased out in favor of Close)
A close/medium distance mic setup
Here are some examples of a smoke alarm recorded at various perspectives. NOTE: it may help to wear headphones to hear the perspective, or 'space' or 'room', properly (and keep the volume down, as the sound is sharp):
Notice how the Distant alarm has more echo, even though it is slightly louder than the Medium Distant alarm? The difference between these recordings is how much ‘air’ or space is apparent in the recording. ‘Air’ is created by a) the space where the recording takes place (also known as ‘room’) and b) the amount of reverb.
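To make the 'cheat' from earlier concrete, here is a minimal Python sketch of how distance is often faked in post: attenuate the direct signal and mix in more reverberant 'air'. The function name, constants, and the synthetic noise tail (standing in for a real room impulse response) are all invented for illustration; as the article argues, this only approximates a genuinely distant recording.

```python
import math
import random

def fake_distance(dry, sample_rate, distance_m, rt60=0.6):
    """Crude distance cheat: quieter direct sound plus a synthetic reverb tail.
    This is a toy model, not real acoustics."""
    rng = random.Random(0)
    # Direct sound falls off roughly 6 dB per doubling of distance.
    direct_gain = 1.0 / max(distance_m, 1.0)
    # Farther sources sound 'wetter': more reverberant energy relative to direct.
    wet_gain = min(1.0, distance_m / 10.0)
    tail_len = int(rt60 * sample_rate)
    # Exponentially decaying noise: a stand-in for a measured impulse response.
    tail = [rng.uniform(-1.0, 1.0) * math.exp(-6.9 * n / tail_len)
            for n in range(tail_len)]
    out = [s * direct_gain for s in dry] + [0.0] * tail_len
    for i, s in enumerate(dry):            # naive convolution with the tail
        for j, h in enumerate(tail):
            out[i + j] += s * wet_gain * h * 0.05
    return out
```

Even with a measured impulse response in place of the noise tail, the result tends to sound subtly wrong next to a sound actually recorded at distance, which is the article's point.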
An Example: Woof
Imagine a dog barking in a city alley. A close recording will have prominent barks, and very little of the echo of the barks in the alley.
The further the dog is from the microphone, the more ‘air’ or ‘room’ will appear on the recording. The dog will seem quieter since it is further from the microphone. We will also hear more of the barks reverberating or bouncing off the alley walls.
It is exactly this aspect of the sound that we want. This distance, or perspective, will make it match perfectly with medium distant camera shots.
A recording of the close dog and one of the distant dog, although they capture the same animal, will sound completely different.
Me, between a close/medium mic setup
Which Recording Perspective is Best?
So which perspective do you choose if you are going to record a sound effect? The short answer: all of them.
With today's digital multi-track recorders you can record all perspectives at once. Patricio Libenson, the recordist for Hollywood Edge's Explosions library, told me his setup involved multiple microphones, all at different distances and angled mathematically to account for phasing. The result is an incredibly rich collection.
Let’s return to our dog in the alley. We can set up one mic at the end of the alley, and have another next to the dog, both plugged into the same recorder. When the dog barks, we’ll have recorded both perspectives at once.
Match the Recording to Your Project
If you have to choose one perspective over the others, consider the project you are working on:
- Multimedia or Radio – it is always best to record close perspective for these projects. The reason? Distance has little value when you won't be using picture or visuals. Also, sound designers like the immediacy and power of close effects.
- Film or Television – most film editors prefer their effects recorded Medium Distant, since most camera shots are Medium Distant or further. Also, in a pinch you can fake a close perspective by raising the volume. In a perfect world they would have a Close version available as well.
Unfortunately, most commercial libraries are recorded close. Imagine you are trying to use a close dog bark in a scene where the dog is across the yard. It won’t fit.
That’s why at Airborne Sound we record two perspectives: close and medium distant, even if it requires multiple takes.
When you record sounds to match the requirements of your project, you'll find the sounds fit more easily and require less editing. And, of course, it just sounds right.
About the author: Paul Virostek travels worldwide recording the sounds of cities and cultures. He shares his collection at airbornesound.com, and writes about his experiences field recording and sharing sound effects at jetstreaming.org. He is also the author of "Field Recording: From Research to Wrap – An Introduction to Gathering Sound Effects", published in 2012.
Excuse Me, You’ve Got Some Sound Effects in
About using sound effects in music production and how the line between sound effects and music is blurring
by Kole Hicks
The use of elements we consider "Sound Effects" in Music is much more common than we may think. Whether it's nature ambiences heard lightly in the background of a New Age track or the aurally unpleasant bang of a trashcan lid in Industrial music, the line that differentiates Sound Effects from Music is rapidly blurring.
I became more aware of this progression earlier this year when I was asked to compose an eerie, ethereal background track for a horror game. The piece most definitely had to set a mood and have direction, but never intrude on the player's "consciousness" enough for them to recognize, "Oh, there is music playing now." So, in a way, the music was to act in a role we might consider more common to sound design.
Now, this practice in and of itself is not new, but the questions I asked myself while approaching the problem, and the previously closed "doors" the answers opened up, are new and unique enough that I want to share my findings with you.
I. Approaching the Issue & Asking the Right Questions
Before I even attempted the traditional "sit down and start writing" phase, I tried to think of and answer all of the questions unique to a piece like this. Should there be any thematic material, or would it "get in the way"? How dynamic can the piece be? Will I be using "traditional" instruments? What role will the mixing process play in this piece? And so on.
Asking and answering all of these questions was absolutely critical to taking an accurate first step towards fully expressing my intent with the piece. That is why I often take this step and recommend others do as well (especially if you need to be very articulate about what you want to express).
II. Answering the Questions
Let’s go through the process of asking and answering a few questions unique to a piece of music like this.
First, let's look at "Will I be using traditional instruments?"
Since there is no right or wrong answer to this question, I only felt compelled to organize and understand my instrumentation choices enough to justify their usage in the piece. So, I decided that my approach had to focus more on timbre and mood, and that writing standard musical phrases easily identifiable as "music" by the human ear was off limits. At least initially, as I also decided that "sneaking in" the main theme from time to time would be okay (as long as its full introduction was gradual). For the most part, though, I "justified" the use of some traditional musical instruments by challenging myself to play them in ways that wouldn't immediately be perceived as musical phrases by the listener. "Typical" sound design elements (impacts, crashes, scrapes, etc.) were also allowed, but had to be organized in such a manner that they had a perceived direction.
Which brings us to our next question… “What role will Form play in this piece?”
As I mentioned before, the line between what could only be considered Sound Effects and what could only be Music is rapidly blurring. Impacts, soundscapes, and other "sound design elements" are used so often in modern music that I believe the only clear distinction between the two is the way each is structured.
This is not to say that Sound Effects can't be organized to tell a story, for they surely can, but rather that the way we approach and organize sounds for music is different. Repetition and imitation are two of the most common techniques in music from almost anywhere in the world, at any time in history. When lacking tonality, melody, and other "common" Western musical constructs, more often than not we revert to repetition and imitation to structure our music (both for our sake and the listener's). Oftentimes, when you're creating sound effects to picture, it's not ideal to use only one punch or kick sound for an entire fight scene. However, I can also imagine the argument that the variety in those punch and kick sound effects is the equivalent of musical imitation. So, perhaps the only real thing separating Sound Design from Music is our perception of, and preconceived notions about, what each one "should" be.
With that said, I decided that the role of Form in this piece was to take these isolated sound ideas and motifs and repeat or imitate them in a manner that felt like it was going somewhere (the repetition and imitation itself not having to be strictly structured, but perhaps more organic or improvised). Complex, strict forms like Sonata, or even Pop, wouldn't accurately achieve this goal. So it was determined that the form must be even more basic (remember, we don't want the listener to immediately recognize this as music). My solution was to introduce, and eventually repeat and imitate, these themes and motifs as they were applied throughout the changes in the dynamic curve.
Last but not least… “What role will the Mixing Process play in this piece?”
I feel very strongly about the role of Mixing in the Composition process, as it’s unavoidable in modern times. However, I’ll save the majority of what I have to say about this topic for a separate article.
As it applies to this question, though, I determined that the subject matter and piece itself needed "mixing forethought." Simply thinking about what pitches, rhythms, or articulations to use would not be enough, so I went a step further and asked myself questions like: "Is a high-pass filter needed in this piece? If so, when, and for what part(s)? How much distortion should be used on the guitar, and on which pickup? Should I automate the reverb to follow the dynamic curve, or keep it consistent throughout the piece?"
It's through questions like these that some of my most creative answers originated. When you become aware of exactly which frequencies you want expressed at a certain point in a piece of music, or how you plan to creatively pan each instrument, your music will immediately benefit from the original answers you come up with.
I always like to say that if it affects the way your music sounds at the end of the day, then it's a part of the Composition Process that should be taken into consideration. That goes for Mixing, and even for your state of mind prior to writing (make sure it matches the mood you want to express in the piece of music!).
III. Applying the Answers
Now that we have some unique answers to work with, it’s all about performing and capturing their essence. For instrumentation it was decided that everything is permitted, but most “standard” writing practices would not apply.
Bend a string of the guitar beyond its “comfortable point” and play your theme. Play the Piano with socks on your hands or breathe into the mic and apply massive reverse delay. Place a huge pillowcase over your mic/head and start to sing. Record your harp in the bathtub or pitch up/down kitchen pan impacts and organize them to build a triad.
The options available to you are restrained only by your ability to ignore the fear of "What will others think?" The answer to "What is Music?" grows every day, with new ideas from creative composers willing to push the boundaries of sound, and a more accepting audience that's aching for something new and original. With that said, I'd like to wish all of you the best, and keep composing, fellow artists!
If you'd like to listen to the piece of music I finished, click here and tell me where to send it.
About the author: Kole Hicks is an Author, Instructor, and most prominently an Audio Designer with a focus in Games. He's had the pleasure of scoring mobile hits like 'Bag It!', has provided audio for indie PC titles like 'Kenshi' and 'Jeklynn Heights', and was nominated for a 2012 GANG award for an article written exclusively for Shockwave-Sound.com titled "Mixing as Part of the Composing Process". Emotionally Evocative Music & Visceral Sound Effects… Kole Audio Solutions.
By West B. Latta
Whether you’re a game developer, game player, or carry only passing interest, it is plain to see the growth and advancement of the video game industry over the past decade. Robust graphics systems, ample disc space, bountiful system memory, and dedicated DSP have all become increasingly common on today’s game platforms.
While this continues to drive the look, feel, gameplay, and sound of games, it can be said that, to a large degree, high-profile, large budget games have increasingly looked to film as their benchmark for quality. Achieving a true ‘cinematic’ feel to a game seems to be the hallmark of what we now consider ‘AAA’ games.
As game technology progresses, it is useful to look at not only the ways in which the technological aspects have improved, but also how design and artistic approaches have changed in relation to changing technology. With regards to music, what is it, specifically, about cinematic music that works so well? In this brief article, we’ll take a look at how changing technology has altered our perception and application of what music in games should be.
Where We’ve Been
In the early years of games, music was predominantly relegated to relatively short background loops, generated by on-board synthesizer chips and various systems of musical 'control data' that would trigger pre-scripted musical sequences. While not unlike our use of MIDI today, these systems were typically proprietary, and learning their language and programming was no mean feat for a workaday composer.
And yet, these were the 'iconic' years for video game music – when the Super Mario jingle, the Zelda theme, and many other melody-heavy tunes were indelibly imprinted on the minds of a generation. The limitations of the sound systems in these consoles were, in themselves, a barrier to creating anything other than relatively simple, catchy tunes.
As we progressed into the mid and late 1990s, technology afforded us higher quality sounds – higher voice counts, FM synthesis, and even sample playback through the use of wavetable soundcards. Though the sounds were often highly compressed, the playback of real, recorded audio was a leap forward for home consoles and computer games. PCs and even some consoles moved to MIDI-based or tracker-based musical systems, which were somewhat easier to compose for than their predecessors. Even so, musical soundtracks didn't advance drastically beyond the simple background-loop modality for quite some time.
In the mid to late 1990s, however, we began to hear a shift in game soundtracks. While simple backgrounds were still the norm, there was a sort of "mass-exodus toward pre-recorded background music"(1), as a few higher profile titles were afforded a greater percentage of budget, disc space, and system resources. This all added up to a slow but perceptible shift toward the elusive 'cinematic' feel of film. I still remember watching the opening cinematic for Metal Gear Solid 2: Sons of Liberty and thinking to myself, "This can't be a videogame!" The quality of the voice acting, the soundtrack – the entire game felt, to me, like a dramatic leap forward. It was but one example among many titles that set out to push the boundaries of audio in games.
During the past 10 years, we have seen rapid and dramatic changes in the technology, artistry, and application of music in video games. Disc-based game platforms came to the fore with the release of the Sony PlayStation 2 and Nintendo GameCube early in the decade, and higher-powered consumer PCs became increasingly affordable. As a result, we hear a definite shift in musical scores, with significantly longer runtimes, more complexity, more robust instrumentation and arrangement, higher quality samples, and even CD-quality orchestral recordings.
Where Are We Now?
At present, we're steeped in the current generation of gaming systems. Xbox 360, PS3, Nintendo Wii, and PC gaming have grown to include full HD video resolution and high-quality 5.1 surround sound. Low-fidelity, synthesized or sample-based soundtracks have given way to fully arranged and orchestrated scores, recorded by world-class symphonies. While they haven't yet become household names like Zimmer, Williams, or Goldsmith, well-known game composers are highly sought after as developers continue to strive for a more cinematic feel to their games. Truly, some game soundtracks rival those of major motion pictures in quality, scope, and performance. This trend has even given rise to a small 'video game soundtrack' industry, with record labels devoted specifically to releasing and promoting game soundtracks to the mass market via CD and digital download.
Moreover, the sounds of classic and contemporary video games have increasingly gained mainstream popularity, as the synthesizers of old platforms such as the Game Boy, C64, and NES have made their way into popular music by some of today's biggest musical artists. Likewise, game soundtracks are increasingly presented to the public in unique ways. Bands such as The 1-Ups, The Minibosses, and Contraband present re-arranged versions of old game tunes on live instruments, while live orchestras perform soundtracks at events such as Video Games Live.
While it is undeniable that the quality and scope of game music have, in some cases, grown to match those of film, that alone isn't enough. Games are an interactive medium, and as such, their musical soundtracks must be able to adapt to changing gameplay. For a truly immersive experience, the music must change on the fly according to what is happening in the game, while still retaining a cinematic quality. Rigidly scripted musical background sequences can't impart the same level of depth as music that truly matches the moment-by-moment action.
Surprisingly, adaptive and interactive music schemes have been used in games for longer than we may realize. Even the original Super Mario Bros. music changed tempo as the player's time ran out. Yet making highly interactive, high-quality orchestral scores adds a layer of complexity seldom attempted by game developers. Instead, many continue to rely on simple geographic and 'event' triggers for accompaniment, rather than a truly adaptive music system.
While some developers have attempted to tackle this issue themselves, many of their solutions are proprietary. To go a bit deeper into interactive music, we will instead turn our attention to two middleware developers: Firelight Technologies, makers of the FMOD Ex audio system, and Audiokinetic, makers of Wwise – the two premier audio middleware providers for today's most popular AAA titles.
Firelight has taken a unique approach to interactive or adaptive music. Their FMOD Designer system allows two distinctly different approaches. Through their Event system, the composer can utilize multichannel audio files, or 'stems'. This allows individual instruments or sections to be added or subtracted based on game states, or any other dynamic information fed into the FMOD engine, such as player health, location, or proximity to certain objects or enemies. This technique was used to great effect in Splinter Cell: Chaos Theory, where, depending on the player's level of 'stealth and stress', different intensities of music would be brought in. This type of layering is often called a 'vertical' approach to music system design.
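A minimal sketch of this vertical layering idea, with hypothetical stem names and fade windows (real engines like FMOD expose this through their own tools and APIs, not this code): each stem's gain is derived from a single 0..1 intensity parameter such as 'stress'.

```python
def layer_gains(stress, layers):
    """Map a 0..1 'stress' value to a per-stem gain.
    Each stem fades in linearly across its own (fade_start, fade_end) window."""
    gains = {}
    for name, (lo, hi) in layers.items():
        if stress <= lo:
            gains[name] = 0.0            # not yet reached: stem silent
        elif stress >= hi:
            gains[name] = 1.0            # window passed: stem fully in
        else:
            gains[name] = (stress - lo) / (hi - lo)  # fading in
    return gains

# Three illustrative stems that stack up as the player's situation intensifies.
STEMS = {"ambient_pads": (0.0, 0.1),
         "percussion":   (0.3, 0.5),
         "full_combat":  (0.7, 0.9)}
```

At a stress of 0.4, for example, the pads are fully in, the percussion is fading in at half gain, and the combat layer is still silent.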
The second approach FMOD takes is its Interactive Music system. This system is more 'logic-based', and allows the designer to define cues, segments, and themes that transition to other cues, segments, or themes based on any user-defined set of parameters. Moreover, it allows for beat-matched transitions and time-synchronized 'flourish' segments. In this way, a designer or composer might break their musical themes down into groups of smaller components, then devise the logic that determines when a given theme, for example "explore", is allowed to transition to a "combat" theme. This segment-and-transition-based approach is often referred to as a 'horizontal' approach.
A system of this kind was used in the successful Tomb Raider: Legend. For that project, composer Troels Folmann used a system he devised called 'micro-scoring', crafting a vast number of small musical phrases and themes that were then strung together logically based on the player's actions throughout the game. For example, the player may explore a jungle area with an ambient soundtrack playing. As they interact with an artifact or puzzle, a seamless transition is made to a micro-score specific to that game event.
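The horizontal, logic-based idea can be sketched as a small transition table plus bar-quantized switch points. The cue names, the transition graph, and the four-beat bar below are assumptions for illustration, not FMOD's actual API:

```python
BEATS_PER_BAR = 4

# Which themes may transition to which: a hypothetical miniature cue graph.
TRANSITIONS = {
    "explore": {"combat", "puzzle"},
    "combat":  {"explore"},
    "puzzle":  {"explore"},
}

def next_transition_beat(current_beat, beats_per_bar=BEATS_PER_BAR):
    """Quantize a requested switch to the next bar line so it stays musical."""
    return ((current_beat // beats_per_bar) + 1) * beats_per_bar

def request_theme(current, target, current_beat):
    """Return (new_theme, beat_to_switch_on), or None if the cue graph
    forbids the transition."""
    if target not in TRANSITIONS.get(current, ()):
        return None
    return target, next_transition_beat(current_beat)
```

So a request to move from "explore" to "combat" on beat 5 would be honoured, but deferred to beat 8, the next bar line; a request from "combat" straight to "puzzle" would be rejected by the logic.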
Wwise is relatively new to game development, gaining popularity over the past several years with its first major debut in FASA Interactive's Shadowrun. Since that time, Audiokinetic has rapidly enhanced their system, and their interactive music functionality takes a 'best of both worlds' approach. With Wwise, it is possible to have both multichannel stems as well as a logic-based approach to music design. A composer can create a series of themes with time-synchronized transitions based on game events or states, while simultaneously allowing other parameters to fade various stems in and out of the mix. This system incorporates both a horizontal and a vertical approach to music design, and it has resulted in an incredibly powerful toolset for composers and audio designers.
The term 'videogames' now seems to encompass an entire spectrum of interactive entertainment in all shapes and sizes: casual web-based games, mobile phone games, multiplayer online games, and all manner and scope of console and PC games. It seems impossible to predict the future of interactive music for such a variety of forms, and yet we have some clues and ideas about what might be next for those AAA titles.
First and foremost, we can be sure that the huge orchestras and big-name composers aren't going away any time soon. In fact, as more games use the 'film approach' to scoring, it seems likely that it will continue to be the standard for what we consider blockbuster games. Fortunately, tools like FMOD and Wwise have given composers and audio designers robust ways to adapt and modify their scoring approach to be more truly interactive with the game environment. As this generation of consoles reaches maturity, we will yet see some of the finest and most robust implementations of interactive music, I'm sure.
Even so, pre-recorded orchestral music – however well designed – will still have some static elements that cannot be changed or made truly interactive. Once a trumpet solo is 'printed to tape', it cannot easily be changed. Yet the technological leaps of the next generation of consoles may present another option. It isn't unreasonable to think that we may see a sort of return to a hybrid approach to composing, using samples and some form of MIDI-like control data.
While at first this may seem like a step backward, consider this: with the increasing quality of commercial sample libraries of all types, and the extremely refined file-compression schemes used on today's consoles, it is possible that the next Xbox or PlayStation could, in fact, yield enough RAM and CPU power to load a robust (and highly compressed) orchestral sample library. The composer, then, is hired to design a truly interactive music score in a format akin to MIDI: note data, controller data, as well as realtime DSP effects. This score would not only adapt in the ways we've described above (fading individual tracks in and out, and logically transitioning to new musical segments), but because we have separated the performance from the sample data, we would now have control over each individual note played. The possibilities are nearly endless: realtime pitch and tempo modulation, transference of musical themes to new instruments based on game events, and even aleatoric or generative composing, which assures that a musical piece conforms to a given set of musical rules, yet never plays the same theme twice.
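As a toy illustration of generative composing under musical rules, here is a sketch that random-walks stepwise through a scale and always resolves to the tonic. The scale choice and the rule set are invented for the example; a real system would encode far richer constraints.

```python
import random

MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]     # C major pitch classes

def generative_phrase(length=8, seed=None):
    """Generate a phrase by stepwise random walk within the scale.
    Illustrative rule set: begin and end on the tonic, move by scale steps."""
    rng = random.Random(seed)
    idx = 0
    phrase = [MAJOR_SCALE[0]]             # rule: begin on the tonic
    for _ in range(length - 2):
        # Step up or down one scale degree, clamped to the scale's range.
        idx = max(0, min(len(MAJOR_SCALE) - 1, idx + rng.choice([-1, 1])))
        phrase.append(MAJOR_SCALE[idx])
    phrase.append(MAJOR_SCALE[0])         # rule: resolve to the tonic
    return phrase
```

Every phrase obeys the same rules, yet each seed yields a different melody, which is precisely the 'never plays the same theme twice' property described above.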
Indeed, these possibilities and more are surely coming, and it is an exciting time for composers, audio designers, and gamers alike. For now, we can enjoy a new level of attention and awareness on game music. We are treated to truly orchestral experiences, if not completely adaptive and interactive ones. And yet, in the coming years, interactive music technology will continue to mature, and we will assuredly hear more sophisticated implementations of these technologies across the full spectrum of games. I encourage you to listen closely to the games you or your friends play over the next few years. The tunes you hear today are helping to shape a musical revolution for tomorrow.
Footnote: (1) GamePro – "Next Gen Audio Will Rely On MIDI"
About the author: West Latta has been making strange noises for over 30 years. He has spent the last several years developing his craft in the game industry as composer, sound designer, and integration specialist. He is currently a Sound Supervisor for Microsoft Game Studios, as well as a freelance writer, composer and audio designer.
by Simon Power
Game of Thrones, Breaking Bad, Nashville, The Sopranos. American drama series are a huge influence on the way television looks, feels and sounds in contemporary entertainment. A big part of that enjoyment comes from their music: the soundtracks and scores. In this article we take a look at the music used in some of these series and find out what makes it such an essential part of the viewing experience for the boxset generation.
Game of Thrones (HBO)
Music composed by Ramin Djawadi
Based on the fantasy novels A Song of Ice and Fire by George R.R. Martin, Game of Thrones is set in the fictional continents of Westeros and Essos, with storylines encompassing civil unrest, exile and the impending threat of a very, very long winter.
An important part of the mood-setting in Game of Thrones is the long title sequence and theme tune at the beginning of every episode. A mechanical, three-dimensional map unfolds alongside Djawadi's evocative music: a rich orchestral theme featuring cantering Eastern-style cinematic percussion, a solo cello and an assortment of brass, string and woodwind instruments. This theme sets the tone for the narrative and returns in a variety of versions throughout the series.
GOT is heavy on complex dialogue which tends to govern the role of the music. Moody orchestral swells offer support to the atmosphere of the dialogue, helping define the importance of what’s being said, rather than overwhelming it.
During battle scenes the music comes to the fore, often heavy orchestration with deep resonant percussion.
Although it may be played on real instruments mixed with samples, there are very few obviously synthetic sounds to detract from the medieval feel of the series.
In fact, with its austere orchestral washes, the role of the music could be termed ‘transparent’, in as much as it’s sympathetic to the dialogue and offering a supporting role to the storyline.
Aside from the incidental score, the series also buddies up with a number of indie bands. The National, Sigur Rós and The Hold Steady give a contemporary feel to the music palette – a good way to reach the Game of Thrones audience on another level, offering a connection to the present day through familiar artists and bands.
Breaking Bad (AMC)
Music composed by Dave Porter
Music supervisor: Thomas Golubic
Walter White, a struggling chemistry teacher, is diagnosed with inoperable lung cancer and turns to a life of crime to support his family's future before he dies.
The hugely successful cult series Breaking Bad takes an entirely different approach to its score and soundtrack, with the music playing a much more upfront role during the series' five seasons.
Music appears in the show in a number of different ways. First, there are composer Dave Porter's contemporary-sounding, synth-based cues that appear during key moments of scene setting, great drama, tenderness or suspense. Unlike traditional orchestral scoring, Porter mixes synths and electronic sounds with real instruments like guitar, piano and woodwind.
With a variety of arpeggios, swells, breakbeats and loops the score takes on a much more contemporary feel in line with producers like Trent Reznor, Brian Eno or Mogwai.
The second way music appears is through published tracks by established and unknown artists sprinkled through each episode, adding an almost 'music video' feel to a scene, the music becoming every bit as important as the visuals it accompanies. Anything from 60s lounge jazz or hip hop to indie rock or Mexican mariachi music. The variety of music used is dynamic, eclectic and quite often full to bursting point with humour and irony. Take, for instance, the scene where we see meth addict and prostitute Wendy S. going about her daily business to the jaunty overtones of The Association's 'Windy'!
Nashville (ABC)
Executive music producer, T-Bone Burnett
Managing producer, Buddy Miller
Nashville chronicles the lives of a variety of fictitious singers from Nashville, Tennessee, as they deal with the ruthless, cut-throat world of Country Music stardom.
Nashville is an example of a series where songs are being recorded and performed as the drama unfolds and are often lyrically and musically intertwined with the on screen drama. It’s an example of how music, visuals and narrative can be gelled together to appear almost seamless.
The added incidental music is, of course, Country-flavoured as well: a few bars of acoustic picking as we scan across the Nashville skyline, or a well-judged slide guitar lick in a minor key to signify moments of melodrama. In a kind of self-fulfilling prophecy, the soundtrack albums have become best sellers, making the music almost as popular in reality as it appears to be fictitiously in the series!
The Sopranos (HBO)
produced by David Chase
New Jersey mobster, Tony Soprano turns to psychiatry as he struggles to balance the conflict between his home life and his job as boss of a criminal organisation.
Perhaps the original series that kicked off the boxset generation back in the late 1990s, The Sopranos was hugely influential with its bold portrayal of the American Dream turning into a spiralling nightmare. If you were a fan, you'll remember the dust-ups, the shoot-outs, the car chases, the brutal assassinations. But it may surprise you to learn that no original music was composed for the show in its entire six-series run.
The music choices were all carefully chosen popular songs that fitted the mood perfectly, often in complete opposition to the on-screen violence or gory melodrama. This approach to scoring was a fairly new device on TV, perhaps more in line with the feature films of Martin Scorsese, who features end-to-end popular music in gangster films like Casino, Goodfellas and The Departed.
One recurring use of music in The Sopranos was a well placed eclectic song playing as the end credits rolled out. Elvis Costello, Ben E. King, The Chi-Lites, Van Morrison. Even John Cooper Clarke’s Chicken Town featured in this highly coveted spot.
Then, of course, there's the show's popular signature tune, Woke Up This Morning by Alabama 3, chosen when producer David Chase heard it on daytime radio while driving to work.
So, just within these few examples, we have seen widely diverse ways of using music in a TV series: the supporting role of Ramin Djawadi's orchestral score in Game of Thrones, Dave Porter's synth-based incidental music for Breaking Bad, Nashville's total integration, where the music becomes part of the show, and The Sopranos' reliance on popular music to make up a memorable score.
There are of course many other examples. Mad Men’s heady mix of 60’s pop, Boardwalk Empire’s prohibition era Jazz and Blues. Even The Handsome Family’s eerie title track to True Detective.
All these and many more add flavour, depth and atmosphere to the excitement of American TV dramas, enjoyed on TVs and other devices around the world by a new breed of dedicated fans: the Boxset Generation.
About the author: Simon Power has made over 50 short films and documentaries for the music technology website Sonic State. He has also removed and replaced copyrighted music on a number of commercial BBC releases. In these articles he offers advice and tips about using music in your low budget film and audio/visual projects. You can learn more about Simon and his projects at his website, http://www.meonsound.com/