Shockwave-Sound Blog and Articles

Harnessing the Power of Sound: Behavior and Invention

Sound is a force of nature with its own special and unique properties. It can be used artistically to create music and soundscapes, and it is a vital part of human and animal communication, allowing us to develop language and literature, avoid danger, and express emotions. In addition, understanding and harnessing the unique properties of sound has resulted in some surprising and fascinating inventions and technologies. Below are some interesting notes on the behavior of sound and some of its novel technological uses, both as a weapon and, in contrast, in medicine and health care.

 

The Speed of Sound
Sound exists when its waves reverberate through objects, pushing on molecules that then push neighboring molecules, and so on. The speed at which sound travels is interesting because it behaves in the opposite manner to liquid: while the movement of liquid slows down depending on the density of the material it is trying to pass through (for example, cotton as opposed to wood), sound actually speeds up in denser material. For example, sound travels at about 331 m/s through air (21% oxygen, 78% nitrogen), about 1,493 m/s through water, and a whopping 12,000 m/s through diamond.
This behavior is also evident in how quickly sound passes through the human body, generally around 1,550 m/s, and much more quickly through skull bone at 4,080 m/s, which is far denser than soft tissue. Interestingly, the average speed through the human body is very similar to that of water, which makes sense because the human body is mostly water (roughly 60%).
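To put those figures in perspective, here is a quick back-of-the-envelope sketch in Python (using the approximate speeds quoted above) that estimates how long a pulse of sound takes to cross one metre of each material:

```python
# Approximate speeds of sound (m/s) in the materials mentioned above.
SPEEDS_M_PER_S = {
    "air": 331,
    "water": 1493,
    "soft tissue": 1550,
    "skull bone": 4080,
    "diamond": 12000,
}

def traversal_time_us(distance_m: float, speed_m_per_s: float) -> float:
    """Time (in microseconds) for sound to cross `distance_m` of a material."""
    return distance_m / speed_m_per_s * 1e6

if __name__ == "__main__":
    for material, speed in SPEEDS_M_PER_S.items():
        print(f"1 m of {material:>12}: {traversal_time_us(1.0, speed):7.1f} microseconds")
```

Across one metre, that is the difference between roughly 3 milliseconds in air and under 0.1 milliseconds in diamond.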

Sound in a Vacuum

Not only does the density of objects increase the speed of sound, but sound also needs material to be present in order to “make sound” in the first place, because it exists only when sound waves reverberate through objects. Without matter present, sound does not exist, such as in a vacuum. This makes sense, as a vacuum is an area of space that is completely devoid of matter and therefore has no molecules. This video demonstrates the effect of a vacuum on sound: as the air is sucked out of the bell jar, the bell can no longer be heard.
Sound is in the Ear of the Earholder

 

For humans and animals, the perception of sound waves passing through their ears depends on the shape of the ear, which influences the vibrations. The shape of an animal’s outer ears determines the range of frequencies it can hear. Elephants have flat and broad ears, which allow them to hear the very low frequencies they use to communicate. Lower frequencies are associated with large surface areas, such as bass drums, so this makes sense. Mice have round ears, which make them sensitive to sounds coming from above. Again, this makes sense, as they are tiny and close to the ground, and most threats come from above: hawks wanting to eat them, cats hunting, humans screaming and jumping on chairs, and so on. The tall ears of rabbits make them sensitive to sounds arriving horizontally, obviously so they know when to jump. Owls work their famous head pivot to create a precise listening experience while checking for prey and threats. Deer avoid predators with muscles in their ears that allow them to point in different directions.
Sound as a Weapon
The Long Range Acoustic Device (LRAD) is a machine used by law enforcement, government agencies, and security companies to send messages and warnings over very long distances at extremely high volumes. It is used to keep wildlife away from airport runways and nuclear power facilities, and also for non-lethal crowd control. It is effective in crowd control because of its very high output, which can reach 162 decibels; this exceeds 130 decibels, the threshold for pain in humans. It is very precise and can send a “sound beam” between 30 and 60 degrees wide at 2.5 kHz, scattering crowds caught within the beam. Those standing next to it or behind it might not hear it at all, but those who do report feeling dizzy, with symptoms of migraine headaches. This is called acoustic trauma, and depending on the length and intensity of the exposure, damage to the eardrum may result in hearing loss. Since 2000, the LRAD has been used in many instances of crowd control around the world, and even against pirates attempting to attack cruise ships.
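As a rough illustration of why distance matters with a device like this, the sketch below applies the simple inverse-square (point-source) spreading rule, under which level drops about 6 dB per doubling of distance. This is only a back-of-the-envelope approximation: the LRAD is highly directional, air absorption is ignored, and treating the 162 dB figure as a level at 1 metre is an assumption, not a specification.

```python
import math

def spl_at_distance(spl_ref_db: float, ref_distance_m: float, distance_m: float) -> float:
    """Estimate SPL at `distance_m` from a point source, given a reference level.

    Assumes free-field spherical spreading (inverse-square law), which ignores
    the LRAD's directivity, ground reflections, and air absorption.
    """
    return spl_ref_db - 20 * math.log10(distance_m / ref_distance_m)

if __name__ == "__main__":
    # Assumes the quoted 162 dB is measured at 1 m, purely for illustration.
    for d in (1, 10, 50, 100):
        print(f"{d:>4} m: ~{spl_at_distance(162, 1, d):5.1f} dB SPL")
```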
Almost humorously, high-pitched alarms can also be used to deter teenagers from loitering around shops or engaging in vandalism and drug activity. The “teenage repellent” has been used throughout Europe and the US. Since teenagers can hear higher frequencies than adults, its 17.4 kHz emission targets them specifically, while adults are spared the annoyance. Critics argue that these devices unfairly target a specific group (youth) and are therefore discriminatory.
Sound Levitation
Sound levitation, or acoustic levitation, uses the properties of sound to make solids, liquids, and gases actually float. Sound wave vibrations traveling through a gas balance out the force of gravity, creating a situation in which objects can be made to float. Dating back to the 1940s, the process uses ultrasonic speakers to manipulate air pressure so that points in the sound wave counteract the force of gravity. A “standing wave” is created between a “transducer,” such as a speaker, and a reflector. The balancing act occurs when the upward pressure of the sound wave exactly equals the force of gravity. Apparently, the shape of a liquid such as water can even be changed by altering the harmonics of the frequencies, resulting in star-shaped droplets.
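The spacing of the levitation “pockets” follows directly from the standing wave: pressure nodes sit half a wavelength apart, i.e. c / (2f). Here is a minimal sketch, assuming a 40 kHz ultrasonic transducer in room-temperature air (a common choice for such rigs, but not a figure stated above):

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s in room-temperature air

def node_spacing_mm(frequency_hz: float, speed_m_per_s: float = SPEED_OF_SOUND_AIR) -> float:
    """Distance between adjacent pressure nodes of a standing wave, in millimetres."""
    wavelength = speed_m_per_s / frequency_hz
    return wavelength / 2 * 1000

if __name__ == "__main__":
    print(f"40 kHz levitator: nodes every ~{node_spacing_mm(40_000):.1f} mm")
```

At 40 kHz the nodes are only about 4 mm apart, which is why acoustic levitators handle droplets and small beads rather than larger objects.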
 
 
In terms of practical uses, sound levitation can improve the development of pharmaceuticals. Medicines fall into two categories, amorphous and crystalline, and amorphous drugs are absorbed into the body more efficiently than crystalline ones. Amorphous drugs are therefore ideal, because a lower dose can be used and they are cheaper to produce. During the evaporation of a solution in manufacturing, acoustic levitation is useful because it helps prevent the formation of crystals: the substance never touches a physical surface. Acoustic levitation, in other words, stops substances from crystallizing, creating a much more efficient method of drug creation. In addition, sound levitation creates what is essentially a zero-gravity environment and is therefore excellent for cell growth. Levitating cells ensures that a flat shape is maintained, which is best for the growing cell to absorb nutrition. It could also be used to create cells of the perfect size and shape for individuals.

Sound behaves in its own fashion and is a phenomenon that can be used both in force and in healing. It taps into the physics of the natural world and, through its interactions, allows for all sorts of human invention. Surely, sound will continue to be researched and pursued as a powerful natural element to be used in a myriad of new ways.

 


Full Sail University and Point Blank School: Real Alternatives to Traditional Post-Secondary Education

Traditionally, post-secondary education in Western culture has hinged on both parents and students desiring an education that “rounds out” the student’s mind, i.e., a “liberal education.” This concept of the liberal education still commands the trajectory of many high-achieving students who graduate from high school in the US, or from gymnasium in Europe. The goal, as many still believe, is then to attend an expensive college or university to learn the higher concepts of Western academics: philosophy, literature, history, and the social sciences. However, as the world changes with obvious technological advances in computing, the internet, and digital production, there are now new choices for students and families who choose not to follow the beaten path. Simply put: there are now more options in post-secondary education, and one path is sound, music, and media production. This is a good thing.

Two schools, one in the United States and one in the United Kingdom, exemplify this forward motion into the future of post-secondary education by emphasizing training in media production and de-emphasizing traditional courses in the Western canon, for example, Shakespeare. Both offer real degrees or certificates and, more importantly, practical career benefits upon graduation. They differ somewhat in their approaches, degrees, and course selection, but are similar in their intent: to provide pragmatic post-secondary education to talented and creative young adults who are intent on pursuing creative passions while earning real incomes after graduation. The two schools covered below are Full Sail University in the United States and the Point Blank School in the United Kingdom, and this article simply intends to lay out the presentations and claims of both for any student seeking a higher education in sound and media production. The contrast between the two is rather striking. However, both are sound, no pun intended. Simply put, for readers heading toward a career in sound, music, and media, both schools are worthwhile, high quality, and have a track record of success among their alumni.

 

Full Sail University is located in Winter Park, Florida, in the United States, and boasts an interesting and vibrant history. The school began in 1979 as a recording workshop called “Full Sail Recording Workshop” in Dayton, Ohio, founded by John Phelps. Over the years, driven by student interest and teaching success, the school has grown to command a 192-acre campus with 49 degree programs and 2 graduate certificates, covering nearly every avenue of digital media production available to future content creators. It offers degrees in music and recording (including studies in sound recording and design), games, art and design, technology, media and communications, and media and sports business.

There is credence to their degrees. Full Sail is licensed by the Commission for Independent Education (Florida Department of Education) to offer associate’s through master’s degrees. It is also accredited by the Accrediting Commission of Career Schools and Colleges (ACCSC), an organization recognized as a national accrediting agency by the U.S. Department of Education. Meaning, and this is very important for students who desire a valid BA, Full Sail provides degrees that will be recognized by other schools in the United States should a student later want to pursue a master’s degree in sound design or another media discipline at another university or college. In addition, their bachelor’s programs are efficient and can be completed in 20 to 29 months. Programs begin monthly, so there is no need to wait through semesters to begin studying. Many of us perhaps remember slogging through low-paying jobs during summers waiting for school to begin; that waste of time does not happen at Full Sail. Graduation for the hard-working student can come in half the time of a traditional four-year college. This gives any graduate an edge, entering the workplace at a younger age than their peers, ready and prepared for the career possibilities of the future.
In addition to their massive array of course offerings in multiple media disciplines, Full Sail functions as a proper university, providing financial aid and housing options near the campus. Upon graduation, they provide assistance with career development and work on job placement. They use a unique combination of technology to teach multimedia production: networking with classmates and easy access to instructors online, continuous technical support, video conferencing, the ability to create via laptops from anywhere at any time, and cutting-edge media creation software. They couple this effective use of technology with the traditional model and rigors of a four-year bachelor’s degree, requiring many hours of study and a wide array of courses to earn a degree. In all, Full Sail appears to provide a solid education in a wide array of media disciplines, and the school and staff work diligently to aid their students in careers after graduation. One quick glance at their alumni page attests to the success of their graduates, who work in film and sound capacities with top media production firms throughout the world. This school is a valid option for students passionate about and interested in media production.

The second school mentioned in this post is the Point Blank School, an electronic music school that teaches music production and performance skills and techniques in London, England; Los Angeles, US; Ibiza, Spain; and online. At the outset, one must agree that it would be hard for anyone interested in music production not to salivate at the possibility of attending a campus to study electronic music on the island of Ibiza. This would be a young DJ’s heaven. On Ibiza specifically, it appears that students study by day and at night are given the opportunity to rub elbows with highly successful DJs and to perform at internationally renowned nightclubs on the island. This sounds like a win-win for those who can afford to attend. Whichever of the three locations, it is clear that Point Blank is a successful electronic music school that attracts globally successful DJs and producers as instructors and is most likely helping to create the next generation of electronic music producers. On an academic note, the Point Blank School has an affiliation with Middlesex University, which validates its Higher Education classes; those who complete the Point Blank set of courses receive a certificate and award upon completion. Middlesex also apparently “validates” the BA Music Production & Sound Engineering program.
To sum it up, both Full Sail and Point Blank provide top-notch media production education. The location in which you live may determine your preference: if you are in the United States, Full Sail is more accessible, and if you are in Europe, Point Blank is the closer option. Full Sail has a longer history and a vastly larger set of course offerings. They have managed to achieve accreditation as a “four-year” university that bestows actual BAs that in theory will transfer to other schools. The programs span a wide array of disciplines, and many of their students move into the media industry with great success. For the youngster graduating from high school in the US, or for those willing to trek across the pond who can afford it, Full Sail is a valid, excellent alternative to the traditional liberal arts education, providing not only electronic sound and music production but video arts and other media production fields as well. They also have, which has not been mentioned yet in this article, a massive film set and lot where Hollywood-sized films can be created. They command one singular massive campus with a four-year college degree experience.

In comparison and contrast, Point Blank is equally passionate and active in its own realm. That realm is more singular, however, catering to the world of elite DJing and focusing on electronic music production and performance only. Clearly, the school has succeeded in attracting excellent talent to instruct and has built campuses in three of the most illustrious places in the world to be an electronic music artist (London, Los Angeles, Ibiza). Their combination of talented instructors, course layout, and, very importantly, the exposure of current students to the club scenes of their various locations are all major pluses. It depends on what your goals are as an aspiring sound/media artist and the credential you would like to have upon graduation. Full Sail will give you legitimate academic credentials, serious professional contacts, a community, and support. Point Blank, on the other hand, will give you expert skills for rocking a dance floor, a certificate, and perhaps a BA. Mostly, though, it seems it will give you high-level contacts and experience in the world of globally recognized DJs. Both schools rock. Hats off.

Surround Music in video games – we consider the viability and aesthetics

Considering the Aesthetics of Surround Music in Games – by Rob Bridgett

Surround Music: The Emperor’s New Clothes?

 

There has been much talk recently, predominantly among composers, dedicated to the virtues and benefits of surround music in video games. Certainly, with the increased memory capacity of next-generation platforms and a greatly increased install base of surround sound systems, the prospect of surround score and licensed music seems more feasible than it was on previous-generation consoles. While this exciting expansion in the dimensions of the video-game medium offers some fantastic opportunities and new horizons for music in video games, there are also pitfalls which should be carefully considered when designing a game’s soundtrack with surround music in mind. Leaving aside the technical aspects of surround music for games (these have been discussed in depth and frequently elsewhere), there are some pertinent questions that need to be asked about when surround music is useful and when it is a needless distraction.

Surround sound systems are getting more and more common in video gaming setups

Diegetic and Non-Diegetic Boundaries

 

For certain types of games, the notion of surround music poses some very particular spatial and aesthetic challenges. In 3D first- or third-person perspective games (mostly those which aim to imitate a ‘cinematic’ sound model), spatialization of music can break the ‘non-diegetic contract’ between the space occupied by score and that occupied by sound effects. Non-diegetic music, or score, is not a part of the game world (diegesis). The score is not expected, for example, to be affected by environmental reverbs, and its volume or position is not attenuated according to where the player/listener is positioned. So the spatialization of various instruments or textures within a piece of score, particularly to the rear of the surround field, can mistakenly be read by the listener as a positional sound effect in the game world. This is due in part to the tonal similarity of much ambient sound design to certain elements of an orchestral score. The same would be the case if the game were set on board a spaceship and the score employed positional ambient electronic bleeps in the rears: there would be a high likelihood that the listener would be unable to determine what was a diegetic sound effect and what was a non-diegetic positional part of the score. If the style of the music and the style of the sound effects are too similar, this spatial confusion can easily arise.

Distraction From Action and Immersion

 

Surround sound has a unique function in 3D games, as it allows navigation and essential off-screen action and space to be communicated to the player. Sound effects such as gunshots, dialogue and footsteps coming from positional points play a huge role in helping the gamer to play the game and navigate the 3D space effectively. In cinema a very different set of rules applies: anything too prominent in the surrounds is considered a distraction from the screen, essentially having your audience looking behind them towards the exit signs in the theatre to figure out what that sound was. The use of surrounds in film often tends towards ambience, softly enveloping the listener within the diegesis, essentially using sounds that are non-distracting. This is a clear, tried-and-tested formula for maintaining immersion in cinema. Similarly, within these specific types of games, anything in the musical score that can be misread by the listener as a sound effect physically located behind them can distract the player and draw their attention to the music.

Not only would music making prominent use of the surround channels be confusing to the player who is attempting to use the surrounds for navigation and for knowledge of enemy positions, it also has great implications for the final mix of all the elements of the game’s soundtrack, particularly sound effects and dialogue. With music potentially taking up a wide band of frequencies in the rear field as well as the front, more clutter is introduced into the overall mix of sound effects, dialogue and music, placing further demands upon interactive mixing (an area that still needs to make huge leaps technically and artistically in order to catch up with the quality of cinema sound) to compensate and allow the listener to clearly hear dialogue and sound effects at key moments without being overwhelmed by the score.

Where Surround Music Will Work Well…

 

This is by no means a definitive list, but there are of course many exciting areas where surround music can be exploited in ways that are unique to interactive games and that look away from the cinematic ‘film-sound’ model. In 3D games, a score that remains as ‘underscore’, staying in the ‘ambient realm’, could work well in a 3D environment, provided there is nothing to draw the player’s attention to the rear field: nothing sudden and unexpected in the score that reads as a sound effect. It is therefore essential that communication between the composer and the sound director occurs to figure out exactly how much activity is required in the rear field in terms of the score. Again, anything too prominent and unexpected runs a high risk of being read as a positional sound effect by the player.

Positional sound effects are useful to the player; a cowbell positioned in the rear field of the score is not. Having said this, there are also occasions where the diegetic and non-diegetic ‘contract’ can be exploited. Having surround elements in the music deliberately ‘read’ as positional could be used in particularly scary or quiet moments in a game, such as a survival horror, to create some very tangible tension and confusion. “Was that a sound behind me? Ah no, it was just the music! Phew… oh damn,” etc. It all depends upon the requirements that the game places upon the sound, music and dialogue.

Racing games in particular also offer significant opportunities for the use of surround music, as so much of the action in a racing title is focused on what is directly in front of the player. There is also a direct real-life analogy between the player and a listener sitting in a real car listening to a surround piece of music (from speakers actually placed in the front and the rear of the vehicle). The only sound effects the player is likely to be interested in hearing from the rears are the sounds of other cars, which are neither particularly subtle nor easily mistaken for elements of the surround music. Not only this, but the style of music expected in a racing game, that of high-octane, adrenalin-pumping electronic beats, is completely different from a ‘score’ and can be read and understood more clearly by a listener as ‘positional electronic music’ as opposed to ‘positional sound effects’. It again comes down to a clear difference between the style of the music (electronic) and the style of the sound effects (engines).

Racing games offer a real-life analogy of surround music,
with speakers often situated in the rear of the car as well as the front.

Another good example is 2D environments, where there is no suggested diegetic rear space for sounds to occupy. In front-end menus, for instance, there is a good opportunity to use surround music to fill the rear-field space. In 2D games there is also a great opportunity to explore the space created with surround music because, again, there are no implications for confusing 3D sounds with any spatialized music that may be heard.

A third example is diegetic, positional music, i.e. music that emanates directly from the game world, for example a live band playing in a nightclub. The sounds of the separate instruments could be positioned in 3D at their exact points of origin on the stage, so that when the player walks around, the balance of the music changes accordingly. The player could even get up on stage and among the musicians to hear the vocalist in front, the drummer coming from the rears, and guitar and bass from left and right respectively. There could even be specific audience reactions at particular moments in the performance coming positionally from the audience space. An 8-channel interleaved stream, with each individual channel mapped to a particular position, would cater well for this implementation.

Guitar Hero offers real potential for convincing surround music in games, as it represents an actual on-stage performance, with potential for audience and band spatialization.
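To make the 8-channel idea from the nightclub example more concrete, here is a minimal sketch of a channel-to-stage-position map with a per-channel gain recomputed from the player’s position. The channel names, positions, and simple inverse-distance rolloff are all illustrative assumptions, not a description of any particular engine or middleware:

```python
import math

# Hypothetical mapping of the 8 interleaved channels to stage positions (x, y in metres).
STAGE_CHANNELS = {
    0: ("vocals",  (0.0, 2.0)),
    1: ("guitar",  (-2.0, 2.5)),
    2: ("bass",    (2.0, 2.5)),
    3: ("drums",   (0.0, 4.0)),
    4: ("keys",    (-1.0, 3.5)),
    5: ("crowd_L", (-4.0, -3.0)),
    6: ("crowd_R", (4.0, -3.0)),
    7: ("room",    (0.0, 0.0)),
}

def channel_gains(listener_xy, min_dist=1.0, rolloff=1.0):
    """Simple inverse-distance gain per channel, clamped so close sources don't blow up."""
    lx, ly = listener_xy
    gains = {}
    for _ch, (name, (x, y)) in STAGE_CHANNELS.items():
        dist = max(min_dist, math.hypot(x - lx, y - ly))
        gains[name] = 1.0 / (dist ** rolloff)
    return gains

if __name__ == "__main__":
    # Player standing right in front of the vocalist: vocals loudest, crowd quiet.
    for name, g in channel_gains((0.0, 1.0)).items():
        print(f"{name:>8}: gain {g:.2f}")
```

In practice the per-channel gains would be smoothed over time and fed to the game’s mixer, but the core idea is simply that each interleaved channel carries one positioned source.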

Planning Ahead for Surround Music

Deciding exactly where each element of the soundtrack is to be placed, and what belongs to the scene (source sounds/music) and what comes from ‘beyond the scene’ (non-diegetic), is a critical decision to be made up front by the audio director in conjunction with the game designers before the composer begins their work. Determining the ‘style’ and timbre of sound effects and music will also help to avoid any confusion as to what is music and what is a sound effect. Having a composer write a surround score with specific uses in mind from the beginning has a great many benefits over suddenly deciding that the music should be up-mixed into a surround format once it is all complete. Finally, the type of game will greatly inform many of these decisions, but thinking about the specifically designed functions of the music within a particular type of game will also help to define further creative strategies for the use of surround music in games, and eventually make games a more immersive and interactive experience for gamers. Any decision on whether or not to commission a surround score, or surround licensed music content, must first consider all the elements of the finished game’s soundtrack together as a single final entity, and the effects that this will have on the player.

About the author: Rob Bridgett (of www.sounddesign.org.uk) was one of the first to complete the Master’s degree in Sound Design for the Moving Image at Bournemouth University and is currently Senior Sound Director at Radical Entertainment in Vancouver. Work for games includes 50 Cent: Blood on the Sand (2009) (sound director), Prototype (2009) (sound mixer & cut-scene sound designer), Crash: Mind Over Mutant (2008) (sound mixer), TimeShift (2007) (cut-scene sound designer), Crash of the Titans (2007) (additional sound design), World in Conflict (2007) (additional sound design), Scarface: The World Is Yours (2006) (sound director), Sudeki (2004) (additional sound designer), Serious Sam: Next Encounter (2004) (sound designer & composer), Vanishing Point (2000) (sound designer).


Recording Sound for Perspective

A sound design tutorial by Paul Virostek

Why Record for Perspective?

I remember a time when I first began editing and was struggling to make a car door slam match the picture on film. I shifted the sound earlier and later, added and removed elements, and it still didn’t fit. The editor who was mentoring me said:

If you’re trying too hard to make a sound fit, then you’re using the wrong sound.

He told me why: the car door sound effect should have been correct (it was the proper model and year), but it had been recorded inches away from the car. In the scene the camera was a few meters away from the car. This difference made the sound jarringly wrong.

In other words, no matter how you synchronize the sound with the picture, if the actual nature of the sound is wrong, it will never work. This taught me how important it is to use the proper sound:

• correct volume
• correct timbre
• correct perspective or apparent distance

Cheating the Effect

Of course, volume is easy to adjust. If the timbre is wrong, you can choose another sound from the same family. However, if the sound’s perspective, or apparent distance, doesn’t match the picture, no matter what you try it will never completely fit.

Simply raising or lowering the volume of the sound may seem to make the sound closer or further away, but this is only a ‘cheat’. The match will be close, but will invariably seem subtly, disturbingly, off.

What About Using Reverb?

One common trick is to apply reverb to closely-recorded sounds to make them seem further away. Even the best reverb plug-ins cannot replicate perspective perfectly, however, and the result will sound slightly odd. How do we solve this problem? Read on.
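For reference, the reverb ‘cheat’ usually means convolving a dry, close-miked recording with a room impulse response and blending it back in. Here is a minimal sketch, assuming mono files with made-up names and the soundfile and scipy libraries:

```python
import numpy as np
import soundfile as sf                 # pip install soundfile
from scipy.signal import fftconvolve   # pip install scipy

# Assumed file names: a close-miked mono effect and a mono room impulse response.
dry, sr = sf.read("door_slam_close.wav")
ir, sr_ir = sf.read("alley_impulse_response.wav")
assert sr == sr_ir, "resample the IR first if the rates differ"

wet = fftconvolve(dry, ir)[: len(dry)]   # convolution reverb, trimmed to the dry length
wet /= max(np.max(np.abs(wet)), 1e-9)    # normalize the wet signal

# The 'cheat': duck the dry signal and add room to fake a more distant perspective.
faked_distant = 0.4 * dry + 0.6 * wet
sf.write("door_slam_faked_distant.wav", faked_distant, sr)
```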

What is Perspective?

Perspective describes how close a sound appears. Typically a sound’s perspective is described in four ways:

  • Close (anything under 10 feet/3-4 meters)
  • Medium Distant (roughly 10 feet/3-4 meters away)
  • Distant (anything more than 10 feet/3-4 meters away)
  • Background or BG (quiet and muted)

Also:

  • MCU or Major Close Up (inches away from the microphone, although this term is being phased out in favor of Close)

A close/medium distance mic setup

Here are some examples of a smoke alarm recorded at various perspectives. NOTE: it may help to wear headphones to hear the perspective or ‘space’ or ‘room’ properly (and have the volume down, the sound is sharp):

Notice how the Distant alarm has more echo, even though it is slightly louder than the Medium Distant alarm? The difference between these recordings is how much ‘air’ or space is apparent in the recording. ‘Air’ is created by a) the space where the recording takes place (also known as ‘room’) and b) the amount of reverb.

An Example: Woof

Imagine a dog barking in a city alley. A close recording will have prominent barks, and very little of the echo of the barks in the alley.

The further the dog is from the microphone, the more ‘air’ or ‘room’ will appear on the recording. The dog will seem quieter since it is further from the microphone. We will also hear more of the barks reverberating or bouncing off the alley walls.

It is exactly this aspect of the sound that we want. This distance, or perspective, will make it match perfectly with medium distant camera shots.
A recording of the close dog and one of the distant dog, although they are the same animal, will sound completely different.

Me, between a close/medium mic setup

Which Recording Perspective is Best?

So which perspective do you choose if you are going to record a sound effect? The short answer: all of them.

With today’s digital multi-track recorders you can record all perspectives at once. Patricio Libenson, the recordist for Hollywood Edge’s Explosions library, told me his setup involved multiple microphones, all at different distances and angled mathematically to account for phasing. The result is an incredibly rich collection.

Let’s return to our dog in the alley. We can set up one mic at the end of the alley, and have another next to the dog, both plugged into the same recorder. When the dog barks, we’ll have recorded both perspectives at once.
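If both mics go to separate tracks on the recorder, blending them afterwards is simple. Here is a minimal sketch (assumed file names, mono tracks recorded in sync, and an equal-power crossfade) where a single parameter slides the apparent distance between the close and the distant perspective:

```python
import numpy as np
import soundfile as sf  # pip install soundfile

def blend_perspectives(close_path, distant_path, distance=0.5):
    """Equal-power blend of two simultaneously recorded perspectives.

    distance=0.0 -> all close mic, distance=1.0 -> all distant mic.
    Assumes both files are mono, same sample rate, recorded in sync.
    """
    close, sr = sf.read(close_path)
    distant, sr2 = sf.read(distant_path)
    assert sr == sr2, "both takes should share a sample rate"
    n = min(len(close), len(distant))
    theta = distance * np.pi / 2
    return np.cos(theta) * close[:n] + np.sin(theta) * distant[:n], sr

if __name__ == "__main__":
    mix, sr = blend_perspectives("dog_bark_close.wav", "dog_bark_end_of_alley.wav", 0.7)
    sf.write("dog_bark_blended.wav", mix, sr)
```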

Match the Recording to Your Project

If you have to choose one perspective over the others, consider the project you are working on:

  • Multimedia or Radio – it is always best to record close perspective for these projects. The reason? Distance has little value when you won’t be using picture or visuals. Also sound designers like the immediacy and power of close effects.
  • Film or Television – most film editors prefer their effects recorded Medium Distant. The idea is that most camera shots are typically Medium Distant or further. Also, in a pinch you can fake a close perspective by raising the volume. In a perfect world they would like to have a Close version available as well.

Unfortunately, most commercial libraries are recorded close. Imagine you are trying to use a close dog bark in a scene where the dog is across the yard. It won’t fit.

That’s why at Airborne Sound we record two perspectives: close and medium distant, even if it requires multiple takes.

Conclusion

When you record sounds to match the requirements of your project, you’ll find the sounds fit more easily and require less editing. And, of course, it just sounds right.

About the author: Paul Virostek travels worldwide recording the sounds of cities and cultures. He shares his collection at airbornesound.com and writes about his experiences field recording and sharing sound effects at jetstreaming.org. He is also the author of “Field Recording: From Research to Wrap – An Introduction to Gathering Sound Effects”, published in 2012.


Using Sound Effects within music composition and production for increased overall effect

Excuse Me, You’ve Got Some Sound Effects in My Music

About using sound effects in music production and how the line between sound effects and music is blurring

by Kole Hicks

The use of certain elements we consider “Sound Effects” in Music is much more common than we may think. Whether it’s a nature ambience heard lightly in the background of a New Age track or the aurally unpleasant bang of a trashcan lid in Industrial music, the line that purely differentiates Sound Effects from Music is rapidly blurring in our perception.

 

I became more aware of this progression earlier this year when I was tasked with composing an eerie, ethereal background track for a horror game. The piece most definitely had to set a mood and have direction, but never really intrude on the player’s “consciousness” enough to have them recognize, “Oh, hey, there is music being played now.” So, in a way, the music was to act in a role we may consider more common to sound design.

Now, this practice in and of itself is not new, but the questions I asked myself while approaching this problem, and the previously closed “doors” that the answers opened up to me, are new and unique enough that I want to share my findings with you.

I. Approaching the Issue & Asking the Right Questions

Before I even attempted the traditional “sit down & start writing” phase, I tried to think of and answer all of the necessary questions that are unique to a piece like this. Should there be any thematic material… would it “get in the way”? How dynamic can the piece be? Will I be using “traditional” instruments? What role will the mixing process play in this piece? Etc.

Asking and answering all of these questions was absolutely critical to taking an accurate first step towards fully expressing my intent with the piece. That is why I often take this step and recommend many others do as well (especially if you need to be very articulate about what you want to express).

 

II. Answering the Questions

Let’s go through the process of asking and answering a few questions unique to a piece of music like this.

First, let’s look at “Will I be using traditional instruments?”

Since there is no right or wrong answer to this question, I only felt compelled to organize and understand my instrumentation choices enough to justify their usage in the piece. So, I decided that my approach had to be focused more on timbre and mood, and that writing standard musical phrases easily identifiable as “music” by the human ear was off limits. At least initially, as I also decided that “sneaking in” the main theme from time to time would be okay (as long as its full introduction was gradual). However, for the most part, I “justified” the usage of some traditional musical instruments by challenging myself to use them in a unique way that wouldn’t immediately be perceived as a musical phrase by the listener. “Typical” sound design elements (impacts, crashes, scrapes, etc.) were also allowed, but had to be organized in such a manner that they would have a perceived direction.

Which brings us to our next question… “What role will Form play in this piece?”

As I mentioned before, the line between what could only be considered Sound Effects and what could only be Music is rapidly blurring. Impacts, soundscapes, and other “sound design elements” are being used so often in modern music that I believe the only clear distinction between the two is the way each one is structured.

This is not to say that Sound Effects can’t be organized in a way that tells a story, for they surely can, but rather that the way in which we approach and organize our sounds for music is different. Repetition and imitation are two of the most common techniques used in music from almost anywhere in the world, at any time in history. When you’re lacking tonality, melody, and other “common” Western musical constructs, more often than not you revert to repetition and imitation to structure your music (both for your sake and the listener’s ears). Oftentimes, when you’re creating Sound Effects to picture, it’s not ideal to use only one punch/kick sound for an entire fight scene. However, I can also imagine the argument that the variety in those punch/kick sound effects is the equivalent of musical imitation. So, perhaps the only real thing separating Sound Design and Music is our perception and preconceived notions of what each one “should” be.

With that said, I decided that the role of Form in this piece was to take these isolated sound ideas/motifs and repeat/imitate them in a manner that felt like it was going somewhere (the repetition/imitation itself not having to be structured, but perhaps more organic or improvised). Complex and strict forms like sonata form or even pop wouldn’t accurately achieve this goal, so it was determined that the form must be even more basic (remember, we don’t want the listener to immediately recognize this as music). My solution was to introduce and eventually repeat/imitate these “themes/motifs” as they were applied throughout the changes in the dynamic curve.

Last but not least… “What role will the Mixing Process play in this piece?”

I feel very strongly about the role of Mixing in the Composition process, as it’s unavoidable in modern times. However, I’ll save the majority of what I have to say about this topic for a separate article.

As it applies to this question, though, I determined that the subject matter and the piece itself needed “mixing forethought.” Simply thinking about what pitches, rhythm, or articulation to use would not be enough, so I went a step further and asked myself questions like: “Is a High Pass Filter needed in this piece? If so, when and for what part(s)? How much distortion should be used on the guitar… what pickup? Should I automate the reverb to the dynamic curve or keep it consistent throughout the piece?”
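As one concrete (and entirely illustrative) reading of that last question, the offline sketch below ties the reverb wet/dry balance to a smoothed amplitude envelope of the dry cue, so quieter passages get more space. The file names, smoothing time, and pre-rendered impulse response are assumptions for the example, not a recipe from the finished piece:

```python
import numpy as np
import soundfile as sf                 # pip install soundfile
from scipy.signal import fftconvolve   # pip install scipy

dry, sr = sf.read("eerie_cue_dry.wav")   # assumed mono bounce of the cue
ir, _ = sf.read("large_hall_ir.wav")     # assumed mono impulse response

# Pre-render a fully wet version via convolution reverb.
wet = fftconvolve(dry, ir)[: len(dry)]
wet /= max(np.max(np.abs(wet)), 1e-9)

# Dynamic curve: a smoothed amplitude envelope of the dry signal, scaled to 0..1.
window = int(0.5 * sr)  # half-second smoothing
env = np.convolve(np.abs(dry), np.ones(window) / window, mode="same")
env /= max(env.max(), 1e-9)

# Quieter passages get more reverb, louder passages stay drier.
wet_amount = 0.6 * (1.0 - env)
mix = (1.0 - wet_amount) * dry + wet_amount * wet
sf.write("eerie_cue_automated_reverb.wav", mix, sr)
```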

It’s through questions like these that some of my most creative answers originated. When you become more aware of exactly what frequencies you want expressed at a certain point in a piece of music, or how you plan to creatively pan each instrument, your music will immediately benefit from the original answers you come up with.

I always like to say that if it affects the way your music sounds at the end of the day, then it’s a part of the Composition Process that should be taken into consideration. That goes for Mixing and even your state of mind prior to writing (make sure it matches the necessary mood you want to express in the piece of music!).

III. Applying the Answers

Now that we have some unique answers to work with, it’s all about performing and capturing their essence. For instrumentation it was decided that everything is permitted, but most “standard” writing practices would not apply.

Bend a string of the guitar beyond its “comfortable point” and play your theme. Play the Piano with socks on your hands or breathe into the mic and apply massive reverse delay. Place a huge pillowcase over your mic/head and start to sing. Record your harp in the bathtub or pitch up/down kitchen pan impacts and organize them to build a triad.

The options available to you are only restrained by your ability to ignore the fear of “What will others think?” The answer to “What is Music?” is growing every day with new ideas from creative composers willing to push the boundaries of sound, and a more accepting audience that’s aching for something new and original. With that said, I’d like to wish all of you the best, and keep composing, fellow artists!

If you’d like to listen to the piece of music I finished, click here and tell me where to send it.

About the author: Kole Hicks is an Author, Instructor, and most prominently an Audio Designer with a focus in Games. He’s had the pleasure of scoring mobile hits like ‘Bag it!’, has provided audio for Indie PC titles like ‘Kenshi’ and ‘Jeklynn Heights’, and was nominated for a 2012 GANG award for an article written exclusively for Shockwave-Sound.com titled “Mixing as Part of the Composing Process.” Emotionally Evocative Music & Visceral Sound Effects… Kole Audio Solutions.