Artists draw inspiration from everything. The entire world around them and the human relationships they have are all sources of experience that provide the meaning they need to express. One powerful source of inspiration for the visual arts is sound and music, the topic of this post. Below are some beautifully intricate creations inspired by the sonic world.
Luke Jerram’s Aeolus
Luke Jerram is a multidisciplinary artist who creates live art projects, sculptures and installations internationally, with over 300 exhibitions in 43 countries since 1998. His work is impressive in scope and beauty. One piece, Aeolus, was inspired by acoustics and its relationship to light, architecture, and wind. As stated on his website, Jerram’s initial idea for this project came from an interaction he had with a well digger of qanats in the Iranian desert. The well digger gave accounts of the wells singing when wind passed through them. This intriguing story motivated Jerram to explore the interaction of architecture and sound.
Aeolus is a Greek god, the keeper of the winds and king of the island of Aeolia. He gave Odysseus and his crew a favourable wind to aid their return home to Ithaca, and his legacy makes for an apt name. Jerram calls Aeolus an acoustic wind pavilion. The design of this massive stringed instrument amplifies the sound of shifting wind as well as the visual aspect of the changing sky. Built as a giant aeolian harp, the structure resonates on its own without additional power. As the wind shifts, strings attached to tubes vibrate, and those vibrations resonate on skins at the tops of the tubes. These sound waves are sent down the tubes to the viewers below. Aeolian harp strings are webbed throughout the structure, delicately sensitive to the wind, and give the viewer/listener an auditory interpretation of the wind in three dimensions, what Jerram describes as a “shifting wind map.” Beyond that, the tubes without strings are tuned to the aeolian scale and hum constantly even without wind.
In addition to the acoustic element, Jerram placed a great emphasis on the optical nature of Aeolus. 310 “internally polished stainless steel tubes” are placed so that the viewer can look through them, reflecting the shifting sun. This creates a continuously changing “landscape of light” as the steel tubes magnify and invert the area around the structure. The shifting skylight, acted upon by clouds and the sun, creates a dramatic picture in constant motion. The Institute of Sound and Vibration Research at the University of Southampton and The Acoustics Research Centre at the University of Salford were collaborators.
Dentsu Sound Sculptures
This next mention is a stunning display of sound energy turned into art. Not only inspired by sound but created by it, the project saw ad agency Dentsu London work with biochemist-photographer Linden Gledhill and photographer Jason Tozer to capture sound displayed in paint. Appropriately, the project was for the Canon PIXMA color printer, and the results are unique and ultra-vibrant. The concept of placing objects on speakers has been used before, though the results in this case, with the addition of high-speed photography, give a special view into the physics of sound. Here, paint was placed on a cover over a vibrating speaker. While the resulting paint movements were only several centimetres high, the high-speed photography yielded gorgeous colorscapes and unique shapes in this interplay of sound and paint.
In the video “Bringing colour to life” above, Canon Account Director Rob Zuurbier explains that the project is a celebration of color meant to highlight the “great quality of prints that the Canon Pixma produces.” The goal of the campaign was to revitalize Canon’s image, so to speak, and this writer would say it’s quite successful, as they were able to create other-worldly shapes using craft and cutting-edge technology. The video shows what appears to be a rubber membrane wrapped around a small speaker. Speakers in the video explain that photographs were taken at the incredible speed of 5,400 frames per second. A multitude of colors were used, resulting in some figures having hundreds of shades of color. The technological feat is impressive, as they had less than a millimetre of focal depth for a frame 4-5 feet in diameter.
Water sound sculpture by brusspup
In this demonstration of sonic sculpture, YouTube user brusspup uses a speaker, a rubber hose, water, tone-generating software producing a 24 Hz sine wave, and a 24 fps camera to send vibrations through pouring water, resulting in some surprising shapes. Brusspup secures the hose to a speaker with simple duct tape so that the speaker’s vibrations transfer to the hose and thus to the water. Next, he produces a 24 Hz sine wave through the speaker and turns on the water.
Towards the end of the video, brusspup demonstrates the 25 Hz forward effect and the 23 Hz reverse effect, which make the water appear to spiral either forward (downward) or in reverse (upward) while flowing down. This visual is not a result of the sound waves passing through the water, but of the camera’s frame rate in relation to the frequency produced. As Dan Nosowitz explains on popsci.com, one simply bumps the tone up to 25 Hz for the forward effect or down to 23 Hz for the reverse effect. All sorts of strange things happen in the interplay of visuals, time, and sound when the camera rate is changed. Sound designers are certainly familiar with the necessity of matching sample rates with video, ensuring that 48 kHz audio is used for 48 kHz video; otherwise, the sound and picture very quickly fall entirely out of sync. Either way, brusspup’s experiment and demonstration is an efficient and artful way of showing the potential between vibration and sculpture. Another intriguing element is the spiral itself. Perhaps its ratio is the same as the mysterious Fibonacci sequence found throughout nature?
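The camera-rate trick above reduces to simple arithmetic: the apparent motion frequency is the wave frequency minus the frame rate. A minimal Python sketch (the function name is our own, for illustration):

```python
# Apparent motion of a vibrating stream filmed at a fixed frame rate:
# the camera samples the wave, so the perceived oscillation rate is the
# difference between the wave frequency and the frame rate (aliasing).

def apparent_frequency(wave_hz: float, frame_rate_fps: float) -> float:
    """Perceived oscillation rate (Hz) of a wave filmed at frame_rate_fps.

    Positive -> appears to move forward; negative -> appears to run in
    reverse; zero -> appears frozen in mid-air.
    """
    return wave_hz - frame_rate_fps

for hz in (24.0, 25.0, 23.0):
    print(f"{hz} Hz wave at 24 fps -> apparent {apparent_frequency(hz, 24.0):+.0f} Hz")
```

At exactly 24 Hz the water appears frozen; 25 Hz gives the slow forward spiral and 23 Hz the eerie reverse one.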
Benoit Maubrey: Speaker’s Wall
“An artist’s job is to interpret reality. Instead of using pigment on canvas, you can imagine the air is the canvas and the pigment is the sound, so you’re out there painting canvases.” – Benoit Maubrey (mvtjournal.com)
Benoit Maubrey is an American electroacoustic sculptor who combines three-dimensional space with sound across a wide array of the arts, including performance, sculpture, dance, sound, and the technological arts. He specializes in manifesting public sculptures, interactive in nature, that use cheap, recycled, and found electronics. The electronics are active, which lends the works their interactivity and a performance element.
An entire book could be written on the intricacies and unique vision of his work, but the focus here is Speaker’s Wall.
This project was inspired by an art competition in West Berlin in 1987 entitled “Overcoming the Wall by Painting the Wall.” In the New York Times archives, a 1987 article, “Berlin Journal; In Search of a Work of Art to Overcome the Wall,” by Serge Schmemann describes the art movements of the time on the West Berlin side protesting the existence of the wall and totalitarianism in general. As German artist Peter Unsicker put it, the wall was “built by Germans in the East, painted by Germans in the West.” Multitudes of artists at the time were inspired to paint and sculpt on or around the wall, mocking and heavily criticizing the Soviet and East German governments. Confronted by this physical presence and the horrors it symbolized, artists mobilized for this 1987 competition and protest festival. Speaker’s Wall won second place and is now part of the permanent collection of the Museum Haus am Checkpoint Charlie in Berlin, Germany. The work is an impressive combination of not only various fields of art but also the politics of the time and its significant messages of freedom versus captivity. Maubrey’s work certainly captivates attention.
The electroacoustic sculpture uses 1,000 recycled loudspeakers, amplifiers, and radios. Incredibly, callers can phone the sculpture and talk through it; during the exhibition, more than 900 calls were made. At the time it also served as a PA system. Maubrey has a myriad of similar, but equally original, sculptures listed on his site. Another sculpture that can be physically called is Speaker’s Monument, exhibited in 1991 in Riga, Latvia. Maubrey recycled a trashed Stalinist sculpture, “Heroes of the Working Class,” into a speaker system that accepts calls. The work is covered with loudspeakers, a telephone answering machine, and an amplifier. Again the artist meshes art, sound, and live political performance via the callers.
The artists above demonstrate that sound is not only a powerful medium in and of itself, but its power extends into other art forms with ease.
Sound is a force of nature with its own special and unique properties. It can be used artistically to create music and soundscapes, and it is a vital part of human and animal communication, allowing us to develop language and literature, avoid danger, and express emotions. In addition, understanding and harnessing the unique properties of sound has resulted in some surprising and fascinating inventions and technologies. Below are some interesting notes on the behavior of sound and some novel technological uses, both as a weapon and, in contrast, in medicine and health care.
The Speed of Sound
Sound exists when its waves reverberate through objects by pushing on molecules, which then push neighboring molecules, and so on. The speed at which sound travels is interesting, as it behaves in the opposite manner to liquid. While the movement of liquid slows down depending on the density of the material it is passing through (for example, cotton as opposed to wood), sound actually speeds up in denser material. For example, sound travels at 331 m/s through air (21% oxygen, 78% nitrogen), 1,493 m/s through water, and a whopping 12,000 m/s through diamond.
This behavior is also evident in how quickly sound passes through the human body: generally around 1,550 m/s, but much faster through skull bone, at 4,080 m/s, since bone is much denser than soft tissue. Interestingly, the average speed through the human body is very similar to that through water, which makes sense because the human body is largely made up of water (roughly 60%).
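The figures above lend themselves to a quick back-of-the-envelope calculation. A small Python sketch (the lookup table and function name are ours, using the speeds quoted above):

```python
# Approximate speeds of sound (m/s) in the materials discussed above.
SPEED_OF_SOUND = {
    "air": 331.0,
    "water": 1493.0,
    "soft tissue": 1550.0,
    "skull bone": 4080.0,
    "diamond": 12000.0,
}

def travel_time_us(distance_m: float, material: str) -> float:
    """Time in microseconds for sound to cross distance_m of a material."""
    return distance_m / SPEED_OF_SOUND[material] * 1e6

# Sound crosses 10 cm of water and 10 cm of soft tissue in almost the
# same time, while the same span of air takes roughly 4-5x longer.
for material in ("air", "water", "soft tissue"):
    print(f"10 cm of {material}: {travel_time_us(0.1, material):.0f} microseconds")
```

The near-identical water and soft-tissue times are exactly why medical ultrasound calibrates around a water-like speed for the body.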
Sound in a Vacuum
Not only does the density of objects increase the speed of sound, but sound needs material to be present in order to “make sound” in the first place, because it exists only when sound waves reverberate through matter. Without matter present, sound does not exist, such as in a vacuum. This makes sense, as a vacuum is an area of space that is completely devoid of matter and therefore has no molecules. This video demonstrates the effect of a vacuum on sound: as the air is sucked out of the bell jar, the bell can no longer be heard.
Sound is in the Ear of the Earholder
For humans and animals, the perception of sound waves passing through their ears depends on the shape of the ear, which influences the vibrations. The shape of an animal’s outer ears determines the range of frequencies it can hear. Elephants have flat and broad ears, which allow them to hear the very low frequencies they use to communicate. Lower frequencies are associated with large surface areas, such as bass drums, so this makes sense. Mice have round ears, which make them sensitive to sounds coming from above. Again, this makes sense, as they are tiny and close to the ground, and all threats come from above: hawks wanting to eat them, cats hunting, humans screaming and jumping on chairs, etc. The tall ears of rabbits make them sensitive to sounds travelling horizontally, obviously so they know when to jump. Owls work their famous head pivot to create a precise listening experience while checking for prey and threats. Deer work to avoid predators with muscles in their ears that allow the ears to point in different directions.
Sound as a Weapon
The Long Range Acoustic Device (LRAD) is a machine used by law enforcement, government agencies, and security companies to send messages and warnings over very long distances at extremely high volumes. LRADs are used to keep wildlife away from airport runways and nuclear power facilities, and also for non-lethal crowd control. The device is effective for crowd control because of its very high output, which can reach 162 decibels; this far exceeds 130 decibels, the threshold for pain in humans. It is very precise, sending a “sound beam” between 30 and 60 degrees wide at 2.5 kHz that will scatter crowds caught within it. Those standing next to or behind the device might not hear it at all, but those caught in the beam report feeling dizzy, with symptoms of migraine headaches. This is called acoustic trauma, and depending on the length and intensity of the exposure, damage to the eardrum may result in hearing loss. Since 2000, the LRAD has been used in many instances of crowd control in countries throughout the world, and even on pirates attempting to attack cruise ships.
Almost humorously, high-pitched alarms can also be used to deter teenagers from loitering around shops or engaging in vandalism and drug activity. This “teenage repellent” has been used throughout Europe and the US. Since teenagers can hear higher frequencies than adults, the 17.4 kHz emission targets them specifically, while adults are spared the annoyance. Critics state that these devices unfairly target a specific group (youth) and are therefore discriminatory.
Sound Levitation
Sound levitation, or acoustic levitation, uses the properties of sound to make solids, liquids, and gases actually float. Sound wave vibrations travelling through a gas balance out the force of gravity, creating a situation in which objects can be made to float. Dating back to the 1940s, the technique uses ultrasonic speakers to manipulate air pressure, exploiting points in the sound wave that counteract the force of gravity. A “standing wave” is created between a “transducer,” such as a speaker, and a reflector. The balancing act occurs when the upward pressure of the sound wave exactly equals the force of gravity. Apparently the shape of a liquid such as water can even be changed by altering the harmonics of the frequencies, resulting in star-shaped droplets.
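The geometry of that standing wave follows directly from the wavelength: objects levitate at the pressure nodes, which sit half a wavelength apart. A brief sketch, assuming a typical 40 kHz ultrasonic transducer and ~343 m/s sound speed in room-temperature air (both illustrative values):

```python
# In a standing wave between a transducer and a reflector, objects
# levitate at the pressure nodes, spaced half a wavelength apart.

def node_spacing_mm(frequency_hz: float, speed_of_sound: float = 343.0) -> float:
    """Distance in millimetres between adjacent levitation points."""
    wavelength_m = speed_of_sound / frequency_hz
    return wavelength_m / 2 * 1000  # half a wavelength, converted to mm

# A common 40 kHz ultrasonic transducer in room-temperature air:
print(f"{node_spacing_mm(40_000):.2f} mm between levitation points")
```

The few-millimetre spacing explains why acoustic levitation handles droplets and small particles rather than large objects.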
In terms of practical uses, sound levitation improves the development of pharmaceuticals. Manufactured medicines fall into two categories, amorphous and crystalline. Amorphous drugs are absorbed into the body more efficiently than crystalline drugs, and are therefore ideal: a lower dose can be used, so they are cheaper to produce. During the evaporation of a solution in manufacturing, acoustic levitation is useful because it helps prevent the formation of crystals, since the substance never touches a physical surface. Acoustic levitation, in other words, stops substances from crystallizing, creating a much more efficient method of drug creation. In addition, sound levitation creates an essentially zero-gravity environment and is therefore excellent for cell growth. Levitating cells ensures that a flat shape is maintained, which is best for the growing cell to absorb nutrition. It could also be used to create cells of the perfect size and shape for individual patients.

Sound behaves in its own fashion and is a phenomenon that can be used both in force and in healing. It taps into the physics of the natural world and, through its interactions, allows for all sorts of human invention. Surely, sound will continue to be researched and pursued as a powerful natural element to be used in a myriad of new ways.
Traditionally, post-secondary education in Western culture has hinged on both parents and students desiring an education that “rounds out” the student’s mind, i.e. a “liberal education.” This concept of the liberal education still commands the trajectory of many high-achieving students who graduate high school in the US or gymnasium in Europe. The goal, as many still believe, is then to attend an expensive college or university to learn the higher concepts of Western academics: philosophy, literature, history, and the social sciences. However, as the world changes with obvious technological advances in computing, the internet, and digital production, there are now new choices for those students and families who choose not to follow the beaten path.
Simply put: there are now more choices in post-secondary education, and one path is sound, music, and media production. This is a good thing.
Two schools, one in the United States and one in the United Kingdom, exemplify this forward motion into the future of post-secondary education by emphasizing education based on media production and de-emphasizing traditional courses in Western philosophy or, for example, Shakespeare. The two schools covered below offer real degrees or certificates and, more importantly, practical career benefits upon graduation. They differ somewhat in their approaches, degrees, and course selection, but are similar in their intent: to provide pragmatic post-secondary education to talented and creative young adults intent on pursuing creative passions while earning real incomes after graduation. The schools are Full Sail University in the United States and the Point Blank School in the United Kingdom, and this article simply intends to lay out the presentations and claims of both for any student seeking a higher education in sound and media production. The contrast between the two is rather striking. However, both are sound, no pun intended. Simply put, for readers heading toward a career in sound, music, and media, both schools are worthwhile, high quality, and have a track record of success among their alumni.
Full Sail University is located in Winter Park, Florida, in the United States and boasts an interesting and vibrant history. The school began in 1979 as a recording workshop called “Full Sail Recording Workshop” in Dayton, OH, founded by Jon Phelps. Over the years, driven by student interest and teaching success, the school has grown to command a 192-acre campus with 49 degree programs and 2 graduate certificates, covering nearly every avenue of digital media production available to future content creators. They provide degrees in music and recording (including studies in sound recording and design), games, art and design, technology, media and communications, and media and sports business.
There is credence to their degrees. Full Sail is licensed by the Commission for Independent Education (Florida Department of Education) to offer associate’s through master’s degrees, and it is accredited by the Accrediting Commission of Career Schools and Colleges (ACCSC), an organization recognized as a national accrediting agency by the U.S. Department of Education. This is very important for students who desire a valid BA: Full Sail provides degrees that will be recognized by other schools in the United States should a student later want to pursue a master’s degree in sound design or another media discipline at a different university or college. In addition, their bachelor’s programs are efficient and can be completed in 20 to 29 months. Programs begin monthly, so there is no need to wait through semesters to begin studying. Many of us perhaps remember slogging through low-paying jobs during summers waiting for school to begin; this waste of time does not happen at Full Sail. Graduation for the hard-working student can come in half the time of a traditional 4-year college, giving any graduate an edge by entering the workplace at a younger age than their peers, ready and prepared for the career possibilities of the future.
In addition to their massive array of course offerings in multiple media disciplines, Full Sail functions as a proper university, providing financial aid and housing options near the campus. Upon graduation, they provide assistance with career development and job placement. They use a unique combination of technology to teach multimedia production: networking with classmates and easy online access to instructors, continuous technical support, video conferencing, the ability to create via laptop from anywhere at any time, and cutting-edge media creation software. They couple this effective use of technology with the traditional model and rigors of a 4-year bachelor’s degree, requiring many hours of study and a wide array of courses. In all, Full Sail appears to provide a solid education in a wide array of media disciplines, and the school and staff work diligently to aid their students’ careers after graduation. One quick glance at their alumni page attests to the success of their graduates, who work in film and sound capacities with top media production firms throughout the world. This school is a valid option for students passionate about media production.
The second school mentioned in this post is the Point Blank School, an electronic music school that teaches music production and performance skills and techniques in London, England; Los Angeles, US; Ibiza, Spain; and online. At the outset, one must agree that it would be hard for anyone interested in music production not to salivate at the possibility of studying electronic music on the island of Ibiza. This would be a young DJ’s heaven. On Ibiza specifically, it appears that students study by day and at night are given the opportunity to rub elbows with highly successful DJs and perform at internationally renowned nightclubs on the island. This sounds like a win-win for those who can afford to attend. Across all three locations, it is clear that Point Blank is a successful electronic music school that attracts globally successful DJs and producers as instructors and is most likely helping to create the next generation of electronic music producers. On an academic note, the Point Blank School has an affiliation with Middlesex University, which validates their higher education classes; those who complete the Point Blank set of courses receive a certificate and award upon completion. Middlesex apparently also “validates” the BA Music Production & Sound Engineering program.
To sum it up, both Full Sail and Point Blank provide top-notch media production education. Where you live may determine your preference: if you are in the United States, Full Sail is more accessible, and if you are in Europe, Point Blank is the closer option. Full Sail has a longer history and a vastly larger set of course offerings. They have managed to achieve accreditation as a “four-year” university that bestows actual BAs that in theory will transfer to other schools. The programs are steeped in a wide array of disciplines, and many of their students move into the media industry with great success. For the youngster graduating high school in the US, or those willing to trek across the pond who can afford it, Full Sail is a valid, excellent alternative to the traditional liberal arts education, providing not only electronic sound and music production but video arts and other media production fields as well. They also have, which has not been mentioned yet in this article, a massive film set/lot where Hollywood-sized films can be created. They command one singular, massive campus with a four-year college degree experience.

In comparison and contrast, Point Blank is equally passionate and active in its own realm. That realm is more singular, however, catering to the world of elite DJing and focusing on electronic music production and performance only. Clearly, the school has succeeded in attracting excellent talent to instruct and has built campuses in three of the most illustrious places in the world (London, Los Angeles, Ibiza) in which to be an electronic music artist. Their combination of talented instructors, course layout, and, very importantly, the exposure current students get to the club scenes of their various locations are all major pluses. It depends on your goals as an aspiring sound/media artist and the degree you would like to have upon graduation.
Full Sail will give you legitimate academic credentials, serious professional contacts, a community, and support. Point Blank, on the other hand, will give you expert skills for rocking a dance floor, a certificate, and perhaps a BA. Mostly, though, it will give you high-level contacts and experience in the world of globally recognized DJs. Both schools rock. Hats off.
Considering the Aesthetics of Surround Music in Games – by Rob Bridgett
Surround Music: The Emperor’s New Clothes?
There has been much talk recently, predominantly among composers, dedicated to the virtues and benefits of surround music in video games. Certainly with the increased memory capacity of next-generation platforms, and a greatly increased installed base of surround sound systems, the prospect of surround score and licensed music seems more feasible than it was on previous-generation consoles. While this exciting expansion in the dimensions of the video-games medium offers some fantastic opportunities and new horizons for music in video games, there are also pitfalls which should be carefully considered when designing a game’s soundtrack with surround music in mind. Leaving aside any technical aspects of surround music for games (these have been discussed in depth and frequently elsewhere), there are some pertinent questions that need to be asked about when surround music is useful and when it is a needless distraction.
Surround sound systems are getting more and more common in video gaming setups
Diegetic and Non-Diegetic Boundaries
For certain types of games, surround sound music poses some very particular spatial and aesthetic challenges. In 3D first- or third-person perspective games (mostly those that aim to imitate a ‘cinematic’ sound model), spatialization of music can break the ‘non-diegetic contract’ between the space occupied by score and the space occupied by sound effects. Non-diegetic music, or score, is not a part of the game world (the diegesis). The score is not expected, for example, to be affected by environmental reverbs, and its volume or position is not attenuated according to where the player/listener is positioned. So the spatialization of various instruments or textures within a piece of score, particularly to the rear of the surround field, can be misread by the listener as a positional sound effect in the game world. This is due in part to the tonal similarity of much ambient sound design to certain elements of an orchestral score. The same would be the case if the game were set on board a spaceship and the score employed positional ambient electronic bleeps in the rears: there would be a high likelihood that the listener would be unable to determine what was a diegetic sound effect and what was a non-diegetic positional part of the score. If the style of the music and the style of the sound effects are too similar, this spatial confusion can easily arise.
Distraction From Action and Immersion
Surround sound has a unique function in 3D games, as it allows navigation and essential off-screen action and space to be communicated to the player. Sound effects such as gunshots, dialogue, and footsteps coming from positional points play a huge role in helping the gamer play the game and navigate the 3D space effectively. In cinema a very different set of rules applies: anything too prominent in the surrounds is considered a distraction from the screen, essentially having your audience looking behind them towards the exit signs in the theatre to figure out what that sound was. The use of surrounds in film often tends towards ambience, softly enveloping the listener within the diegesis, essentially using sounds that are non-distracting. This is a clear, tried-and-tested formula for maintaining immersion in cinema. Similarly, within these specific types of games, anything in the musical score that can be misread by the listener as a sound effect physically located behind them can distract the player and draw their attention to the music.
Not only would music making prominent use of the surround channels confuse a player who is using the surrounds for navigation and for knowledge of enemy positions, it also has great implications for the final mix of all the elements of the game’s soundtrack, particularly sound effects and dialogue. With music potentially occupying a wide band of frequencies in the rear field as well as the front, more clutter is introduced into the overall mix of sound effects, dialogue, and music, placing further demands upon interactive mixing (an area that still needs to make huge leaps technically and artistically in order to catch up with the quality of cinema sound) to compensate and allow the listener to clearly hear dialogue and sound effects at key moments without being overwhelmed by the score.
Where Surround Music Will Work Well…
This is by no means a definitive list, but there are of course many exciting areas where surround music can be exploited in ways that are unique to interactive games and that look away from the cinematic ‘film-sound’ model. In 3D games, a score that remains ‘underscore’, staying in the ‘ambient realm’, could work well, provided there is nothing to draw the player’s attention to the rear field: nothing sudden and unexpected in the score that reads as a sound effect. It is therefore essential that the composer and the sound director communicate to figure out exactly how much rear-field activity the score requires. Again, anything too prominent and unexpected runs a high risk of being read as a positional sound effect by the player.
Positional sound effects are useful to the player; a cowbell positioned in the rear field of the score is not. Having said this, there are also occasions where the diegetic and non-diegetic ‘contract’ can be exploited. Having surround elements in the music deliberately ‘read’ as positional could be used in particularly scary or quiet moments in a game, such as a survival horror title, to create some very tangible tension and confusion. “Was that a sound behind me? Ah no, it was just the music! Phew… oh damn,” etc. It all depends upon the requirements placed upon the sound, music, and dialogue by the game.
Racing games in particular also offer significant opportunities for surround music, as so much of the action in a racing title is focused on what is directly in front of the player. There is also a direct real-life sonic analogy between player and listener: it is as though they were sitting in a real car and listening to a surround piece of music from speakers actually placed in the front and rear of the vehicle. The only positional sound effects the player is likely to care about from the rears are the sounds of other cars, which are neither subtle nor easy to mistake for elements of the surround music. Not only this, but the style of music expected in a racing game, that of high-octane, adrenalin-pumping electronic beats, is completely different from a ‘score’ and can be read and understood more clearly by a listener as ‘positional electronic music’ rather than ‘positional sound effects’. It again comes down to a clear difference between the style of the music (electronic) and the style of the sound effects (engines).
Racing games offer a real-life analogy of surround music, with speakers often situated in the rear of the car as well as the front.
Another good example is 2D environments, where there is no implied diegetic rear space for sounds to occupy. Front-end menus are one such case: they offer a good opportunity to fill the rear field with surround music. In 2D games generally there is great scope to explore the space that surround music creates, because, again, there is no risk of confusing 3D sound effects with any spatialized music that may be heard.
A third example is diegetic, positional music, i.e. music emanating directly from the game world, for example a live band playing in a nightclub. Each instrument could be positioned in 3D at its exact point of origin on the stage, so that as the player walks around, the balance of the music changes accordingly. The player could even climb on stage among the musicians to hear the vocalist in front, the drums from the rears, and guitar and bass from left and right respectively. Specific audience reactions at particular moments in the performance could even be placed positionally in the audience space. An 8-channel interleaved stream, with each channel mapped to a particular position, would cater well for this implementation.
Games like Guitar Hero offer real potential for convincing surround music, as they represent an actual on-stage performance, with scope for spatializing both the band and the audience.
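To make the idea of an 8-channel interleaved stream concrete, here is a minimal sketch of a channel-to-position mapping for the nightclub band example. The instrument names, stage coordinates and the simple inverse-distance gain are all invented for illustration; a real engine would use its own attenuation curves and panner.

```python
import math

# Hypothetical layout for an 8-channel interleaved band stream.
# Coordinates are (x, y, z) in metres on an imagined stage; none of
# these names or positions come from any particular game or engine.
STAGE_CHANNELS = {
    0: ("vocals",      ( 0.0, 1.6,  0.0)),
    1: ("drums",       ( 0.0, 1.2, -2.0)),   # rear of stage
    2: ("guitar",      (-2.0, 1.4,  0.0)),
    3: ("bass",        ( 2.0, 1.4,  0.0)),
    4: ("keys",        (-1.0, 1.3, -1.0)),
    5: ("backing_vox", ( 1.0, 1.6, -0.5)),
    6: ("crowd_left",  (-4.0, 1.7,  4.0)),   # audience reactions
    7: ("crowd_right", ( 4.0, 1.7,  4.0)),
}

def channel_gain(listener, channel, ref_dist=1.0):
    """Crude inverse-distance gain for one channel, clamped to 1.0
    inside the reference distance (a common attenuation shape)."""
    _, pos = STAGE_CHANNELS[channel]
    d = math.dist(listener, pos)
    return min(1.0, ref_dist / max(d, 1e-6))

# As the player walks from the front row up onto the stage, the vocal
# channel's gain rises, so the mix rebalances around them.
front_row = (0.0, 1.7, 3.0)
on_stage  = (0.0, 1.7, 0.5)
```

Each frame, the runtime would evaluate a gain (and pan direction) like this per channel relative to the listener, which is what makes the balance shift as the player moves through the venue.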
Planning Ahead for Surround Music
Deciding exactly where each element of the soundtrack is to be placed, and what belongs to the scene (source sounds/music) versus what comes from ‘beyond the scene’ (non-diegetic), is a critical decision to be made up front by the audio director in conjunction with the game designers, before the composer begins work. Determining the ‘style’ and timbre of the sound effects and the music will also help avoid confusion over what is music and what is a sound effect. Having the composer write a surround score with specific uses in mind from the beginning has a great many benefits over suddenly deciding, once the music is complete, that it should be up-mixed into a surround format. Finally, the type of game will greatly inform many of these decisions, but thinking about the specifically designed functions of the music within a particular type of game will help define further creative strategies for surround music, and ultimately make games a more immersive and interactive experience. Any decision on whether to commission a surround score, or surround licensed music content, must first consider all the elements of the finished game’s soundtrack together as a single final entity, and the effect this will have on the player.
About the author: Rob Bridgett (of www.sounddesign.org.uk) was one of the first to complete the Master’s degree in Sound Design for the Moving Image at Bournemouth University and is currently Senior Sound Director at Radical Entertainment in Vancouver. Work for games includes 50 Cent: Blood on the Sand (2009, sound director), Prototype (2009, sound mixer & cut-scene sound designer), Crash: Mind Over Mutant (2008, sound mixer), TimeShift (2007, cut-scene sound designer), Crash of the Titans (2007, additional sound design), World in Conflict (2007, additional sound design), Scarface: The World Is Yours (2006, sound director), Sudeki (2004, additional sound design), Serious Sam: Next Encounter (2004, sound designer & composer), Vanishing Point (2000, sound designer).
I remember, when I first began editing, struggling to make a car door slam match the picture on film. I shifted the sound earlier and later, added and removed elements, and it still didn’t fit. The editor who was mentoring me said:
If you’re trying too hard to make a sound fit, then you’re using the wrong sound.
He told me why: the car door sound effect should have been correct (it was the proper model and year), but it had been recorded inches away from the car, while in the scene the camera was a few meters away. That difference made the sound jarringly wrong.
In other words, no matter how you synchronize the sound with the picture, if the actual nature of the sound is wrong, it will never work. This taught me how important it is to use the proper sound:
Of course, volume is easy to adjust, and if the timbre is wrong, you can choose another sound from the same family. However, if the sound’s perspective, its apparent distance, doesn’t match the picture, then no matter what you try it will never completely fit.
Simply raising or lowering the volume of the sound may seem to bring it closer or push it further away, but this is only a ‘cheat’. The match will be close, but will invariably seem subtly, disturbingly off.
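A simplified acoustic model shows why the volume cheat falls short. In the open, the direct sound follows the inverse-square law, dropping about 6 dB per doubling of distance; but in a room, the reverberant level stays roughly constant, so the direct-to-reverberant balance changes with distance. A fader lowers direct and reverberant sound by the same amount, so it can never recreate that changing balance. The functions below are an illustrative sketch of this textbook model, not a mixing tool:

```python
import math

def free_field_drop_db(d_near, d_far):
    """Level change of the direct sound when a source moves from
    d_near to d_far metres away (inverse-square law, free field)."""
    return 20.0 * math.log10(d_near / d_far)

def direct_to_reverb_db(distance, critical_distance):
    """Direct-to-reverberant ratio in a simple diffuse-field model:
    direct level falls with distance, reverberant level is roughly
    constant, so the ratio depends on distance relative to the room's
    'critical distance' (where the two are equal)."""
    return 20.0 * math.log10(critical_distance / distance)

# Moving a source from 1 m to 8 m drops the direct sound by ~18 dB,
# while the reverberant 'air' around it barely changes; a fader move
# of -18 dB lowers both equally and leaves the ratio untouched.
drop = free_field_drop_db(1.0, 8.0)
```

In other words, the fader changes loudness but not the proportion of ‘room’ in the sound, and it is that proportion the ear uses to judge distance.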
What About Using Reverb?
One common trick is to apply reverb to closely-recorded sounds to make them seem further away. Even the best reverb plug-ins cannot replicate perspective perfectly, however, and the result will sound slightly odd. How do we solve this problem? Read on.
What is Perspective?
Perspective describes how close a sound appears to be. Typically a sound’s perspective is described in four ways:
Close (anything under 10 feet/3 meters)
Medium Distant (roughly 10 feet/3 meters away)
Distant (anything more than 10 feet/3 meters away)
Background or BG (quiet and muted)
You may also encounter MCU, or Major Close Up (inches away from the microphone), although this term is being phased out in favor of Close.
A close/medium distance mic setup
Here are some examples of a smoke alarm recorded at various perspectives. NOTE: it may help to wear headphones to hear the perspective, ‘space’ or ‘room’ properly (and keep the volume down; the sound is sharp):
Notice how the Distant alarm has more echo, even though it is slightly louder than the Medium Distant alarm? The difference between these recordings is how much ‘air’ or space is apparent in the recording. ‘Air’ is created by a) the space where the recording takes place (also known as ‘room’) and b) the amount of reverb.
An Example: Woof
Imagine a dog barking in a city alley. A close recording will have prominent barks, and very little of the echo of the barks in the alley.
The further the dog is from the microphone, the more ‘air’ or ‘room’ will appear on the recording. The dog will seem quieter since it is further from the microphone. We will also hear more of the barks reverberating or bouncing off the alley walls.
It is exactly this aspect of the sound that we want. This distance, or perspective, will make it match perfectly with medium distant camera shots.
A recording of the close dog and one of the distant dog, although they are the same animal, will sound completely different.
Me, between a close/medium mic setup
Which Recording Perspective is Best?
So which perspective do you choose if you are going to record a sound effect? The short answer: all of them.
With today’s digital multi-track recorders you can record all perspectives at once. Patricio Libenson, the recordist for Hollywood Edge’s Explosions library, told me his set up involved multiple microphones, all at different distances and angled mathematically to account for phasing. The result is an incredibly rich collection.
Let’s return to our dog in the alley. We can set up one mic at the end of the alley, and have another next to the dog, both plugged into the same recorder. When the dog barks, we’ll have recorded both perspectives at once.
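The “angled mathematically to account for phasing” remark is worth unpacking. Sound travels at roughly 343 m/s, so a mic placed farther from the source receives the sound later; if the two mic signals are ever summed, that delay produces comb-filter cancellations. This sketch just works through the arithmetic (the specific distances are illustrative, not Libenson’s actual setup):

```python
SPEED_OF_SOUND = 343.0  # m/s in dry air at about 20 °C

def arrival_delay_s(extra_distance_m):
    """Extra time of arrival at the more distant microphone."""
    return extra_distance_m / SPEED_OF_SOUND

def first_comb_notch_hz(delay_s):
    """When two delayed copies of a signal are summed, the first
    cancellation (comb-filter notch) falls where the delay equals
    half a cycle, i.e. at 1 / (2 * delay)."""
    return 1.0 / (2.0 * delay_s)

# A mic just 1 m farther away hears the source ~2.9 ms later; summing
# the two tracks naively would notch the mix around 171 Hz.
delay = arrival_delay_s(1.0)
notch = first_comb_notch_hz(delay)
```

This is why multi-mic distances and angles get planned rather than guessed: keeping each perspective on its own track, or choosing spacings whose notches fall outside the sound’s energy, avoids a hollow, phasey result.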
Match the Recording to Your Project
If you have to choose one perspective over the others, consider the project you are working on:
Multimedia or Radio – it is always best to record close perspective for these projects. Why? Distance has little value when there is no picture, and sound designers like the immediacy and power of close effects.
Film or Television – most film editors prefer their effects recorded Medium Distant, since most camera shots are Medium Distant or further, and in a pinch you can fake a close perspective by raising the volume. In a perfect world they would have a Close version available as well.
Unfortunately, most commercial libraries are recorded close. Imagine you are trying to use a close dog bark in a scene where the dog is across the yard. It won’t fit.
That’s why at Airborne Sound we record two perspectives: close and medium distant, even if it requires multiple takes.
When you record sounds to match the requirements of your project, you’ll find they fit more easily and require less editing. And, of course, it just sounds right.