
Sound Effects and the Fake Engine Roar

Over the years, the auto industry has steadily honed its craft at creating environmentally sound cars and reducing unwanted noise levels for drivers. As a result, the authentic, organic sound of the engine is masked more and more. For car aficionados who may buy vehicles specifically for the engine roar, this is not necessarily a good thing, and they’ve made that known. The auto industry has responded by layering new technologies on top of the old, attempting to restore the classic engine sounds that so many have come to cherish.

The trend is succinctly described by K.C. Colwell of caranddriver.com in “Faking It: Engine-Sound Enhancement Explained.” Colwell references work done by Yamaha’s Center for Advanced Sound Technologies, hired by Lexus for the launch of the LFA model in 2009. Fascinatingly, the Yamaha involved here is the company that makes musical instruments – violins, guitars, and so on. Lexus contracted Yamaha specifically to “utilize sound as a medium that can achieve a direct link between the driver and the vehicle” (“Yamaha Creates Acoustic Design for Engine of the Lexus LFA Super Sports Car,” archive.yamaha.com). Here, sound is treated as a concrete object, a physical means to affect the mental state of the driver – it is “sound design” in its purest form.

 

Yamaha was chosen because of its expertise in establishing a powerful emotional and performance connection between musicians and their instruments, with the intent of maximizing the musician’s enjoyment. In this case, the vehicle is the instrument and the driver is the musician. Beyond the pleasure of driving an excellent-sounding vehicle that responds to the driver’s acceleration, the added sound also brings a heightened sense of control, allowing drivers to be more “in tune” with their vehicle. As Yamaha states, “Accurately passing on high-grade engine sounds to the driver makes it possible to feel the vehicle’s condition and instantly take the next minute action that is required” (“Yamaha Creates Acoustic Design for Engine of the Lexus LFA Super Sports Car,” archive.yamaha.com). Yamaha refers to this back-and-forth interaction between driver and vehicle through sound as “feedback,” an “interactive loop” that makes the driving experience more pleasurable and exciting.

Colwell smartly compares the cabin of the car to the “hall” of a performance venue and the driver to the “audience.” In addition to Lexus, he mentions BMW as a forerunner in adding recorded engine sound to the driving experience. BMW’s method is to play an exterior-perspective recording of the car’s engine directly through the stereo speakers. Incredibly, the samples are chosen according to the load on the engine and the rpm in real time. As the real sounds of the engine are still somewhat audible, the additional sound through the stereo speakers is described as a “backing track.”

Volkswagen – Soundaktor: Active Sound

So, here is what Volkswagen initiated in 2011. In order to beef up the sound of their engines, they created the “Soundaktor,” which is German for “sound actuator” – i.e., something that creates sound. Essentially, it is a speaker between the engine and the cabin that adds sound to the normal engine noise to give the driving experience a more “authentic,” old-school feeling of power. This is the definition of “active sound” in automobiles – sound through the speakers triggered by real-time actions of the driver. An audio file is housed on the vehicle’s computer and triggered by changes in the throttle. All sound from the Soundaktor plays through this one dedicated speaker, as opposed to other systems that play enhanced engine sound through the car’s stereo speakers. Interestingly, with a bit of digging you’ll find car enthusiasts on forums discussing the best methods for disabling the feature – one user saying he pulled a fuse to kill it as soon as he bought his VW. It seems that some of these connoisseurs dislike the “fakeness” of the added sound, though most drivers, it appears, aren’t bothered enough to worry about the authenticity. A quick search for BMW Active Sound likewise turns up videos providing info on how to disable that system.
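None of the manufacturers publish their triggering logic, but the basic idea – selecting a pre-recorded engine sample from the real-time rpm and throttle position – can be sketched in a few lines of Python. Everything here (the sample names, the rpm bands) is invented for illustration, not taken from any actual system:

```python
# A minimal, hypothetical sketch of "active sound": choosing a pre-recorded
# engine sample based on real-time rpm and throttle position. The sample
# names and rpm bands below are made up; a real system like VW's Soundaktor
# keeps equivalent logic on the vehicle's computer.

def choose_engine_sample(rpm: float, throttle: float) -> str:
    """Return the engine-sound sample to play for the current driving state."""
    if throttle < 0.05:
        return "idle_low.wav"          # coasting: quiet idle loop
    if rpm < 2500:
        return "low_rpm_pull.wav"      # gentle acceleration
    if rpm < 4500:
        return "mid_rpm_growl.wav"     # the "old-school power" band
    return "high_rpm_roar.wav"         # full send

# Simulate a short acceleration run.
for rpm, throttle in [(900, 0.0), (1800, 0.4), (3200, 0.7), (5600, 1.0)]:
    sample = choose_engine_sample(rpm, throttle)
    print(f"rpm={rpm:>4}, throttle={throttle:.1f} -> {sample}")
```

The real systems are subtler – BMW’s, for example, blends samples continuously by engine load rather than switching between discrete bands – but the driver-action-to-sample mapping is the core of the idea.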

Most likely, the general consumer doesn’t even realize they are listening to a replacement engine sound and simply appreciates the experience. Some users in the know, however, wish there were an on/off toggle for the additional sound, which would give them the choice of whether to engage it. As with all these systems, the purpose of the additional audio is to compensate for the muffling of the actual engine sound caused by advancements in soundproofing.

Ford – General Motors – Acura: Active Sound

Cadillac models incorporate Bose sound systems to add noise-canceling technology that rids the cabin of unwanted “road noise,” while simultaneously employing a stereo-based enhancement system akin to Volkswagen’s. As an audio engineer and music producer, I 100% appreciate what these auto sound technicians are doing – they are “cleaning up” the audio of the car’s performance. It is purely analogous to the job of live sound mixers, as well as those in post-production and mixing/mastering music: get rid of the unwanted noise! At the same time, they enhance the choice sounds via the stereo system.

Acura, too, has moved into the fray of vehicle sound design in an impressive way. The moniker for their efforts is Active Noise Cancellation (ANC). This is strikingly similar to the theatre mixing work discussed in a previous post – creating a system that dynamically responds, in real time, to sonic intrusions that may disrupt the performance/driving experience, in order to kill the noise. Acura’s ANC works to cut out low-frequency noise, similar to cutting the bass below 60–100 Hz when mixing an audio track. A bit of Google digging could probably unearth the exact frequencies they are targeting – perhaps up toward 500 Hz, where the “mud” of an audio track tends to live at the meeting of the bass drum and the bass guitar. Regardless, to cut the unwanted bass out of the cabin’s aural experience, Acura uses overhead mics within the cabin that create a reverse-phase (noise-canceling) signal to mute the unwelcome deep tones. ANC is then able to raise the sound levels from the engine to fill in the clean space afforded by noise cancellation. Again, all of this is dynamic, and it can raise the engine sound level within the cabin by up to 4 dB. This audio system is a standard element of the MDX, RLX, TLX, and ILX models. (www.thedrive.com/tech/22834/from-acura-to-vw-bmw-to-porsche-car-companies-are-getting-sneakier-about-engine-sound-enhancement)
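The principle behind that reverse-phase trick – capture the offending tone, flip its polarity, and play it back so the two cancel – is easy to demonstrate numerically. A minimal sketch, using a synthetic 80 Hz “road boom” tone in place of real cabin audio (the frequency and levels are arbitrary choices):

```python
import numpy as np

# Demonstrate phase-inversion noise cancelling on a synthetic 80 Hz "road
# boom" tone. Real ANC systems do this adaptively with cabin microphones;
# here we simply invert the known signal to show why the sum goes silent.
sample_rate = 48_000
t = np.arange(sample_rate) / sample_rate          # one second of samples
boom = 0.5 * np.sin(2 * np.pi * 80 * t)           # the unwanted low-frequency noise
anti_noise = -boom                                # same signal, phase inverted

residual = boom + anti_noise                      # what the ear would hear
print(f"peak before cancellation: {np.max(np.abs(boom)):.3f}")
print(f"peak after cancellation:  {np.max(np.abs(residual)):.3e}")
```

In the real cabin the cancellation is never this perfect – the anti-noise must be estimated and timed against a moving target – which is why the technique works best on the slow, predictable low frequencies Acura targets.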

Incredibly, this technology has now been extended to allow four-cylinder cars to sound like much, much bigger engines, as explained in this video.

From a professional sound design perspective, the challenge of syncing dynamic engine audio samples to be triggered in real time during a live driving experience is enticing. For an audio nut, or a current student, it’s a “hell yeah, this sounds like fun!” proposition – not to mention the earning and career potential of doing sound design for car companies. This is a wide-open field for sound designers. On the flip side, for the consumers who love these vehicles, it appears to be something of a nightmare, as they want the “authentic.” In fact, when googling “car sound pipes,” the first five entries and videos are all about how to dismantle them – as with the forum posts mentioned above. I include a post from Larry Webster on popularmechanics.com here because it is not only exceptionally written but quite telling. Webster, in “The Rise of the Fake Engine Roar,” laments the development of an experience he deems “fake.” First of all, the title says it all – the “Fake Engine Roar.” He references the main drivers of these “fake” sounds: noise muffled by excellent insulation, and environmental regulations. He quotes car buyers who state that the industry is “lying” to them by using sound samples.

While car owners who want the classic noise might appreciate the attempt at improving the aural experience, there is some negative reaction from car lovers – from those who live by their car – who deem the auto industry’s effort a fake, creating a false experience. Whether we think the environmental benefits outweigh the opinions of these car lovers, or whether we lament the loss of the “classic engine sound,” one thing is true: sound design and sound effects continue to play a major role in many types of products, not only on the stage but in vehicles. The use of these sounds transforms the car itself into a performance venue.


Sound Design Founders of the Theatre

The Theatre’s Contribution to Sound Design

As with any human discipline or industry, sound design as a practice and art form developed collectively over time, spurred on by the contributions of many and by the striking visions and passion of leaders in the field. Below are two major contributors to the world of sound design as we know it today: Dan Dugan and Charlie Richmond. Both arose within the theatre world and created solutions to problems they faced – resulting in internationally respected products that we still use today. Many of the major building blocks and tools used in sound design, live and in the studio, were developed in the 1960s and 1970s, and it was in this era that these two sound founders began to make waves.

Dan Dugan

Dan Dugan, an American inventor, audio engineer, and sound recordist, was born March 20, 1943. As a young man of 24, he began working in theater sound for the San Diego National Shakespeare Festival and the American Conservatory Theater. In 1968, the term “Sound Designer” was coined to describe Dugan’s efforts. His first major contribution to sound design, aside from giving the term “sound designer” a reason to exist, is specifically relevant to live performance: the “automatic microphone mixer,” known as the automixer, shown below through several generations of production.

As a sound pioneer, Dan Dugan realized early in his career that he needed to work for and by himself to solve the problems he encountered – new problems were occurring in real time. As reported by Sound and Video Contractor in “AV Industry Icons” (2006), Dugan states: “I realized I had to work for myself … so I built my own studio. It was one of those gigs in ’68 or ’69 that sparked the invention of the automatic mic mixer.” It was frustration with the feedback and noise problems that arise from using multiple microphones in a single live setting, such as a theatre stage, that gave rise to his experiments to improve live sound and the ability to design sound without sonic flaws. The two most important problems to solve were, one, reducing the amplified ambient noise contributed by multiple open microphones and, two, eliminating the howl created as multiple actors/microphones move around the stage into different positions, the output of one channel bleeding into another’s input – i.e., “feedback.”

In the video below, Dugan explains the problems he encountered working on the live production of Hair, in which there were 16 area mics, 10 mics in the band, 9 hand mics, and one wireless mic – all operated by one person on a manual mixer.

Dugan experimented with voltage-controlled amplifiers (VCAs) for several years in the early 1970s to solve the problem of spontaneous feedback and noise buildup, devising a system in which the signals from the stage microphones were summed into a reference. The output of each microphone was then automatically adjusted in real time, depending on how its own input compared to that reference. This enabled the system as a whole to avoid unwanted feedback while also balancing microphone levels. As he explains in Sound and Video Contractor, “I was messing around with logarithmic level detection, seeing what would happen if I used the sum of all the inputs as a reference. That’s when I accidentally came upon the system. It was really discovered, not invented,” he says. “I didn’t really know what I had, just that it worked like gangbusters” (Sound and Video Contractor, 2006).
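Dugan’s “sum of all the inputs as a reference” idea is now widely known as gain sharing: each mic’s gain is its own level divided by the sum of everyone’s levels, so the total system gain stays roughly that of a single open mic no matter how many are live – which is exactly why feedback doesn’t build up as mics open. A toy sketch with invented level numbers (a real automixer measures levels continuously, per audio block):

```python
import numpy as np

# A toy illustration of Dugan-style gain sharing: each channel's gain is its
# own level divided by the sum of all levels, so the gains always sum to 1.0
# (about one open mic's worth of total gain) however many mics are active.
# The level values are invented for the demonstration.

def gain_share(levels):
    levels = np.asarray(levels, dtype=float)
    return levels / levels.sum()    # per-channel gains, summing to 1.0

quiet_stage  = [0.01, 0.01, 0.01, 0.01]   # four mics, only ambience
one_speaker  = [0.80, 0.01, 0.01, 0.01]   # mic 1 has an actor on it
two_speakers = [0.80, 0.75, 0.01, 0.01]   # two actors talking at once

for name, levels in [("quiet stage", quiet_stage),
                     ("one speaker", one_speaker),
                     ("two speakers", two_speakers)]:
    gains = gain_share(levels)
    print(f"{name:12s} -> gains {np.round(gains, 2)} (sum {gains.sum():.1f})")
```

Notice that on the quiet stage the four mics share the gain equally, so the ambient noise never gets four mics’ worth of amplification; when one actor speaks, their mic smoothly takes nearly all of it.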

His two main mixing systems, the Dan Dugan Speech System and the Dan Dugan Music System, are demonstrated here in split-screen during a David Letterman show.

On his website, Dugan explains that his products – the Model D, E, M, and N, the Dugan-MY16, and the Dugan-VN16 – are “accessories” to sound mixing consoles, not mixers in themselves. The products are patched into the insert points (the send and return loops) of each individual channel on an existing console. Thus, mics do not need to be cued, and faders can be left alone unless a tweak is needed. As Dugan writes, “This frees the operator from being chained to the faders.”

It is clear that live sound design would not be the same without Dugan’s pioneering efforts. Dugan remains active, operating Dan Dugan Sound Design in San Francisco, CA. You can check out his products and more at dandugan.com. He has an extensive list of products, all based on and stemming from his original designs and creations. His products have notably been used by the Late Show with David Letterman on CBS, Oprah, Hollywood Squares, WABC in New York, WETA in DC, WVIZ in Cleveland, U.S. presidential debates, ESPN, NBC, CNBC, CBS, Fox Sports, MLB Network, and more.

Charlie Richmond

A contemporary of Dugan’s, Charlie Richmond, born January 5, 1950, is an American inventor who came onto the scene in the 1970s and, like Dugan, began creating solutions to the problems faced by live theater. In 1975, he addressed the need for a mixing console that could take 100 inputs and wrote “A Practical Theatrical Sound Console” for the Audio Engineering Society (AES). In it, Richmond describes a unit that elegantly and economically allows one operator to manage 100 controls at once without the need for computerized assistance. The paper is in the AES online library, where it can be viewed by members or purchased.

Richmond launched Richmond Sound Design in 1972 and was the first to produce and market two new off-the-shelf products for theater mixing: a sound design console, the Model 816, in 1973, and a computerized sound design control system, Command/Cue, in 1985. In addition, he invented the “Automatic Crossfading Device,” trademarked “Auto-Pan,” in 1975. According to the Richmond Sound Design website, the Model 816 was “matrix-equipped and included our patented AUTO-PAN™ programmable crossfaders” – revolutionary at the time. Richmond’s company went on to deploy the Command/Cue computerized audio control system in theater performances and theme park shows internationally and in Las Vegas. In 1991, with the Stage Manager show control software, the company pioneered the use of MIDI to manage multiple media controllers – sound, lighting, lasers, etc. – which became an industry standard for all types of live shows, from Broadway to cruise ships. Since then, Richmond Sound Design has contributed significant sound software and hardware that have greatly expanded the possibilities of live sound design, including the MIDIShowCD in 1994, which provided multichannel sources at a fraction of the cost; the AudioBox Theatre Sound Design Control system; the ShowMan software; and the ShowMan Server Virtual Sound System, which brought compatibility for all of its products to an industry standard.
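The MIDI-based show control mentioned above was eventually standardized as MIDI Show Control (MSC), which wraps show commands in a MIDI System Exclusive message. As a flavor of what travels down the wire, here is a sketch of building an MSC “GO” for a sound cue, following the published SysEx layout; the device ID and cue number are arbitrary example values, and this is illustrative rather than anything specific to Richmond’s products:

```python
# Build a MIDI Show Control "GO" message, following the published MSC SysEx
# layout: F0 7F <device_id> 02 <command_format> <command> <data> F7.
# Device ID 0x01 and cue "12" are arbitrary example values.

MSC_SYSEX_START = 0xF0
UNIVERSAL_REALTIME = 0x7F
MSC_SUB_ID = 0x02
FORMAT_SOUND = 0x10     # "Sound (General Category)" command format
COMMAND_GO = 0x01
SYSEX_END = 0xF7

def msc_go(device_id: int, cue: str) -> bytes:
    """Return the raw bytes of an MSC GO message for the given cue number."""
    return bytes([MSC_SYSEX_START, UNIVERSAL_REALTIME, device_id,
                  MSC_SUB_ID, FORMAT_SOUND, COMMAND_GO]) \
           + cue.encode("ascii") + bytes([SYSEX_END])

message = msc_go(device_id=0x01, cue="12")
print(" ".join(f"{b:02X}" for b in message))   # F0 7F 01 02 10 01 31 32 F7
```

A lighting desk, sound computer, and laser controller can all listen on the same MIDI line and act on their own command formats – which is precisely what made one show-control standard workable across so many kinds of rigs.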

Richmond and Mushroom Studios

Richmond also brought his success with live mixing into studio music mixing by purchasing Mushroom Studios in Vancouver, British Columbia, Canada in 1980. There, Richmond hosted concert musicians scoring feature films, including the film score album of Top Gun. Skinny Puppy, Tom Cochrane, Fear Factory, and Sarah McLachlan were some of the notable acts that recorded there. Richmond sold the studio in 2006. Clearly, talent with sound bleeds over from sound design into music mixing, and sound leaders like Richmond can easily traverse both realms.

Richmond’s Writings

What might be most striking about Richmond’s relationship to sound is his gift with written language and his visionary nature. Software such as GarageBand, which comes free with Mac products today, and professional software such as Logic Pro and Pro Tools, were obviously only a distant dream for sound designers 30 years ago. In Theatre Design & Technology magazine in 1988, Richmond contributed a piece entitled “A Sound Future,” in which he predicts the invention of the Digital Audio Workstation (DAW) that inundates the sound world today. As he writes:

Sound designers have been waiting for a long time for a system which allows us to create soundscapes easily, almost intuitively: a system which would perform as a transparent extension of our desires, a tool which requires no interpretation between wish and result. – Charlie Richmond, 1988

In the same piece, he also predicts the graphically oriented interfaces – buttons and all – that we use today:

Just point at the picture of the deck and click the mouse button and the (graphically represented) reels will start turning, click again and they will stop. Great, but …what about all the different types of loudspeakers? All of a sudden, I start seeing a lot of work for our software people and a delivery date of some time in the 1990’s for a customized system. – Charlie Richmond, 1988

Again, visionary. Richmond goes further to suggest how digital graphics could be used to control the parameters of software and it is reminiscent of the many DAWs we see today:

Maybe we should be able to display a big picture of the loudspeaker representing the output in which we want to increase the volume. We could represent the overall volume of the loudspeaker by changing the overall size (volume!) of the graphic representation. – Charlie Richmond, 1988

Dugan and Richmond have both significantly contributed to the hardware, and hence the software, that enables sound designers today, both live and in the studio, to create in ways never before possible – and perhaps never possible without them. I find it interesting that it was the demands and problems specific to live theatre that propelled Dugan and Richmond to invent new solutions to audio problems that live bands and studio recordists would otherwise still be facing.

Free sound effects pages back by popular demand

Back in the good old days, when our site Shockwave-Sound.com had the “old style” look and feel, and about half as much content as we have on the site now, we used to have a selection of “Free sound effects” pages. It wasn’t really anything very fancy, just a bunch of pages from which users could download sound effect files completely free of charge.

We re-designed and did a complete overhaul of the website in October 2015, and the new-look site had a more focused approach – less clutter, more straight to the point – centered on the high-quality sound effects and royalty-free music that we wanted to focus on.

Turns out that people are missing those old free sound effects pages. Having received numerous calls for their return, we’ve listened and finally brought them back. Here they are.

Just a reminder – the sounds available on these free sound effects pages are of varying quality levels, some good, some pretty bad. We didn’t make the sounds. We just gathered them and described them for you. We also cannot grant any license for them, other than strictly personal / home use. If you want sound effects of consistently high quality and with a real license to use them in your own productions, then be sure to buy the professional level sound effects which you can find by searching or browsing from our main home page.

We currently have the following pages of completely free sound effects:

We hope you’ll get some use, or just some pleasure or amusement, out of these sounds. Remember — home use only. No license granted for these.

 


Harnessing the Power of Sound: Behavior and Invention

Sound is a force of nature with its own special and unique properties. It can be used artistically to create music and soundscapes, and it is a vital part of human and animal communication, allowing us to develop language and literature, avoid danger, and express emotions. In addition, understanding and harnessing the unique properties of sound has resulted in some surprising and fascinating inventions and technologies. Below are some interesting notes on the behavior of sound and some novel technological uses – as a weapon and, in contrast, in medicine and health care.

 

The Speed of Sound
Sound exists when its waves reverberate through objects, pushing on molecules that then push neighboring molecules, and so on. The speed at which sound travels is interesting, as it behaves in the opposite manner to liquid. While the movement of liquid slows down depending on the density of the material it is trying to pass through – cotton as opposed to wood, for example – sound actually speeds up in denser material. Sound travels at 331 m/s through air (21% oxygen, 78% nitrogen), 1,493 m/s through water, and a whopping 12,000 m/s through diamond.
This behavior is also evident in how quickly sound passes through the human body – generally around 1,550 m/s – and much more quickly through skull bone, at 4,080 m/s, bone being much denser than soft tissue. Interestingly, the average speed through the human body is very similar to that of water, which makes sense, because human beings are mostly made up of water.
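Using the figures above, it is easy to see what these speeds mean in practice – for example, how long sound takes to cross one kilometre of each material:

```python
# How long does sound take to travel 1 km through each medium, using the
# speeds quoted above?
speeds_m_per_s = {
    "air":     331,
    "water":   1493,
    "body":    1550,
    "skull":   4080,
    "diamond": 12000,
}

distance_m = 1000
for medium, speed in speeds_m_per_s.items():
    print(f"{medium:8s}: {distance_m / speed * 1000:7.1f} ms")
# air takes about 3 seconds per kilometre; diamond only ~83 milliseconds.
```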

Sound in a Vacuum

 


Not only does the density of a material increase the speed of sound – sound needs matter to be present in order to exist in the first place, because sound exists only when its waves reverberate through objects. Without matter present, sound does not exist, such as in a vacuum. This makes sense, as a vacuum is an area of space that is completely devoid of matter and therefore has no molecules. This video demonstrates the effect of a vacuum on sound: as the air is sucked out of the bell jar, the bell can no longer be heard.
Sound is in the Ear of the Earholder

 

For humans and animals, the perception of sound waves passing through their ears depends on the shape of the ear, which shapes the vibrations. The shape of an animal’s outer ears determines the range of frequencies it can hear. Elephants have flat and broad ears, which allow them to hear the very low frequencies they use to communicate. Lower frequencies are associated with large surface areas, such as bass drums, so this makes sense. Mice have round ears, which give them sensitivity to sounds coming from above. Again, this makes sense, as they are tiny and close to the ground, and all threats come from above: hawks wanting to eat, cats hunting, humans screaming and jumping on chairs, and so on. The tall ears of rabbits make them sensitive to sounds traveling horizontally – obviously so they know when to jump. Owls work their famous head pivot to create a precise listening experience while checking for prey and threats. Deer avoid predators with muscles in their ears that allow them to point in different directions.
Sound as a Weapon
The Long Range Acoustic Device (LRAD) is a machine used by law enforcement, government agencies, and security companies to send messages and warnings over very long distances at extremely high volumes. LRADs are used to keep wildlife away from airport runways and nuclear power facilities. The LRAD is also used for non-lethal crowd control. It is effective for crowd control because of its very high output, which can reach 162 decibels – well above 130 decibels, the threshold for pain in humans. It is very precise and can send a “sound beam” between 30 and 60 degrees wide at 2.5 kHz, scattering crowds caught within the beam. Those standing next to it or behind it might not hear it at all, but those who do report feeling dizzy, with symptoms of migraine headaches. This is called acoustic trauma, and depending on the length of the exposure and its intensity, damage to the eardrum may result in hearing loss. Since 2000, the LRAD has been used in many instances of crowd control around the world, and even against pirates attempting to attack cruise ships.
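Because decibels are logarithmic, the gap between the LRAD’s 162 dB and the 130 dB pain threshold is far bigger than the numbers suggest. A quick back-of-the-envelope conversion:

```python
# Decibels are logarithmic: every 10 dB step is a 10x increase in sound
# intensity. Compare the LRAD's quoted 162 dB with the ~130 dB pain threshold.
lrad_db = 162
pain_threshold_db = 130

intensity_ratio = 10 ** ((lrad_db - pain_threshold_db) / 10)
print(f"{lrad_db} dB is about {intensity_ratio:.0f}x the intensity "
      f"of {pain_threshold_db} dB")
# -> roughly 1585x the intensity of the pain threshold
```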
Almost humorously, high-pitched alarms can also be used to deter teenagers from loitering around shops or engaging in vandalism and drug activity. The “teenage repellent” has been used throughout Europe and the US. Since teenagers have a higher frequency range of hearing than adults, its 17.4 kHz emission targets them specifically, while adults are spared the annoyance. Critics state that these devices unfairly target a specific group (youth) and are therefore discriminatory.
Sound Levitation
Sound levitation, or acoustic levitation, uses the properties of sound to make solids, liquids, and gases actually float. It uses sound wave vibrations traveling through a gas to balance out the force of gravity, creating a situation in which objects can be made to float. Dating back to the 1940s, the process uses ultrasonic speakers to manipulate air pressure, exploiting points in the sound wave that counteract the force of gravity. A “standing wave” is created between a “transducer,” such as a speaker, and a reflector. The balancing act occurs when the upward pressure of the sound wave exactly equals the force of gravity. Apparently, the shape of a liquid such as water can even be changed by altering the harmonics of the frequencies, resulting in star-shaped droplets.
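The geometry behind this is straightforward: in a standing wave, the pressure nodes – the pockets where a droplet can sit – repeat every half wavelength. Assuming a 40 kHz ultrasonic transducer (a common choice for hobbyist and lab levitators, though not specified above) in room-temperature air:

```python
# In a standing wave, the pressure nodes where droplets can levitate sit
# half a wavelength apart. Assuming a typical 40 kHz ultrasonic transducer
# in room-temperature air (speed of sound ~343 m/s):
speed_of_sound = 343.0        # m/s in air at ~20 C
frequency = 40_000            # Hz, an assumed levitator transducer frequency

wavelength = speed_of_sound / frequency
node_spacing = wavelength / 2
print(f"wavelength:   {wavelength * 1000:.2f} mm")   # ~8.6 mm
print(f"node spacing: {node_spacing * 1000:.2f} mm") # ~4.3 mm between trap points
```

That millimetre-scale spacing is why acoustic levitation suits droplets and small particles rather than larger objects.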
 
 
In terms of practical uses, sound levitation is improving the development of pharmaceuticals. Manufactured medicines fall into two categories, amorphous and crystalline. Amorphous drugs are absorbed into the body more efficiently than crystalline drugs; they are therefore ideal, because a lower dose can be used, making them cheaper to produce. So, while a solution evaporates during manufacturing, acoustic levitation is used because it helps prevent the formation of crystals: the substance never touches a physical surface. Acoustic levitation, in other words, stops substances from crystallizing, creating a much more efficient method of drug creation. In addition, sound levitation essentially creates a zero-gravity environment and is therefore excellent for cell growth. Levitating cells ensures that a flat shape is maintained, which is best for a growing cell absorbing nutrition. It could also be used to create cells of the perfect size and shape for individual patients.

Sound behaves in its own fashion and is a phenomenon that can be used in force and in healing. It taps into the physics of the natural world and, through its interactions, allows for all sorts of human invention. Surely, sound will continue to be researched and pursued as a powerful natural element to be used in a myriad of new ways.

 


Google Closed Captions Sound Effects

Google, ever the inventor of new technology and the owner of YouTube.com, has broadened its work into the area of sound effects, specifically through the audio captioning on the YouTube network. Traditionally, “closed captions” – on-screen text for those with hearing challenges – provided dialog and narration from the audio. Now, however, Google has rolled out technology that can recognize the waveforms of different types of sounds and include them in video captions, dubbed “Sound Effects Captioning.” The aim is to convey as much of a video’s sound impact as possible, much of which is carried by the ambient sound, above and beyond the voice.

In “Adding Sound Effect Information to YouTube Captions” by Sourish Chaudhuri, on Google’s own research blog, three different Google teams – Accessibility, Sound Understanding, and YouTube – utilized machine learning (ML) to develop a completely new technology: a sound captioning system for video. To do this, they used a Deep Neural Network (DNN) model, and three specific steps were required for success: accurately detecting various ambient sounds, “localizing” each sound within its segment, and placing it in the correct spot in the caption sequence. They had to train their DNN on a huge labeled data set – for example, acquiring or generating many sounds of a specific type, say “applause,” to teach the machine.

Interestingly, and smartly, the three Google teams decided to begin with three basic sounds that are among the most common in human-created caption tracks: [MUSIC], [APPLAUSE], and [LAUGHTER]. They report that they made sure to build an infrastructure that can accommodate more specific sounds in the future, such as types of music and types of laughter. They explain a complex system of sound classifications that the DNN can recognize even when multiple sounds are playing at once – the ability to “localize” a sound within a wider mix of simultaneous audio – which, apparently, they were successful in achieving.
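Google doesn’t publish the model itself, but the output stage they describe – per-segment class scores turned into caption tokens – can be sketched. The segment times and probabilities below are invented stand-ins; a real system would get them from the trained DNN:

```python
# A hypothetical sketch of the captioning output stage described above:
# a trained DNN scores each video segment for [MUSIC], [APPLAUSE] and
# [LAUGHTER], and segments whose score clears a threshold get a caption
# token. All numbers here are invented stand-ins for model output.
LABELS = ["[MUSIC]", "[APPLAUSE]", "[LAUGHTER]"]
THRESHOLD = 0.7

# (start_seconds, end_seconds, scores in LABELS order)
segments = [
    (0.0,  4.0, [0.95, 0.02, 0.01]),   # opening theme music
    (4.0,  6.0, [0.10, 0.88, 0.05]),   # audience applause
    (6.0,  9.0, [0.05, 0.12, 0.30]),   # nothing confident enough to caption
    (9.0, 11.0, [0.03, 0.20, 0.91]),   # a joke lands
]

for start, end, scores in segments:
    for label, score in zip(LABELS, scores):
        if score >= THRESHOLD:
            print(f"{start:5.1f}-{end:5.1f}s  {label}")
```

The interesting design choice is the threshold: as the teams found (below), viewers tolerate occasional misses far better than confidently wrong captions, which argues for setting it conservatively.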

After being able to recognize a specific sound such as laughter, the teams’ next task was to figure out how to convey this information to the viewer in a usable way. While they do not specify which approach they settled on, the choices seem to be: devote one part of the screen to voice captioning and another to sound captioning, interleave the two, or place the sound-effect captions only at the ends of sentences. They were also interested in how users felt about the captions with the sound off and, interestingly, discovered that viewers were not displeased by errors, as long as the sound captions communicated the basic information the majority of the time. In addition, listeners who could hear the audio had no difficulty ignoring any inaccuracies.

Overall, this new system of automatically capturing sounds for display as closed captions – via a computer system rather than a human working by hand – looks very promising. And, as Google has shown time and time again, they have no problem constantly evolving products that succeed and that users value. They stress that this automatic captioning of sound specifically increases the “richness” of user-generated videos. They believe the current iteration is a basic framework for future improvements in sound captioning – improvements that may well be driven by input from users themselves.