Shockwave-Sound Blog and Articles
Sound Oddities, part 1

TV Station’s Viewers Irate over Choice of Music and Sound Effects

Scott Schaffer of WNEP, ABC's affiliate in Pennsylvania, launches into the topic of sound effects during the segment "Talkback 16," in which the station responds to viewers' feedback. In this particular segment, the "critical topics" raised by some WNEP callers relate to the station's use of sound effects and choice of music. One caller addresses his notion of the "critical issue of the day": "Yeah that little thing you do between the, eh, commercials and the program… you know that goes DADANNNDANDNADNDNN DANN .. the part goes DANDANDNNDA ..", and apparently he wants it all edited out except the final part.

The host further explains that amidst other calls regarding the summit in Helsinki and severe weather at home, another caller instead wanted to address the bell sound effect used throughout their segments: "Instead of having that little bell going Ding Ding, have like a big church bell … BONGGG BONGGG BONGGG .. not a little deeeng deeeng deeeng deeeng .. big church bell … BONGGG." The sound effects wars apparently continued, as Schaffer shared a response from a caller who was a bit irate that the previous calls had warranted air time, stating that someone like herself, "who can actually think," should be chosen over others who apparently can only talk in sound effects.

Sound, Music, and Voice in Google Assistant’s Personality

Google Assistant, Google's artificially intelligent virtual assistant, is in constant evolution in the development of its human persona. While it may appear "alive" at times, it is, of course, being created and developed by people, as the following video of a brainstorming session with developers shows. Designers, engineers, illustrators, and yes – sound designers – make up the Google Assistant Personality Team, a huge cross-sectional group trying to bring this technology to life and make people's lives better. The team uses sound, music, and voice to enhance not only the experience but also the usefulness of this AI assistant, used on 500 million devices worldwide. This is discussed in "Exclusive: Inside Google Assistant's Personality Team," a video on the Yahoo Finance website hosted by Yahoo Finance tech critic David Pogue.

The Google team has plenty of opportunities to exercise their creative prowess, as the questions users might ask are nearly limitless. For example, as Drew Williams (Writer – Personality Features) explains, if a user wants to call Santa and tells the AI to do so, the user could experience calling the North Pole, having an elf answer, and so on, to enrich the experience. Clearly, for any of that to happen, sound effects and sound designers are needed to bring sonic "realness" to the experience. The article discussed here only mentions "sound designers" once, but we can rightly assume that in order for an "elf" to answer the phone, someone is going to need to tweak the pitch of the voice to make it sound tiny, and/or employ "North Pole"-type sound effects such as Santa's laugh and the sounds of toy-making in the background. And, if this isn't being done – it should be.

In addition to sound elements, music too plays a role. At 1:15 in the video on Yahoo Finance we see an instance of a developer suggesting a musical interlude to make a point. This is Elena Skopetos (Character Editor), who is in charge of bringing life to Google Assistant's opinions and thoughts, imbuing the AI with a definitive character. As she explains, for example, if a user asks "Do you want to build a snowman?" the AI responds sarcastically, "Frozen came out in 2013 – let it go," referencing, of course, the film's hit song "Let It Go." Again, the use of music.

The actual voice of the AI itself is also highly important. As Andy Pratt (Features Lead) and Ryan Germick (Principal Designer) explain, the voice and personality are meant to be helpful and useful, with a bit of humor, but not snarky like Siri. Pratt states that he thinks of Google as a massive library and Google Assistant as a "cool librarian." On a side note, to ensure that the humor of the voice is culturally palatable, Google employs developers from around the world for their intimate cultural knowledge of their specific locales – a process they call "trans-creating." The video then moves on to the importance of "voice" itself in technology. Lilian Rincon (Product Management Director) and the host make the point that "voice" is playing a greater and greater role in controlling and navigating technology. For example, in areas of the world where literacy, typing specific dialects, and access to computing are a concern, "voice," both from the user and from the device itself, can punch through these difficulties: no need to type and read, rather just ask and listen.

The Ecological Impact of Sound Pollution

Human beings are great at many things, including polluting the heck out of any environment in which we live. I’ve personally never considered “noise pollution” beyond the noise violations my roommates and I received in college for blasting Jimi Hendrix at 2 am in the dorms or the jackhammer at 7 am on neighborhood streets when work crews are starting their work days. Those are noise assaults on people by people. Apparently, however, there is an entire realm of human noise pollution that negatively impacts the natural environment along the lines of other forms of human intrusion like chemicals, exhaust, plastics, waste etc.

In "Rock n Roll is Noise Pollution with Ecological Implications" on The Conversation (theconversation.com), Brandon Barton of Mississippi State University smartly covers the effects of noise pollution on the natural environment. Barton uses AC/DC's track "Rock and Roll Ain't Noise Pollution" to set up his piece. Barton's research group recently tested AC/DC's claim to see whether rock music, or human noise such as highway traffic, has an effect on the surrounding environment. His conclusion is that yes, indeed, rock 'n' roll and other loud sounds can physically affect the world around them.

Noise pollution does not just affect single animals in its path but also the ecosystem as a whole, when the behavior and interactions of compromised plants and animals disrupt the system. Barton gives examples of how noise from mining and drilling affects wildlife and marine life. Barton's group specifically studied lady beetles, namely the Asian lady beetle, classified as Harmonia axyridis. Lady beetle is another name for the common ladybug. These beetles are used by soybean farmers as a natural biological pest control to decrease the use of pesticides.

As Barton explains, ladybugs are an excellent form of pest control because they eat pests called aphids and lessen the need for pesticides. They are an extremely effective part of pest control because, as cute as they might look, they are predators with huge appetites. They are important to the current environment and the interaction between humans and the natural world.

The ladybug is an excellent choice for determining the effects of noise pollution, because if the beetle's normal behavior is compromised it cannot serve its pest-control function, and the soybean yield suffers. With the help of fellow AC/DC fans and academic colleagues, Barton experimented by putting beetle larvae with plenty of aphids to eat (again, aphids are the critters they generally feed on, thus protecting the soybeans) and systematically played sounds and music tracks at 95-100 decibels, including sound effects of city environments. The results are conclusive – the beetles in silence brought the aphid population down to essentially zero, which is normal, but when "Back in Black" by AC/DC was played for two weeks straight, the beetles' effect on the aphids was reduced by 50%. Barton further demonstrated the effect on the "system" by repeating the experiment with the beetles, the aphids, and growing plants. As expected, pest abundance increased – 40 times over, in fact – when the beetles were under the duress of rock music. Barton also mentions that country and folk music, as opposed to rock, had no effect.

Toy Gun with Sound Effects used in Hold Up

Jacob Adelman of the Philadelphia Inquirer (philly.com) reported on July 21, 2018, that a young man in his 20s had held up a 7-Eleven in Northeast Philadelphia. Around 1 a.m., the suspect entered the store at the intersection of Bustleton Avenue and Knorr Street. Police report that he demanded cash and scratch-off lottery tickets. The suspect then reportedly abandoned the tickets and went straight for the cash as the attendant opened the register, fleeing with around $500.

The suspect left his weapon at the scene, where he had placed it on the counter while grabbing the cash. Police report that the gun was actually a toy semiautomatic handgun, adding that the toy also came with gun sound effects. I'm not sure what is strangest in this situation – I suppose the fact that the sound effects were mentioned at all. Apparently, they weren't used as part of the hold-up and most likely wouldn't have been very effective in helping the thief achieve his goal. But no one is complaining about the gratuitous sound effects publicity – sfx are everywhere. By the way, the suspect had a teardrop tattoo near his left eye and was wearing a black shirt with an Adidas logo. As of now, police are still searching; I assume they have fingerprints, but there is no mention of it.

Temp Tracks: A Movie’s Secret Score

Selecting 'temp music' tracks is an essential part of the overall scoring process in filmmaking. Yet its importance is often overlooked.
In this article, I explain exactly what temp music is and the role it plays in everything from low-budget short films to major Hollywood features.

A Temporary Definition

First of all, let's look at the definition of the term 'temp music'. A temp track is temporary music (sometimes referred to as 'scratch music') chosen by a movie's music editor (or indeed, the director themselves) for key scenes in a feature film. This temporary music is intended as a guide in early previews of the film to suggest the mood or essence of a particular scene. As music has the power to alter emotional responses to the narrative, it's important that this temporary music precisely depicts the director's vision and intentions for the scene. It must also be music that can be reinterpreted at a later stage by the composer and turned into an actual score for the film.

Stock Music & Pop Music

Many directors may already have chosen some temporary music at an early stage. Even in pre-production. However, more often than not, the music editor will turn to a vast and varied collection of Library Music (Production Music/Stock Music) to compose a temporary score for the film.
After all, with the right experience and knowledge, the editor can quickly bring to mind the perfect track for the scene, chosen from a vast array of ready-made stock music mood tracks. That's why Library Music is often the quick and effective solution to the in-depth and often time-consuming task of supplying a temporary score.

Other solutions may include published music or film music from other movies. Especially in sequels, where the temp music will often be ripped from the previous release. As in the case of Lethal Weapon 4 where the temp score was taken from Lethal Weapon 1, 2 and 3!

Hollywood Wish List

So, for temporary music, the music editor and director have a wide range of music to choose from.

They can use vast amounts of Library Music, hit songs and other movie soundtracks. After all, it's never going to be heard outside the studio walls, so they can really allow their imagination to run amok and make some 'wish list' choices. In fact, there have been occasions where this wish list has become a reality.

Quentin Tarantino chose the track 'Stuck in the Middle with You' to accompany the notorious 'ear slicing' scene in his early feature, Reservoir Dogs. It only became clear at a later stage that a publishing deal for the song might not be granted – until Tarantino himself stepped in and hired another music supervisor who could guarantee the Stealers Wheel track for his film.

When editing Apocalypse Now, director Francis Ford Coppola scored the entire film with 'temporary' tracks by the rock group The Doors. All that remained of this temporary score by the time the movie was released was The Doors' track 'The End', used to chilling effect alongside the napalm attack on the jungle outpost at the beginning of the film.

Stanley Kubrick was inspired by classical music by Richard Strauss and Gyorgy Ligeti to bring alive his vision of Man's evolution in 2001: A Space Odyssey. He was so moved by the results that this temporary music became the score in the final cut of the film.

Low Budget Independents

Of course, not everyone is producing a Hollywood style blockbuster movie. Most productions are small independent films where the director will communicate their ideas for the score directly with a composer. The ‘temp score’ may indeed be a rough demo produced by the composer. Something that includes his ideas, but with pared-down instrumentation. On approval, the composer will set to work bringing the score to life. Perhaps replacing sampled or synthesized sounds with real instruments and orchestration.

Imitation & Limitation

So those are most of the ways that temporary music finds its way into early pre-production versions of a movie. However, it can be an area of contention and not always just a cut and dried process. In fact, temp music can be very subjective indeed.

I was once working on a score for a film when the director presented me with a piece of temp music for a particular scene. It was taken from the soundtrack of one of the Star Trek movies. A few days later I sent him the cue that I had produced. I sensed he wasn’t entirely happy and asked him if my cue was what he wanted. “Yes,” he replied, “it’s exactly what I wanted and that’s the problem!”

Without realizing it, I had mimicked the Star Trek music to such a degree that the two cues sounded almost identical. This is a very common pitfall with temp music, and from a composer's viewpoint it's a two-stage process: get the music sounding similar, but then step back and add your own essence to the piece. It's surprising what new directions may be revealed. A few days later I sent the director a second draft and he was entirely happy with the finished piece. And so was I. It had taught me a valuable lesson about how to reinterpret temp music into new compositions and surprise the director with an extra layer of ingenuity.

Temp music can also prove to be extremely limiting on the composer’s ability to use their imagination. Another time I was sent an edited version of a short film. While watching the film I had many ideas for the type of music I would like to compose. A week later, the director sent me some temporary music he had chosen from his personal record library. This music was nothing like I had imagined. In fact, almost the polar opposite! In those situations, the composer must come forward to see if a compromise can be reached. Perhaps the director will eventually appreciate and enjoy the new freedom that a second person’s input can offer. We eventually agreed to make some changes and the results were better than we both imagined they would be at the time!

Marvelous Music?

A point could be made here about current big-budget movie scores such as the Marvel franchise. Clearly, the scores in these movies are designed to be nothing more than audio wallpaper these days. The music rises and falls along with the action, but never breaks out as a stand-alone feature unless a piece of published music or a hit song is somehow crowbarred into the soundtrack.

But this hasn’t always been the case with Marvel.

Remember Danny Elfman's music for Spider-Man back in 2002? Well, this just may be the problem. If that terrific score has been used as temp music ever since, what we are now faced with is a decades-old imitation of the perfect superhero music. There is only one Danny Elfman, and endless photocopies of photocopies won't ever produce another brilliant composer. Please, Marvel – it's your duty to try something new. I'm pretty sure that you have the available budget by now!

Back To The Studio

Meanwhile, back in the studio, with all these available sources of temp music neatly edited into a score, the film is ready for early screenings to studio executives or test audiences. And, of course, the temp score serves as inspiration for the actual score, to be produced at a later time by the chosen composer.

As an example, the temporary music for the original Star Wars test showings was The Planets by Gustav Holst. It's easy to hear how this resulted in the eventual rousing orchestral score by composer John Williams. Similarly, music by Irish singer Enya was used as temporary music for key scenes in Titanic, which then inspired James Horner's soaring Celtic tones in the film's final cut.

Temporary Music Credit?

Just a note here on temp music that may be a subject for discussion. A film score composer is being asked to imitate (dare I say, plagiarize?) a piece of music that the director and/or music editor has decided fits perfectly with the emotional arc of the scene. Yet that temporary piece of music is then discarded and never credited when an imitation has been made. Does that seem a little unfair to the composer & producer of the original temporary track?

This is perhaps where a general usage/single payment license seems to be the perfect solution. This way the composer/producer and publisher of the temporary track will receive a payment for their temporary placement in a film. That’s mutually beneficial and seems only fair. Even if no credit is given in the actual film itself.

A Final Imitation

So finally with the score completed, the movie is yet another stage closer to its final release date. The temp music has done its job as the secret ‘invisible’ score. A temporary music bed that has allowed the director, music editor, music supervisors and composer to work towards a common goal. Communicating their ideas through music in order to get the best score and soundtrack the film could possibly have.

Simon Power
As Dream Valley Music, Simon Power has scored a number of short films with his music being placed in feature films such as Chamber’s Gate, Pickings and Ouija 3.

Sound Effects and the Fake Engine Roar

Over the years, the auto industry has steadily honed its craft at creating environmentally sound cars and reducing unwanted noise levels for drivers. As a result, the authentic, organic engine sound is masked more and more. For car aficionados who may buy vehicles specifically for the engine roar, this is not necessarily a good thing, and they've made this known. The auto industry has responded by layering on new technologies that attempt to restore the classic engine sounds so many have come to cherish.

The trend is succinctly described by K.C. Colwell of caranddriver.com in "Faking It: Engine-Sound Enhancement Explained." Colwell references work done by Yamaha's Center for Advanced Sound Technologies, hired by Lexus for the launch of the LFA model in 2009. Fascinatingly, the Yamaha involved here is the company that creates musical instruments – violins, guitars, etc. Lexus contracted Yamaha specifically to "utilize sound as a medium that can achieve a direct link between the driver and the vehicle" (archive.yamaha.com, "Yamaha Creates Acoustic Design for Engine of the Lexus LFA Super Sports Car"). Here, sound is utilized as a concrete object, a physical means to affect the mental state of the driver – it is "sound design" in its purest form.

Yamaha was chosen because of their expertise in establishing a powerful emotional and performance connection between musicians and their instruments, with the intent of maximizing the musician's enjoyment. In this case, the vehicle is the instrument and the driver is the musician. Beyond the pleasure of driving an excellent-sounding vehicle that responds to the driver's acceleration, the additional sound element also adds a greater sense of control, allowing the driver to be more "in tune" with their vehicle. As Yamaha states, "Accurately passing on high-grade engine sounds to the driver makes it possible to feel the vehicle's condition and instantly take the next minute action that is required" ("Yamaha Creates Acoustic Design for Engine of the Lexus LFA Super Sports Car," archive.yamaha.com). Yamaha refers to this back-and-forth interaction between the driver and vehicle through sound as "feedback" and an "interactive loop," which makes the driving experience more pleasurable and exciting.

Colwell smartly compares the cabin of the car to the "hall" of a performance venue and the driver to the "audience." In addition to Lexus, he mentions BMW as a forerunner in adding recorded engine sound to the driving experience. BMW's method is to play an exterior-perspective recording of the car's engine directly through the stereo speakers. Incredibly, the samples are chosen according to the load on the engine and the rpm in real time. As the real sounds of the engine are still somewhat audible, the additional sound through the stereo speakers is described as a "backing track."
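To make the idea of load- and RPM-indexed samples concrete, here is a minimal Python sketch of how such a "backing track" might be selected. The sample names, RPM breakpoints, and blending rule are purely illustrative assumptions on my part, not BMW's actual system.

# Minimal sketch of choosing an engine-sound sample by RPM and load.
# The file names and breakpoints below are hypothetical.
SAMPLE_BANK = {
    1500: "engine_idle.wav",
    3000: "engine_mid.wav",
    5000: "engine_high.wav",
}

def pick_samples(rpm: float, load: float):
    """Return the two nearest samples plus a blend ratio and playback gain.

    `load` (0.0-1.0, e.g. throttle position) scales the gain so the backing
    track swells as demand on the engine rises.
    """
    points = sorted(SAMPLE_BANK)
    lower = max((p for p in points if p <= rpm), default=points[0])
    upper = min((p for p in points if p >= rpm), default=points[-1])
    blend = 0.0 if upper == lower else (rpm - lower) / (upper - lower)
    gain = 0.2 + 0.8 * load   # never fully silent, louder under load
    return SAMPLE_BANK[lower], SAMPLE_BANK[upper], blend, gain

print(pick_samples(rpm=3800, load=0.6))
# -> ('engine_mid.wav', 'engine_high.wav', 0.4, ~0.68)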

Volkswagen – Soundaktor: Active Sound

So, here is what Volkswagen initiated in 2011. In order to beef up the sound of their engines, they created the "Soundaktor," which is German for "sound actuator" – i.e. something that creates sound. Essentially it is a speaker between the engine and the cabin that adds sound to the normal engine noise, bringing a more "authentic," old-school sense of power to the driving experience. This is the definition of "active sound" in automobiles – sound through the speakers triggered by real-time actions of the driver. An audio file is housed on the vehicle's computer and triggered by changes in the throttle. All sound from the Soundaktor is played through one dedicated speaker, as opposed to other systems that play enhanced engine sound through the car's stereo speakers. Interestingly, with a bit of digging you'll find car enthusiasts on forums discussing the best methods for disabling the function – one user saying he pulled a fuse to disable it as soon as he bought his VW. It seems that some of these connoisseurs don't like the "fakeness" of the added sound, though it appears most drivers aren't bothered enough to worry about the authenticity. A quick search for BMW Active Sound turns up videos all providing info on how to disable the system.
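As a rough illustration of the trigger side of such a system – a stored audio file fired by throttle changes and routed to one dedicated speaker – here is a hedged Python sketch. The threshold, polling rate, file name, and play_on_channel() helper are hypothetical stand-ins, not Volkswagen's implementation.

import time

THROTTLE_CHANGE_THRESHOLD = 0.05        # ignore tiny pedal fluctuations
SOUND_FILE = "engine_enhancement.wav"   # the stored audio file (illustrative name)

def play_on_channel(sound_file: str, channel: str, gain: float) -> None:
    # Stand-in for the firmware call that drives the dedicated transducer.
    print(f"{channel}: {sound_file} @ gain {gain:.2f}")

def run(read_throttle, poll_hz: float = 50.0) -> None:
    """Poll throttle position (0.0-1.0) and retrigger the sample on change."""
    last = read_throttle()
    while True:
        now = read_throttle()
        if abs(now - last) >= THROTTLE_CHANGE_THRESHOLD:
            # Louder enhancement the harder the pedal is pressed.
            play_on_channel(SOUND_FILE, channel="soundaktor", gain=now)
            last = now
        time.sleep(1.0 / poll_hz)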

Most likely, the general consumer doesn't even realize they are listening to a replacement engine sound and simply appreciates the experience. Some users in the know, however, wish there were an on/off toggle for the additional sound, which would give them the choice of whether to engage it. As with all these systems, the purpose of the additional audio is to compensate for the muffling of the actual engine sound caused by advances in soundproofing.

Ford – General Motors – Acura: Active Sound

Cadillac models incorporate Bose sound systems that add noise-canceling technology to rid the cabin of unwanted "road noise" while simultaneously employing a stereo-based system akin to Volkswagen's. As an audio engineer and music producer, I 100% appreciate what these auto sound technicians are doing – they are "cleaning up" the audio of the car's performance. It is directly analogous to the job of live sound mixers, as well as those doing post-production and mixing/mastering music – get rid of the unwanted noise! At the same time, they enhance the choice sounds via the stereo system.

Acura, too, has moved into vehicle sound design in an impressive way. The moniker for their efforts is Active Noise Cancellation (ANC). This is eerily similar to the theatre mixing work discussed in a previous post – a system that dynamically responds, in real time, to sonic intrusions that may disrupt the performance/driving experience, in order to kill the noise. Acura's ANC works to cut out low-frequency noise, similar to cutting the bass below 60-100 Hz when mixing an audio track. A bit of Google digging could probably unearth the exact frequencies they are targeting – perhaps it's around 500 Hz, where the "mud" of an audio track tends to live, at the meeting of the bass drum and the bass guitar/element. Regardless, to cut the unwanted bass out of the cabin's aural experience, Acura uses overhead mics within the cabin that create a reverse-phase (noise-canceling) signal to handle and mute the unwelcome deep tones. At this point, ANC is able to increase the sound levels from the engine to fill in the now-clean space afforded by noise cancellation. Again, all of this is dynamic and works to raise the engine sound level within the cabin by up to 4 dB. This audio system is a standard element of the MDX, RLX, TLX, and ILX models. (www.thedrive.com/tech/22834/from-acura-to-vw-bmw-to-porsche-car-companies-are-getting-sneakier-about-engine-sound-enhancement)
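To show the reverse-phase idea in its simplest form, here is a minimal NumPy sketch: keep only the low band of a cabin-mic signal and invert it, so that playing it back cancels the rumble. The FFT brickwall filter and the 120 Hz cutoff are illustrative simplifications of my own – real ANC systems use adaptive filters and have to account for speaker-to-ear latency.

import numpy as np

def anti_noise(mic_signal: np.ndarray, sample_rate: int, cutoff_hz: float = 120.0) -> np.ndarray:
    """Return the polarity-inverted low-frequency content of mic_signal."""
    spectrum = np.fft.rfft(mic_signal)
    freqs = np.fft.rfftfreq(len(mic_signal), d=1.0 / sample_rate)
    spectrum[freqs > cutoff_hz] = 0.0          # keep only the low band
    low_band = np.fft.irfft(spectrum, n=len(mic_signal))
    return -low_band                           # phase inversion

# Toy check: a 60 Hz "engine rumble" plus its anti-noise sums toward silence.
sr = 8000
t = np.arange(sr) / sr
rumble = np.sin(2 * np.pi * 60 * t)
residual = rumble + anti_noise(rumble, sr)
print(round(float(np.max(np.abs(residual))), 6))   # ~0.0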

Incredibly, this technology has now been taken so far that four-cylinder cars can be made to sound like much bigger engines, as explained in this video.

From a professional sound design perspective, the challenge of syncing dynamic engine audio samples to be triggered in real time during a live driving experience is enticing. For an audio nut, or a current student, it's kind of like "hell yeah, this sounds like fun!" – not to mention the earning and career potential of doing sound design for car companies. This is a wide-open field for sound designers. On the flip side, for the consumers who love these vehicles, it appears to be sort of a nightmare, as they want the "authentic." In fact, when googling "car sound pipes," the first five entries and videos are all about how to disable them – as with the forum posts mentioned above. I include a post from Larry Webster on popularmechanics.com here because it is not only exceptionally written but quite telling. In "The Rise of the Fake Engine Roar," Webster laments the development of an experience he deems "fake." First of all, the title says it all – the "Fake Engine Roar." He references the main contributors to these "fake" sounds – noise muffled by excellent insulation, and environmental regulations. He quotes car buyers who say that the industry is "lying" to them by using sound samples.

While car owners who want the classic noise might appreciate the attempt at improving the aural experience, there is some negative reaction from car lovers – from those who live by their cars – who appear to deem the auto industry's effort a fake that creates a false experience. Whether we think the environmental benefits outweigh the opinions of these car lovers, or whether we lament the loss of the "classic engine sound," one thing is true: sound design and sound effects continue to play a major role in many types of products, not only on the stage, but in vehicles. The use of these sounds transforms the car itself into a performance venue.

Sound Design Founders of the Theatre


The Theatre’s Contribution to Sound Design

As with any human discipline or industry, sound design as a practice and art form developed collectively over time, spurred on by the contributions of many and the striking visions and passion of leaders in the field. Below are two major contributors to the world of sound design as we know it today: Dan Dugan and Charlie Richmond. Both arose within the theatre world and created solutions to problems they faced – resulting in internationally respected products that we use today. Many of the major building blocks and tools used in sound design, live and in the studio, developed in the 1960s and 1970s, and in this era these two sound founders began to make waves.

Dan Dugan

Dan Dugan, an American inventor, audio engineer, and sound recordist, was born March 20, 1943. As a young man of 24, he began working in theater sound for the San Diego National Shakespeare Festival and the American Conservatory Theater. In 1968 the term "Sound Designer" was coined to describe Dugan's efforts. His first major contribution to sound design, aside from giving a reason for the term "sound designer" to exist, is specifically relevant to live performance – the "automatic microphone mixer," known as the automixer, shown below through several generations of production.

As a sound pioneer, Dan Dugan realized early in his career that he needed to work for and by himself to solve the problems he encountered – new problems were occurring in real time. As reported by Sound and Video Contractor in "AV Industry Icons" (2006), Dugan states: "I realized I had to work for myself … so I built my own studio. It was one of those gigs in '68 or '69 that sparked the invention of the automatic mic mixer." It was the frustration with feedback and noise problems that arise from using multiple microphones in a single live setting, such as a theatre stage, that gave rise to his experiments to improve live sound and the ability to design sound without sonic flaws. The two most important problems to solve were, one, reducing the amplified noise contributed by multiple microphones picking up ambient sound and, two, eliminating the feedback created when multiple actors/microphones move around the stage into different positions and their outputs cross into each other's inputs – i.e., "feedback."

In the video below, Dugan explains the problems he encountered working on the live production of Hair, in which there were 16 area mics, 10 mics in the band, 9 hand mics, and one wireless mic – all operated by one person on a manual mixer.

Dugan played around with voltage-controlled amplifiers (VCAs) for several years in the early 1970s to solve the problem of spontaneous feedback and noise buildup, devising a system that used a distant reference microphone which accepted the signals from the stage microphones. The output of each microphone was automatically adjusted in real time depending on the input received by the reference microphone. This enabled the system as a whole to avoid unwanted feedback while also balancing microphone levels. As he explains in Sound and Video Contractor: "I was messing around with logarithmic level detection, seeing what would happen if I used the sum of all the inputs as a reference. That's when I accidentally came upon the system. It was really discovered, not invented," he says. "I didn't really know what I had, just that it worked like gangbusters." (Sound and Video Contractor, 2006).
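Here is a toy Python sketch of that gain-sharing idea: each microphone's gain becomes its share of the summed input level, so the total system gain stays constant however many mics are open. It is a bare-bones illustration under the assumption that a short-term level (e.g. RMS) is already measured per mic; real automixers smooth these measurements over time and work in the log domain.

def share_gains(levels):
    """levels: short-term signal level per mic. Returns per-mic gains summing to 1."""
    total = sum(levels)
    if total == 0:
        return [1.0 / len(levels)] * len(levels)   # idle: split the gain evenly
    return [lvl / total for lvl in levels]

# One actor speaking strongly into mic 1 while mics 2-4 pick up only ambience:
print(share_gains([0.9, 0.1, 0.1, 0.1]))
# -> [0.75, 0.083..., 0.083..., 0.083...] – the gains always sum to 1.0, so the
#    unused mics no longer add noise or push the system toward feedback.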

His two main mixing systems, the Dan Dugan Speech System and the Dan Dugan Music System, are demonstrated here in split-screen during a David Letterman show.

On his website, Dugan explains that his products – the Model D, E, M, N, Dugan-MY16 and Dugan-VN16 – are "accessories" to sound mixing consoles, not mixers in themselves. The products are patched into the insert points of the send and return loops of each individual channel on an existing console. Thus, mics do not need to be cued, and faders can be left alone unless they need tweaking. As Dugan writes, "This frees the operator from being chained to the faders."

It is clear that live sound design would not be the same without Dugan's pioneering efforts. Dugan remains active operating Dan Dugan Sound Design in San Francisco, CA. You can check out his products and more at dandugan.com. He has an extensive list of products all based on and stemming from his original designs and creations. His products have notably been used on CBS's Late Show with David Letterman, Oprah, and Hollywood Squares, by WABC in New York, WETA in DC, and WVIZ in Cleveland, at U.S. presidential debates, and by ESPN, NBC, CNBC, CBS, Fox Sports, MLB Network and more.

Charlie Richmond

A contemporary of Dugan's, Charlie Richmond was born January 5, 1950, and is an American inventor who came onto the scene in the 1970s and, like Dugan, began creating solutions to the problems faced by live theater. In 1975, he addressed the need for a mixing console that would take 100 inputs and wrote "A Practical Theatrical Sound Console" for the Audio Engineering Society (AES). In it, Richmond describes a unit that elegantly and economically allows one operator to handle 100 controls at once without the need for computerized assistance. The paper is in the AES online library, where it can be viewed by members or purchased.

Richmond launched Richmond Sound Design in 1972 and was the first to produce and market two new off-the-shelf products for theater mixing: a sound design console, the Model 816, in 1973, and a computerized sound design control system, Command/Cue, in 1985. In addition, he invented the "Automatic Crossfading Device," trademarked "Auto-Pan," in 1975. According to the Richmond Sound Design website, the Model 816 was "matrix-equipped and included our patented AUTO-PAN™ programmable crossfaders" and was revolutionary at the time. Richmond's company went on to create the Command/Cue computerized audio control system used in multiple theater performances and theme park shows internationally and in Las Vegas. In 1991, with their Stage Manager show control software, they pioneered the use of MIDI to manage multiple media controllers – sound, lighting, lasers, etc. – which became an industry standard for all types of live shows, from Broadway to cruise ships. Since then, Richmond Sound Design has contributed significant sound software and hardware that have greatly expanded the possibilities of live sound design, including the MIDIShowCD in 1994, which provided multichannel sources at a fraction of the cost, the AudioBox Theatre Sound Design Control system, the ShowMan software, and the ShowMan Server Virtual Sound System, which brought compatibility for all of its products to an industry standard.
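For a flavor of what MIDI-based show control looks like at the byte level, here is a hedged Python sketch that builds a MIDI Show Control (MSC) "GO" message of the kind show-control software can send to fire a sound cue. The frame layout follows my reading of the MSC SysEx specification; treat the specific command-format byte and the cue encoding as assumptions rather than a description of Richmond Sound Design's products.

def msc_go(cue: str, device_id: int = 0x00, command_format: int = 0x10) -> bytes:
    """Build a raw MIDI Show Control GO message for the given cue number."""
    return bytes(
        [0xF0, 0x7F,        # SysEx start, universal real-time header
         device_id,         # target device (0x7F would address all devices)
         0x02,              # sub-ID #1: MIDI Show Control
         command_format,    # 0x10 assumed here to mean Sound (General Category)
         0x01]              # command: GO (fire the cue)
        + [ord(c) for c in cue]   # cue number sent as ASCII digits and dot
        + [0xF7]            # SysEx end
    )

print(msc_go("12.5").hex(" "))
# -> f0 7f 00 02 10 01 31 32 2e 35 f7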

Richmond and Mushroom Studios

Richmond also brought his success with live mixing to studio music mixing by purchasing Mushroom Studios in Vancouver, British Columbia, Canada in 1980. There, Richmond hosted concert musicians scoring many feature films, including the film score album for Top Gun. Skinny Puppy, Tom Cochrane, Fear Factory, and Sarah McLachlan were some of the notable acts that recorded there. Richmond sold the studio in 2006. Clearly, talent with sound bleeds over from sound design into music mixing, and sound leaders like Richmond can easily traverse both realms.

Richmond’s Writings

What might be most striking in Richmond's relation to sound is his gift with written language and his visionary nature. Software such as GarageBand, which comes free with Mac products today, and professional software such as Logic Pro and Pro Tools were obviously only a distant dream for sound designers 30 years ago. In Theatre Design & Technology Magazine in 1988, Richmond contributed a piece entitled "A Sound Future," in which he predicts the invention of the Digital Audio Workstation (DAW) that inundates the sound world today. As he writes:

Sound designers have been waiting for a long time for a system which allows us to create soundscapes easily, almost intuitively: a system which would perform as a transparent extension of our desires, a tool which requires no interpretation between wish and result. – Charlie Richmond 1988

In 1988, he also predicts the creation of the graphically oriented interfaces that we use today, buttons, etc:

Just point at the picture of the deck and click the mouse button and the (graphically represented) reels will start turning, click again and they will stop. Great, but …what about all the different types of loudspeakers? All of a sudden, I start seeing a lot of work for our software people and a delivery date of some time in the 1990's for a customized system. – Charlie Richmond 1988

Again, visionary. Richmond goes further to suggest how digital graphics could be used to control the parameters of software and it is reminiscent of the many DAWs we see today:

Maybe we should be able to display a big picture of the loudspeaker representing the output in which we want to increase the volume. We could represent the overall volume of the loudspeaker by changing the overall size (volume!) of the graphic representation. – Charlie Richmond 1988

Dugan and Richmond have both significantly contributed to the hardware, and hence the software, that enables sound designers today, both live and in the studio, to create in ways never before possible – and perhaps never possible without them. I find it interesting that it was the demands and problems specific to live theatre that propelled Dugan and Richmond to invent new solutions to audio problems that live bands and studio recordists meet, or would have met, without them.

Order #222222 placed at Shockwave-Sound.com

It’s always a bit of fun when you reach little milestones with your business, and we are happy to announce that a couple of days ago, order # 222222 was placed here at Shockwave-Sound.com. That’s since we started our current order database in April of 2005. (We had ‘manual’ order counting from 2000 till 2005). The customer behind the $82.38 order #222222 was ITV Channel Television (UK). We’ve crossed out the customer’s name and email address in the interest of their privacy. Thank you, ITV and all of you great customers who placed the previous 222221 orders!