TV Station’s Viewers Irate over Choice of Music and Sound Effects
Scott Schaffer of WNEP, an ABC affiliate in Pennsylvania, launches into the topic of sound effects during the segment “Talkback 16,” in which the station responds to viewers’ feedback. In this particular segment, the issues some WNEP callers define as “critical topics” relate to the station’s use of sound effects and choice of music. One caller addresses his notion of a “critical issue of the day”: “Yeah that little thing you do between the, eh, commercials and the program… you know that goes DADANNNDANDNADNDNN DANN .. the part goes DANDANDNNDA ..” – and apparently he wants it all edited out except the final part.
The host further explains that amidst other calls regarding the summit in Helsinki and severe weather at home, another caller instead wanted to address the bell sound effect used throughout their segments: “Instead of having that little bell going Ding Ding, have like a big church bell … BONGGG BONGGG BONGGG .. not a little deeeng deeeng deeeng deeeng .. big church bell … BONGGG.” The sound effects wars apparently continued as Schaffer shared a response from a caller who was a bit irate that the previous calls had warranted air time, stating that someone like herself, “who can actually think,” should be chosen over others who apparently can only talk in sound effects.
Sound, Music, and Voice in Google Assistant’s Personality
Google Assistant, Google’s artificially intelligent virtual assistant, is in constant evolution as its human persona is developed. While it may appear “alive” at times, it is, of course, created and developed by people, as the following video of a brainstorming session with developers shows. Designers, engineers, illustrators, and yes – sound designers – make up the large cross-sectional group trying to bring this technology to life and make people’s lives better: the Google Assistant Personality Team. The team uses sound, music, and voice to enhance not only the experience but also the usefulness of this AI assistant, used on some 500 million devices worldwide. This is discussed in “Exclusive: Inside Google Assistant’s Personality Team,” a video on the Yahoo Finance website hosted by Yahoo Finance tech critic David Pogue.
The Google team has plenty of opportunities to exercise its creative prowess, as the questions users might ask are nearly limitless. For example, as Drew Williams (Writer, Personality Features) explains, if a user asks the AI to call Santa, the user could experience calling the North Pole, having an elf answer, and so on, enriching the experience. Clearly, for any of that to happen, sound effects and sound designers are needed to bring sonic “realness” to the experience. The article discussed here only mentions “sound designers” once, but we can rightly assume that for an “elf” to answer the phone, someone needs to tweak the pitch of the voice to make it sound tiny, and/or employ “North Pole” sound effects such as Santa’s laugh and the sounds of toy-making in the background. And if this isn’t being done – it should be.
In addition to sound elements, music too plays a role. At 1:15 in the video on Yahoo Finance we see a developer suggesting a musical interlude to make a point. This is Elena Skopetos (Character Editor), who is in charge of bringing life to Google Assistant’s opinions and thoughts, imbuing the AI with a definitive character. As she explains, if a user asks, “Do you want to build a snowman?” the AI responds sarcastically, “Frozen came out in 2013 – let it go,” referencing, of course, the film’s hit song “Let It Go.” Again, the use of music.
The actual voice of the AI itself is also highly important. As Andy Pratt (Features Lead) and Ryan Germick (Principal Designer) explain, the voice and personality are meant to be helpful and useful, with a bit of humor, but not snarky like Siri. Pratt states that he thinks of Google as a massive library and Google Assistant as a “cool librarian.” On a side note, to ensure that the humor of the voice is culturally palatable, Google employs developers from around the world for their intimate cultural knowledge of their specific locales – a practice called “trans-creating.” The video then turns to the importance of “voice” itself in technology. Lilian Rincon (Product Management Director) and the host make the point that voice is playing a greater and greater role in controlling and navigating technology. For example, in areas of the world where literacy, typing specific dialects, and access to computing are concerns, voice – both the user’s and the device’s – can punch through these difficulties: no need to type and read, rather just ask and listen.
The Ecological Impact of Sound Pollution
Human beings are great at many things, including polluting the heck out of any environment in which we live. I’ve personally never considered “noise pollution” beyond the noise violations my roommates and I received in college for blasting Jimi Hendrix at 2 am in the dorms or the jackhammer at 7 am on neighborhood streets when work crews are starting their work days. Those are noise assaults on people by people. Apparently, however, there is an entire realm of human noise pollution that negatively impacts the natural environment along the lines of other forms of human intrusion like chemicals, exhaust, plastics, waste etc.
The Conversation (theconversation.com), in “Rock ’n’ Roll Is Noise Pollution – with Ecological Implications,” features Brandon Barton of Mississippi State University, who smartly covers the effects of noise pollution on the natural environment. Barton uses AC/DC’s track “Rock and Roll Ain’t Noise Pollution” to set up his piece. His research group recently tested AC/DC’s hypothesis to see whether rock music or human noise such as highway traffic has an effect on the surrounding environment. His conclusion is that yes, indeed, rock ’n’ roll and other loud sounds can physically affect the world around them.
Noise pollution does not just affect single animals in its path but also the ecosystem as a whole, when the behavior and interactions of compromised plants and animals disrupt the system. Barton gives examples of how noise from mining and drilling affects wildlife and marine life. Barton’s group specifically studied lady beetles – the Asian lady beetle, Harmonia axyridis. Lady beetle is another name for the common ladybug. These beetles are used by soybean farmers as a natural biological pest control to decrease the use of pesticides.
As Barton explains, ladybugs are an excellent form of pest control because they eat pests called aphids and lessen the need for pesticides. They are an extremely important part of pest control because, as cute as they might look, they are predators with huge appetites. They are important to the current environment and the interaction between humans and the natural world.
The ladybug is an excellent choice for determining the effects of noise pollution because if the beetle’s normal behavior is compromised, it cannot serve its pest-control function and the soybean yield suffers. With the help of fellow AC/DC fans and academic colleagues, Barton experimented by placing beetle larvae with plenty of aphids to eat (again, aphids are the critters they generally feed on, thus protecting the soybeans) and systematically playing sounds and music tracks at 95–100 decibels, including sound effects of city environments. The results are conclusive: beetles in silence brought the aphid population down to essentially zero, which is normal, but when “Back in Black” by AC/DC was played for two weeks straight, the beetles’ effect on the aphids was reduced by 50%. Barton further demonstrated the effect on the “system” by repeating the experiment with the beetles, the aphids, and growing plants. As expected, pest abundance increased – 40 times over, in fact – when the beetles were under the duress of rock music. Barton also mentions that country and folk music, as opposed to rock, had no effect.
Toy Gun with Sound Effects used in Hold Up
Jacob Adelman of the Philadelphia Inquirer (philly.com) reported on July 21, 2018, that a young man in his 20s had held up a 7-Eleven in Northeast Philadelphia. Around 1 a.m., the suspect entered the store at the intersection of Bustleton Avenue and Knorr Street. Police report that he demanded cash and scratch-off lottery tickets. The suspect then reportedly forsook the tickets, went straight for the cash when the attendant opened the register, and fled with around $500.
The suspect left his weapon at the scene, where he had placed it on the counter while grabbing the cash. Police report that the gun was actually a toy semiautomatic handgun, adding that the toy also came with gun sound effects. I’m not sure what is stranger in this situation – I suppose the fact that the sound effects were mentioned in the first place. Apparently, they weren’t used as part of the hold-up and most likely wouldn’t have been very effective in helping the thief achieve his goal. But no one is complaining about the gratuitous sound effects publicity – sfx are everywhere. By the way, the suspect had a teardrop tattoo near his left eye and was wearing a black shirt with an Adidas logo. As of now, police are still searching; I assume they have fingerprints, but there’s no mention of it.
Over the years, the auto industry has increasingly honed its craft at creating environmentally sound cars and reducing unwanted noise levels for drivers. As a result, the authentic, organic engine sound is masked more and more. For car aficionados who may buy vehicles specifically for the engine roar, this is not necessarily a good thing, and they’ve made this known. The auto industry has responded by creating new technologies on top of these technologies that attempt to restore the classic engine sounds so many have come to cherish.
The trend is succinctly described by K.C. Colwell of caranddriver.com in “Faking It: Engine-Sound Enhancement Explained.” Colwell references work done by Yamaha’s Center for Advanced Sound Technologies, hired by Lexus for the launch of the LFA model in 2009. Fascinatingly, the Yamaha involved here is the company that creates musical instruments – violins, guitars, etc. Lexus contracted Yamaha specifically to “utilize sound as a medium that can achieve a direct link between the driver and the vehicle” (archive.yamaha.com, “Yamaha Creates Acoustic Design for Engine of the Lexus LFA Super Sports Car”). Here, sound is utilized as a concrete object, a physical means of affecting the mental state of the driver – it is “sound design” in its purest form.
Yamaha was chosen because of its expertise in establishing a powerful emotional and performance connection between musicians and their instruments, with the intent of maximizing the musician’s enjoyment. In this case, the vehicle is the instrument and the driver is the musician. Beyond the pleasure of driving an excellent-sounding vehicle that responds to the driver’s acceleration, the additional sound element adds a heightened sense of control, allowing drivers to be more “in tune” with their vehicle. As Yamaha states, “Accurately passing on high-grade engine sounds to the driver makes it possible to feel the vehicle’s condition and instantly take the next minute action that is required” (“Yamaha Creates Acoustic Design for Engine of the Lexus LFA Super Sports Car,” archive.yamaha.com). Yamaha refers to this back-and-forth interaction between driver and vehicle through sound as “feedback” and an “interactive loop,” which makes the driving experience more pleasurable and exciting.
Colwell smartly compares the cabin of the car to the “hall” of a performance venue and the driver to the “audience.” In addition to Lexus, he mentions BMW as a forerunner in adding recorded engine sound to the driving experience. BMW’s method is to play an exterior-perspective recording of the car’s engine directly through the stereo speakers. Incredibly, the samples are chosen according to the load on the engine and the rpm in real time. As the real sounds of the engine are still somewhat audible, the additional sound through the stereo speakers is described as a “backing track.”
Volkswagen – Soundaktor: Active Sound
So, here is what Volkswagen initiated in 2011. In order to beef up the sound of their engines, they created the “Soundaktor,” German for “sound actuator” – i.e., something that creates sound. Essentially it is a speaker between the engine and the cabin that adds sound to the normal engine noise to lend a more “authentic,” old-school power sound to the driving experience. This is the definition of “active sound” in automobiles – sound through the speakers triggered by real-time actions of the driver. An audio file is housed on the vehicle’s computer and triggered by changes in the throttle. All sound from the Soundaktor plays through this one dedicated speaker, as opposed to other systems that play enhanced engine sound through the car’s stereo speakers. Interestingly, with a bit of digging you’ll find car enthusiasts on forums discussing the best methods for disabling the feature – one user saying he pulled a fuse to disable it as soon as he bought his VW. It seems some of these connoisseurs dislike the “fakeness” of the added sound, though it appears most drivers aren’t bothered enough to worry about authenticity. A quick search for BMW Active Sound turns up these videos – all providing info on how to disable that sound system.
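The “active sound” trigger logic described above – a stored audio file selected and played according to real-time driver inputs – can be sketched in miniature. The rpm bands and file names below are invented for illustration; no manufacturer publishes its actual mapping.

```python
# Hypothetical sample bank: (rpm_low, rpm_high, loop file).
# Band boundaries and file names are invented for illustration only.
SAMPLE_BANK = [
    (0, 2000, "idle_loop.wav"),
    (2000, 4500, "mid_growl.wav"),
    (4500, 8000, "high_roar.wav"),
]

def pick_sample(rpm):
    """Choose the pre-recorded engine loop for the current rpm band."""
    for lo, hi, name in SAMPLE_BANK:
        if lo <= rpm < hi:
            return name
    return SAMPLE_BANK[-1][2]  # clamp anything beyond the top band

def playback_gain(throttle):
    """Scale loudness with throttle position (0.0-1.0), mimicking a
    throttle-triggered audio file."""
    return max(0.0, min(1.0, throttle))

print(pick_sample(3000), playback_gain(0.6))  # mid_growl.wav 0.6
```

A production system would crossfade between loops and pitch-shift them continuously rather than switch at hard band edges, but the band-lookup idea is the core of it.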
Most likely, the general consumer doesn’t even realize they are listening to a replacement engine sound and simply appreciates the experience. Some users in the know, however, wish there were an on/off toggle for the additional sound, giving them the choice to engage it or not. As with all these systems, the purpose of the additional audio is to compensate for the muffling of the actual engine sound due to advancements in soundproofing.
Ford – General Motors – Acura: Active Sound
Cadillac models incorporate Bose sound systems to add noise-canceling technology that rids the cabin of unwanted “road noise” while simultaneously employing a stereo-based system akin to Volkswagen’s. As an audio engineer and music producer, I 100% appreciate what these auto sound technicians are doing – they are “cleaning up” the audio of the car’s performance. It is directly analogous to the job of live sound mixers, as well as those in post-production and music mixing/mastering: get rid of the unwanted noise! At the same time, they enhance the choice sounds via the stereo system.
Acura, too, has moved into vehicle sound design in an impressive way. The moniker for its efforts is Active Noise Cancellation (ANC). This is eerily similar to the theatre mixing work discussed in a previous post – creating a system that dynamically responds in real time to sonic assaults that may disrupt the performance (or driving) experience, in order to kill the noise. Acura’s ANC works to cut out low-frequency noise, similar to cutting out the bass below 60–100 Hz when mixing an audio track. A bit of Google digging could probably unearth the exact frequencies they are targeting – perhaps around 500 Hz, where the “mud” of an audio track tends to live at the meeting of the bass drum and the bass guitar. Regardless, to cut the unwanted bass out of the cabin’s aural experience, Acura uses overhead mics within the cabin that create a reverse-phase (noise-canceling) signal to mute the unwelcome deep tones. At this point, ANC is able to increase the sound levels from the engine to fill in the now-clean space afforded by noise cancellation. Again, all of this is dynamic and works to raise the engine sound level within the cabin by up to 4 dB. This audio system is a standard element of the MDX, RLX, TLX, and ILX models. (www.thedrive.com/tech/22834/from-acura-to-vw-bmw-to-porsche-car-companies-are-getting-sneakier-about-engine-sound-enhancement)
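The reverse-phase idea behind noise cancellation can be seen in a few lines of NumPy. This is a minimal sketch of the principle only: a real system like Acura’s must adaptively estimate the noise’s delay and amplitude from the cabin mics, which is ignored here.

```python
import numpy as np

sr = 8000                                   # sample rate in Hz
t = np.arange(sr) / sr                      # one second of samples
rumble = 0.5 * np.sin(2 * np.pi * 60 * t)   # a 60 Hz low-frequency "road boom"

anti_noise = -rumble          # reverse-phase (polarity-inverted) copy
residual = rumble + anti_noise

# In this idealized case the two waves cancel exactly:
print(float(np.max(np.abs(residual))))  # 0.0
```

Any timing error between the noise and the anti-noise leaves a residual, which is why practical ANC systems only target low frequencies, whose long wavelengths are forgiving of small delays.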
Incredibly, this technology has now been taken so far that cars with four cylinders can sound like much bigger engines, as explained in this video.
From a professional sound design perspective, the challenge of syncing dynamic engine audio samples to be triggered in real time during a live driving experience is enticing. For an audio nut, or a current student, it’s kind of like, “hell yeah, this sounds like fun!” – not to mention the earning and career potential of doing sound design for car companies. This is a wide-open field for sound designers. On the flip side, for the consumers who love these vehicles, it appears to be something of a nightmare, as they want the “authentic.” In fact, when googling “car sound pipes,” the first five entries and videos are all about how to dismantle them – as with the forum posts mentioned above. I include a post from Larry Webster on popularmechanics.com here because it is not only exceptionally written but quite telling. Webster, in “The Rise of the Fake Engine Roar,” laments the development of this experience he deems “fake.” First of all, the title says it all – the “Fake Engine Roar.” He references the main contributors to these “fake” sounds: muffled noise from excellent insulation and environmental regulations. He quotes car buyers who state that the industry is “lying” to them by using sound samples.
While car owners who want the classic noise might appreciate the attempt at improving the aural experience, there is some negative reaction from car lovers – from those who live by their cars – who deem the auto industry’s effort a fake, creating a false experience. Whether we think the environmental benefits outweigh the opinions of these car lovers, or whether we lament the loss of the “classic engine sound,” one thing is true: sound design and sound effects continue to play a major role in many types of products, not only on the stage but in vehicles. The use of these sounds transforms the car itself into a performance venue.
As with any human discipline or industry, sound design as a practice and art form developed collectively over time, spurred on by the contributions of many and the striking visions and passion of leaders in the field. Below are two major contributors to the world of sound design as we know it today: Dan Dugan and Charlie Richmond. Both arose within the theatre world and created solutions to problems they faced – resulting in internationally respected products we use today. Many of the major building blocks and tools used in sound design, live and in the studio, developed in the 1960s and 1970s, and it was in this era that these two sound founders began to make waves.
Dan Dugan, an American inventor, audio engineer, and sound recordist, was born March 20, 1943. As a young man of 24, he began working in theater sound for the San Diego National Shakespeare Festival and the American Conservatory Theater. In 1968 the term “sound designer” was created to describe Dugan’s efforts. His first major contribution to sound design, aside from giving the term “sound designer” a reason to exist, is specifically relevant to live performance: the automatic microphone mixer, known as the automixer, shown below through several generations of production.
As a sound pioneer, Dugan realized early in his career that he needed to work for and by himself to solve the problems he encountered – new problems were occurring in real time. As reported by Sound and Video Contractor in “AV Industry Icons” (2006), Dugan states: “I realized I had to work for myself … so I built my own studio. It was one of those gigs in ’68 or ’69 that sparked the invention of the automatic mic mixer.” It was frustration with the feedback and noise problems that arise from using multiple microphones in a single live setting, such as a theatre stage, that gave rise to his experiments to improve live sound and the ability to design sound without sonic flaws. The two most important problems to solve were, one, reducing the amplified ambient noise contributed by multiple open microphones and, two, eliminating the howl created as multiple actors and microphones move around the stage and their outputs feed back into each other’s inputs – i.e., feedback.
In the video below, Dugan explains the problems he encountered working on the live production of Hair, in which there were 16 area mics, 10 mics in the band, 9 hand mics, and one wireless mic – all operated by one person on a manual mixer.
Dugan experimented with voltage-controlled amplifiers (VCAs) for several years in the early 1970s to solve the problem of spontaneous feedback and noise buildup, devising a system that used the combined signal of all the stage microphones as a reference. The output of each microphone was automatically adjusted in real time depending on its level relative to that reference. This enabled the system as a whole to avoid unwanted feedback while also balancing microphone levels. As he explains in Sound and Video Contractor: “I was messing around with logarithmic level detection, seeing what would happen if I used the sum of all the inputs as a reference. That’s when I accidentally came upon the system. It was really discovered, not invented,” he says. “I didn’t really know what I had, just that it worked like gangbusters” (Sound and Video Contractor, 2006).
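The gain-sharing principle Dugan describes – using the sum of all inputs as the reference, so total system gain never exceeds that of one open mic – can be sketched in a few lines. This illustrates the widely described principle, not Dugan’s actual circuit or his logarithmic detection math.

```python
def gain_share(levels, eps=1e-12):
    """Gain-sharing automix: each mic's gain is its share of the summed
    level of all mics, so the gains always total 1 (one 'open mic' worth
    of system gain, which keeps feedback and noise buildup in check)."""
    total = sum(levels) + eps  # eps guards against an all-silent input
    return [lvl / total for lvl in levels]

# One actor speaking (level 0.9) over nine area mics picking up ambience (0.05):
gains = gain_share([0.9] + [0.05] * 9)
print(round(gains[0], 2), round(sum(gains), 2))  # 0.67 1.0
```

The key property is visible in the output: the active talker keeps most of the gain, the idle mics are pulled down automatically, and the total never exceeds one open mic, no matter how many channels are added.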
His two main mixing systems, the Dan Dugan Speech System and the Dan Dugan Music System, are demonstrated here in split-screen during a David Letterman show.
On his website, Dugan explains that his products – the Model D, E, M, and N, the Dugan-MY16, and the Dugan-VN16 – are “accessories” to sound mixing consoles, not mixers in themselves. The products are patched into the insert points of the send and return loops of each individual channel on an existing console. Thus, mics do not need to be cued, and faders can be left alone unless a tweak is wanted. As Dugan writes, “This frees the operator from being chained to the faders.”
It is clear that live sound design would not be the same without Dugan’s pioneering efforts. Dugan remains active, operating Dan Dugan Sound Design in San Francisco, CA. You can check out his products and more at dandugan.com. He has an extensive list of products all based on and stemming from his original designs and creations. His products have notably been used by CBS’s Late Show with David Letterman, Oprah, Hollywood Squares, WABC in New York, WETA in DC, WVIZ in Cleveland, U.S. presidential debates, ESPN, NBC, CNBC, CBS, Fox Sports, MLB Network, and more.
A contemporary of Dugan’s, Charlie Richmond, born January 5, 1950, is an American inventor who came onto the scene in the 1970s and, like Dugan, began creating solutions to the problems faced by live theater. In 1975, he addressed the need for a mixing console that could take 100 inputs, writing “A Practical Theatrical Sound Console” for the Audio Engineering Society (AES). In it, Richmond describes a unit that elegantly and economically allows one operator to manage 100 controls at once without computerized assistance. The paper is in the AES online library, where it can be viewed by members or purchased.
Richmond launched Richmond Sound Design in 1972 and was the first to produce and market two new off-the-shelf products for theater mixing: a sound design console named the Model 816, in 1973, and a computerized sound design control system, Command/Cue, in 1985. In addition, he invented the “Automatic Crossfading Device,” trademarked “Auto-Pan,” in 1975. According to the Richmond Sound Design website, the Model 816 was “matrix-equipped and included our patented AUTO-PAN™ programmable crossfaders” – revolutionary at the time. Richmond’s company went on to deploy the Command/Cue computerized audio control system in theater performances and theme park shows internationally and in Las Vegas. In 1991, with Stage Manager show control software, the company pioneered the use of MIDI to manage multiple media controllers – sound, lighting, lasers, etc. – which became an industry standard for all types of live shows, from Broadway to cruise ships. Since then, Richmond Sound Design has contributed significant sound software and hardware that have greatly expanded the possibilities of live sound design, including the MIDIShowCD in 1994, which provided multichannel sources at a fraction of the cost; the AudioBox Theatre Sound Design Control system; the ShowMan software; and the ShowMan Server Virtual Sound System, which brought compatibility for all of its products to an industry standard.
Richmond and Mushroom Studios
Richmond also brought his success with live mixing into studio music mixing by purchasing Mushroom Studios in Vancouver, British Columbia, Canada in 1980. There, Richmond hosted concert musicians scoring many feature films, including the film score album of Top Gun. Skinny Puppy, Tom Cochrane, Fear Factory, and Sarah McLachlan were some of the notable acts that recorded there. Richmond sold the studio in 2006. Clearly, talent with sound bleeds over from sound design into music mixing, and sound leaders like Richmond can easily traverse both realms.
What might be most striking about Richmond’s relation to sound is his gift with written language and his visionary nature. Software such as GarageBand, which comes free with Mac products today, and professional software such as Logic Pro and Pro Tools were obviously only a distant dream for sound designers 30 years ago. In Theatre Design & Technology magazine in 1988, Richmond contributed a piece entitled “A Sound Future,” in which he predicts the invention of the Digital Audio Workstation (DAW) that inundates the sound world today. As he writes:
Sound designers have been waiting for a long time for a system which allows us to create soundscapes easily, almost intuitively: a system which would perform as a transparent extension of our desires, a tool which requires no interpretation between wish and result. – Charlie Richmond 1988
In 1988, he also predicted the creation of the graphically oriented interfaces – buttons and all – that we use today:
Just point at the picture of the deck and click the mouse button and the (graphically represented) reels will start turning, click again and they will stop. Great, but …what about all the different types of loudspeakers? All of a sudden, I start seeing a lot of work for our software people and a delivery date of some time in the 1990’s for a customized system.– Charlie Richmond 1988
Again, visionary. Richmond goes further, suggesting how digital graphics could be used to control the parameters of software, in a way reminiscent of the many DAWs we see today:
Maybe we should be able to display a big picture of the loudspeaker representing the output in which we want to increase the volume. We could represent the overall volume of the loudspeaker by changing the overall size (volume!) of the graphic representation. – Charlie Richmond 1988
Dugan and Richmond have both significantly contributed to the hardware, and hence the software, that enables sound designers today, both live and in the studio, to create in ways never before possible – and perhaps never possible without them. I find it interesting that it was the demands and problems specific to live theatre that propelled Dugan and Richmond to invent new solutions to the audio problems that live bands and studio recordists meet, or would have met, without them.
Humans have long considered sound and music mystical and magical, whether embedded in politics and culture in ancient China, regarded as a portal to the infinite by Buddhists chanting OM, or revered by modern-day musicians and sound designers revelling in their own sound creations. Sound inspires poets and sculptors across the world – leaders, rebels, teachers and students, the “everyday man,” old and young alike. Sound plays major roles in human life, from the first cry of the newborn baby to the final breath and “death rattle” of those passing. For most of us, sound encompasses our entire lives and every waking moment. At times it is to be rejoiced in, at other times escaped (has anyone else ever asked the room, “Can everyone just be quiet for a minute, please? I can’t think!”). It is argued that pure silence exists only in space. Truly, sonic vibrations are as prevalent as particles of light, moving atoms back and forth within this physical plane. There is a concept that unveils this: a belief in a natural structure that exists beyond the human world and is embedded in the physical structure of the universe itself – the sound matrix.
Hans Jenny: Pioneer in Cymatics
The belief in a sound matrix is the idea that a pre-created and predetermined sound configuration exists innately in the universe, one that can be exposed and studied. Several scholar-scientists have contributed to this field, now known as “cymatics,” a term coined by researcher Hans Jenny (1904–1972). Essentially, he explored the myriad distinct patterns created when particles are placed on a plate and vibrated at different frequencies. In other words, cymatics is the study of wave phenomena in which sound vibration at particular frequencies creates repeatable physical patterns in particles. His book Cymatics: A Study of Wave Phenomena and Vibration is the defining work, to be discussed further on in this post.
The inspiration for the exploration into cymatics is steeped in “anthroposophy,” a philosophy founded by Rudolf Steiner (1861–1925), which states that human beings can intellectually access and uncover elements of an existing spiritual plane. In fact, Jenny’s own book is “Dedicated to the memory and research of Rudolf Steiner.” Anthroposophists believe that witnessing the spiritual world, and demonstrating it through experiments in fields such as cymatics, will stand the test of rational verification. This foray into sound study exemplifies the founding principle of anthroposophy and mimics the methods of the natural sciences in their practice of evidence-based research. Before Jenny, however, a few other individuals paved the way for this exploration into the unseen structures embedded in and surrounding us in our universe.
The beginnings of cymatics
Ideas of inherent vibrational patterns in the natural world began centuries ago, with Galileo Galilei often quoted as an early witness to the phenomenon from his 1632 “Dialogue Concerning the Two Chief World Systems.” He describes his experience scraping a plate of brass with a chisel, attempting to clean it. Galileo noticed both a high whistling sound and the production of parallel streaks of brass particles that occurred only in tandem with the sound. Fifty years later, in 1680, scientist and musician Robert Hooke noticed nodal patterns created with vibrating glass. Using a violin bow on a flour-covered glass plate, he produced repeated patterns. One would think today, looking back, that both of these experimenters must have felt they had experienced something mystical. They discovered, or uncovered, a structure within the physical world not yet noted by humankind: a matrix that predates history, clearly put in place by a non-human force. No wonder their testimony has withstood the test of time.
One hundred years after Hooke’s observations, Ernst Florens Friedrich Chladni (1756–1827) published “Entdeckungen über die Theorie des Klanges,” translated to English as “Discoveries in the Theory of Sound.” Chladni was inspecting the properties of “Lichtenberg figures,” radial patterns the German physicist Georg Christoph Lichtenberg had discovered in 1777 when placing powdered material on a high-voltage plate. Chladni had the intriguing impetus to run a violin bow along a metal plate holding powder (some say sand), which created vibrations that arranged the particles into patterns, making the vibrations visible. It’s a complex phenomenon of wave behavior, with the particles being moved from the “antinodes” to the “nodal” lines, but suffice it to say – really cool patterns emerged. “Hmmm … wow, what are these fantastic shapes appearing from sound? From whence do they come?” Chladni may have asked.
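Chladni-style figures can be approximated numerically. The function below uses the textbook cosine-superposition approximation of a square plate’s standing-wave mode (a simplification of the true plate equation, not Chladni’s physical setup); its zero crossings are the nodal lines where the powder collects.

```python
import numpy as np

def chladni_mode(n, m, size=200):
    """Idealized standing-wave mode shape of a square plate (cos-cos
    superposition approximation); zeros mark the nodal lines."""
    x = np.linspace(0.0, 1.0, size)
    X, Y = np.meshgrid(x, x)
    return (np.cos(n * np.pi * X) * np.cos(m * np.pi * Y)
            - np.cos(m * np.pi * X) * np.cos(n * np.pi * Y))

Z = chladni_mode(3, 5)
nodal = np.abs(Z) < 0.02   # points where the plate barely moves
print(Z.shape, bool(nodal.any()))  # (200, 200) True
```

Plotting `nodal` (for example with matplotlib’s `imshow`) reveals the symmetric lattice of curves familiar from sand-on-plate photographs; changing `n` and `m` corresponds to driving the plate at a different resonant frequency.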
A good while later, in the 1960s, Hans Jenny whistled his own experimental tune and worked diligently for over a decade to create and study patterns created by vibration in the exact same vein as his predecessors. With the aid of intervening technology, his vibrating method was more accurate than those before him: he used crystal oscillators and tone generators to control the frequency and amplitude of his signals, as opposed to the chisel of Galileo and the bows of Hooke and Chladni. Jenny connected these devices to metal plates, and his methods were repeatable, a necessary condition in scientific research. Below is an example of the intricate patterns that Jenny uncovered through his work. This one, in particular, is the latticework in liquid, seen in the video below.
Jenny’s Work and Theory
Jenny published Kymatik (translated Cymatics) in 1967 after, as mentioned, more than 10 years of intense study. When reading through his work, found through a Google search of “hans jenny cymatics pdf,” his adherence to anthroposophy is obvious as he continually attempts to connect the dots between his work and other periodic systems throughout the physical world: “Whenever we look in Nature, animate or inanimate, we see widespread evidence of periodic systems. These systems show a continuously repeated change from one set of conditions to another, opposite set” (Jenny, 17). He mentions human circulation and respiration and the cycles found within the vegetable and animal kingdoms, and goes into chemistry as he sets up the argument for cymatics as a structure of the universe predating humankind: that the organization of the sound matrix is prime, found throughout all matter, and recognized throughout his work.
A colleague of his, Jeff Volk, sums up the most poignant of Jenny’s ideas succinctly in his introduction. He writes that “the principle underlying Cymatics, that of periodicity, is so ubiquitous in nature (and in Nature), that it is found in all manner of phenomena.” Volk further reflects on how Jenny’s discoveries “mirrored biological forms and natural processes, as well as flowers, mandalas and intricate geometric designs … these experiments seemed to reveal the hidden nature of creation, to lay bare the very principle through which matter coalesces into form.” The most striking of Volk’s points is that Jenny’s shapes were the result of “audible vibration.” In other words, cymatics allows us to see sound.
By carefully controlling the frequencies he generated and the size of the metal plates, Jenny could compare various substances such as sand, fluids, and powders at different frequencies and over different areas. The vibrations spanned a large range, which resulted in a large array of geometric shapes. From these he noted three fundamental principles of vibration and wave motion. One pole exhibits patterns and figures, which are visible. The other pole demonstrates kinetic processes (plate vibrations), which are audible. Third, the entire process is periodic, which Jenny terms “essential periodicity.” The concept of essential periodicity is significant in understanding Jenny’s mission: “essential” refers to patterns that are of the essence of the physical world, and “periodicity” to the fact that periodic cycles are likewise embedded in the physical plane.
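Jenny’s control of frequency and amplitude has a straightforward digital analogue. As a sketch (the sample rate and values here are illustrative assumptions, not Jenny’s actual settings), a tone generator is simply a sine wave with two knobs – frequency and amplitude:

```python
import numpy as np

SAMPLE_RATE = 44_100  # samples per second, standard audio rate

def tone(freq_hz, amplitude, duration_s=1.0):
    """A pure sine signal with controlled frequency and amplitude,
    a digital stand-in for Jenny's tone generator."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

signal = tone(440.0, 0.5)  # concert A at half amplitude
```

Fed to a transducer attached to a plate, sweeping `freq_hz` through a range while watching the powder reorganize is, in essence, Jenny’s repeatable experiment.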
Jenny explored a wide array of frequencies on different media, resulting in a striking variety of visible patterns: square metal plates of various sizes, triangular plates with crystals attached to their underside, and more. The images in this video begin at 2:07.
Featured moments in the above video, Cymatics – Bringing Matter To Life With Sound:
2:27 – triangular plate
2:34 – higher note creates a more complicated figure
3:26 – different materials exhibit different behaviors
4:49 – rotary effect
5:30 – figures throb and sway
6:30 – liquid latticework
6:39 – skeletal
6:55 – animal like structure
The moments above and others throughout the video are clearly reminiscent of living and non-living beings and figures found in nature. I will end this post with Jenny’s own words, which I’ve transcribed from the video above and which shed further light on his purpose and vision regarding the primacy of sound:
You will see many things that answer many questions. You will see living forms, living amoeba, almost animal-like creatures, you will see continents being formed, the earth itself coming into existence, explosions, eruptions, atomic explosions and bombs, you can see all this and watch it before your eyes. But everything owes its existence solely and completely to sound. Sound is a factor which holds it together. Sound is the basis of form and shape. In the beginning was the word and the word was God. We are told this is how the world began and how creation took shape.~ Hans Jenny
The prior post concerned the three major modes, which are designated by their major 3rds. The minor modes are similarly designated by their minor 3rds. Each has a unique history and flavor, but they share the familiar minor darkness of emotion in common. From the bittersweetness of the Dorian and the tense power of the Phrygian to the floating lunacy of the Locrian, they all hold a distinct place among our seven modern modes.
Mode II: Dorian
The first minor mode is the second of the seven modes (three major and four minor) – named Dorian. In C major the sequence of notes begins on D: D, E, F, G, A, B, C, D. Its intervals are very similar to the natural minor scale (known as the Aeolian), but it has a raised 6th note. This raised 6th is the peculiarity of the Dorian mode that gives it a special feel, wistful but not tragic, due to the brighter interval it introduces. The sequence of steps is W, H, W, W, W, H, W (with W being a whole step and H a half step), laid out symmetrically: three whole steps in the middle, bordered by half steps, with a whole step on each end.
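The step pattern fully determines the mode. As a quick sketch (`NOTES` and `build_mode` are hypothetical helpers, not from any particular library), walking the chromatic scale by the Dorian pattern W, H, W, W, W, H, W from D reproduces the white-key sequence above:

```python
# Chromatic scale, sharps only, for indexing notes by semitone.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

DORIAN_STEPS = [2, 1, 2, 2, 2, 1, 2]  # W H W W W H W, in semitones

def build_mode(tonic, steps):
    """Walk the chromatic circle from the tonic using the step pattern."""
    idx = NOTES.index(tonic)
    scale = [tonic]
    for step in steps:
        idx = (idx + step) % 12
        scale.append(NOTES[idx])
    return scale

d_dorian = build_mode("D", DORIAN_STEPS)  # the white keys, D to D
```

The same helper builds any mode from its step pattern, which is handy for hearing how a single altered step changes a scale’s character.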
The Dorian scale derives its name from the Dorian Greeks, who are mentioned in Homer’s Odyssey as living on the island of Crete. It was a scale of the ancient Greek period and one of the church modes of the Middle Ages, and it survives in a modern form today. Russian composer Mily Balakirev (1837-1910) gave prominence to the Dorian mode when studying the structure of folk songs and dubbed it the “Russian minor.” Tracks of the recent era that use it seem to have a dark but hopeful sense to them, sad but not crushingly desolate. To me, at least, these songs share a common sound, and I suppose it’s because they all employ the Dorian mode: “Scarborough Fair” by Simon and Garfunkel, “Eleanor Rigby” by the Beatles, “Purple Haze” by Jimi Hendrix, “Evil Ways” by Santana, and “Who Will Save Your Soul” by Jewel. If you’re in the mood to compose something in this vein, play around with the Dorian mode.
Mode III: Phrygian
Phrygian, the second minor mode, is the third of the seven modes. In C major the sequence of notes begins on E: E, F, G, A, B, C, D, E. Like the Dorian mode, the Phrygian is nearly identical to the Aeolian, but with a flat 2nd, giving the mode a dark and tense feel. This note sequence is especially tasty for metal tracks, as with “Wherever I May Roam” by Metallica. The flat 2nd gives the Phrygian its unique character and an impending, negative mood, unexpected by most modern listeners accustomed to the whole step from the first to the second note in both the normal major and minor scales (Ionian and Aeolian).
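That the flat 2nd is the *only* difference between Phrygian and Aeolian is easy to confirm in a short sketch (the `degrees` helper is hypothetical): converting each step pattern into semitone distances from the tonic shows the two profiles agree everywhere except the 2nd degree.

```python
PHRYGIAN_STEPS = [1, 2, 2, 2, 1, 2, 2]  # H W W W H W W
AEOLIAN_STEPS  = [2, 1, 2, 2, 1, 2, 2]  # natural minor, for comparison

def degrees(steps):
    """Semitone distance of each scale degree from the tonic."""
    out, total = [0], 0
    for step in steps[:-1]:  # the last step just returns to the octave
        total += step
        out.append(total)
    return out

# Difference per degree: only the 2nd is altered, one semitone lower.
diff = [p - a for p, a in zip(degrees(PHRYGIAN_STEPS), degrees(AEOLIAN_STEPS))]
```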
Interestingly, E Phrygian contains the same notes as A natural minor. The sequence of steps is H, W, W, W, H, W, W, and this sequence yields a mysterious-sounding mode, sometimes coined the “Gypsy mode.” The notes on the tonic constitute an E minor chord OR an E major chord, which, with a C major scale played on top, is reminiscent of “Spanish music,” as guitarist John Heussenstamm demonstrates:
The Phrygian mode is named after the ancient kingdom of Phrygia in Anatolia. Its music contributed to Greek musical traditions through the Greek colonies, and the mode came to be associated with combat and war. In fact, according to scholars, the ethnic name Phrygian described the wild and passionate people of these mountainous regions of Anatolia. It would make sense, then, that music derived from this mode does not fit neatly into the traditionally common Western Ionian/Aeolian box, being of the strange and the wild. Nor is it surprising that, with its wildness and power, it lends itself to heavy metal.
Mode VI: Aeolian
The Aeolian is the third minor mode, the sixth of the seven modes. Its series of pitches corresponds to the natural minor scale in Western music. In 1547, music scholar Henricus Glareanus first named and described the Aeolian in his treatise on music, Dodecachordon. To the eight church modes that had dominated for 600 years he added four newer major and minor modes, the Aeolian among them (the others being the Hypoaeolian, Ionian, and Hypoionian). The Aeolian used A as its tonic and matches the relative minor of C major: A, B, C, D, E, F, G, A, with a flat 3rd, 6th, and 7th relative to the parallel major. As with the other modes, the Aeolian was named after an ancient Greek people – the inhabitants of Aeolis.
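Both claims above – that the Aeolian is simply C major begun on its 6th note, A, and that it flattens the 3rd, 6th, and 7th of the parallel major – can be verified in a small sketch:

```python
# The relative minor: the C major scale rotated to start on its 6th note, A.
C_MAJOR_NOTES = ["C", "D", "E", "F", "G", "A", "B"]
A_AEOLIAN = C_MAJOR_NOTES[5:] + C_MAJOR_NOTES[:5]

# Degree profiles in semitones from the tonic.
MAJOR_DEGREES   = [0, 2, 4, 5, 7, 9, 11]  # Ionian
AEOLIAN_DEGREES = [0, 2, 3, 5, 7, 8, 10]  # natural minor

# Which scale degrees sit a semitone lower than in the parallel major?
flattened = [i + 1
             for i, (M, m) in enumerate(zip(MAJOR_DEGREES, AEOLIAN_DEGREES))
             if m == M - 1]
```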
The sound and feel of the Aeolian mode, i.e. the minor scale, is commonly known even among non-musicians. It is, quite simply, the opposite of the major scale. While the major scale, the Ionian, is bright, happy, cheery, and optimistic, the Aeolian is dark, sad, foreboding, and heavy. Often songwriters will move from a major scale to its minor counterpart during a transition or bridge. Again, it is striking that simply by rearranging the exact same notes from Ionian to Aeolian one can create an incredibly different sound and feel. REM’s “Losing My Religion” is in Aeolian, the natural minor:
Mode VII: Locrian
The fourth minor mode is the Locrian, the final of the seven modes. B is the tonic, with intervals H, W, W, H, W, W, W and notes B, C, D, E, F, G, A, B. The triad based on its tonic is a diminished chord, containing the dissonant interval from B to F termed a “tritone” – an interval of three whole tones. While Glareanus in 1547 happily added the Aeolian to the canon of acceptable modes, the Locrian was left out, as resolving on B creates the dissonant tritone. The mode is named after the Greek regions of Locris. Yet while the name Locrian harkens back to that era, the mode is rarely used, and finding examples of its use is difficult.
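The tritone at the Locrian’s heart can be counted directly. In this sketch (`semitones` is a hypothetical helper), the interval from B up to F is six semitones, i.e. three whole tones, and B–D–F stacks a minor 3rd under that diminished 5th to form the diminished triad:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def semitones(low, high):
    """Ascending interval from one note up to the next occurrence of another."""
    return (NOTES.index(high) - NOTES.index(low)) % 12

tritone = semitones("B", "F")  # diminished 5th of the B diminished triad
third = semitones("B", "D")    # minor 3rd of the triad
```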
The dissonant tritone was not accepted into music for centuries, as it fell under the label “diabolus in musica,” meaning the devil in music. The tritone, and hence the Locrian, was forbidden until the Baroque era, when it came to be used within limits. The sound of the Locrian can be sinister and unsettling; it is used over half-diminished chords, and C Locrian, for example, has the same pitches as B-flat Aeolian and D-flat Ionian. It uses combinations of notes outside the norm of Western music and tends to be avoided unless, perhaps, one wants something very horrific and disturbing. Some musicians – heavy metal, of course – do use the mode as a scale to build riffs, such as in “Enter Sandman” by Metallica, as noted by Christopher Smith in “What are some pieces that use the Locrian mode” on quora.com.
Smith goes on to explain that, as stated, the Locrian mode doesn’t sit well with Western ears, but that it is used frequently in the music of South Asia, the Middle East, and North Africa; some Egyptian and Persian melodies in the folk tradition adhere to the Locrian mode. He writes further that three techniques have been used to solve the resolution problem of the Locrian cadence: 1. End on an octave note rather than a final chord. 2. Substitute a minor chord for the diminished chord, but only as the final chord. 3. End on a flat 6th chord, which serves as an ending yet leaves the piece feeling not quite resolved. In Western music, though, it can be used to rock, as with this demo of “In Your Words” by Lamb of God.
Because of this open-ended and unappealing feel, the Locrian mode has been called a “theoretical mode.” In other words, it exists in theory just as fine and dandy as the other modes, but in practice it is not widely used. The theory explains why: since the Locrian does not have a perfect 5th above its tonic, it sounds unstable and is unable to resolve if one intends to adhere to it strictly, which essentially no one does. The mode is reminiscent of the Lydian major mode in that it floats rather than grounds itself. And, perfectly, Bjork, who uses the Lydian mode a bit in “Possibly Maybe,” also uses the Locrian briefly in the bassline of “Army of Me.” This makes sense, of course, as Bjork is the quintessential experimenter.
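The claim that the Locrian alone lacks a perfect 5th can be checked against the degree profiles of all seven modes (a sketch; the tables below are the standard semitone spellings of each mode):

```python
# Each mode as semitone distances of its degrees from the tonic.
MODES = {
    "Ionian":     [0, 2, 4, 5, 7, 9, 11],
    "Dorian":     [0, 2, 3, 5, 7, 9, 10],
    "Phrygian":   [0, 1, 3, 5, 7, 8, 10],
    "Lydian":     [0, 2, 4, 6, 7, 9, 11],
    "Mixolydian": [0, 2, 4, 5, 7, 9, 10],
    "Aeolian":    [0, 2, 3, 5, 7, 8, 10],
    "Locrian":    [0, 1, 3, 5, 6, 8, 10],
}

# A perfect fifth is 7 semitones above the tonic.
lacks_perfect_fifth = [name for name, degs in MODES.items() if 7 not in degs]
```

Only the Locrian fails the check: its 5th degree sits at 6 semitones, the diminished 5th, which is exactly the unresolvable tritone discussed above.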
What is striking about these modes and their unique characters and the emotions that they elicit is this: they point to a pre-existing structure and pull us toward obeying that structure. A perfect fifth is a necessity, at least in Western music, or it sounds dissonant. Why? As with Hans Jenny’s work discussed in another blog post here on Cymatics, there is evidence of pre-existing order that humans are tapping into. To me, it’s the same question I ask regarding math. Did we invent math or did we discover it? Did we invent music or did we discover it?