
Sound Systems: Design and Optimization, Third Edition

Sound Systems: Design and Optimization provides an accessible and unique perspective on the behavior of sound systems in the practical world. The third edition reflects current trends in the audio field, thereby providing readers with the newest methodologies and techniques. In this greatly expanded new edition, you’ll find clearer explanations, a more streamlined organization, increased coverage of current technologies and comprehensive case studies of the author’s award-winning work in the field. As the only book devoted exclusively to modern tools and techniques in this emerging field, Sound Systems: Design and Optimization provides the specialized guidance needed to perfect your design skills.

This book helps you:
■ Improve your design and optimization decisions by understanding how audiences perceive reinforced sound.
■ Use modern analyzers and prediction programs to select speaker placement, equalization, delay and level settings based on how loudspeakers interact in the space.
■ Define speaker array configurations and design strategies that maximize the potential for spatial uniformity.
■ Gain a comprehensive understanding of the tools and techniques required to generate a design that will create a successful transmission/reception model.

Bob McCarthy is the Director of System Optimization at Meyer Sound and President of Alignment & Design, Inc. As a developer of FFT analysis systems, he pioneered the methods for tuning modern speaker systems that have since become standard practice in the industry. He is the foremost educator in the field of sound system optimization and has conducted training courses worldwide for over thirty years. Bob received the USITT Distinguished Achiever in Sound Design in 2014.
His clients have included esteemed companies such as Cirque du Soleil and Walt Disney Entertainment, as well as many of the world’s best sound designers, including Jonathan Deans, Tony Meola, Andrew Bruce and Tom Clark.

Sound Systems: Design and Optimization
Modern Techniques and Tools for Sound System Design and Alignment

Third Edition

Bob McCarthy

Third edition published 2016 by Focal Press
711 Third Avenue, New York, NY 10017
and by Focal Press
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN

Focal Press is an imprint of the Taylor & Francis Group, an informa business.

© 2016 Bob McCarthy

The right of Bob McCarthy to be identified as the author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Notices
Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.

Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

First published XXX by XX
Second edition published XXX by XX

Library of Congress Cataloging-in-Publication Data
McCarthy, Bob.
Sound systems: design and optimization: modern techniques and tools for sound system design and alignment / by Bob McCarthy. — Third edition.
pages cm
Includes bibliographical references and index.

ISBN 978-0-415-73099-0 (hardback) — ISBN 978-0-415-73101-0 (pbk.) — ISBN 978-1-315-84984-3 (ebk)
1. Sound—Recording and reproducing. I. Title.
TK7881.4.M42 2016
621.389'3—dc23
2015016842

ISBN: 978-0-415-73101-0 (pbk)
ISBN: 978-0-415-73099-0 (hbk)
ISBN: 978-1-315-84984-3 (ebk)

Typeset in Giovanni by Apex CoVantage, LLC

Dedication To the woman who knew me back when this journey began, and is still there through three editions, the love of my life, Merridith.

In memoriam During the course of writing the first edition our field lost one of its most well-loved and respected pioneers, Don Pearson, a.k.a. Dr Don. I had the good fortune to work with Don and receive his wisdom. He was there when it all began and is still missed. More recently we lost Tom Young and Mike Shannon, who contributed so much to the industry and to me personally. And finally, I lost my brother Chris, who was an inspiration to all who knew him.


Preface This book is about a journey. On the one hand, the subject is the journey of sound as it travels through a sound system, then through the air, and inevitably to a listener. It is also a personal journey, my own quest to understand the complicated nature of this sound transmission. The body of this text will detail the strictly technical side of things. First, however, I offer you some of the personal side. I was supposed to build buildings. Unbeknownst to me at the time, this calling was derailed on February 9, 1964 by the appearance of the Beatles on The Ed Sullivan Show. Like so many of my generation, I was changed by this landmark event, which brought popular music and an electric guitar into my life. I became a great enthusiast of live concerts, which I regularly attended throughout my youth at every chance presented. For years, it remained my expectation that I would enter the family construction business. This vision ended on a racetrack in Des Moines, Iowa on June 16, 1974. The experience of hearing the massive sound system at this Grateful Dead concert set my life in a new direction. On that day I made the decision that I was going to work in live concert sound. I wanted to help create this type of experience for others. I would be a mix engineer and my dream was to one day operate the mix console for big shows. I set my sights on preparing for such a career while at Indiana University. This was no simple matter because there was no such thing as a degree in audio. I soon discovered the Independent Learning Program. Under the auspices of that department, I assembled a mix of relevant courses from different disciplines and graduated with a college-level degree in my self-created program of audio engineering.

FIGURE 0.1 Ticket stub from the June 16, 1974 Grateful Dead concert in Des Moines, Iowa that led to my life of crime

By 1980, I had a few years of touring experience under my belt and had moved to San Francisco. There I forged relationships with John Meyer, Alexander Yuill-Thornton II (Thorny) and Don Pearson. These would become the key relationships in my professional development. Each of us was destined to stake our reputations on the same piece of equipment: the dual-channel FFT analyzer. I would like to say that I have been involved in live concert measurement with the dual-channel FFT
analyzer from day one, but this is not the case. It was day two. John Meyer began the process on a Saturday night in May of 1984. John took the analyzer, an analog delay line and some gator clips to a Rush concert in Phoenix, Arizona, where he performed the first measurements of a concert sound system using music as the source with audience in place. I was not destined to become involved in the project until the following Monday morning.

FIGURE 0.2 The author with the prototype SIM analyzer with the Grateful Dead in July 1984 at the Greek Theater in Berkeley, California (Clayton Call photo)

From that day forward, I have never been involved in a concert or a sound system installation without the use of a dual-channel FFT analyzer. I haven’t mixed a show since that day, resetting my vision to the task of helping mix engineers to practice their art. For Don, John, Thorny and many others, the idea of setting up a system without the presence of the FFT analyzer was unthinkable. Seeing a sound system response in high resolution, complete with phase, coherence and impulse response, is a bell that cannot be un-rung. We saw its importance and its practical implications from the very beginning and knew the day would come when this would be standard practice. Our excitement was palpable, with each concert resulting in an exponential growth in knowledge. We introduced it to everyone who had an open mind to listen. The first product to come from the FFT analysis process was a parametric equalizer. A fortuitous coincidence of timing resulted in my having etched the circuit boards for the equalizer on my back porch over the weekend that John was in Phoenix with Rush. This side project (a bass guitar preamp) for my friend Rob Wenig was already six months late, and was destined to be even later. The EQ was immediately pressed into service when John nearly fell over as he saw that it could create the complementary response (in both amplitude and phase) to what he had measured in Phoenix. The CP-10 was born into more controversy than one might imagine. Equalization has always been an emotional “hot button,” but the proposition that the equalizer was capable of counteracting the summation properties of the speaker/room interaction was radical enough that we obtained the support of Stanford’s Dr Julius Smith to make sure that the
theory would hold up. Don Pearson was the first outside of our company to apply the concepts of in-concert analysis in the field. Don was owner of Ultrasound and was touring as the system engineer for the Grateful Dead. Don and the band immediately saw the benefit and, lacking patience to wait for the development of what would become the Meyer Sound SIM System, obtained their own FFT analyzer and never looked back. Soon thereafter, under the guidance of San Francisco Opera sound designer Roger Gans, we became involved with arena-scale performances for Luciano Pavarotti. We figured it was a matter of months before these techniques would become standard operating procedure throughout the industry. We had no idea it would take closer to twenty years! The journey, like that of sound transmission, was far more complex than we ever expected. There were powerful forces lined up against us in various forms: the massive general resistance of the audio community to sound analyzers and the powerful political forces advocating for alternate measurement platforms, to name a few. In general, the live sound community was massively opposed to what they conceptualized as an analyzer dictating policy to the creative forces involved in the music side of the experience. Most live concert systems of the day lacked complexity beyond piles of speakers with left and right channels. This meant that the process of alignment consisted of little more than equalization. Because all of the system calibration was being carried out at a single location, the mix position, the scientific and artistic positions were weighing in on the exact same question at the same point in space. Endless adversarial debate about what was the “correct” equalization ensued because the tonal balancing of a sound system is, and always has been, an artistic endeavor. It was an absurd construct. Which is better—by ear or by analyzer?

FIGURE 0.3 November 1984 photo of Luciano Pavarotti, Roger Gans, the author (back row), Drew Serb, Alexander Yuill-Thornton II and James Locke (front row) (Drew Serb photo)

This gave way to a more challenging and interesting direction for us: the quest beyond the mix position. Moving the mic out into the space left us with a terrible dilemma: The new positions revealed conclusively that the one-size-fits-all version of system equalization was utter fantasy. The precision tuning of parametric filters carried out with great care for the mix position had no justification at other locations. The interaction of the miscellaneous parts of the speaker system created a highly variable response throughout the room. Our focus shifted from finding a perfect EQ to the quest for uniformity over the space. This would require the subdivision of the sound system into defined and separately adjustable subsystems, each with individual level, equalization and delay capability. The subsystems were then combined into a unified whole. The rock and roll community was resistant to the idea, primarily because it involved turning some of the speakers down in level. The SPL Preservation Society staunchly opposed anything that might detract from the maximum power capability. Uniformity by subdivision was not worth pursuing if it cost power (pretty much nothing else was either). Without subdivision, the analysis was pretty much stuck at the mix position. If we are not going to change anything, why bother to look further? There were other genres that were open to the idea. The process required the movement of a microphone around the room and a systematic approach to deconstructing and reconstructing the sound system. We began developing this methodology with the Pavarotti tours. Pavarotti was using approximately ten subsystems, which were individually measured and equalized and then merged together as a whole. Our process had to be honed to take on even more complexity when we moved into the musical theater world with Andrew Bruce, Abe Jacob, Tony Meola, Tom Clark and other such sound designers. 
Our emphasis changed from providing a scientifically derived tonal response to maximizing consistency of sound throughout the listening space, leaving the tonal character in the hands of the mix engineer. Our tenure as the “EQ police” was over as our emphasis changed from tonal quality to tonal equality. The process was thus transformed into optimization, emphasizing spatial uniformity while encompassing equalization, level setting, delay setting, speaker positioning and a host of verifications on the system. A clear line was drawn between the artistic and the scientific sectors. In the early days, people assessed the success of a system tuning by punching out the filters of the equalizer. Now, with our more sophisticated process, we could no longer re-enact before-and-after scenarios. To hear the “before” sound might require repositioning the speakers, finding the polarity reversals, setting new splay angles, resetting level and time delays, and finally a series of equalizations for the different subsystems. Lastly, the role of the optimization engineer became clear: to ensure that the audience area receives the same sound as the mix position. In 1987, we introduced the Source Independent Measurement system (SIM). This was the first multichannel FFT analysis system designed specifically for sound system optimization (up to sixty-four channels). It consisted of an analyzer, multiple mics and switchers to access banks of equalizers
and delays. All of this was under computer control, which also kept a library of data that could be recalled for comparison of up to sixteen different positions or scenarios. It thereby became possible to monitor the sound system from multiple locations and clearly see the interactions between subsystems. It was also possible to make multiple microphone measurements during a performance and to see the effects of the audience presence throughout the space. This is not to say we were on Easy Street at this point. It was a dizzying task to manage the assembly of traces that characterized a frequency response, which had to be measured in seven separate linear frequency sections. A single data set to fully characterize one location at a point in time was an assembly of sixty-three traces, of which only two could be seen at any one time on the tiny 4-inch screen. Comparison of one mic position to another had to be done on a trace-by-trace basis (up to sixty-three operations). It was like trying to draw a landscape while looking through a periscope. The multichannel measurement system opened the door to system subdivision. This approach broke the pop music sound barrier with Japanese sensation Yuming Matsutoya under the guidance of Akio Kawada, Akira Masu and Hiro Tomioka. In the arenas across Japan we proved that the techniques we had developed for musical theater and Pavarotti (level tapering, zoned equalization and a carefully combined subsystem) were equally applicable to high-power rock music in a touring application. The introduction of the measurement system as a product was followed by the first training seminar in 1987. A seminal moment came from an unexpected direction during this first seminar as I explained the process of system subdivision and mic placement for system optimization. Dave Robb, a very experienced engineer, challenged my mic placement as “arbitrary.” In my mind, the selection was anything but arbitrary. 
However, I could not, at that moment, bring forth any objective criteria with which to refute that assertion. Since that humiliating moment, my quest has been to find a defensible methodology for every decision made in the process of sound system optimization. It is simply not enough to know that something works; we must know why it works. Those optimization methodologies and an accompanying set of methods for sound system design are the foundation of this book. I knew nothing of sound system design when this quest began in 1984. Almost everything I have learned about the design of sound systems comes from the process of their optimization. The process of deconstructing and reconstructing other people’s designs gave me the unique ability/perspective to see the aspects that were universally good, bad or ugly. I am very fortunate to have been exposed to all different types of designs, utilizing many different makes and models of speakers, with all types of program materials and scales. My approach has been to search for the common solutions to these seemingly different situations and to distill them into a repeatable strategy to bring forward to the next application. Beginning with that very first class, with little interruption, I have been optimizing sound systems
and teaching anybody who wanted to attend my seminars everything I was learning. Thorny, meanwhile, had moved on and founded a company whose principal focus was sound system optimization services using the dual-channel FFT systems. Optimization as a distinct specialty had begun to emerge. The introduction of SIA-SMAART in 1995 resulted from the collaboration of Thorny and Sam Berkow with important contributions by Jamie Anderson and others in later years. This low-cost alternative brought the dual-channel FFT analyzer into the mainstream and made it available to audio professionals at every level. Even so, it took years before our 1984 vision of the FFT analyzer, as standard front-of-house equipment, would become reality. Unquestionably, that time has arrived. The paradigm has reversed to the point where tuning a system without scientific instrumentation would be looked at with as much surprise as was the reverse in the old days. Since those early days we have steadily marched forward with better tools—better sound systems, better sound design tools and better analyzers. The challenge, however, has never changed. It is unlikely that it will change, because the real challenge falls mostly in the spatial distribution properties of acoustical physics. The speakers we use to fill the room are vastly improved and signal-processing capability is beyond anything we dreamed of in those early days. Prediction software is now readily available to illustrate the interaction of speakers, and we have affordable and fast analyzers to provide the on-site data. And yet we are fighting the very same battle that we have always fought: the creation of a uniform sonic experience for audience members seated everywhere in the venue. It is an utterly insurmountable challenge. It cannot be achieved. There is no perfect system configuration or tuning. The best we can hope for is to approach uniformity. I believe it is far better to be coldly realistic about our prospects.
We will have to make decisions that we know will degrade some areas in order to benefit others. We want them to be informed decisions, not arbitrary ones. This book follows the full transmission path from the console to the listener. That path has gone through remarkable changes along its entire electronic voyage. But once the waveform is transformed into its acoustic form it enters the very same world that Jean-Baptiste Joseph Fourier found in the early nineteenth century and Harry Olson found in the 1940s. Digital, schmigital. Once it leaves the speaker, the waveform is pure analog and at the mercy of the laws of acoustical physics. These unchanging aspects of sound transmission are the focus of 90 per cent of this book. Let’s take a moment to preview the challenges we face. The primary player is the interaction of speakers with other speakers, and with the room. These interactions are extremely complex on the one hand, and yet can be distilled down to two dominant relationships: relative level and relative phase. The combination of two related sound sources will create a unique spatial distribution of additions and subtractions over the space. The challenge is the fact that each frequency combines differently, creating a unique layout. Typical sound systems have a frequency range of 30 to 18,000
Hz, which spans a 600:1 ratio of wavelengths. A single room, from the perspective of spatial distribution over frequency, is like a 600-story skyscraper with a different floor plan at every level. Our job is to find the combination of speakers and room geometry that creates the highest degree of uniformity for those 600 floor plans. Every speaker element and surface will factor into the spatial distribution. Each element plays a part in proportion to the energy it brings to the equation at a point in the space. The combined level will depend upon the relationship between the individual phase responses at each location at each frequency. How do we see these floor plans? With an acoustic prediction program we can view the layout of each floor, and compare them and see the differences. This is the viewpoint of a single frequency range analyzed over the entire space. With an acoustic analyzer we get a different view. We see a single spot on each floor from the foundation to the rooftop through a piece of pipe as big around as our finger. This is the viewpoint of a single point in space analyzed over the entire frequency range. This is a daunting task. But it is comprehensible. This book will provide you with the information required to obtain the X-ray vision it takes to see through the 600-story building from top to bottom, and it can be done without calculus, integral math or differential equations. We let the analyzer and the prediction program do the heavy lifting. Our focus is on how to read X-rays, not on how to build an X-ray machine. The key to understanding the subject, and a persistent theme of this book, is sound source identity. Every speaker element, no matter how big or small, plays an individual role, and that solitary identity is never lost. Solutions are enacted locally on an element-by-element basis. We must learn to recognize the individual parties to every combination, because therein lie the solutions to their complex interaction. 
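The arithmetic behind these figures is easy to check. The sketch below is my own minimal illustration, not taken from the text: it assumes a speed of sound of 343 m/s and simple summation of two fully correlated sources, and computes the 600:1 wavelength ratio along with the combined level produced by a given relative level and relative phase.

```python
# Wavelength ratio across a typical system's 30 Hz - 18 kHz range, and the
# summed level of two correlated sources at one point in space.
import math

SPEED_OF_SOUND = 343.0  # m/s, assumed room-temperature value


def wavelength(freq_hz):
    """Wavelength in meters for a given frequency."""
    return SPEED_OF_SOUND / freq_hz


def combine(level_db_diff, phase_deg_diff):
    """Combined level (dB, relative to the louder source alone) of two
    correlated sources differing by level_db_diff (dB) and
    phase_deg_diff (degrees)."""
    a = 10 ** (-abs(level_db_diff) / 20)  # quieter source's amplitude
    phi = math.radians(phase_deg_diff)
    mag = math.hypot(1 + a * math.cos(phi), a * math.sin(phi))
    return 20 * math.log10(mag)


print(round(wavelength(30) / wavelength(18000), 6))  # 600.0 -> the 600:1 ratio
print(round(combine(0, 0), 1))     # 6.0 dB: equal level, in phase
print(round(combine(0, 120), 1))   # ~0 dB: 120 degrees apart, no net change
print(round(combine(12, 180), 1))  # shallow dip: level offset limits cancellation
```

Equal-level in-phase overlap adds 6 dB, while equal-level out-of-phase overlap can cancel without limit; a 12 dB level offset caps the dip at roughly 2.5 dB, which is one way of seeing why isolation by level is a fallback when phase cannot be controlled.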
This is not a mystery novel, so there is no need to hide the conclusion until the last pages. The key to spatial uniformity is control of the overlap of the multiple elements. Where two elements combine they must be in phase to obtain spatial uniformity. If the elements cannot maintain an in-phase relationship, then they must decrease the overlap and subdivide the space so that one element takes a dominant role in a given area. There are two principal mechanisms to create isolation: angular separation and displacement. These can be used separately or in combination and can be further aided by independent control of level to subdivide the room. This is analogous to raising children: If they don’t play well together, separate them. The interaction of speakers with the room is similar to the interaction of speakers with other speakers. Those surfaces that return energy back toward our speakers will be the greatest concern. The strength of the inward reflections will be inversely proportional to our spatial uniformity. There is no single design for a single space. There are alternate approaches and each involves tradeoffs. There are, however, certain design directions that keep open the possibility of spatial uniformity and others that render such hopes statistically impossible. A major thrust of the text will be devoted to defining the speaker configurations and design strategies that maximize the potential
for spatial uniformity. Once designed and installed, the system must be optimized. If the design has kept the door open for spatial uniformity, our task will be to navigate the system through that door. The key to optimization is the knowledge of the decisive locations in the battle for spatial uniformity. The interactions of speakers and rooms follow a consistent set of spatial progressions. The layering of these effects over each other provides the ultimate challenge, but there is nothing random about this family of interactions. It is logical and learnable. Our measurement mics are the information portals to decipher the variations between the hundreds of floor plans and make informed decisions. Our time and resources are limited. We can only discern the meaning of the measured data if we know where we are in the context of the interaction progressions. We have often seen the work of archeologists where a complete rendering of a dinosaur is created from a small sampling of bone fragments. Their conclusions are based entirely on contextual clues gathered from the knowledge of the standard progressions of animal anatomy. If such progressions were random, there would be nothing short of a 100 per cent fossil record that could provide answers. From a statistical point of view, even with hundreds of mic positions, we will never be able to view more than a few tiny fragments of our speaker system’s anatomy in the room. We must make every measurement location count toward the collection of the data we need to see the big picture. This requires advance knowledge of the progression milestones so that we can view a response in the context of what is expected at the given location. As we shall see, there is almost nothing that can be concluded from a single location. The verification of spatial uniformity rests on the comparison of multiple locations. 
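As a concrete illustration of why every frequency has its own "floor plan": a fixed displacement between two elements produces a fixed path-length difference at a given seat, but a different relative phase at every frequency. The toy model below is my own sketch, not the author's tooling; the source and listener positions and the 343 m/s speed of sound are assumed values chosen only for demonstration.

```python
# A fixed path-length difference between two displaced sources maps to a
# different relative phase at every frequency, so the same seat sums
# differently at every frequency.
import math

SPEED_OF_SOUND = 343.0  # m/s (assumed)


def path_difference(listener, src_a, src_b):
    """Difference (m) between the listener's distances to the two sources."""
    return abs(math.dist(listener, src_a) - math.dist(listener, src_b))


def phase_offset_deg(path_diff_m, freq_hz):
    """Relative phase (degrees, wrapped to 0-360) from a path difference."""
    wavelength = SPEED_OF_SOUND / freq_hz
    return (path_diff_m / wavelength * 360.0) % 360.0


# Two sources 2 m apart, listener 10 m downrange and 3 m off the center line.
diff = path_difference((10.0, 3.0), (0.0, 1.0), (0.0, -1.0))
for f in (100, 1000, 10000):
    # same seat, three very different phase relationships
    print(f, round(phase_offset_deg(diff, f), 1))
```

For these assumed positions the path difference is roughly 0.57 m: about 60° at 100 Hz but many full wraps by 10 kHz, so a seat that sums constructively at one frequency can cancel at another. This frequency-by-frequency variation is what the measurement and optimization process described above must manage.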
This book is about defined speakers in defined array configurations, with defined optimization strategies, measured at defined locations. This book is not intended to be a duplication of the general audio resource texts. Such books are available in abundance and it is not my intention to encompass the width and breadth of the complete audio picture. My hope is to provide a unique perspective that has not been told before, in a manner that is accessible to the audio professionals interested in a deeper understanding of the behavior of sound systems in the practical world. There are a few points that I wish to address before we begin. The most notable is the fact that the physical realities of loudspeaker construction, manufacture and installation are largely absent. Loudspeakers are described primarily in terms of acoustic performance properties, rather than the physical nature of what horn shape or transducers were used to achieve it. This is also true of electronic devices. Everything is weightless, colorless and odorless here. The common transmission characteristics are the focus, not the unique features of one model or another. The second item concerns the approach to particular types of program material such as popular music, musical theater or religious services, and their respective venues such as arenas, concert halls, showrooms or houses of worship. The focus here is the shape of the sound coverage, the scale of
which can be adjusted to fit the size of the venue at the appropriate sound level for the given program material. It is the venue and the program material taken together that create an application. The laws of physics are no different for any of these applications, and the program material and venues are so interchangeable that attempts to characterize them in this way would require endless iterations. After all, the modern-day house of worship is just as likely to feature popular music in an arena setting as it is to have speech and chant in a reverberant cathedral of stone. The third notable aspect is that there are a substantial number of unique terminologies found here and, in some cases, modification of standard terminologies that have been in general use. In most cases the conceptual framework is unique and no current standard expressions were found. The very young field of sound system optimization has yet to develop consistent methods or a lexicon of expressions for the processes shown here. In the case of some of these terms, most notably the word “crossover,” there are compelling reasons to modify the existing usage, which will be revealed in the body of the text. The book is divided into three parts. The first part, “Sound systems,” explores the behavior of sound transmission systems, human hearing reception and speaker interaction. The goal of this part is a comprehensive understanding of the path the signal will take, the hazards it will encounter along the way and how the end product will be perceived upon arrival at its destination. The second part, “Design,” applies the properties of the first part to the creation of a sound system design. The goals are comprehensive understanding of the tools and techniques required to generate a design that will create a successful transmission/reception model. The final part, “Optimization,” concerns the measurement of the designed and installed system, its verification and calibration in the space.

About the Third Edition From the viewpoint of my publisher, Focal Press, this is indeed the third edition of Sound Systems: Design and Optimization. From my perspective it feels more like the thirtieth edition, because I have been writing about these same subjects for more than thirty years. You might think I would have figured this subject out by now but I can assure you I am still learning. This field of work continues to evolve as we get better tools and techniques, which is exactly what I find most interesting about it. Study in this field is a moving target as new technology opens doors and removes obstacles and excuses. The more I learn about this, the more I realize how much I have to learn. Adding new areas is the easy part of creating a new edition. It’s what to do with the previous material that presents a challenge. There are two ways to approach the old material: innocent until proven guilty, or the opposite. The former approach leaves things in unless they are conclusively out of date or irrelevant. The latter approach throws the old material out unless it can prove it is still current practice and up to date. I studied the later editions of several other authors’ books and noticed a troubling trend. Although new information was added in later editions, a lot of old information remained in place. Seeing vacuum tube circuits from the 1960s in a current-day pro audio text was a tipping point for me. The decision was made to trim out the old to make room for the new. If it’s the way we do things now, it’s in. If we’ve moved on, it’s out. The surprise for me was how much we have moved forward in this time, which meant entire chapters were bulldozed and rebuilt. One of the hardest decisions was to let go of the perspective sidebars that colored the previous editions with the wisdom and insight of so many of my friends and colleagues. The bottom line is that there is simply too much new information to be added.
I take comfort in knowing that there are many other places where those voices can be heard and that optimization is now firmly ensconced as part of the audio landscape. There have been no updated laws of physics and our audio analyzers still compute things the same way as they did in 1991. But today’s analyzers are faster, easier and able to multitask, which means we can get much more done in a short time. We can tune methodically, and methods are what this book is about. We have far better loudspeakers, processors and steadily better rooms to work in. All this leads to the primary goal of this third edition: current methodologies and techniques for sound system design and optimization.

Acknowledgements The development of this book spans more than thirty years in the field of sound system optimization. Were it not for the discoveries and support of John and Helen Meyer, I would have never become involved in this field. They have committed substantial resources to this effort, which have directly helped the ongoing research and development leading up to this writing. In addition, I would like to acknowledge the contribution of every client who gave me the opportunity to perform my experiments on their sound systems. Each of these experiences yielded an education that could not be duplicated elsewhere. In particular I would like to thank David Andrews, Peter Ballenger, Nick Baybak, Mark Belkie, Mike Brown, Andrew Bruce, John Cardenale, Tom Clark, Mike Cooper, Jonathan Deans, François Desjardin, Steve Devine, Martin Van Dijk, Steve Dubuc, Duncan Edwards, Aurellia Faustina, T. C. Furlong, Roger Gans, Scott Gledhill, Michael Hamilton, Andrew Hope, Abe Jacob, Akio Kawada, Andrew Keister, Tony Meola, Ben Moore, Philip Murphy, Kevin Owens, Frank Pimiskern, Bill Platt, Marvic Ramos, Harley Richardson, Paul Schmitz, David Sarabiman, Pete Savel, Rod Sintow, Bob Snelgrove, David Starck, Benny Suherman, Leo Tanzil and Geoff Zink, all of whom have given me multiple opportunities through the years to refine the methods described here. Special thanks to Mr O at the National Theatre of Korea, Mr Song at Dongseo University, Mr Lee and others at the LG Art Center. I have also learned much from other engineers who share my passion for this field of work and constantly challenge me with new ideas and techniques. This list would be endless and includes but is not limited to Brian Bolly, Michael Creason, Ales Dravinec, Josh Evans, Peter Grubb, Glenn Hatch, Luke Jenks, Miguel Lourtie, Karoly Molnar, John Monitto, Matt Salerno, John Scandrett and Robert Scovill. 
I would also like to thank Meyer Sound for sponsoring my seminars and Gavin Canaan and others for organizing them. I am grateful to everyone who has attended my seminars, as the real-time feedback in that context provides a constant intellectual challenge and stimulation for me. My fellow instructors in this field have contributed much collaborative effort through discussion and the sharing of ideas. Notable among these are Jamie Anderson, Oscar Barrientos, Timo Beckman, Sam Berkow, Harry Brill, Richard Bugg, Steve Bush, Jim Cousins, Pepe Ferrer, Michael Hack, Mauricio Ramirez, Arthur Skudrow, Hiro Tomioka, Merlijn Van Veen and Jim Woods. Thanks to Daniel Lundberg for his creation of the uncoupled array calculator based on data from my previous edition. This tool is a mainstay of my design process. A huge majority of the knowledge, data (and graphics) in this book comes from two sources: Meyer Sound’s SIM3 Audio Analyzer and their MAPP Online™ platform. I would have nothing to write about without these tools. My gratitude goes to everyone who contributed to their creation,

including (but not limited to) John and Helen Meyer, Perrin Meyer, Dr Roger Schwenke, Fred Weed, Todd Meier, Mark Schmeider, Paul Kohut and the late Jim Isom. The following figures contain data from my earlier publications at Meyer Sound and their permission is gratefully acknowledged: Figures 3.9, 4.24, 4.27, 13.12 to 13.16. The data presented in Figures 1.1, 1.4, 1.11 and 12.11 were created using the calculations by Mauricio Ramirez. The 3-D wraparound graphics (Figure 12.8) were adapted from the animations created by Greg Linhares. Merlijn Van Veen contributed a mix of core calculations, data and graphics that went into the construction of Figures 1.12, 2.2, 2.5 to 2.12, 3.30, 3.37, 3.42, 4.8, 8.11, 8.14 and 14.10. John Huntington contributed some of the photographs used in the section break pages, specifically Section 1 (panels 1, 5 and 7) and Section 3 (panel 4). Thanks go to all of the people who aided in the process of bringing this edition to physical reality such as my editor Megan Ball and Mary LaMacchia at Focal Press. Additional thanks go to Margo Crouppen for her support throughout the entire publishing process. I received proofing and technical help for my previous editions from Jamie Anderson, Sam Berkow, David Clark, John Huntington, Mauricio Ramirez and Alexander (Thorny) Yuill-Thornton. Harry Brill Jr., Richard Bugg, Philip Duncan, John Huntington, Jeff Koftinoff and Mauricio Ramirez contributed to the third edition. Merlijn Van Veen deserves special recognition for his enormous contributions to this edition. He hung with me at every step of the way, pushing me to clarify language and methodology, and backing up my calculations. He also contributed greatly to the graphics, providing both material support and advice on how to best convey the information. I learned much from Merlijn in the course of this writing and his mark has clearly been left in these pages.
Finally, there is no one who comes close to contributing as much support as my wife Merridith. She has read every page and seen every drawing of every edition and been absolutely tireless in her efforts. Each edition has been an endurance test lasting over a year and she has stuck with me through each, swayed by my promises that this would be the last one (fooled her again). This book would not have been possible without her support as my agent, manager, copy editor, proofreader and cheerleader.

Part I Sound systems

Chapter 1 Foundation

We begin with the establishment of a firm foundation upon which to build the structure for the study of sound system design and optimization. Here we standardize definitions and terminology for usage throughout this book. If this is not your first day in audio you will already understand many of the concepts in this chapter, because much of this is the universal foundation material found in books, the Internet and the tribal knowledge passed down from elders on the road. I have, however, selectively edited the list of fundamentals to those concepts pertinent to modern-day design and optimization. We won’t cover the Doppler effect, underwater acoustics, industrial noise suppression and any other areas that we can’t put to immediate practical use.

foundation n. solid ground or base on which a building rests; groundwork, underlying principle; body or ground upon which other parts are overlaid.
Concise Oxford Dictionary

The next section will read somewhat like a glossary, which is traditionally placed at the rear of the book, and the last place you would normally read. These are, however, the first concepts we need to establish and keep in mind throughout. The foundation we lay here will ease the building process as we progress upward in complexity. We begin with the foundations of this book.

Sound

Sound is a vibration or mechanical wave that is an oscillation of pressure (a vibration back and forth) transmitted through some medium (such as air), composed of frequencies within the range of hearing.

System

A system is a set of interacting or interdependent components forming an integrated whole. A sound system consists of a connected collection of components whose purpose is to receive, process and transmit audio signals. The basic components consist of microphones, signal processing, amplifiers, speakers, interconnection cabling and digital networking.

Design

Design is the creative process of planning the construction of an object or system. We design sound systems in rooms by selecting the components, their function, placement and signal path.

Optimization

Optimization is a scientific process whose goal is to achieve the best result when given a variety of options. In our case, the goal is the maximization of sound system performance in conformance with the design intent. And do we ever have a variety of options! The primary metric for optimization is uniformity of response over the space.

1.1 Universal Audio Properties

Let’s define the universal properties within our limited field of study: the acoustical and analog electrical behavior of sound and its mathematical renderings in digital form.

1.1.1 Audio

Audio is a stream of data beginning and/or ending as sound. The audible version connects directly to our ears through the air. Audio can also exist in an encoded form that cannot be heard until decoded. The monumental breakthrough of Edison’s phonograph was encoding audio into a groove on a lacquer cylinder for playback through a mechanical decoder (a diaphragm attached to a moving needle). Encoded audio exists in many forms: magnetic flux (in tape, transformers, microphones or loudspeakers), electronic signal in a wire and even as a digital numerical sequence. Audio stream oscillations can be rendered as a sequential set of amplitude values over time. Analog audio renderings are a continuous function (i.e. the amplitude and time values are infinitely divisible). Digital audio renderings are finitely divisible (i.e. a single amplitude value is returned for each block of time).
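The analog/digital distinction above can be illustrated with a minimal sketch (Python, purely illustrative; the function name, sample rate and frequency are my choices, not values from the text): a digital rendering returns one amplitude value per block of time, so a continuous sine becomes a finite list of numbers.

```python
import math

def sample_sine(freq_hz, sample_rate_hz, n_samples):
    """Render a sine wave digitally: one amplitude value per block of time."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate_hz)
            for n in range(n_samples)]

# A 1 kHz tone sampled at 48 kHz: 48 samples cover exactly one cycle.
one_cycle = sample_sine(1000, 48000, 48)
```

Between any two of those 48 samples the analog waveform has infinitely many values; the digital rendering keeps only one per sampling interval.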

1.1.2 Frequency (f) and time (T)

Frequency (f or Hz) is the number of oscillations completed in one second (the reciprocal of the time period). Period (T) is the time interval to complete one cycle. Frequency is cycles/second and time is seconds/cycle (f = 1/T and T = 1/f). Either term describes an oscillation, the choice being for convenience only. Fluency in the translation of time and frequency is essential for design and optimization (Fig. 1.1). Period (T) is computed in seconds, but in practice we almost always express it in milliseconds (1 ms = 0.001 second).
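The reciprocal relationship above can be captured in two one-line helpers (a minimal Python sketch; the function names are mine, not standard terminology):

```python
def period_ms(freq_hz):
    """Period T in milliseconds: T = 1/f, scaled from seconds to ms."""
    return 1000.0 / freq_hz

def freq_hz(period_in_ms):
    """Frequency in Hz: f = 1/T, with T given in milliseconds."""
    return 1000.0 / period_in_ms

# 250 Hz completes a cycle every 4 ms; a 10 ms period implies 100 Hz.
```

Running either function through the other returns the starting value, which is the fluency the text asks for: 250 Hz and 4 ms are the same oscillation described in different units.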

1.1.3 Cycle

A cycle is a completed oscillation, a round trip that returns to the starting state of equilibrium. The distinction between cycle and period is simply units. A period is measured in time (usually ms) and a cycle is measured in completed trips. A cycle at 250 Hz has a period of 4 ms. One cycle @125 Hz (or two cycles @250 Hz) takes 8 ms. We often subdivide the cycle by fractions or degrees of phase, with 360° representing a complete cycle. It is common to use the term “cycle” when dealing with the phase response, e.g. 1 ms @250 Hz, which is ¼ cycle (90°).

1.1.4 Oscillation

Oscillation is the back and forth process of energy transfer through a medium. This may be mechanical (e.g. a shaking floor), acoustical (e.g. sound in the air) or electromagnetic (e.g. an electronic audio signal). The oscillating matter’s movement is limited by the medium and returns to equilibrium upon completion. Energy transfer occurs through the medium.

FIGURE 1.1 Relationship of time and frequency

1.1.5 Amplitude (magnitude)

Amplitude is the quantitative measure of oscillating energy, the extent of mechanical displacement (m, cm, etc.), acoustical pressure change (SPL), electrical voltage change (V), magnetic flux (B) and others. Amplitude values can be expressed linearly (e.g. volts) or logarithmically as a ratio (the dB scale). Amplitude is one of the more straightforward aspects of audio: bigger is bigger. Amplitude (black T-shirt) and magnitude (lab coat) are interchangeable terms. We will introduce audio amplitude in various forms next and then cover the scaling details (such as the dB scale) in section 1.2. It may be necessary to bounce between those sections if you are completely unfamiliar with these scales.

DC Polarity (Absolute Polarity)

DC (direct current) polarity is the signal’s directional component (positive or negative) relative to equilibrium. Electrical: +/- voltage, acoustical +/- pressure (pressurization/rarefaction), etc. This is

applicable in strict terms to DC signals only, because AC signals have both positive and negative values. A 9 V battery illustrates the electrical version. Connecting the battery to a speaker illustrates the acoustical version, because it only moves in one direction.

Absolute Amplitude

Absolute amplitude is the energy level relative to equilibrium (audio silence). Electrical audio silence is 0 VAC, whether DC is present or not. Only AC can make audio. DC moves a speaker but unfortunately the only sound it can make is the speaker burning. Acoustic systems are referenced to changes above or below the ambient air pressure (air’s equivalent for DC). Absolute amplitude values cannot be less than zero, because we can’t have less movement than equilibrium. A “-” sign in front of an amplitude value indicates negative polarity. Relative amplitude values are more common in audio than absolute ones.

Relative to a Fixed Reference

Audio levels change on a moment-to-moment basis. Therefore most amplitude measurements are relative to a reference (either fixed or movable). Examples of fixed references include 1 V (electrical) or the threshold of human hearing (acoustical) (Fig. 1.2). The reference level can be expressed in various units and scales, as long as we agree on the value. An amplitude value of 2 volts can be expressed as 1 volt above the 1 V reference (linear difference) or twice the reference level (linear multiple). Many reference standards for audio are specified in decibel values (dB), which show amplitude changes in a relative log scale (like our hearing). One volt, 0 dBV and +2.21 dBu are the same amount of voltage, expressed in different units or scales. A musical passage with varying level over time can be tracked against the fixed reference, e.g. a certain song reaches a maximum level of 8 volts (+18 dBV, +20.21 dBu) and an acoustical level of 114 dB SPL (114 dB above the threshold of hearing).
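The fixed-reference arithmetic above reduces to one logarithmic formula per scale (a Python sketch, illustrative only): dBV references 1 V and dBu references 0.775 V, so the same voltage yields two different dB values.

```python
import math

DBV_REF = 1.0    # 0 dBV = 1 V
DBU_REF = 0.775  # 0 dBu = 0.775 V

def volts_to_dbv(v):
    return 20.0 * math.log10(v / DBV_REF)

def volts_to_dbu(v):
    return 20.0 * math.log10(v / DBU_REF)

# 1 V is 0 dBV and about +2.21 dBu; doubling the voltage adds
# about 6 dB on either scale.
```

The two scales always differ by the same ~2.21 dB offset, because only the reference voltage changes, not the ratio math.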
Relative to a Variable Reference (Amplitude Transfer Function)

We can monitor the amplitude of constantly changing signals in second-cousin form, i.e. relative to a relative (Fig. 1.3). We compare signal entering and exiting a device, such as music going through a processor. The relative/relative measurement (the 2-channel output/input comparison) is termed “transfer function measurement,” the primary form of analysis used in system optimization. Frequency response amplitude traces in this book are relative amplitude (transfer function) unless specified otherwise.

Let’s return to the above example. The music level is changing, but the output and input waveforms track consistently as long as the processor gain remains stable. If output and inputs are level matched (a 1:1 ratio), the device has a transfer function voltage gain of unity (0 dB). If the voltage consistently doubles, its transfer function gain is 2× (+6 dB). We can span the electronic and acoustic domains by comparing the processor output with the sound level in the room. This reveals a voltage/SPL tracking relationship, such as +0 dBV (1 V) creates 96 dB SPL (and +6 dBV (2 V) creates 102 dB SPL etc.). The beauty of transfer function measurement is its ability to characterize a device (or series of devices) with random input material across multiple media, so long as the waveforms at both ends are correlated. This will be covered extensively in Chapter 12.
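The unity and 2× gain examples above amount to a single ratio formula (sketched in Python; the function name is mine):

```python
import math

def transfer_gain_db(v_out, v_in):
    """Relative amplitude of a device: output level over input level, in dB."""
    return 20.0 * math.log10(v_out / v_in)

# Level-matched in/out -> unity gain (0 dB); doubled voltage -> about +6 dB.
```

Because the result is a ratio, it stays at 0 dB (or +6 dB) regardless of the momentary music level, which is exactly why transfer function measurement works with program material as the source.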

FIGURE 1.2 Absolute amplitude vs. frequency. Amplitude is referenced to 0 dB SPL at the bottom of the vertical scale.

FIGURE 1.3 Transfer function amplitude vs. frequency. Amplitude is referenced to unity gain at the center of the vertical scale.

Peak (PK) and Peak-To-Peak (PK-PK)

The peak (pk) amplitude value is the signal’s maximum extent above or below equilibrium whereas peak-to-peak (pk–pk) is the span between above and below values. Any device in the transmission path must be capable of tracking the full extent of the pk–pk amplitude. Failure results in a form of harmonic distortion known as “clipping” (because the tops of the peaks are flattened). The waveform seen on an oscilloscope or digital audio workstation is a representation of the pk–pk values.

RMS (Root Mean Squared)

The rms value (root-mean-squared) is the waveform’s “average-ish” amplitude. The rms calculation makes AC (+ and -) equivalent to DC (+ or -). For example a 9 VDC battery and 9 VRMS generator supply the same power. We use rms instead of simple averaging because audio signals move both above and below equilibrium. A sine wave (such as the AC line voltage) averages to zero because it’s equally positive and negative. Sticking your fingers in a wall socket provides a shocking illustration of the difference between “average” and rms. Kids, don’t try this at home. The rms value is calculated in three steps: (s) squaring the waveform, which enlarges and “absolutes,” making all values positive, (m) finding the squared waveform’s mean value and (r) taking the square root to rescale it back to normal size. Its proper full name is “the root of the mean of the square.” Note that rms values are strictly a mathematical rendering. We never hear the rms signal (it would be less recognizable as audio than an MP3 file). The waveforms we transmit, transduce, render digitally and hear are peak–peak.

Crest Factor

Crest factor is the ratio between the actual amplitude traced by the waveform (the peak or crest) and the heat-load-simulating rms value (Fig. 1.4).
For a DC signal (such as a battery) the difference is nothing: crest factor of 1 (0 dB). The simplest audio signal, the sine wave, has a crest factor of 1.414 (3 dB). Complex signals can have vastly higher peak/average ratios (and higher crest factors). Pink noise (see section 1.1.8) is approximately 4:1 (12 dB), whereas transient signals such as drums can have crest factors of 20 to 40 dB.

Headroom

FIGURE 1.4 Crest factor examples with different signals

Headroom is the remaining peak–peak amplitude capability before overload at a given moment, the reserved dynamic range, and our insurance policy against overload. Every electronic or electromagnetic device has its amplitude upper limit. Linear audio transmission requires the entire extent of the peak–peak signal to pass through without reaching the device’s upper limit (no clipping, limiting or compression). Headroom is the remainder between the device’s limits and the signal’s positive or negative peak. This has historically had a mysterious quality in part because of slow metering ballistics that fall short of tracking the peak–peak transient values. This leaves engineers concerned (rightfully) about potential clipping even when meters indicate remaining dynamic range. An oscilloscope demystifies headroom/clipping because it displays the peak–peak waveform. Digital headroom is the remaining upper bits in the rendering of the pk–pk waveform.
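The rms and crest-factor arithmetic described above can be sketched directly (Python; the sampled sine wave is illustrative):

```python
import math

def rms(samples):
    """Root of the mean of the square: square, average, then square root."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def crest_factor_db(samples):
    """Peak-to-rms ratio expressed in dB."""
    peak = max(abs(s) for s in samples)
    return 20.0 * math.log10(peak / rms(samples))

# One full cycle of a sine: rms = peak/1.414, i.e. a crest factor of ~3 dB.
sine = [math.sin(2 * math.pi * n / 1000) for n in range(1000)]
```

A square wave would return 0 dB (peak equals rms), while transient-rich material pushes the figure toward the 20 to 40 dB range noted above, which is why such signals demand so much headroom.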

1.1.6 Phase

Phase is the radial clock that charts our progress through a cycle. A completed cycle is 360°, a half-cycle is 180° and so on. The phase value is calculated in reference to a specific frequency. There is no limit to the phase value, i.e. we can go beyond 360°. The phasor radial positions of 0°, 360° and 720° are equivalent but the phase delay is not, revealing that things have fallen one and two cycles behind respectively. Two race cars with matched radial positions will cross the finish line together, but there is a million-dollar difference between 0° and 360°. This makes a difference to sound systems as well, as we will see.

Absolute Phase

Absolute phase is the value at a given moment relative to a stationary time reference, typically the

internal clock of an analyzer. Yes, you read that correctly. Absolute phase is relative to a reference, such as the start of the measurement period, which becomes the 0 ms time and 0° phase reference. We don’t need to see the absolute phase numbers even though our analyzers compute them. It’s like having a wristwatch with only a second hand, which won’t help us get to the gig on time. Our analyzers show relative phase (a comparison between two channels of internal absolute phase calculations). Note that the term “absolute phase” is often misapplied for the concept of absolute polarity (section 1.1.7).

Relative (Phase Transfer Function)

As stated above, relative phase is the difference between two absolute phase values (Fig. 1.5). Relative phase (in degrees) is the only version of phase response shown in this book, so we can proceed to shorten the relative phase phrase to simply “phase.” A series of phase values over frequency taken together creates a phase slope that can be translated to phase delay (see Phase Delay below). Phase is a radial function that can set our heads spinning when it comes to reading the 2-D charts. The response often contains “wraparound,” which is how we display phase responses that exceed the limits of the 360° vertical scale (see section 1.3.4).

FIGURE 1.5 Transfer function phase vs. frequency. Phase is referenced to unity time (0°) at the center of the vertical scale.

Phase Shift

The “shift” in question here is phase change over frequency (Fig. 1.6). This can be stable and constant (such as phase shift caused by a filter) or unstable and variable (such as wind). Our practical concern is frequency-dependent delay (i.e. different frequencies shifted by different amounts). A system with such phase shift has a temporally stretched transient response (often termed “time smearing”). A translation example: The rise and fall of a drum hit would be rounded and expanded because parts of the transient are behind others. The secondary concern regarding phase shift is compatibility with other devices that share common signals and sum together (either in the air or inside our gear). An example is the combination of two different speaker models, with unmatched phase shift characteristics. When summed together we get a phase shift between the phase shifts, which we term

the phase offset (the next topic).

Phase Offset

Phase offset is the favored term here for phase differences between two measured systems (Fig. 1.7) and a famous audio sorority (Δ-Phi). A known phase offset (in degrees over a frequency span) can be converted to time offset (or vice versa). Phase offset is put to practical use when correlated sources are summed. With known phase and level offsets we can precisely predict the summed response at a given frequency. Phase offset requires a frequency specification and is therefore preferred for frequency-dependent time differences, such as the crossover between an HF and LF driver.

FIGURE 1.6 Example of phase shift added by a 24 dB/octave Linkwitz-Riley filter @90 Hz. The filter only affects the amplitude response below 100 Hz but the phase is shifted over a much wider range.

FIGURE 1.7 Phase offset example showing two different speaker models. The LF ranges match but the HF ranges are offset by 180°.

We don’t use the terms “phase offset” and “phase shift” interchangeably here. The distinction is drawn as follows: Phase shift occurs inside a single device or system. Phase offset is between devices or systems. A filter creates phase shift. Moving two speakers apart creates phase offset.

Time Offset

Time offset (in ms) is a frequency-independent measure for propagation paths (Fig. 1.8). Latency in the signal path or different arrival times between a main and delay speaker are practical examples of time offset. Time offset is our preferred term to describe frequency-independent time differences. Frequency-dependent time offsets are better described by phase delay (below). Time offset (in ms) can be translated to phase offset (in degrees for a given frequency), and vice versa. A fixed-time offset at all frequencies creates an increasing amount of phase offset as frequency rises. We are very concerned with time offsets within the signal path, particularly in the analog electronic and digital paths where even small amounts are very audible. Strategies for managing time offset between speakers play a big part in system optimization.
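The time-to-phase translation described above is simple proportionality (a Python sketch; helper names are mine). The figures quoted for a 1.0 ms offset fall straight out of it.

```python
def time_to_phase_deg(offset_ms, freq_hz):
    """Phase offset (degrees) created at one frequency by a fixed time offset."""
    return offset_ms / 1000.0 * freq_hz * 360.0

def phase_to_time_ms(offset_deg, freq_hz):
    """Time offset (ms) implied by a phase offset at one frequency."""
    return offset_deg / 360.0 / freq_hz * 1000.0

# A fixed 1.0 ms offset grows with frequency: 90 deg @250 Hz,
# 360 deg @1 kHz and 1440 deg @4 kHz.
```

Note how the same fixed time offset produces ever-larger phase offsets as frequency rises, which is why a single delay setting cannot neutralize a frequency-dependent phase problem.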

FIGURE 1.8 Time offset example showing 1.0 ms between two otherwise matched speakers. The phase offsets are 90° @250 Hz, 360° @1 kHz and 1440° @4 kHz.

FIGURE 1.9 Example of frequency-dependent phase delay. The LF range has a downward phase slope, indicating phase delay behind the HF region.

Phase Delay

Phase delay, the time-translated version of phase shift, is a metric used to describe systems with frequency-dependent delay (Fig. 1.9). Phase delay is computed by finding the phase shift over a range of frequencies (the phase slope). The phase value at two frequencies must be known to make the calculation. Phase delay is mostly interchangeable with the term “group delay,” an unimportant distinction for optimization decisions. Any frequency band-limited device will exhibit some phase delay, i.e. some frequencies are transmitted later than others. This includes every loudspeaker not known to marketing departments. Unless extraordinary measures are performed, real-world loudspeakers exhibit increasing phase delay in the LF range. Speaker models have substantially different amounts of phase delay. This can cause compatibility issues when combined, because a single time offset value cannot synchronize the two speakers over the full spectrum. Phase delay is often used to characterize and remedy phase offsets between speakers during optimization.
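The slope-to-time computation described above (phase values at two known frequencies translated to delay) can be sketched as follows (Python; the function name and example numbers are illustrative, not from the text):

```python
def phase_delay_ms(phase1_deg, phase2_deg, f1_hz, f2_hz):
    """Delay implied by the phase slope between two frequencies.
    phase1/phase2 are the phase values at f1/f2; a steeper downward
    slope (more lag at the higher frequency) means more delay."""
    slope_deg = phase1_deg - phase2_deg   # phase fallen between f1 and f2
    span_hz = f2_hz - f1_hz               # frequency span of the measurement
    return slope_deg / 360.0 / span_hz * 1000.0

# 180 deg of additional lag between 100 Hz and 200 Hz implies
# 5 ms of phase delay across that band.
```

A flat phase trace (zero slope) returns zero delay, matching the intuition that only *change* in phase over frequency represents delay.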

1.1.7 Polarity

Polarity is a binary term representing the positive vs. negative amplitude orientation of the waveform. Systems with “normal” polarity (+) proceed first in a positive direction followed by negative and back to equilibrium (whereas reverse polarity systems do the opposite). Loudspeakers with a positive voltage applied show normal polarity as a forward movement (pressurization) and negative polarity as a rearward movement (rarefaction). Polarity has a storied history in professional audio. The foremost concerns JBL founder James B. Lansing, who in 1946 chose to reverse the polarity of his speakers to ensure incompatibility with those of his former employer Altec Lansing. The polarity war lasted over forty years. We all lost. Our standard line level connector also lacked a polarity standard (the XLR connector was pin 3 hot in the USA and pin 2 hot for Europe and Japan). We had actually entered the digital audio age before an analog standard of pin 2 hot was established worldwide. The term polarity is often mistakenly substituted for “phase” because a polarity reversal creates a unique case of phase shift, where all frequencies are changed by 180°. Polarity, however, has no time offset or phase delay and is frequency independent. Phase shift (described above) is frequency dependent and is caused by delay in the signal.

It is interesting to note that we have more terms for polarity than options: +/-, normal or reversed, non-inverting or inverting, right or wrong.

Absolute Polarity

The “absolute” here refers to a system’s net polarity orientation to the original source, often referred to by audiophiles as “absolute phase.” A positive absolute polarity value is claimed if the waveform at the end of the chain matches the polarity of the original. Let’s use an example to follow absolute polarity from start to finish. A kick drum moves forward (+). The positive pressure wave moves a microphone diaphragm inward (+). The console and signal processing maintain normal polarity to the power amplifier inputs (+). The amplifier output swings positive voltage on the hot terminal (+) and the speaker moves forward creating a positive pressure wave (+). Success! But some big assumptions have been made. Perfect mics, electronics and speakers that have flat phase responses. Pssst! They don’t. There is frequency-dependent delay in the speakers, enough to make parts of them 180° different than others. Polarity is suddenly ambiguous. High-pass filters, low-pass filters, reflex box tuning and LF driver radiation properties all add up to frequency-dependent phase delay. Who’s right? The phase response of the waveform at the end of the chain does not match the original, which means the polarity can no longer be called absolute. I will not enter the fray as to whether humans can detect absolute polarity. Yet one thing is clear: A perfectly flat amplitude and phase transmission system moves us closer to reproducing the original drum sound, possibly to the point where polarity is absolute enough for us to make some conclusions. Nonetheless it is gospel among audiophiles that this is hugely important. Here is the best part about that. Audiophiles listen to recordings.
Recordings can be assembled from hundreds of tracks recorded at different times, from different studios, transposed across different media, manipulated from here to kingdom come in digital audio workstations, sent to a mastering lab across the country, transcribed to a lacquer mother, a metal stamper and then pressed to vinyl, played by a turntable cartridge and finally through speakers that have drivers on the sides and rear to give that awesome surroundoscopic envelopment and create stereo everywhere. Can you please tell me what exactly is the absolute reference for my polarity?

Relative Polarity

Relative polarity is the one that counts: the polarity relationship between audio devices (Fig. 1.10). There is no disputing the fact that summing devices with different relative polarities causes cancellations and other unexpected outcomes. Relative polarity errors are best prevented by ensuring all devices are matched (and normal).

Polarity and Phase Delay Combined

FIGURE 1.10 Example of polarity and reverse polarity over frequency

Systems that exhibit frequency-dependent phase delay (e.g. speakers and most filters) can have their phase response further modified by a polarity change. The resulting phase values over frequency are a combination of these two modifiers (phase delay is frequency dependent whereas polarity reversal changes all frequencies by 180°). This finds practical use in acoustical crossover filter and delay settings, and steering cardioid subwoofer arrays.

1.1.8 Waveform

The waveform is the cargo transmitted through our delivery network: the sound system. It’s the shape of amplitude over time, the fingerprint of an audio signal in all its simplicity or complexity. A sine wave is waveform simplicity. “In-A-Gadda-Da-Vida” is a 17-minute complex waveform with many frequencies and three chords. The waveform audio stream is a continuous function, a chronological series of amplitude values. Complex waveforms result from combinations of sine waves at various frequencies with unique amplitude and phase values. We can mathematically construct the waveform once we know the amplitude and phase values for each frequency. Conversely, we can deconstruct a known waveform into its component frequencies and their respective amplitude and phase values. We are all familiar with the “synthesizer,” a musical device that creates complex waveforms by combining individual oscillators at selectable relative levels and phase. We had an early MOOG synthesizer at Indiana University (SN# 0005) that required patch cables to mix oscillators together to build a sound. This was raw waveform synthesis. Its inverse (the deconstruction of a complex signal into its sine components) is the principle of the Fourier Transform, the heart of the audio analyzer used in optimization.

Sine Wave

A sine wave, the simplest waveform, is characterized as a single frequency with amplitude and phase values. A “pure” sine wave is a theoretical construct with no frequency content other than the fundamental. Generated sine waves have some unwanted harmonic content (harmonics are linear multiples of the fundamental frequency) but can be made pure enough for use as a test signal to detect harmonic distortion in our devices. The sine wave is the fundamental building block of complex audio signals, combinations of sine waves of different frequencies with individual amplitude and phase values.
The steady state of the sine wave makes it suitable for level calibration in electronic systems (analog or digital) whose flat frequency response allows for a single frequency to speak for its full operating range. Speakers cannot be characterized as a whole by a single sine wave because their responses are highly frequency dependent and readings can be strongly influenced by reflections.

Combining Unmatched Frequencies

Multiple frequencies coexist in the waveform when mixed together (Fig. 1.11). The amplitude and phase characteristics of the individual frequencies are superpositioned over each other in the waveform. The amplitude rises in portions of the waveform where signals are momentarily phase matched, thereby increasing the crest factor beyond the 3 dB of the individual sine wave. It is possible to separate out the individual frequencies with filters or by computation (e.g. the Fourier Transform). The relative phase of mixed frequencies affects the combined waveform amplitude but does not affect that of the individual frequencies. In other words, 10 kHz cannot add to, or cancel, 1 kHz regardless of relative phase. The combined waveform, however, differs with relative phase.

White and Pink Noise

There are a few special waveforms of recurring interest to us. White noise is arguably the most natural sound in the world: all frequencies, all the time, with random phase and statistically even level. White noise is the product of random molecular movements in our electronics, well known as the noise floor and the sound of radio mic disaster. The energy is spread evenly over the linear frequency range, so half of the humanly audible energy is below 10 kHz and half above. White noise is perceived as spectrally tilted toward the HF (and called “hiss” because our ears respond on a log basis over frequency).
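The construction and deconstruction described above can be sketched in code. This is a minimal pure-Python DFT (real analyzers use the FFT for speed); the signal frequencies, amplitudes and sample rate here are illustrative choices, not values from the text.

```python
import cmath
import math

N = 64      # samples in one analysis window
fs = 6400   # sample rate (Hz), chosen so bin spacing = fs/N = 100 Hz

# Construct: a complex waveform from two sines with individual
# amplitude and phase values (100 Hz at 1.0; 300 Hz at 0.5, +90 deg).
x = [1.0 * math.sin(2 * math.pi * 100 * n / fs) +
     0.5 * math.sin(2 * math.pi * 300 * n / fs + math.pi / 2)
     for n in range(N)]

# Deconstruct: a direct DFT recovers the amplitude of each component.
def component_amplitude(signal, k):
    N = len(signal)
    s = sum(signal[n] * cmath.exp(-2j * cmath.pi * k * n / N)
            for n in range(N))
    return 2 * abs(s) / N  # peak amplitude of the sine in bin k

amp_100 = component_amplitude(x, 1)  # bin 1 = 100 Hz
amp_300 = component_amplitude(x, 3)  # bin 3 = 300 Hz
print(round(amp_100, 6), round(amp_300, 6))  # 1.0 0.5
```

The two mixed components come back out with their original amplitudes intact, which is the point of the 10 kHz/1 kHz remark above: unmatched frequencies coexist without altering one another.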

FIGURE 1.11 Waveform mixing: the combination of signals with unmatched frequencies

Pink noise, the most common audio spectrum test signal, is doctored white noise, filtered to sound even to our ears. High frequencies are attenuated at 3 dB/octave. This “logs” the linear noise and reallocates the energy to equal parts per octave, 1/3 octave, etc.

Impulse

We just met random noise: all frequencies, equal level, continuous with random phase. The impulse is another special waveform: all frequencies, equal level, one cycle, in phase. Think of an impulse as all frequencies at the starting line of the racetrack. The starter pistol goes off and everybody runs one lap and stops. In fact, a starter pistol is an acoustic impulse generator used by acousticians for analysis. Listening to an impulse in a room reveals the timing and location of reflection paths. The impulse generator that won’t get you in trouble with airport security is two hands clapping.

Combining Matched Frequencies

Signals with matched frequencies merge to create a new waveform differing only in amplitude from the original signals that comprise it (Fig. 1.12). The resulting waveform depends upon the relative amplitude and phase characteristics of the contributors, and may be greater than, less than or equal to the individual contributions. The combination of equal-level, matched-frequency signals will generally be additive when the phase offset is < ±120° and subtractive when between ±120° and ±180°. It is not possible, post facto, to separate out the original signals with filters or computation. The combined waveform of two amplitude- and phase-matched signals is indistinguishable from that of a single signal at twice the level. Conversely, the combined waveform of two amplitude-matched signals with 180° phase offset is indistinguishable from unplugged.

Combining Uncorrelated Waveforms (Mixing)

Unmatched (uncorrelated) complex waveforms merge together to create a new waveform with a random association to each of the original parts.
The combination of unmatched complex waveforms (multiple frequencies) must be evaluated on a moment-to-moment basis. The obvious example of uncorrelated signals is two different music streams: frequencies are in phase one moment and out of phase the next. These could be different songs, different instruments playing the same song or even violins playing the same parts in a symphony. In all cases the relationship between the signals is unstable and therefore incapable of consistent addition or subtraction at a given frequency. Combining unmatched waveforms is the essence of “mixing,” and thus is separate from the combinations of matched waveforms (correlated summation), a primary concern of system optimization.
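The matched-frequency (correlated) combination contrasted above can be checked numerically. A sketch, assuming two equal-level sines at the phase offsets named in the text; levels are RMS ratios relative to a single contributor.

```python
import math

def combined_db(offset_deg):
    """Level of sin(x) + sin(x + offset) relative to sin(x) alone, in dB."""
    N = 3600  # samples over one full cycle
    num = den = 0.0
    for n in range(N):
        x = 2 * math.pi * n / N
        num += (math.sin(x) + math.sin(x + math.radians(offset_deg))) ** 2
        den += math.sin(x) ** 2
    return 20 * math.log10(math.sqrt(num / den))

matched = combined_db(0)     # phase matched: double amplitude
edge = combined_db(120)      # +/-120 deg: back to unity level
opposite = combined_db(180)  # opposite polarity: deep cancellation
print(round(matched, 2))     # 6.02 dB
```

At 0° offset the pair sums to +6 dB, at ±120° the combination returns to the level of one signal alone (the additive/subtractive boundary), and at 180° it collapses toward silence, matching the "indistinguishable from unplugged" description.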

FIGURE 1.12 Combining signals with matched frequencies (correlated summation): the effects of relative phase on the combined responses are shown

Combining Correlated Waveforms (Summation)

The combination of matched (correlated) complex waveforms also creates a new waveform, but with an orderly and predictable relationship to the original parts. Phase-matched signals create a combined waveform similar in shape to the individuals but with higher amplitude. If there is time offset between the signals, the new waveform will be modified in a stable and predictable way, with alternating additions and subtractions to the amplitude response over frequency (a.k.a. “comb filtering”). An example is copies of the same music stream combined in a signal processor. With no time offset, the combination will be full-range addition. Comb filtering is created by time offset between the signals. The interaction of speakers carrying the same signal in a room is more complex because the interaction must be evaluated on a frequency-by-frequency and location-by-location basis (covered in depth in Chapter 4).

Analog Form

Oscillation is a continuous function. We cannot get from Point A to Point B without passing through all the points in between. That is the essence of analog audio: the motion of a tuning fork, string, phonograph needle, speaker cone and more. A song, in analog form, is a series of movements in the waveform over time, always going one way or the other (+/-, in/out, etc.). If the movement stops, so does the song. If we can trace this continuous movement on one device and transfer it to another audio device, we will recognize it as the same song, even if one version came from magnetic flux (a cassette tape) and another came from the mechanical motion of a needle in a groove. Analog audio is like a continuous drawing exercise, a transcription that never lifts the pencil from the paper. Transferring a waveform between electrical, magnetic and acoustic transmission mediums is like redrawing that pencil sketch with a different medium such as paint, stone or whatever. Notice that the term “medium” has much the same meaning in both fields.

Digital Form

Digital audio is a non-continuous function. We can only get from Point 0 to Point 1 without evaluating any points in between. Listening to 0s and 1s sounds pretty boring, even for people at raves. Digital waveforms are copies of analog waveforms, but the operation differs from the transduction process between analog mediums discussed above. Analog-to-digital converters slice the continuous signal into tiny bits (ba-da-boom), each of which represents the best fit for the momentary amplitude value. The faithfulness of the digital rendering depends on how finely we slice both amplitude and phase (time).
Amplitude resolution is defined by the number of bits (24-bit is the current standard), and temporal resolution (a.k.a. sample rate) is typically 48 kHz (approximately 0.02 ms). It’s as if we have a photograph of our original pencil drawing. If you look close enough at the photograph you can see that there are no continuous lines, just lots of little dots. That is the essence of digital. Audio pixels. The beauty of it is that once we have the digital copy we can send it around the world without changing it. As long as we are very careful, that is. This is not even hypothetically true of an analog signal because all audio mediums have some form of degradation (distortion, frequency response variation, phase shift, etc.). Bear in mind that we can never hear digital audio. We only get that pleasure after the waveform is converted back to analog, because at the end of the day we have to use the analog air medium to get to our analog ears. This time the conversion requires our scribe to pick up the pencil again and draw one continuous connecting line between every dot in our digital photograph of the original line drawing.
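The resolution figures above reduce to a few lines of arithmetic (24-bit and 48 kHz as stated; the ~6 dB-per-bit rule of thumb is standard):

```python
import math

bits = 24
sample_rate = 48000  # Hz

amplitude_steps = 2 ** bits                   # discrete "audio pixel" levels
dynamic_range_db = 20 * math.log10(2) * bits  # ~6.02 dB per bit
sample_period_ms = 1000 / sample_rate         # spacing between samples

print(amplitude_steps)             # 16777216 levels
print(round(dynamic_range_db, 1))  # ~144.5 dB theoretical range
print(round(sample_period_ms, 4))  # ~0.0208 ms
```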

1.1.9 Medium

Analog audio waveforms propagate through a medium. Within this book air molecules will be the acoustical medium, and electronic charge and magnetic flux serve as the electromagnetic medium. Each medium has unique properties such as transmission speed, propagation characteristics, loss rate, frequency response, dynamic range and more. Digital audio (between analog conversions) is transmitted over a medium (not through it).

Propagation Speed

Propagation through a medium is a chain reaction. Each energy transfer takes time (an incremental latency), so the more media we go through, the longer it takes. Propagation speed is constant over frequency but variable by medium. Electromagnetic propagation is so fast we are mostly able to consider it to be instantaneous. Acoustic propagation speed is related to the medium’s molecular density (higher density yields higher speeds). Sound propagates through a metal bar (very dense) faster than water (medium density), which is faster than air (low density). Sound propagation is the same speed, however, for heavy metal bars and air shows (ba-da-boom).

Wavelength (λ)

An audio frequency has physical size once it exists within a transmission medium. The wavelength (λ) is the transmission speed/frequency, or transmission speed × period (T). Wavelength is inversely proportional to frequency (it becomes smaller as frequency rises). Audible wavelengths in air range in size from the largest intermodal-shipping container to the width of a child’s finger, a 1000:1 range. Why should we care about wavelength? After all, no acoustical analyzers show this, and no knobs on our console can adjust it. In practice, we can be blissfully ignorant of wavelength, as long as we use only a single loudspeaker in a reflection-free environment. Good luck getting that gig. Wavelength is a decisive parameter in the acoustic summation of speaker arrays and rooms. Once we can visualize wavelength, we can move a speaker and know what will happen.
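The wavelength arithmetic above can be sketched directly, assuming roughly 344 m/s for sound in air (the exact figure varies with temperature):

```python
c = 344.0  # m/s, approximate speed of sound in air at room temperature

def wavelength_m(freq_hz):
    return c / freq_hz  # wavelength = transmission speed / frequency

# The audible range spans a 1000:1 wavelength ratio:
for f in (20, 200, 2000, 20000):
    print(f, "Hz:", round(wavelength_m(f), 4), "m")
# 20 Hz is ~17 m (shipping container); 20 kHz is ~17 mm (child's finger)
```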

FIGURE 1.13 Basic properties of audio transmission mediums

Transduction, Transducers and Sensitivity

Transduction is the process of waveform conversion between media (Fig. 1.13). Transducers are media converters. Examples include acoustic to electromagnetic (microphones), and vice versa (speakers). The amplitude values and wavelength in one medium (e.g. pressure for acoustical) are scaled and converted to another medium (e.g. voltage for electromagnetic). The scaling key for transduction is termed “sensitivity,” which inherently carries units from both sides of the conversion. Microphone sensitivity links output voltage to input pressure (SPL), with the standard units being mV/pascal. Speaker sensitivity relates acoustic output pressure to the input power drive. The standard form is dB SPL @1 meter with 1-watt input.

1.2 Audio Scales

This book is full of charts and graphs, all of which have scales. The sooner we define them the easier it will be to put them to use.

1.2.1 Linear amplitude

Amplitude is all about size. There are a million ways to scale it (or should I say there are 120 dB ways). Linear level units are seldom used in audio even though they correspond directly to electrical and acoustic pressure changes in the physical world (our level perception is logarithmic). Many engineers go their entire careers without thinking of linear sound level. Can we do a rock concert with a sound system that can only reach 20 pascals (120 dB SPL)? My mix console clips at 10 VRMS. Is this normal? (Yes, that’s +20 dBV.) I know a guy who sent +42 dBV @60 Hz as a test tone to some people he didn’t like. You might recognize the linear version of that: 120 VRMS @60 Hz. Voltage is found in many areas outside the audio path, so it helps to have bilingual fluency between linear and log. Let’s count in linear. Incremental changes from 1 volt to 2, 3 and 4 volts are sequential linear changes of +1 volt. If we started at 101 V and continued this linear trend we would see 101 V, 102, 103 and 104 V. Now let’s count the same voltage sequence in log (approximately): 1 V to 2 V (+6 dB), 2 V to 3 V (+4 dB), 3 V to 4 V (+2 dB). The total run from 1 V to 4 V is 12 dB. By contrast, the entire 4-volt sequence starting at 101 V would not even total 0.5 dB. Linear amplitude scales include volts (electrical), microbars or pascals (acoustical), mechanical movement (excursion), magnetic flux and more.
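The counting exercise above can be verified directly with the 20 log10 formula (a sketch, with results rounded as in the text):

```python
import math

def db_change(v2, v1):
    return 20 * math.log10(v2 / v1)  # dB change between two voltages

steps_low = [round(db_change(v + 1, v)) for v in (1, 2, 3)]
print(steps_low)                      # [6, 4, 2]: 1->2, 2->3, 3->4 volts
print(round(db_change(4, 1)))         # 12 dB total from 1 V to 4 V
print(round(db_change(104, 101), 2))  # 0.25 dB total from 101 V to 104 V
```

The same +1 V linear step is worth 6 dB near 1 V and almost nothing near 101 V, which is the bilingual fluency the text asks for.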

1.2.2 Log amplitude (20 log10 decibel scale)

The log scale characterizes amplitude as a ratio (dB) relative to a reference (fixed or variable) (Fig. 1.14). Examples include dBV (electrical) and dB SPL (acoustical). Successive doublings (+6 dB) of sound pressure are perceived as equal increments of change (a log scale perception). Therefore acoustic levels (and the electronics that drive them) are best characterized as log. Successive linear doublings of 1 pascal (to 2, 4 and 8 pascals) would be successive changes of approximately 6 dB (94, 100, 106 and 112 dB SPL).

FIGURE 1.14 Decibel scale reference table showing ratio conversions for the 20 log and 10 log scales

FIGURE 1.15 Analog electronic operational levels: mic, line and speaker levels

Electronic Decibel (dBV and dBu)

One of the most heavily enforced standards in professional audio is to never have just one standard (Fig. 1.15). The dB scale for voltage has at least twenty. Only two are used enough any more to be worth memorizing: dBV (1 volt standard) and dBu (0.775 volt standard). The difference between them is a constant 2.21 dB (0 dBV = +2.21 dBu and 0 dBu = -2.21 dBV). There is an extensive history regarding dB voltage standards going back to the telephone, which you can read somewhere else when you have trouble sleeping. The dB scale is favored because our audio signals are in a constant state of change. We can’t control the music or the musicians. We are constantly riding current levels relative to each other, to a moment ago or to the legal level allowed before the police shut us down. The dB scale is complicated but is easier than linear when trying to monitor relative levels that have a 1,000,000:1 ratio. We should at least know the maximum voltage level for our equipment, which is usually around 10 volts (i.e. +20 dBV, +22.21 dBu). The noise floor should be in the -100 dBV range. In between we find the “nominal” value of 0 dBV (or dBu), the placement target for the audio mainstream. This leaves 20 dB of headroom above and the noise 100 dB below.

Level (dBV) = 20 × log10 (Level ⁄ 1 V)
Level (dBu) = 20 × log10 (Level ⁄ 0.775 V)

Acoustic Decibel (dB SPL)

The common term for acoustic level is dB SPL (sound pressure level), the measure of pressure variation above and below the ambient air pressure.

Level (dB SPL) = 20 × log10 (P ⁄ 0.0002)

where P is the RMS pressure in microbars (dynes/cm²). The reference standard is 0 dB SPL, the threshold of the average person’s hearing (Fig. 1.16). The limit of audibility approaches the noise level of the air medium, i.e. the level where the molecular motion creates its own random noise. It is comforting to know we aren’t missing out on anything. The threshold of pain is around 3 million times louder at 130 dB SPL. The threshold of audio insanity has reached 170 dB SPL by car stereo fanatics. The following values are equivalent expressions for the threshold of hearing: 0 dB SPL, 0.0002 dynes/cm², 0.0002 microbars and 20 micropascals (μPa). dB SPL is log and all the others are linear. For most optimization work we need only deal with the log form.
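The dBV, dBu and dB SPL formulas above can be sketched numerically (here using the pascal form of the SPL reference, 20 μPa = 0.00002 Pa):

```python
import math

def dbv(volts):
    return 20 * math.log10(volts / 1.0)    # re 1 V

def dbu(volts):
    return 20 * math.log10(volts / 0.775)  # re 0.775 V

def db_spl(pascals):
    return 20 * math.log10(pascals / 0.00002)  # re 20 micropascals

print(round(dbu(1.0), 2))         # 2.21: 0 dBV is +2.21 dBu
print(round(db_spl(0.00002), 1))  # 0.0: threshold of hearing
print(round(db_spl(1.0), 1))      # 94.0: 1 pascal is ~94 dB SPL
```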

dB SPL subunits

dB SPL has average and peak levels in a manner similar to the voltage units. The SPL values differ, however, in that there may be a time constant involved in the calculation.

FIGURE 1.16 Acoustical operational levels: quiet, loud and insane

■ dB SPL peak: The highest level reached over a measured period is the peak (dB SPLpk).
■ dB SPL continuous (fast): The average SPL over a time integration of 250 ms. The fast integration time mimics our hearing system’s perception of SPL/loudness for relatively short bursts. It takes about 100 ms for the ear to fully integrate the sound level.
■ dB SPL continuous (slow): An extension of the integration time to 1 second that is more representative of heat load and long-term exposure.
■ dB SPL LE (long term): This is the average SPL over a very long period of time, typically minutes. This setting is used to monitor levels for outdoor concert venues that have neighbors complaining about the noise. An excessive LE reading can cost the band a lot of money.
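The effect of the integration window can be sketched with a toy pressure record. The burst level, duration and sample rate here are illustrative values, not from the text:

```python
import math

fs = 1000  # pressure samples per second (sketch value)
# A 100 ms burst at 1 pascal followed by silence:
pressure = [1.0] * 100 + [0.0] * 900

def spl_integrated(samples, window_s):
    n = int(window_s * fs)
    rms = math.sqrt(sum(p * p for p in samples[:n]) / n)
    return 20 * math.log10(rms / 0.00002)  # re 20 uPa

fast = spl_integrated(pressure, 0.25)  # burst fills 40% of the 250 ms window
slow = spl_integrated(pressure, 1.0)   # burst fills 10% of the 1 s window
print(round(fast, 1), round(slow, 1))  # 90.0 84.0: fast reads the burst hotter
```

The same burst reads 6 dB higher on the short (fast) integration than on the long (slow) one, which is why the two time constants serve different monitoring purposes.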

dB SPL weighting

There are filtered versions of the SPL scale (Fig. 1.17) that seek to compensate for the level-dependent frequency response variations in human hearing perception (the “equal loudness curves” described in section 5.1).

■ dB SPL Z (“Z” weighting): This is a recent trend to designate the unweighted response. Easier to spell.
■ dB SPL A (“A” weighting): Corresponds to the ear’s response at low levels. LF range is virtually nonexistent. Applicable for noise floor measurements. Often used as a maximum SPL specification for voice transmission. Subwoofers on or off goes undetected with A-weighted readings.
■ dB SPL B (“B” weighting): Corresponds to the ear’s response at intermediate levels. LF range is rolled off but not as much as A weighting (-10 dB @60 Hz). Applicable for measurements using music program material.
■ dB SPL C (“C” weighting): Corresponds to the response of the ear at high levels. Close to a flat response. Applicable for maximum-level measurements. Used as a specification for full-range music system transmission levels. Subwoofers on or off will have a noticeable effect when C weighting is used.

FIGURE 1.17 Frequency response curves for A, B, C and D weighting. Note: D weighting is shown here but not typically used for our applications (graphic by Lindosland @ en.wikipedia, public domain, thank you).

1.2.3 Power (10 log10 decibel scale)

Power is derived from a combination of parameters (e.g. voltage and current, or pressure and surface area). The 10 log10 formula is the log conversion of power ratios (also Fig. 1.14). It’s rarely used in system optimization because the analyzers monitor singular parameters such as voltage and SPL rather than power (in watts). It’s better to prioritize our limited memory space for the 20 log10 formula and leave the 10 log10 for Google.

1.2.4 Phase

The standard scale for phase spans from 0° to 360°. The most common display places 0° at center and ±180° on the upper and lower extremes, but other options are available. Phase scaling is circular and therefore very different from amplitude. When we get more amplitude we simply expand the scale. We don’t go up to 10 volts and then start over at 0 if we go higher. Phase is different because it tops out at 360°. When phase shift exceeds 360° it wraps around as an alias value within the 360° limits (e.g. 370° reappears as 10°). This is similar to an automobile race where cars on the lead lap are indistinguishable from those a lap behind. We only know who will win the race from other evidence, such as watching the entire race, not just the last lap. For audio phase our evidence will be similarly provided, but in this case it’s about watching the phase values over the whole frequency response rather than a single frequency. Degrees serve us poorly as a unit of radial measure. Radians are the more “calculation-friendly” unit for radial angle found inside formulas that require accurate phasor positioning. Radians are rarely used by audio engineers for system optimization (or in a sentence, for that matter). A complete cycle (360°) has a value of 2π radians, so beware of radian cancellation if two sources fall π radians out of sync (180°). We learn to memorize the 360° phase scale even though it seems an arbitrary division for a circular function. It is a vestige of a merciful rounding error by ancient Babylonian mathematicians. Let’s be grateful it’s not 365° with a leap-degree every four cycles.
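Phase wrapping reduces to modular arithmetic. A sketch of the alias behavior described above, using the ±180° display convention:

```python
def wrap_deg(phase_deg):
    """Fold any phase value into the -180..+180 display range."""
    return (phase_deg + 180) % 360 - 180

print(wrap_deg(370))   # 10: one lap ahead looks like 10 degrees
print(wrap_deg(725))   # 5: two laps ahead looks like 5 degrees
print(wrap_deg(-190))  # 170: wrapping works in both directions
```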

1.2.5 Linear frequency axis

Linear frequency scaling shows equal spacing by bandwidth (unequal spacing in octaves). The linear scale is an annoying, reality-based construction that corresponds to how frequency and phase interact in the physical world. The spacing between 1 kHz, 2 kHz, 3 kHz and 4 kHz (consecutive bandwidths of 1 kHz) is shown as equal spacing. Phase, the harmonic series and comb filter spacing all follow the linear frequency axis. The frequency resolution of the Fourier Transform (the math engine of our analyzer) is linear. For example, a set of 100 Hz wide filters with 100 Hz spacing is termed “100 Hz resolution.”

1.2.6 Log frequency axis

A log frequency scale shows equal spacing in percentage bandwidth (octaves), and unequal spacing in bandwidth. This corresponds closely to our perception of frequency spacing. The equal linear spacings between 1 kHz, 2 kHz, 3 kHz and 4 kHz correspond to percentage bandwidths of approximately 1 octave, 1/2 octave and 1/3 octave respectively. A true log frequency response is made of log spacing of log filters. For example, a set of 1/3 octave filters at 1/3 octave spacing is termed “1/3 octave resolution.”

1.2.7 Quasi-log frequency axis

The quasi-log frequency scale is a log/linear hybrid (log spacing of linear bandwidths). Stretching a single linear response over the full audio range is not practical for optimization because there is not enough data in the lows and/or too much in the highs. Instead a series of (typically) eight octave-wide linear sections are spliced together to make a quasi-log display. This is implemented in almost every modern analyzer used for optimization. (Full details are in Chapter 12.) Each octave of the quasi-log frequency response is derived from log spacing of the linear resolution (e.g. 1/48 octave spacing of 48 data points). This is termed “48 points/octave resolution.”
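The "48 points/octave" idea can be sketched directly: log spacing means a constant frequency ratio between neighboring points (the 1 kHz starting octave here is an arbitrary example).

```python
f_low = 1000.0        # start of an example octave (Hz)
step = 2 ** (1 / 48)  # constant ratio for 48 points/octave

points = [f_low * step ** k for k in range(49)]  # 48 steps span the octave

# Every neighboring pair shares the same ratio, and the 48th step
# lands exactly one octave up:
ratios = {round(points[k + 1] / points[k], 9) for k in range(48)}
print(len(ratios))        # 1: a single constant ratio between neighbors
print(round(points[-1]))  # 2000: one octave above 1000 Hz
```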

1.2.8 Time

Tick, tick, tick. It seems strange to have to write that time is linear (evenly spaced increments) and not log (proportionally spaced increments). This is only mentioned because the frequency response effects of time offsets are entirely linear but are perceived by our log brains. So let’s put this to rest: There is no log time.

1.3 Charts and Graphs

Let’s put the scales together to make the charts and graphs we use for design and optimization. Fluency in reading these graphs is a mandatory skill in this field. The graphs are 2-D, with an x-axis and y-axis. We generally find frequency and time on the x-axis and amplitude, phase and coherence on the y-axis.

1.3.1 Amplitude (y) vs. time (x)

Amplitude vs. time is a peak-to-peak waveform tracing commonly seen on oscilloscopes or digital audio editors (Fig. 1.18). The amplitude scale can be linear or log but time is only linear.

1.3.2 Amplitude (y) vs. frequency (x)

Absolute level over frequency is used to check the noise floor, the incoming spectrum, harmonic distortion, maximum output and more. The y-axis shows level against a fixed standard. We can observe the individual channels used to make transfer function computations (next item).

1.3.3 Relative amplitude (y) vs. frequency (x)

Relative level over a quasi-log frequency scale is the most common graph in system optimization (Fig. 1.19). This transfer function response is used for level setting, crossover alignment, equalization, speaker positioning and more. The y-axis scales unity level to the center and shows gain above and loss below (in dB). The quasi-log frequency scale is preferred because of its constant high resolution (24 to 48 points/octave) and close match to human hearing perception. As an alternative, the linear frequency scale can help identify time-related mechanisms (such as phase, comb filtering, reflections, etc.).

FIGURE 1.18 Amplitude vs. time plots for various waveforms

FIGURE 1.19 Introduction to the relative amplitude vs. frequency (quasi-log) display

FIGURE 1.20 Introduction to the relative phase vs. frequency (quasi-log) display

1.3.4 Relative phase (y) vs. frequency (x)

This is our standard phase display (Fig. 1.20). The y-axis shows a 360° span. The vertical center is typically 0° but can be normalized around any phase value. A flat phase response (horizontal line) indicates zero phase shift and zero time offset over the frequency range shown. Variations from flat phase response indicate some time offset, either full band (i.e. latency) or frequency-dependent (phase delay). A downward slope (left to right) indicates positive delay whereas an upward slope indicates negative delay. A constant phase rotation at linear frequency intervals indicates latency (frequency-independent delay). A quasi-log display shows the slope steepening with frequency (a linear function in a log display). Comb filtering appears as increasingly narrowing spacing of peaks and dips as frequency rises (again, a linear function on a log display). Filters create frequency-dependent delay. The phase response of a filter with a given Q will maintain the same shape (slope) as frequency rises. Note: Don’t freak out if you don’t understand phase yet. This section is intended to provide just enough info to help read phase traces as we go along. If phase were easy, this book would be five pages long. Relative phase on a linear frequency scale is less popular, but more intuitive than log. The phase slope over a linear frequency scale clearly reveals the relationship of phase and time (a linear mechanism on a linear scale). A flat phase response indicates no time difference at any frequency, just as in the log display. Latency creates a constant phase slope as frequency rises. Comb filtering appears as consistently spaced peaks and dips as frequency rises. The term “comb filtering” comes from its linear frequency scale appearance.
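The phase/time relationship on the linear axis can be sketched: a pure latency produces a constant slope, and the slope gives the delay back. The 1 ms latency and frequency points here are illustrative:

```python
delay_ms = 1.0  # assumed latency for the sketch

def unwrapped_phase_deg(freq_hz):
    # A pure delay rotates phase by 360 deg for every full period of delay:
    return -360.0 * freq_hz * (delay_ms / 1000)

# The slope between any two frequencies (deg per Hz) recovers the delay:
f1, f2 = 1000.0, 2000.0
slope = (unwrapped_phase_deg(f2) - unwrapped_phase_deg(f1)) / (f2 - f1)
recovered_ms = -slope / 360.0 * 1000
print(round(recovered_ms, 6))  # 1.0: the slope encodes the time offset
```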

1.3.5 Impulse response: relative amplitude (y) vs. relative time (x)

The impulse response is a favorite of the modern analyzer (Fig. 1.21). We can find delay offsets between speakers with extreme accuracy in seconds. Follow the dancing peak, read the number in ms. Done! Magic! And it seems like magic to most of us, even more so when we stop to think about what this computation is. The FFT analyzer impulse response is a mathematical construction of the picture we would see on a hypothetical oscilloscope (amplitude vs. time) with a hypothetical single pulse. In practice we get relative amplitude vs. relative time. The full story on what’s under the hood will have to wait until section 12.12. For now we will focus on how to read it. Relative level (y-axis) is not like our amplitude over frequency. The vertical center is silence, not unity gain. Unity gain (normal polarity) is shown as an upward vertical peak at a value of 1 (0 dB), and positive gain is a bigger peak (and loss is smaller). A downward peak indicates polarity inversion. Time (x-axis) is relative, a comparison of output–input arrival times. A centered impulse indicates synchronicity. The peak moves rightward when the output is late and vice versa. Yes, it is possible to have the output before the input in our measurements, because we can delay signals inside the analyzer.
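The delay-finding idea can be sketched with a brute-force cross-correlation (real analyzers do this with FFT math, as section 12.12 describes; the toy signals here are illustrative):

```python
def find_delay_samples(reference, measured):
    """Return the lag (in samples) where the two signals line up best."""
    best_lag, best_score = 0, float("-inf")
    n = len(reference)
    for lag in range(n):
        # Correlate the reference against the measured signal at this lag:
        score = sum(reference[i] * measured[i + lag] for i in range(n - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

ref = [0.0, 1.0, -0.5, 0.25, 0.0, 0.0, 0.0, 0.0]  # a toy "impulse" shape
late = [0.0, 0.0, 0.0] + ref[:5]                  # same signal, 3 samples late

print(find_delay_samples(ref, late))  # 3: the output arrives 3 samples late
```

The peak of the correlation plays the role of the "dancing peak" on the analyzer: its position is the delay offset.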

FIGURE 1.21 Introduction to the impulse response display

FIGURE 1.22 Introduction to the coherence vs. frequency display

A perfectly flat frequency response (amplitude and phase) makes a featureless impulse shape (straight single line, up and down). The pulse will have ringing, overhang and various other distortions to its shape if the measured device’s response is not flat. Reflections appear as secondary impulses on the display. Like phase, this is enough information to enable us to read the displays going forward.

1.3.6 Coherence vs. frequency

The coherence function is a data quality index that indicates how closely the output signal relates to the input (Fig. 1.22). Amplitude and phase data are deemed reliable when coherence is high and unreliable when low. Coherence alerts one to take the wooden shipping cover off the speaker front instead of boosting the HF EQ. Yes, a true story. Coherence is derived from averaging dual-channel frequency responses and is indicative of data stability. A value from 0 to 1 is awarded based on how closely the individual samples match the averaged value in amplitude and phase. Details are in section 12.11.
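A simplified sketch of the averaging idea (not the analyzer's actual cross-spectrum computation, which section 12.11 covers): treat each measurement frame at one frequency as a phasor and compare the phase-sensitive average against the magnitude average.

```python
import cmath
import math
import random

def coherence_estimate(frames):
    """0..1 index of frame-to-frame stability for complex measurements."""
    vector_avg = abs(sum(frames)) / len(frames)           # phase-sensitive
    magnitude_avg = sum(abs(f) for f in frames) / len(frames)
    return vector_avg / magnitude_avg

# Stable data: 100 frames with matched amplitude and phase.
stable = [cmath.rect(1.0, 0.1)] * 100

# Unstable data: same amplitude, random phase each frame.
random.seed(1)
noisy = [cmath.rect(1.0, random.uniform(-math.pi, math.pi))
         for _ in range(100)]

print(round(coherence_estimate(stable), 6))  # 1.0 for repeatable frames
print(coherence_estimate(noisy) < 0.5)       # True: random phase averages out
```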

1.4 Analog Electronic Audio Foundation

1.4.1 Voltage (E or V)

Voltage, electrical pressure, can be characterized linearly (in volts) or logarithmically in dB (dBV, dBu, etc.). The electronic waveform is a tracing of voltage vs. time. Voltage is analogous to acoustical pressure.

1.4.2 Current (I)

Current is the rate of charge flow through an electronic circuit. As current increases, the quantity of electrons flowing per second rises. Analog audio transmission between electronic devices (except amplifiers to speakers) requires minimal current flow.

1.4.3 Resistance (R)

Resistance restricts current flow in an electronic circuit. As resistance rises, the current flow (for a given voltage) falls. Resistance is frequency independent (impedance is not).

1.4.4 Impedance (Z)

Impedance is the frequency-dependent resistive property of a circuit, a combination of resistance, capacitance and inductance. Impedance ratings are incomplete without a specified frequency. Output/input impedance ratios play a critical role in the interconnection of analog electronic devices, amplifiers and speakers, determining the maximum quantity of devices, cable loss and upper and lower frequency range limits. An 8 Ω speaker (nominal impedance) illustrates the difference between impedance and resistance. The DC resistance is 6 Ω. The lowest impedance in its operating range is around 8 Ω. Impedance rises (and output level falls) at frequencies above its operating range.

Capacitance (Capacitive Reactance)

Capacitors pass signal across conductive, but unconnected, parallel plates. DC cannot flow across the gap between the plates. An AC signal, however, can flow across it (reactance falls as frequency rises). Capacitors approximate an open circuit (maximum impedance) to DC and a short circuit (minimum impedance) to AC. Capacitance in series rolls off the LF response (capacitive reactance is inversely proportional to frequency). Parallel capacitance (such as between wires of an audio cable) rolls off the HF via a shunt path to ground.

Inductance (Inductive Reactance)

Inductor coils resist voltage changes in the signal. An unchanging signal (DC) passes freely but the inductor becomes increasingly resistant as frequency rises (the rate of electrical change increases). The inductor’s response is the opposite of a capacitor’s: maximum impedance to AC signals and minimum impedance to DC. Series inductance increasingly rolls off the HF response. Parallel inductance shunts the LF to ground.
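The opposing frequency behaviors can be sketched with the standard reactance formulas (the component values are illustrative, not from the text):

```python
import math

def capacitive_reactance(freq_hz, farads):
    return 1 / (2 * math.pi * freq_hz * farads)  # falls as frequency rises

def inductive_reactance(freq_hz, henries):
    return 2 * math.pi * freq_hz * henries       # rises as frequency rises

C = 10e-6  # 10 uF
L = 1e-3   # 1 mH

print(round(capacitive_reactance(20, C), 1))     # ~795.8 ohms: blocks lows
print(round(capacitive_reactance(20000, C), 2))  # ~0.8 ohms: passes highs
print(round(inductive_reactance(20, L), 3))      # ~0.126 ohms: passes lows
print(round(inductive_reactance(20000, L), 1))   # ~125.7 ohms: blocks highs
```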

1.4.5 Power (P)

Electrical power (in watts) is the product of voltage and current, which are in turn governed by the circuit impedance. Ohm’s law expresses the relationship of these three parameters (Fig. 1.23). Most electronic transmission involves negligible power (low voltage, low current and high impedance). Speaker-level transmission requires substantial power (high voltage, high current and very low impedance). Acoustical power, also in watts, is produced by pressure, surface area and acoustic impedance (inertance). Our ears (and microphones) are pressure sensors, not power sensors, so we characterize sound level by pressure only (SPL).

1.4.6 Operating levels Signal levels are divided into three categories by voltage and impedance. A lucky waveform can experience all three (return to Fig. 1.15) if beginning at mic level, going through a “pre” amp to line level and rising to speaker level in a power amplifier. Mic Level Mic-level signals are typically generated by small passive devices such as microphone coils, guitar pickups, phonograph cartridges, etc. Mic level is a matter of necessity, not choice. There are few advantages and many disadvantages to operating in the microvolt range. Signals are vulnerable to induced noise and other complications relating to extremely low voltage and current flow (e.g. jumping connector and relay gaps). The winning strategy is to preamplify mic-level signals to line level as soon as possible. The worst-case scenario, unbalanced, high-impedance mic level (e.g. a guitar pickup), requires the shortest possible preamp path. Mic-level sources generate signals in the
μV to 100 mV range with nominal source impedances of a few hundred ohms (microphones) and a few kΩ (pickups).

FIGURE 1.23 The Ohm’s law pie chart with some examples applicable to audio systems (chart authored by Matt Rider) Line Level Active devices, such as consoles, processors, instrument direct outs, playback equipment and more, usually generate line-level signals. This is the standard operating range, with nominal levels in the 1 V range and maximum levels over 10 V. Balanced, low-impedance line-level outputs (150 Ω typical) driving high-impedance inputs (10 kΩ typical) should have good noise immunity and minimal loss. Speaker Level Power amplifiers are the exclusive generators of speaker-level signals. There is no “nominal” speaker level. Instead, speaker-level ratings refer to the maximum power capability (watts). Power ratings require two known parameters: voltage and impedance. For example, a 100-watt amplifier with an 8 Ω speaker load can generate 28.3 volts, whereas a 400-watt amp will reach 56.6 volts.
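The amplifier voltage figures above follow directly from the power relationships in the Ohm’s law pie chart (P = V²/Z, so V = √(P × Z)). A minimal sketch (the function name is mine, not from the text):

```python
import math

# Sketch of the speaker-level examples from the text, using P = V^2 / Z
# (one slice of the Ohm's law pie chart).
def volts_from_power(power_w, impedance_ohms):
    """Maximum RMS voltage an amplifier delivers for a rated power and load."""
    return math.sqrt(power_w * impedance_ohms)

print(round(volts_from_power(100, 8), 1))  # 28.3 V: 100 W into 8 ohms
print(round(volts_from_power(400, 8), 1))  # 56.6 V: 400 W into 8 ohms
```

Note that doubling the voltage (e.g. 28.3 V to 56.6 V) quadruples the power into the same load, which is why the 400-watt amplifier delivers only 6 dB more than the 100-watt unit.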

1.4.7 Analog electronic audio metrics This is an overview of some relevant standard specifications for professional-grade analog audio devices (Fig. 1.24). Frequency Response/Range (±dB/Hz) There are two main categories for frequency response: range and deviation. Range is the spectral area between the limits, typically the half-power points (-3 dB points) at the LF and HF extremes. Response deviations are given as ±x dB for the area within the optimal range. Electronic device deviations are normally 0 dB (+6 dB max).

Summation gain is highest at XAB and lessens off center (Fig. 4.33). The summed coverage pattern is re-evaluated using crossover as the center reference. The combination is wider, narrower or the same as the individual elements, depending on overlap percentage (majority overlap narrows, majority isolation widens). Highly overlapped systems might never reach the isolation zone (level offset doesn’t reach 10 dB).

Creating an overlap spatial crossover with A and B speakers
■ Determine a nominal reference standard: 0 dB.
■ Alternate soloing A and B to find the equal-level location (matched, and within 6 dB of the reference level). This is crossover location XAB (e.g. if the A and B levels are each -2 dB then the summation will be +4 dB).
■ Adjust the relative phase until they match at XAB (delay the earlier one).
■ Drive both channels together. The summed response should exceed the individual levels by up to +6 dB.
■ If some frequencies are overlapped and others are not, then equalization can be applied to the overlapped ranges (same procedure as overlapped spectral crossovers).

Gapped Spatial Crossover (A+B)
■ Isolation@ONAX >10 dB: ONAX A (isolation) => (transition) => (combing) => XOVR AS (coupling).

Lateral ripple variance is minimized by absorption and pattern control, just as the forward reflections were. Pattern control has a much higher probability of success here because it’s easier to align the speaker coverage pattern to drop off before hitting the surface. The most favorable lateral surface is the outward splay wall (the hall gets wider from front to back) because this follows the speaker’s coverage shape. The least favorable is the inward splay wall, which runs directly counter to speaker propagation.
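The summation arithmetic used in the overlap crossover procedure (two matched, phase-aligned sources add +6 dB; two sources at -2 dB each sum to +4 dB) is coherent pressure addition. A minimal illustration (the helper name is mine):

```python
import math

# Coherent (phase-aligned) summation of sources expressed in dB:
# convert each level to linear pressure, add, convert back to dB.
def coherent_sum_db(*levels_db):
    total = sum(10 ** (level / 20) for level in levels_db)
    return 20 * math.log10(total)

print(round(coherent_sum_db(0, 0), 1))    # 6.0: matched pair adds +6 dB
print(round(coherent_sum_db(-2, -2), 1))  # 4.0: the -2 dB + -2 dB = +4 dB case
```

This is the best case; any phase offset at the crossover reduces the addition, which is why the procedure aligns phase before driving both channels together.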

8.5.3 Vertical ripple variance Vertical ripple variance results from a reflection arriving from above or below. The progressions are the same as the laterals just described but there are some practical differences. Vertical ripple variance is distinctly asymmetric. It’s almost certain that we are much farther from the ceiling than the floor and that their acoustical properties are very different. The floor is almost guaranteed to be a hard surface because it must support the audience. Acoustic treatment ranges from zero to carpet, so only the HF range has a chance at absorption. The floor may be covered with highly diffusive seats and people absorbed in the show. The ceiling? Probably not. The angle of the floor will range from flat to inward splay (known as “raked”) to facilitate the visual sense. It’s virtually impossible to implement pattern control to reach the audience and avoid the floor. The coverage approach is also asymmetric. Pattern control is the minority player on the underside and we rely primarily on absorption. Top side we have a wide variety of possibilities: close by or far away, inward angle, flat or outward, absorptive material, reflective or diffusive. The worst-case scenario is close, inward and reflective.

8.6 Speaker Coverage Shapes Rooms are highly variable in shape, but are invariable over frequency. Speaker coverage patterns have limited variability in shape but are highly variable over frequency. Our goal of filling the room with even coverage over frequency can be met with a single speaker that matches the room shape and has a constant beamwidth, or via a speaker array that combines to create constant beamwidth. We begin with the single speaker in this chapter and the array in the next. The different ways of characterizing speaker coverage were described in section 3.10.5. Here we apply them to minimum variance.

8.6.1 Radial shape The radial shape is coverage pizza, an equidistant arc from ONAXNEAR (0 dB) to OFFAXNEAR (-6 dB). This is constant distance (not constant level). It’s not a minimum-variance radial line but rather the maximum acceptable variance (6 dB variation), the worst we can stand before calling for backup. There’s a lot of open room here. The level along the radial line from ONAX to OFFAX might progress gradually through 0–1–2–3–4–5–6 dB. It could just as easily be 0–0–0–0–0–0–6 dB, 0–5–5–5–5–5–6 dB or even 5–3–1–0–1–3–6 dB. All we actually know is that OFFAX is 6 dB down from the highest level within the arc, which is assumed to be ONAX center. Is an 80° speaker the best choice for an 80° room? What’s an 80° room anyway? Speakers don’t fit into rooms like that, even fan-shaped ones (unless we put the speaker back at the apex of the fan behind the stage, ring!!!). The radial shape is poorly suited to minimum-variance design for a single speaker because it has no visual link to the minimum-variance shape.

FIGURE 8.15 A comparison of the three speaker coverage shapes: radial, forward aspect ratio (FAR) and lateral aspect ratio (LAR)

8.6.2 Forward shape (forward aspect ratio) The forward aspect ratio shape (FAR) outlines the single speaker’s minimum-variance shape in the forward direction (exclusive of rearward radiation). It is the rectangulation (depth by width) of ONAXFAR and OFFAXNEAR. The minimum-variance line connects between these points and can be simplified as a diagonal but in physical reality is variable and more likely rounded (Fig. 8.15). Speaker shape and room shape are linked via the aspect ratio, i.e. a 2:1 speaker may fit very well in a 2:1 room (a 60° speaker in a 60° room). The FAR helps us put a square peg in a square hole and a rectangular peg in a rectangular hole when we design our systems.

8.6.3 Lateral shape (lateral aspect ratio) The lateral shape is a straight line of coverage from ONAX to OFFAX. The line is not equidistant (like the radial) or equi-level (like the FAR) because we are moving away from the speaker as we approach the OFFAX position. Therefore the -6 dB point is reached by a combination of radial and forward loss. The lateral shape is relevant to minimum-variance design when the audience shape is a straight lateral line with minimal depth behind the start of coverage (e.g. a frontfill speaker). The speaker coverage width must (at least) match the audience width. Coverage width is a maximum acceptable variance specification, i.e. 0 dB to -6 dB. If we want to cover 3 meters of audience width we need at least 3 meters of speaker width. How do we get 3 meters of speaker width? They all do it. Somewhere. Just keep moving back till you get it. It’s a function of depth/width (see Fig. 3.41).

8.7 Room Shapes It’s tempting to think that the horizontal and vertical planes are simply two versions of the same story. Our approach to coverage belies the fact that these are different animals. The key difference is how we reach the people. In the horizontal plane we plow the coverage through the front rows to the back. The propagation path flows over the shape and the matching of coverage and audience shapes is of great importance. We want wide enough coverage to span the front row, while at the same time not overflowing too much at the rear. This may require a compromise shape that balances underflow and overflow. The vertical plane, by contrast, is only one person deep (lap child excepted). It doesn’t matter if our coverage is too narrow in the air above the audience or if we have too much coverage below them in the basement. We need only lay down a line of even coverage at ear level. We evaluate the shapes in fundamentally different ways, and use different versions of the speaker shapes for filling them.

8.7.1 Horizontal The horizontal shape is evaluated as a solid, a container to fill. The target shape is the audience seating plan (not the walls). The macro shape is depth (length from speaker to last seat), by width. A single speaker perspective distills the shape as follows: distance to the last seat vs. audience width at the mid-point depth. The speaker’s coverage shape seeks to approach this dimensional ratio. A rectangular seating target is the easiest to evaluate because the beginning, middle and ending width are the same. A splayed room (trapezoid) uses the mid-point width as would other variations around the basic rectangle. A narrow, fan-shaped room can be approached this way, but a wide fan can’t easily be characterized as having a mid-point width. It can be evaluated as length by radial angle, but this approach is only effective for a single speaker placed at the apex of the fan. That’s behind the stage, so forget that. If a fan shape looks like a bad candidate for rectangular approximation then it’s probably a bad candidate for a single speaker (a helpful correlation).

8.7.2 Vertical The vertical shape is evaluated as a coverage line running from the front row to the last seat (Fig. 8.16). The target shape is the audience head height (not the air above or the floor below). The shape is evaluated by coverage angle and range ratio (top seat to bottom seat). If the shape is too complex to be evaluated this way then it’s probably a bad candidate for a single speaker (a helpful correlation).

FIGURE 8.16 Examples of horizontal and vertical shape evaluation: (A) Vertical coverage shape connects from the top row to the frontfill transition in a single slope. (B) A single slope approach applied to a room with a relatively short balcony. Deeper balconies require separation. (C) Horizontal coverage is evaluated by the FAR shape for a rectangle and (D) a trapezoid. (E) Overly wide fan shape that cannot be covered by a single speaker.

Chapter 9 Combination

combination n. 1. combining; combined state; combined set of things or persons. 2. united action.
Concise Oxford Dictionary

9.1 Combined Speaker Variance Progressions This chapter explores minimum-variance behavior beyond the single-speaker response. The combined response is the sum of its parts. Therefore, our study of the single-speaker shape was time well spent. Let’s begin with a simple question: What is the combined coverage angle of two speakers compared with one? The answer is not so simple. The result can range from double to half of the individual element (Fig. 9.1). The decisive factor is the ratio of two opposing forces: overlap and isolation. These come in two forms: angular and displacement. We reduce overlap by isolating the

sources, i.e. pointing them in different directions (angular), moving them apart (displacement), or both. The elements retain much of their individual character when overlap is low, melding into sculpted shapes unique to combined arrays. When overlap is high, the individual elements mostly disappear and merge to resemble a unique (and narrower) single speaker. Combined arrays have characteristics distinct from the elements that comprise them and are highly variable over space and frequency. The shapes are unique, but not random, and therefore understandable, predictable and, most importantly, tunable. Overlap and isolation are the shaping tools. Isolation increases our ability to create complex shapes but has minimal ability to couple for power. There is no power addition without overlap but this carries the risk of combing. This is the Yin and Yang of these powerful forces.

9.1.1 Level variance progressions We can extend the minimum-level variance line beyond the limits of a single speaker in three directions: forward, radially and laterally (sideways) (Fig. 9.2). Forward extension is accomplished by adding delayed speakers in the main speaker’s on-axis plane. The forward speaker fills in to offset the main’s standard progression loss. The combined signals
propagate onward from there and the loss progressions resume (unless another forward extension is added). The minimum-variance (MV) line runs forward (front to back). The forward propagation loss of the individuals (-6 dB) is offset by the summation of the pair (+6 dB). Forward extension has two basic forms: “delay” and “relay.” The former is found under and over balconies whereas the latter is found at festivals. The differentiating factor is relative level. We are in the “delay” paradigm when we meet the mains at equal level (or less). The main/delay combination does not exceed 6 dB of addition while also providing increased signal/noise ratio. Forward speakers that dominate the distant mains are power-boosting stations (like cell phone relay towers). Forward extension with delays has a limited practical range whereas extension with relays is limited only by budgets.
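The balancing act described above, -6 dB of propagation loss per doubling of distance offset by +6 dB of coherent summation from the delay, can be sketched under free-field (inverse square law) assumptions; the distances here are hypothetical examples:

```python
import math

# Free-field sketch of forward extension: the main speaker's -6 dB loss
# over a distance doubling is offset by +6 dB of phase-aligned summation
# with a matched delay speaker.
def relative_loss_db(distance_m, reference_m):
    """Inverse square law level change relative to a reference distance."""
    return -20 * math.log10(distance_m / reference_m)

main_drop = relative_loss_db(32, 16)    # -6.02 dB: from a 16 m seat to 32 m
pair_gain = 20 * math.log10(2)          # +6.02 dB: matched coherent pair
print(round(main_drop + pair_gain, 2))  # 0.0: the MV line extends forward
```

Past the delay position the combined signal resumes the standard loss progression, which is why forward extension must be repeated (or relayed at higher level) for very deep coverage areas.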

FIGURE 9.1 The combination question

Highly overlapped coupled systems create another form of forward extension. The combined pattern is a narrowed version of the individual elements and therefore has more depth for a given width. This gives the system more “throw,” a form of forward extension. This is often attributed exclusively to the coupled line source but also applies to the point source and point destination as long as angular overlap is dominant over angular isolation. Point source arrays create radial extension by fusing speakers at their angular edges. The MV line curves in an arc (from ONAX of one speaker to the next). The individual axial loss (-6 dB) is offset by summation of the pair (+6 dB). The upper limit of radial extension, for obvious reasons, is 360°. Uncoupled line source arrays are lateral extenders. Sources are spread along a line, as is the coverage. Again the loss of the individuals (-6 dB) is offset by the summation of the pair (+6 dB), but this time the MV line runs straight across (from ONAX of one speaker to the next). Lateral extension accrues incrementally as devices are added and, again, is limited only by our budgets. These extension mechanisms can also be combined. Underbalcony delays are an example of combined extension: forward (mains + delays) and lateral and/or radial (delays + delays).

9.1.2 Spectral variance progressions Spectral variation for a single full-range speaker is a fixed parameter. Equalization, level or delay will
not change the coverage pattern over frequency. Arrays, however, can be constructed with combined coverage shapes that differ from the individual components (Fig. 9.2). A coupled point source array of high-order speakers can combine to minimum spectral variance by spreading the isolated HF range and narrowing of the overlapping LF range. A very simplified example: A pair of second-order speakers are arrayed at their unity splay angle of 40°, which spreads the HF coverage to 80°. The individual LF coverage angle is far wider than the 40° splay angle. The resulting overlap couples at the center and narrows the LF coverage. The combined shape has lower spectral variance than the individual elements.

Spectral Tilt Speaker arrays share their LF range much more than their HF. The result is spectral tilt in proportion with quantity. Coupled arrays can tilt more evenly over the spectrum than uncoupled arrays, which are limited by ripple variance. We again see spectral tilt in favor of the lows as previously with added reflections (section 8.2.2).

FIGURE 9.2 Summation-related spectral variance progressions

Some representative samples of summation-based mechanisms are shown in Fig. 9.3, where spectral tilting trend lines are compared. The progressions show a consistent trend in favor of LF content over HF. This common trait can be exploited to create a consistent spectral tilt. Because most progressions lead to tilting, the spectral variance is reduced by matching tilts rather than by futile efforts to stop the progression. Tilt can be leveled by equalization to the extent desired. A frequency response need not be flat to be considered minimum spectral variance. The decisive factor is consistency. Responses tilted in a similar manner are just as matched as flat ones.

FIGURE 9.3 Spectral variance progressions related to the combinations of speakers. Individual speaker responses are shown with representative tilt and level scaling. Combinations at various locations are shown for comparison.

9.1.3 Ripple variance progressions The standard ripple variance progression is shown in Fig. 9.4. The cycle is extendible and repeatable for multiple elements but follows a familiar pattern. The center point in the progression is the phase-aligned spatial crossover point (XOVR). This is the point of lowest variance in the progression, and yet it is the center of the area of the highest rate of change in ripple variance. The coupling point is analogous to the point of lowest wind speed in a hurricane: the eye. The area just outside the eye, however, contains the highest variations in wind speed. That is the nature of the XOVR area, and we hope to escape the storm by isolating as quickly as possible. The ripple variance progression is present in all forms of speaker arrays. Any two speakers in a room will eventually meet somewhere, at least in the LF range, regardless of spacing, relative level or angular orientation. Our focus is on full-range spatial crossover transitions, so we will limit our discussion to those that fall within the coverage edges of the elements. The transitional points will fall in different areas for each array configuration and therein we find the key to managing ripple variance. Some array configurations confine the variance to a small percentage of their coverage. Others fall into the combing zone and never climb out again. The primary indicators are source displacement and overlap. The least variance occurs when both of these are low. The highest variance occurs when they are both high.

The Ripple Variance Progression

■ Frequency response ripple: Ripple increases with overlap and displacement as XOVR is approached.
■ From XOVR: coupling zone to combing zone to transition zone to isolation zone.
■ From isolation zone: to coverage edge, or transition zone to combing to coupling at XOVR.

This cycle repeats for each speaker transition until the coverage edge is reached. Each XOVR location will need to be phase aligned. Phase-aligned spatial crossovers cannot eliminate ripple variance over the space. They are simply the best means to contain it. The progression rate is not equal over frequency. The HF range is the first to enter (and first to leave) each of the zonal progressions. Finalists in both cases are the lows. It’s possible, and quite common, to run the HF from XOVR to isolation before the lows have even left the coupling zone. The coupled point source array is the classic example of this. Therefore, our roadmap for the variance progression will need to factor in frequency, since the transitional milestones are not evenly spaced.

Ripple Variance Geometry

FIGURE 9.4 The standard ripple variance progression from off axis to on axis and through crossover

We have previously discussed how triangulation helps us visualize ripple variance over the space (review Figs 4.16 to 4.18). The four triangle types give strong indication of the spatial behavior of ripple variance. It is given that the ripple pattern is not stable over the space. Our primary concern is identifying areas where combing is strongest (the triangulation method) and most rapidly changing. The rate of change depends upon the spacing and angular orientation of the sources. The highest rate of change occurs when we are approaching one source while moving away from the other (the line between the sources). The rate decreases when we are moving away (or approaching) from both sources (the uncoupling line). The isosceles triangle, the representative of the coupling zone, has the lowest ripple but will usually have a high rate of change in the right and acute triangle areas nearby. The obtuse triangle will have reduced ripple due to isolation and will enjoy a low rate of change (moving away from both sources). These two scenarios represent the extremes of ripple variance over the space. Example arrays are shown in Fig. 9.5.

FIGURE 9.5 The spatial reference showing the rate of change for various array configurations

9.2 Level Variance of Coupled Arrays A series of scenarios follows to illustrate how individual speaker shapes combine into various arrays. Forward aspect ratio (FAR) icons are used in layout form to facilitate visualization of the individual contributors. These are compared and contrasted with prediction plots of combined arrays in the same configuration. The comparison serves to visually deconstruct the combined prediction response and see the part played by each component. The icons closely match the combined prediction when isolation is dominant and far less so when overlap is dominant (coupling or combing). Coupling modifies the combined shape in a substantial but consistent manner whereas combing causes frequency-dependent shape-shifting with only vague resemblance to the individuals. We will explore how shape, spacing, relative level and angle play unique roles in the combination.

9.2.1 Coupled line source The coupled line source is inherently 100% overlapping (no angular separation and minimal displacement). We start with a minimally displaced third-order system with exclusively coupling zone summation (Fig. 9.6). The aspect ratio icons are overlaid in a vertical line, creating the visual impression of a laterally spread source with a combined coverage angle slightly wider than the elements. The actual combined pattern is narrower than the individual elements because they are most closely phase matched in the center. This shows a limitation in our aspect ratio approach: The simplistic rendering fails to see the phase-driven narrowing of coupling zone summation.

FIGURE 9.6 Minimum-level variance shapes for the symmetric coupled line source array. Left: a sixteen-element third-order speaker array with 0° splay angle and constant level. Right: a four-element second-order speaker array with 0° splay angle and constant level.

Also shown in Fig. 9.6 is a second-order system with enough displacement to create substantial combing. Large displacement and highly overlapped patterns make combing the dominant response over coupling. The resulting pattern has highly variable fingers and lobes over frequency, and therefore lacks resemblance to the individual icon responses. So far the aspect ratio icons have failed to accurately characterize either of the overlap zone behaviors (coupling and combing) because they lack phase information. Let’s look at the coupling zone again and try a different approach with the icons. It’s often claimed that arrays combine to resemble a single unified speaker. They can, under the following condition: 100% overlap between closely coupled sources. The composite speaker differs from its elements: it’s narrower. Doubling the elements reduces the coverage shape. It may be only a slight reduction but could bring it down to half as wide (2× elements = 2× FAR). Additional speakers lengthen the rectangular FAR pattern, without making it wider. Maximum narrowing (without combing) occurs when the sources are ½ λ apart. Smaller displacement results in proportionally less narrowing. Let’s move forward with the ½ λ spacing. Instead of stacking the aspect ratio icons side by side (like the physical speakers) let’s try stacking them forward (like the speaker pattern shapes). The forward extension of a fully coupled line source can be visualized by forward stacking of the FAR shapes (Fig. 9.7). A single 180° (FAR = 1) element provides the base shape. Each time a speaker is added to the coupled line source the combined pattern is equivalent to adding another FAR icon forward of the speaker. The FAR doubles with each quantity doubling, and the on-axis power capability rises 6 dB. 
In each case (two, four and eight boxes) three renderings are seen: the individual elements stacked forward, the combined FAR icon of an equivalent single speaker and a combined acoustic prediction of the given quantity of elements. Note: This effect will not occur until the array is fully coupled, i.e. all of the individual patterns have overlapped. The multi-element assembly process into a combined “single” speaker is shown in Fig. 9.8. Here we see two levels of addition from two pairs of speakers into a trio. Once all elements have overlapped, the coverage angle is set and will continue forward in the same shape as a comparable single speaker. The number of assembly steps to reach full coupling (and a constant angular shape) rises with each additional element. Remember that coupled arrays are within partial wavelengths of each other, and uncoupled arrays are multiple λ apart. Displacement must be within ½ λ for FAR multiplication to occur. If not, we get combing instead.
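The ½ λ displacement limit for full coupling can be put into numbers (speed of sound taken as roughly 343 m/s at room temperature; the function name is mine):

```python
# Sketch: maximum source displacement for coupling-zone behavior
# (<= 1/2 wavelength). Shows why coupled-line-source narrowing holds
# over the LF range but breaks down at HF for physically separate boxes.
SPEED_OF_SOUND = 343.0  # m/s, approximate value at ~20 C

def max_coupled_spacing_m(freq_hz):
    """Half-wavelength spacing limit at a given frequency."""
    return SPEED_OF_SOUND / freq_hz / 2

print(round(max_coupled_spacing_m(100), 3))   # 1.715 m: easy at 100 Hz
print(round(max_coupled_spacing_m(8000), 4))  # 0.0214 m: ~21 mm at 8 kHz
```

At 100 Hz even widely stacked cabinets couple, while at 8 kHz the half-wavelength limit is smaller than any practical driver spacing, so the HF range of displaced sources moves into combing rather than coupling.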

FIGURE 9.7 Quantity effect on the combined aspect ratio for the symmetric coupled line source array. Elements are stacked along the width. Successive quantity doublings cause successive FAR doublings. The combined aspect ratio matches the shape of the individual element aspect ratios placed in a forward line. Coincidence?

FIGURE 9.8 Combined shape of the coupled line source. The combined coverage angle is found once the array has fully formed at the top of the parallel pyramid.

9.2.2 Coupled point source Angular separation opens an avenue to isolation, which improves the correlation between the FAR icons and the actual combined shapes. Isolation renders amplitude dominant over phase, allowing the individual icon shapes to hold up. The combined shape is expressed as a coverage angle, in degrees, rather than the rectangular shape of the aspect ratio. The shape of minimum-level variance is now an arc. Our pizza has finally arrived. The coupled point source spreads energy in radial form (Fig. 9.9). The individual aspect ratio
rectangles are spread along the arc like a fan of playing cards. The gaps are filled in by the shared energy and a radial line of minimum-level variance is created when the unity splay angle is used. The minimum-variance line runs between ONAX of the outermost elements. The line of maximum acceptable variance extends beyond the last elements until the -6 dB point is reached. An array for a given angular span of minimum variance can comprise a small number of wide elements or a large number of narrow elements. The latter scenario creates a higher ratio of coverage within the minimum-variance line compared with the outside edges framed within the maximum acceptable span.

FIGURE 9.9 Quantity effects on the combined aspect ratio for the symmetric coupled point source

An important feature here is the difference between the minimum-variance line (0 dB) and the maximum acceptable variance line (0 to -6 dB). The upper-right portion of Fig. 9.9 shows a single 180° speaker, and its standard square aspect ratio shape. The radial line of maximum acceptable variance spans 180°, and yet the radial line for 0 dB spans 0°. Why? Because a radial arc drawn with the speaker at the center would trace a line that immediately shows level decline as it leaves on axis until the -6 dB point is reached. Compare this with the 360° pattern (lower left), which has an FAR of 0.5. This speaker has a span of 180° on the 0 dB line and 360° on the 0 to -6 dB line. Compare and contrast the two singles with the four combined arrays. The 360° single and the four arrays all stay at 0 dB for a span of 180° (the minimum-variance line) but differ in how far they go before falling to -6 dB (the maximum acceptable variance span). The lesson here is that an array filled with narrow elements has a sharper edge than one comprised of fewer wide elements. The “edge” is revealed as the difference between the minimum-variance and maximum-acceptable-variance lines. If we want 180° of 0 dB level variance we will need to create an array with a 180° span between the on-axis points of the outermost elements. We can do this with three speakers at 90° (0 to -6 dB = 270°), 181 speakers at 1° (0 to -6 dB = 181°) or anything in between. If we are concerned with leakage we will
find advantage in keeping the individual elements narrow. An additional note concerns the last scenario, which shows overlap between the elements. Although the overlap will add power capability to the front of the array and may cause increased ripple variance, this will not change the ratio of the minimum-level variance and maximum acceptable variance spans. The combined FAR for our example point source arrays settles around 0.5, which is the FAR value of a 360° individual element. This is no coincidence. The array’s radial spread creates an equal-level arc with the same shape as the 360° single element. The symmetric coupled point source is essentially a sectional version of the 360° shape that can be filled as much as desired. Symmetric Coupled Point Source The symmetric coupled point source can provide curved lines of equal level radiating outward from the virtual point source. The combined response strongly resembles the radial spread of the individual FAR patterns as long as combing is minimal (Fig. 9.10). The correlation to the simplified icons is best with low angular overlap (isolation) and low displacement (coupling), and worst with high overlap and high displacement (combing). The left panels show a second-order array with minimum overlap. The symmetric coupled point source fills an arc. Two modes of overlap behavior characterize the symmetric point source array over quantity. The 100% overlap model is, of course, a line source (not a point source). In any case it is the ultimate extreme limit, and its behavior is characterized by power and FAR multiplication. The other extreme is unity splay, which is characterized as coverage angle multiplication. This has the least on-axis power addition, but don’t forget that we are spreading power addition across the arc. The middle-ground behavior is the partial overlap configuration, which is characterized as splay angle multiplication. This is the middle ground in on-axis power as well.
Note that coverage angle and splay angle are one and the same at unity splay, but not at others.
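The relationship between the minimum-variance (0 dB) span and the maximum-acceptable-variance (0 to -6 dB) span described above reduces to simple arithmetic, assuming unity splay between matched elements (the function name is mine):

```python
# Sketch: 0 dB vs. 0 to -6 dB spans of a symmetric coupled point source.
# The 0 dB (minimum-variance) line runs between the on-axis points of the
# outermost elements; the -6 dB edges extend half an element's coverage
# beyond each end of that line.
def spans_deg(quantity, splay_deg, element_coverage_deg):
    min_variance = (quantity - 1) * splay_deg
    max_acceptable = min_variance + element_coverage_deg
    return min_variance, max_acceptable

print(spans_deg(3, 90, 90))  # (180, 270): three 90-degree speakers at 90-degree splay
print(spans_deg(181, 1, 1))  # (180, 181): 181 speakers at 1-degree splay
```

Both arrays hold 0 dB over the same 180° span; the difference is the soft 90° skirt of the wide-element version versus the 1° "sharp edge" of the narrow-element version, which is the leakage argument made above.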

Angular overlap effects on combined coverage angle
1. 0–95% overlap: combined coverage = (quantity) × (splay angle between elements).
2. 100% overlap: combined coverage = (quantity) × (FAR).

What are our options if we need 40° of coverage? There are countless ways to create a 40° pattern. The following example illustrates the role of overlap in the computation of combined coverage angle.

FIGURE 9.10 Minimum level variance shapes for the symmetric coupled point source. Left: constant speaker order, unity splay angle and constant level. Right: constant speaker order, 60% overlap splay angle and constant level.

Combined coverage angle of 40°
■ 1 × 40° speaker (FAR 3).
■ 2 × 20° speaker @20° splay: 0% overlap (coverage of 20° × 2 = 40°).
■ 2 × 27° speaker @20° splay: 25% overlap (splay of 20° × 2 = 40°).
■ 2 × 40° speaker @20° splay: 50% overlap (splay of 20° × 2 = 40°).
■ 4 × 40° speaker @10° splay: 75% overlap (splay of 10° × 4 = 40°).
■ 8 × 20° speaker @5° splay: 75% overlap (splay of 5° × 8 = 40°).
■ 2 × 80° speaker @0° splay: 100% overlap (FAR 1.5 × 2 = 3).

There is always a tradeoff of power and ripple variance. Displacement must be kept small when overlap is high. The third-order speaker is our best choice in high-overlap designs. Low-overlap designs favor the first- and second-order plateau beamwidth with its extended isolation zone.

Asymmetric Coupled Point Source

Differing drive levels create an asymmetric coupled point source, bending the radial shape into an elliptical or diagonal contour. The aspect ratio icons scale with the level change allowing us to see the asymmetric shaping. A level reduction doesn’t change a speaker’s coverage angle but it does change its shape: It gets smaller (Fig. 9.11). A 6 dB loss reduces the icon to half-scale, which reveals its role in context to louder elements. An example asymmetric point source is shown in Fig. 9.12. Notice in panel “B” that the array configuration has highly overlapped coverage angles in the on-axis area of the 10° speaker. Once level is factored in we see that the inverse is true: This is actually the most

isolated area, a point made clear by the scaled FAR icons and the combined prediction. It’s easy to get worried about the interaction of wide speakers such as downfills, frontfills and delays when we view their overlapping radial coverage lines. It becomes clear they are no threat once level is factored in with FAR scaling.
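The overlap bookkeeping in the 40° menu above reduces to two small formulas. Here is a minimal sketch (the function names are mine, not the book's), assuming the partial-overlap rule combined coverage = quantity × splay:

```python
def overlap_pct(element_coverage, splay):
    """Angular overlap between neighbors, as a fraction of element coverage."""
    return (element_coverage - splay) / element_coverage

def combined_coverage(quantity, splay):
    """Partial-overlap point source: combined coverage = quantity x splay."""
    return quantity * splay

# Re-deriving entries from the 40° menu:
print(overlap_pct(20, 20), combined_coverage(2, 20))  # 0% overlap, 40°
print(overlap_pct(40, 20), combined_coverage(2, 20))  # 50% overlap, 40°
print(overlap_pct(20, 5), combined_coverage(8, 5))    # 75% overlap, 40°
```

At 100% overlap (0° splay) the splay rule collapses and the FAR-multiplication rule takes over, as in the final 2 × 80° entry.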

Compensated unity splay angle

FIGURE 9.11 Level/range ratios for sources with matched origins, including the FAR scaling

The methods used to create the asymmetric point source array are shown in two scenarios in Fig. 9.13. The left panels show a log-level taper and constant splay angle (50% overlap) among matched components. Such an array aims the on-axis point of each succeeding element at the -6 dB point of the one above. Each layer is successively tapered by -6 dB resulting in a unity XOVR in front of each lower speaker. Offsetting the level taper and splay to achieve a unity XOVR is termed the “compensated unity splay angle.” The maximum amount of layer separation is used here (6 dB). The result is a curved minimum-variance region that continues indefinitely over distance. The right panel shows the same principles applied with unmatched elements that double their coverage with each layer. The compensated unity splay angle is applied again but each layer is separated by a larger splay angle in order to preserve the relationship of the on-axis aim point to the -6 dB edge of the unit above. The result is a diagonal minimum-variance line linking XOVR to XOVR, the product of the complementary asymmetry of changing splay and coverage angles.

FIGURE 9.12 Aspect ratio icons scaled with level in their individual and combined shapes

FIGURE 9.13 Minimum level variance shapes for the asymmetric coupled point source. Left: constant speaker order and splay angle, with tapered level. Right: mixed speaker order with splay angle tapered to provide a constant overlap ratio, with tapered level.

The asymmetric coupled point source is chosen for the same reasons as the symmetric point source but conforms to a different shape. The power addition of the asymmetric point source is self-scaling because the entire rationale for this array is that it fits into a variable distance shape. The levels are tapered as appropriate to scale the array to the distances presented. The unity splay angle must then be compensated as appropriate for the level offset factor. There is no “correct” element with which to start the asymmetric coupled point source. We must start with something and then the process begins as we see what is left. After the next piece is added we re-evaluate again until the shape is filled. The process can be restarted until a satisfactory assemblage

of puzzle pieces is found. It is simple to find the unity splay in a symmetric array. A pair of 30° speakers will be splayed 30° apart. We can just add them and divide by two, as long as the levels are matched.

(Coverage1 + Coverage2)/2 = unity splay
(30° + 30°)/2 = 30°

The spatial crossover is at the geometric and level mid-point: 15° off axis to either element. What if we want to merge a 30° and 60° speaker? The same equation applies, as long as the levels are matched.

(30° + 60°)/2 = 45°

The elements meet at the -6 dB edge of both elements: 15° off axis from the 30° element and 30° off axis from the 60° unit. The spatial crossover is the level center, but not the geometric center. The equation must be modified whenever levels are offset (Figs 9.14 and 9.15). There is no best splay angle between two speakers until we know the distance/level relationship. If we take two matched elements and turn one down 6 dB, the standard unity splay angle will not provide unity results. The geometric center finds -6 dB from one element and -12 dB from the other. What is the unity splay angle then? A change of 6 dB is a level difference of 50%. The splay angle must be adjusted by the same ratio to return to unity performance at XOVR. A reduction of 6 dB shrinks the speaker’s range in half. That is the decisive number.

The compensated unity splay equation
((Coverage1 + Coverage2)/2) × (Range2/Range1) = compensated unity splay*
*assumes that levels are set in proportion to distance

Here is an example of two 30° speakers, with one covering half the distance of the other (-6 dB):

((30° + 30°)/2) × (0.5/1) = compensated unity splay
(60°/2) × 0.5 = compensated unity splay
30° × 0.5 = 15°

Next we join a 30° speaker with a 60° element that is covering 70% of the range (-3 dB):

((30° + 60°)/2) × (0.7/1) = compensated unity splay
(90°/2) × 0.7 = compensated unity splay
45° × 0.7 = 31.5°
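The compensated unity splay equation is easy to wrap in a small helper that reproduces both worked examples. A minimal sketch (the function name is mine):

```python
def compensated_unity_splay(cov1, cov2, range_ratio):
    """((Coverage1 + Coverage2)/2) x (Range2/Range1).
    Assumes levels are set in proportion to distance (-6 dB = half range)."""
    return (cov1 + cov2) / 2 * range_ratio

print(compensated_unity_splay(30, 30, 0.5))  # 15.0  (two 30° boxes, one at -6 dB)
print(compensated_unity_splay(30, 60, 0.7))  # ~31.5 (30° + 60°, lower unit at -3 dB)
```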

FIGURE 9.14 Compensated unity splay angle examples for typical horizontal plane shapes

FIGURE 9.15 Compensated unity splay angle examples for typical vertical plane shapes

9.3 Level Variance of Uncoupled Arrays

The minimum-variance behavior of uncoupled arrays differs fundamentally from the coupled arrays. The differentiating factor is range. The coupled arrays set their MV shape and hold it over an unlimited range, whereas the uncoupled arrays are in constant change. FAR icons help to visualize the changing shape over distance and we will again compare them to the combined predicted responses. The advantage of the FAR approach becomes clear when we need to cover a shape that’s wider than deep (an aspect ratio below 1).

Line length governs low-frequency control in the coupled line source: minimal control when λ exceeds the line length, extra control when λ < line length and “just right” control when λ = line length (Fig. 9.22).

FIGURE 9.22 The effects of line length in the coupled line source, length is 2.76 meter (1 λ @125 Hz). The circles are sized to the wavelength of the frequency shown (lower left). Notice the front lobe follows the confluence of the circles and cancellation occurs in places where the circles diverge. The coverage angle is around 70° regardless of quantity when the line length equals the wavelength (upper panels). The coverage angle is reduced by half when λ equals half the line length (lower center and right panels).

Element Spacing

There is no theoretical limit to line length and its endless narrowing. The physical limit is reached when the element displacement is large enough to uncouple the array and lobe. This is a classic “catch-22” scenario. We can narrow the array forever if we have an infinitely small displacement. On the other hand, there won’t be any narrowing at all without displacement between the elements. Displacement is the steering mechanism.

Primary features of element spacing
■ Displacement provides phase offset in the uncoupling plane (the steering mechanism).
■ Displacement should not exceed 2/3 λ. A displacement of 1 λ produces fatal side lobes.
■ Low displacement and quantity minimize steering (high amounts maximize steering).

Recall the spatial icon for summation (Fig. 4.16), which contains two primary features: the coupling and uncoupling lines. The line source concentrates its forward beam onto the coupling line while physically spreading along the uncoupling line. The coupling line has the least displacement/time offset between elements and the uncoupling line has the most. This causes maximum forward extension along the coupling line and lesser amounts along the uncoupling line. The sideways coupling will be substantially less than the forward coupling as long as the total phase offset between the first and last elements is kept low. Cancellations (>6 dB) are reserved for phase offsets >150°.

The most well-known cancellation mechanism is polarity reversal between adjacent elements. Such reversals don’t necessarily reduce the sound level in the room, but they certainly move it around. Polarity reversals create a frequency-independent phase offset of 180° at an otherwise symmetric spatial crossover, which splits the beam sideways into a figure eight.
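The 2/3 λ limit above converts directly into a maximum spacing for any given top frequency. A minimal sketch (c = 340 m/s assumed, the function name is mine):

```python
def max_spacing_m(f_max_hz, c=340.0):
    """Largest element displacement that stays within 2/3 wavelength at the
    highest frequency the array must control (the rule of thumb above)."""
    return (2.0 / 3.0) * c / f_max_hz

print(max_spacing_m(125))   # ~1.81 m for elements run up to 125 Hz
print(max_spacing_m(1000))  # ~0.23 m: why full-range line elements sit so close
```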

FIGURE 10.1 Comparison of a two-element subwoofer array with normal polarity, reverse polarity and 8 ms delay between the elements

Delay offset at a spatial crossover can mimic a polarity reversal, but only at one frequency. Delay offset is frequency-dependent phase steering, whereas polarity is a full-range control. We begin by comparing polarity reversal and delay offset in a two-element array (Fig. 10.1). The polarity reversal vacates the crossover area at all frequencies. By contrast, a delay of 8 ms has three distinct frequency-dependent effects. A beam is steered toward the delayed element’s side at 31 Hz (90°, ¼ λ offset), identical to the polarity reversal at 63 Hz (180°, ½ λ) and the same as the original combination at 125 Hz (360°, 1 λ). Meet the players (in order): beam steering, cancellation and coupling. Let’s put them to work.
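The three behaviors can be checked by converting the fixed 8 ms offset into phase at each frequency. A sketch (exact octave centers used in place of the rounded 31/63/125 Hz labels; the function name is mine):

```python
def phase_offset_deg(delay_ms, freq_hz):
    """Phase offset produced by a fixed time offset at a single frequency."""
    return 360.0 * freq_hz * delay_ms / 1000.0

# The same 8 ms offset lands in three different summation zones:
for f in (31.25, 62.5, 125.0):
    print(f, phase_offset_deg(8.0, f))  # 90° (1/4 λ), 180° (1/2 λ), 360° (1 λ)
```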

10.1 Subwoofer Arrays

The normal placement rules for full-range speakers do not apply to subwoofer arrays. Two unique features create this opportunity: (a) separate enclosures with limited frequency range; (b) large wavelengths able to diffract around neighboring objects, most notably other subwoofers. No sane person places a full-range speaker facing directly into the back of another such speaker. We’d be crazy not to consider this option for subarrays.

Individual subwoofers come in two basic flavors, omnidirectional and cardioid. The omnidirectional version, like its microphone counterpart, is not truly omni and narrows with frequency. Commercially available cardioid versions use phase offset to combine front and rear drivers to cancel behind and couple forward. It’s always best to meet the individual elements before combining (Fig. 10.2). Our representative element would appear to have a flat beamwidth of 360° over its 31 Hz to 125 Hz range, if evaluated by the standard method (OFFAX = -6 dB). This should ring some alarm bells because we know the beamwidth of cone drivers naturally narrows with frequency. This shows the vulnerability of angular specification for overly wide devices.

The coverage angle of 360° is the loosest spec in all of audio. There’s a ±6 dB range of coverage patterns that meets this spec. Does it matter to you if the level at the rear is the same as the front, -6 dB or +6 dB? All qualify as 360°. Wait, there’s more! How about 0 dB front and back and +6 dB on the sides? Still 360° on the spec sheet. We won’t miss any of these differences if we follow the 0 dB contour instead of classifying the shape radially. The aspect ratio shape (the rectangulation of the 0 dB contour) sees it all. We’ve previously used a forward-looking characterization (FAR), but now the back side is an equally important part of the shape. We include front/back and side/side in the total aspect ratio characterization.
The shapes of truly omnidirectional 31 Hz and quasi-omni 125 Hz don’t appear significantly different at first glance, but the differences become clear as quantity increases.

FIGURE 10.2 The directional characteristics of “omnidirectional” subwoofers

Directional arrays can be created with combinations of omni elements. There are, as usual, various

options available and we will, as usual, isolate their effects and see where it leads. We’ll focus on coupled arrays, because uncoupled subwoofer arrays are degraded by ripple variance. There is simply too much overlap unless the uncoupled elements are cardioid.

Coupled subwoofer array options
1. Lateral arrays: the coupled line source.
2. Radial extension: the coupled point source.
3. Forward arrays: end-fire arrays, dual element in-line, gradient.

Before we go any further I want to mention an excellent resource to aid your understanding of subwoofer arrays: SAD (subwoofer array designer) by Merlijn Van Veen.

10.1.1 Coupled and uncoupled line source

Quantity and Spacing Effects

The coupled line source is first, as usual, and we’ll use the familiar doubling technique to expose the trends (Fig. 10.3). Progressive narrowing over frequency becomes apparent even in small quantities. An interesting convergence appears as quantity doubling yields the same coverage at half the frequency. The parallel pyramid is back: full-range narrowing that preserves (and makes obvious) the spectral variance between 31 Hz and 125 Hz.

Element spacing has a similar effect (Fig. 10.4). Array length is held constant while successively smaller quantities are spread apart to fill it. There are limits to this, because the wavelengths we are transmitting don’t rescale. The wide spacing scenario (3.2 m) shows evidence of combing zone summation, which would worsen as frequency and/or displacement rise. An important trend is seen in the progressively expanding base of the parallel pyramid. The coupling multiplication is reduced proportionally as displacement rises, i.e. we are in the process of uncoupling. Notice that element quantities of 4, 8 and 16 maintain a nearly constant coverage angle for a given frequency. Still no progress, however, toward reducing spectral variance.

FIGURE 10.3 Quantity effects at fixed spacing for subwoofer coupled line source arrays

FIGURE 10.4 Offsetting spacing and quantity for subwoofer coupled line source arrays

Beam Steering by Delay

The bass response peak at the center of L/R systems is so well known to concert sound engineers that it has its own name: power alley. We know it here as the coupling zone, spatial crossover XLR. Widely spaced L/R systems have small alleys and vice versa. The width shrinks with frequency and then the combing begins. It’s an overlap crossover that can’t stay in time as we move off center. Can

we do anything about it? We can steer the array physically by splaying the left and right channels apart. Alternatively we can steer the pattern electronically. The subwoofer’s limited spectrum enables simple beam steering, i.e. asymmetric delay between elements. “Simple” beam steering means a single delay value for an entire element: fixed time offset (and therefore variable phase). Complex beam steering uses frequency-dependent delay (e.g. all-pass and FIR filters), which can give variable time offset and fixed phase offset (or other options). Simple steering is inherently range limited (subwoofers only) but easier to comprehend and implement. Complex steering can be applied to subs or full-range systems, but requires adult supervision. Let’s move forward with simple beam steering, i.e. asymmetric time offset. There will be a 4× multiplication of effect for a subwoofer operating over a 30–120 Hz range. The multiplier reduces to 3× if we cut the subs off at 90 Hz. Restricting the range makes for more consistent effect over the sub as a whole. Let’s see if we can put this to practical use. Let’s first establish some beam steering goals. First we’ll try moving the coverage pattern off center. Then we’ll try to reduce spectral variance, i.e. achieve a constant beamwidth. We begin with a symmetric coupled line source, and the standard parallel pyramid comes into view (Fig. 10.5). The pyramid height rises with frequency, so the 125 Hz peak is the highest and narrowest. All frequencies share the same dead-center axial orientation. Now let’s linearly delay the elements in one direction. The pyramid’s equal timing lines moved sideways, but the equal-level lines stayed in place. The beam bends toward the synchronous time center unless the level side provides a resistive force, i.e. isolation. The extent of the movement will be proportional to the frequency and the time offset. The 30 Hz peak moves only 25% as far off center as the 125 Hz range. 
It is a lot easier to push a dog than an elephant. A log delay taper (Fig. 10.5) is more effective at maintaining a constant angular change. The effect is a half-conversion of the line source into a point source. The delay creates the arrival times of the point source (albeit with the amplitude aspects of the line source). The result is a partial steering outward. Delay-induced beam steering is essentially a willfully misaligned spatial crossover. The 0 ms time offset location is moved off the 0 dB level offset location. We see we can bend the beam, but we’ve made no progress toward flattening the beamwidth yet.
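The steerability ratio quoted earlier (4× over a 30–120 Hz sub) is simply the frequency ratio, since a fixed delay produces phase offset in proportion to frequency. A trivial sketch (the function name is mine):

```python
def steering_multiplier(f_low_hz, f_high_hz):
    """A fixed (frequency-independent) delay produces phase offset proportional
    to frequency, so the top of the band is steered this many times farther."""
    return f_high_hz / f_low_hz

print(steering_multiplier(30, 120))  # 4.0 for a 30-120 Hz subwoofer
print(steering_multiplier(30, 90))   # 3.0 when the subs are cut off at 90 Hz
```

Restricting the range shrinks this spread, which is why the chapter notes a 90 Hz cutoff gives a more consistent steering effect over the sub as a whole.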

FIGURE 10.5 Lateral steering of the coupled line source with delay tapering

The next step is a double steer. Half the array steers north, while half steers south. We can now take advantage of the fact that the higher frequencies are more steerable. Double steering pulls the 100 Hz range outward (in both directions) more strongly than 30 Hz. Recall that 100 Hz was narrower than 30 Hz. The double steering reduces spectral variance by making the 100 Hz stretch look more like the 30 Hz response.

Beam Steering by Level Tapering

We can consider the possibility of level tapering. This is a revival of the asymmetric coupled line source that proved ineffective for full-range systems. Can this work better for subwoofers? It’s actually even less effective because there is more overlap and less isolation in the subwoofer range. Level tapering without isolation does nothing but reduce headroom and widen the coverage (slightly). We are trying to steer the Titanic with a rudder that’s way too small, and we all know how that turned out. We can bring this down to a simple math equation: level tapering × (% isolation) = potential benefit.

10.1.2 Coupled and uncoupled point source

Add splay and we have a coupled point source (Fig. 10.6). Angular isolation was the most effective means of reducing spectral variance in full-range systems. Can it work for LF devices? Let’s take a moment to consider why this is not common practice. It can be very difficult in practice

to find space to curve an array. Stage fronts are flat, the security perimeter is flat, and on it goes. We’ll have to make a strong case to get such valuable real estate set aside for us. Second, there is widespread belief that subwoofer responses are de facto “too wide,” which is often overcompensated by excessive narrowing. It takes a lot of subs to get 30 Hz to squeeze exactly into the room, but 100 Hz is very prone to over-steering. The spectral variance of proportional beamwidth strikes again. If we splay the array outward, a level reduction at the center is a certainty (and a step in the right direction toward reducing power alley). The third factor comes from the SPL Preservation Society. Anything that steers energy away from front of house (FOH) will reduce the dB SPL (at FOH). Enough said.

FIGURE 10.6 Radial steering of the coupled point source

FIGURE 10.7 TM array

10.1.3 TM array

The TM array is a unique solution to the challenge of large-scale “in the round” applications. The TM array has many unique features, not the least of which is that it was not invented by Harry Olson before you were born. It was Thomas Mundorf who innovated the concept for the Metallica tour (Fig. 10.7). The design goal is omnidirectional coverage in the horizontal plane and a controlled beam in the vertical. The horizontal configuration is four boxes facing into each other (the ultimate symmetric coupled point destination). The principle is zero displacement in the horizontal plane, and can only work in the LF range where the physical obstruction of the speakers to each other is not destructive. A coupled line source is used in the vertical plane. It must be long enough, and high enough, to provide enough vertical steering to get over the stage (in the case of Metallica the drummer). The vertical beam can be steered downward by the beam steering described above. The TM array is the minimum spectral and ripple variance leader in the horizontal plane.

10.2 Cardioid Subwoofer Arrays

The reasons for choosing cardioid arrays are obvious. Rarely do we suffer from insufficient back lobe! Reasons for not choosing them, not so much. There are benefits, but also costs.

Cardioid subwoofer considerations
■ Reduce stage leakage: Typical configurations can yield >20 dB front/back ratios.
■ Improved D/R ratio: Rear/side control reduces early house reflections.
■ Pattern optimization: Steering reduces horizontal coverage (not just rear).
■ Efficiency loss: reduced maximum SPL (compared with all subwoofers as a block).
■ Cost/practical issues: extra space required, special rigging, etc.
■ May be ineffective due to local acoustics, e.g. under stage, recessed in a wall.
■ Not always applicable: Why cancel the rear if we’re against a wall?
■ Time stretch: Delayed elements arrive behind the front (gradient).

Two cardioid configurations are in common use: end-fire and gradient (2-element in-line and inverted stack versions). The end-fire is front steered: coherent phase on the front side and random in the rear. Gradients are rear steered: phase matched (and polarity inverted) on the backside, and quasi-coupled in the front. Cardioid steering is an active process that requires linear performance to maintain its pattern. The pattern will be dynamically re-steered if some elements limit before others. Therefore matched elements at unity level are recommended for all configurations. It can be tempting to taper levels to improve performance at a particular location but this can trigger unexpected steering at show time. Cardioid subwoofer arrays can be used as elements of the linear and radial extension arrays described above, even the beam-steered versions. It looks like a graveyard, but it works!

10.2.1 End-fire

We can use the same language to describe the end-fire array and the most infamous sound engineer haircut, the mullet: business on the front side, party in the back. Elements form a forward line with predetermined spacing (typically around 1 m). The forward speakers are incrementally delayed to sync in front. A four-box array with 1 m spacing would delay the 2nd box by 1 meter, the 3rd box by 2 meters and the 4th by 3 meters. It sounds silly to delay a box 3 meters but that is exactly correct (it’s 8.82 ms, the propagation time for 3 meters). All elements arrive at the front at 3 meters (8.82 ms), but each uses a unique mix of acoustic and electronic distance. The rearmost box is 3 m of acoustic, the front is 3 m of electronic and the middle two are mixes of 2 + 1 and 1 + 2 respectively.

What’s happening in the back room? Everybody’s out of it. The arrivals are equivalent to path totals of 0, 2, 4 and 6 meters. The forward and rearward acoustic paths are the same, but the timing chain that sync’d us in front puts us 2× out of sync in the back. The front speaker is delayed 3 meters and then has to travel another 3 to get back to the rear speaker. The timing back there is 0 ms, 5.88 ms, 11.76

ms and 17.64 ms. It’s a party all right and the guests are staggering all over the place.

Let’s digress for a moment to clarify rearward radiation from speakers. This is vital to grasp the end-fire concept and often misunderstood. It’s easy to think that rearward propagation is opposite in pressure direction (polarity) from the front. This is true of a fan (high pressure in front, low pressure in the rear) but not for speakers. Waves emanating from the rear are the same polarity as the front, just a little later in time (assuming drivers in the front). There was one speaker model that blew everybody away because it had the propagation properties of a speaker and a fan. It was used in a famous advertisement for Maxell, but sadly I have not been able to find one to measure.

FIGURE 10.8 The standard end-fire array with physical model, timing chain and coverage pattern

Fig. 10.8 shows the 1-meter, 4-element end-fire array described above. The timings for each element, and their resulting phase positions, are shown along with the physical model. There is zero time offset and zero phase offset in front. It’s their phase matching that’s critical here, not the absolute numbers. In practice the timings may be slightly larger than a straight line of sound propagation would suggest. That’s because it’s not a straight line! The wavefronts have to wrap around the cabinet(s) in front of them before they break into the clear. This is another case where size matters, this time the size of the speaker enclosures. We can start with these design values, but the optimization values should be found in the field by observing the phase response and matching them up.
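The timing chain above can be sketched numerically. This assumes idealized straight-line propagation at c = 340 m/s (matching the chapter's 8.82 ms for 3 m); as noted, field values run slightly larger because wavefronts wrap around the cabinets. Function names are mine:

```python
C = 340.0  # m/s; gives the chapter's 8.82 ms for 3 m

def end_fire_delays_ms(n, spacing_m, c=C):
    """Electronic delay per box, rear-most first: each box is delayed by its
    physical head start so every arrival syncs at the front of the line."""
    return [1000.0 * k * spacing_m / c for k in range(n)]

def rear_offsets_ms(n, spacing_m, c=C):
    """Behind the array the same chain lands 2x out of sync: electronic delay
    plus the extra acoustic trip back past the rear box (0, 2, 4, 6 m)."""
    return [1000.0 * 2 * k * spacing_m / c for k in range(n)]

print(end_fire_delays_ms(4, 1.0))  # ~[0.0, 2.94, 5.88, 8.82] ms
print(rear_offsets_ms(4, 1.0))     # ~[0.0, 5.88, 11.76, 17.64] ms
```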

FIGURE 10.9 A log version of the end-fire array with 90° relative phase spread at 125 Hz

The situation at the rear is staggered time offset, which creates a thoroughly scrambled mix of phase offsets over frequency. The front is exclusively full-range coupling zone summation while the rear is a punch bowl full of every summation zone for every frequency. Party on Garth!

An alternate version can provide a wider horizontal spread by altering the timing (Fig. 10.9). The sync location is moved off to the side, which evens out the coupling effect across the front. There is 90° of accumulated phase offset in the front, which flattens the front a bit without raising combing concerns.

The end-fire is a range-limited array configuration, but it’s not what you think. It gets better with age. It gains efficiency as we get farther away because the doubling distance effects become less significant. Maximum coupling and cancellation require matched amplitude. Our speakers are spread over a 3 meter depth, so it’s going to take some distance before we get the full effect. Close range knocks a little bit off the coupling, but it takes a lot off the cancellation. Our designs should consider how quickly we need a fully matured array. The lead singer won’t be too impressed with the cancellation if he’s only a meter behind the last box.

The end-fire array eventually approaches 100% efficiency, but the maximum SPL of a standard coupled line source would be greater. Spectral variance of an end-fire array is a break-even with its elements. The pattern narrows at all frequencies, so the 100 Hz range will remain proportionally narrower than 30 Hz. Four elements is the most common end-fire quantity because it is effective and reasonably practical. Economizing to three units sharply reduces the randomization in the rear, leading to frequency-dependent reduction. Never end-fire with just two elements. It’s a one-note-wonder on the back side. Use the gradient in-line instead (same physical, different settings).
We don’t have to stop at four, bearing in mind that the horizontal pattern narrows with quantity. Get crazy! RF antennas will end-fire 10+ deep.

The end-fire is the superior cardioid steering configuration for its high efficiency and minimum phase offset on the front side. The reasons to choose other configurations are usually practical (real estate, rigging, etc.) rather than performance. Harry Olson introduced the audio world to the end-fire array, but it was the antenna folks that got the party started.

10.2.2 Gradient (in-line)

The 2-element in-line technique is a smaller-scale version with large-scale results. It’s not half an end-fire. It is synchronized going rearward, where the end-fire goes the other way. Some folks call it a “reverse end-fire” but there is a very important distinction: the end-fire won’t work with only 2 elements and the gradient only works with 2. You can use a million speakers, but it’s always a 2-element configuration (AB). The end-fire can be a million elements (ABCD . . .).

The gradient’s operating principle is to sync by delay and cancel by polarity. It’s two forward-facing speakers spaced 1 meter or less apart. Delay the rear speaker by the distance between them, add polarity reversal and serve. The front/back ratio can exceed 20 dB over the full operating frequency range. This can be more effective than an end-fire on the cancellation side because the tuning is based on time offset rather than scrambled phase. A gradient requires far less real estate than an end-fire and matures at closer range because it only spans 1 meter. These practical benefits are weighed against the side effects on the front side. Or should I call them “front effects”? We’ve delayed a speaker that was already late and then polarity reversed it. What could go wrong?

Let’s use 1 meter spacing and take an inventory. The delay is 3 ms (I turned the heat up to get the sound speed to a round number) so the rear speaker arrives at the front side 6 ms late, the equivalent time offset to 1 λ at 166 Hz. 83 Hz is ½ λ so it looks like it’s going to cancel, but wait, polarity reversal saves the day! That’s 180° (time) and 180° (polarity) so we are on the coupling side. Meanwhile 166 Hz is going to cancel because it’s 360° (time) and 180° (polarity). Our subwoofer is already finished by 100 Hz so we escape (if not, we’ll need to reduce the spacing and try again). Our final stop is 42 Hz, which has 90° (time) and 180° (polarity).
We’ve reached the frequency of polarity indifference, because +90° is the same as −90°. The polarity reversal helps us between 42 Hz and 83 Hz. Below 42 Hz it hurts as it brings the back lobes more in phase (we are trying to cancel, remember). The 90° phase offset allows us to net just 3 dB addition in front (instead of the 6 dB we would get with a standard coupling). All told we get a frequency-dependent coupling efficiency in the front in exchange for broadband cancellation in the rear. The front side sees gains of 3 dB or more for about 1.5 octaves (in this case centered at 83 Hz). The efficiency loss will diminish over distance, but its frequency-dependent aspects will remain because the phase offset is constant over distance.
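The inventory above can be verified with a two-element complex sum, assuming the chapter's rounded sound speed (≈333 m/s, so 1 m = 3 ms) and on-axis far-field observation. The function name and parameters are mine:

```python
import cmath
import math

def gradient_sum_db(freq_hz, spacing_m=1.0, c=333.0, front=True):
    """Combined level (dB re one element) of a 2-element gradient in-line:
    rear box delayed by spacing/c and polarity reversed."""
    tau = spacing_m / c  # electronic delay on the rear box
    if front:
        # out front the rear box is 2*tau late (acoustic + electronic)
        s = 1 + (-1) * cmath.exp(-2j * math.pi * freq_hz * 2 * tau)
    else:
        # behind the array both paths sync, so the polarity flip cancels fully
        s = 1 + (-1) * cmath.exp(0j)
    mag = abs(s)
    return 20 * math.log10(mag) if mag > 1e-12 else float("-inf")

print(gradient_sum_db(83.3))               # ~ +6 dB: 180° (time) + 180° (polarity)
print(gradient_sum_db(41.7))               # ~ +3 dB: 90° (time) + 180° (polarity)
print(gradient_sum_db(166.7))              # deep cancellation: 360° + 180°
print(gradient_sum_db(83.3, front=False))  # -inf: broadband rear rejection
```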

FIGURE 10.10 The gradient in-line cardioid configuration

There is another side effect to this staggered timing. There is still 6 ms of time offset between two speakers running at equal level. We tricked the rear speaker into not combing in front but it’s still 6 ms late, which stretches the time response. An impulse reproduced through this system will be de facto stretched in time, making the LF response less “tight.” This may be a worthwhile tradeoff because the reflections it reduces might be as strong as the one we inherently create here. This is a classic TANSTAAFL choice. This is a minimum spectral variance configuration. The pattern is almost identical over frequency. The element coverage angle widens in the LF, which enables substantial cancellation on the sides. The sides are less affected as the element pattern narrows and the factors offset beautifully. The lowest/widest frequencies are steered more while the highest/narrowest are steered less, which nets a balanced cardioid pattern. The gradient method was originated by, yeah you got it, Harry Olson (Fig. 10.10).

10.2.3 Gradient (inverted stack)

The inverted stack works under the same principles as the 2-element in-line, but with two twists that give it a substantial advantage. The inverted stack has it all: time offset, polarity reversal and level offset (the new ingredient). The physical configuration (typical) is two speakers facing forward with a rear-facing unit in the middle. Why turn a 360° speaker around? Because it’s one of those “-6 dB at the rear” types of 360° speaker.

FIGURE 10.11 The gradient inverted stack cardioid configuration

FIGURE 10.12 Cardioid master reference table

The forward speakers are 0 dB in the front, so the pair add to +6 dB (1 + 1 = 2). They are each -6 dB in the rear, so they add to 0 dB (0.5 + 0.5 = 1). The rear-facing speaker is -6 dB in the front and 0 dB in the rear. We now have 0 dB + 0 dB in the rear. The timing sequence is done exactly the same as the in-line array and the rear cancellation yields 0 dB (0°) + 0 dB (180°) = −∞ dB (1 + (1 × −1) = 0). There’s a difference on the front side. The rearward speaker is 1 against 2 and it’s backwards so it loses another 6 dB. It’s +6 dB vs. −6 dB in front (2 vs. 0.5). The time-stretching part of the equation moved from 0 dB (in-line) to −12 dB (gradient), which means the tightness we gain from controlling the pattern far outweighs the tightness we lose from a latecomer that’s 12 dB down. The gradient’s time offset comes from the displacement between the forward and rearward drivers. The displacement must net at least 3 ms to prevent the back side from cancelling below 30 Hz in front (the phase offset would exceed 120°). The in-line configuration must be kept close enough to escape trouble at the top. The gradient must be spaced far enough to prevent trouble at the bottom. The propagation paths physically wrap around the sides of the box to reach their front and back meeting points. Deep/wide boxes are range limited at the top, but have more coupling at the bottom.

Shallow/skinny boxes are the opposite. The optimization process uses the phase response to maximize the cancellation. This is also a minimum spectral variance configuration for the same reasons as the in-line above. The final note about the gradient is that it’s extremely practical to stack or fly, which adds to its allure all the more (Fig. 10.11). A reference chart for typical configurations of the end-fire and gradient in-line arrays is found in Fig. 10.12.
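The inverted-stack level bookkeeping can also be sketched with linear amplitudes, as a hedged illustration rather than a full model: each forward box contributes 1.0 in front and 0.5 (-6 dB) behind, the reversed box 0.5 in front and 1.0 behind, with the rear box inverted and time-aligned to the rear (the 6 ms front-side lag is the earlier example's value).

```python
import cmath
import math

def inverted_stack(freq_hz, lag_s=0.006):
    """Front and rear phasor sums for the gradient inverted stack.

    Linear amplitudes per the text: forward pair = 2.0 in front, 1.0 in rear;
    reversed box = 0.5 in front, 1.0 in rear (polarity inverted)."""
    # Rear side: the pair (0.5 + 0.5) meets the inverted, time-aligned box.
    rear = (0.5 + 0.5) + (-1) * 1.0               # = 0 -> full cancellation
    # Front side: pair of 2.0 plus an inverted latecomer 12 dB down.
    front = 2.0 + (-1) * 0.5 * cmath.exp(-2j * math.pi * freq_hz * lag_s)
    return front, rear

def db(x):
    return 20 * math.log10(abs(x))
```

The front magnitude only swings between 1.5 and 2.5 (roughly ±2 dB around +6 dB), which is why a latecomer 12 dB down stretches the time response far less than the in-line version's equal-level latecomer.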

Chapter 11 Specification

The sound design process cannot begin until we receive answers to some specific questions. What are you going to use a sound system for? What is the room shape? Where is the stage? Where can we put speakers? The list goes on forever but we can focus on some of the most important ones.

specification n. detailed description of construction, workmanship, materials, etc. of work (to be) undertaken by architect, engineer, etc.
specify v.t. name expressly, mention definitely (items, details, ingredients, etc.).
Concise Oxford Dictionary

■ Budget: Do you have any money to spend? This sets the bar on quality, power and complexity.
■ Power scale: Do we want loud, stupid loud or insanely loud?
■ Spectral allocation: Voice, concert, musical theater, DJ, worship? Translation: How many subs?
■ The room: Was it designed by someone who understands loudspeaker systems?
■ Locations: Where are the performers and the audience? Where can we put speakers?
■ Architectural police: You seriously want to hide the speakers in a soffit behind a scrim?
■ Channels: Left, right, vocal surrounds, effects?
■ Mains: How many main systems do we need? L/R pair or more? (Don’t tell me less.)
■ Fills: What areas need to be covered with fills?

That’s the tip of the iceberg, but it’s enough to get us started. The sound design process is always one of compromise, but that does not mean we need to compromise our principles. There is no job with unlimited budget and placement options. We are the sound experts. We have a duty to inform others about how their design decisions affect the sound quality. It’s up to us to inform them that hiding the speakers in soffits is the most expensive downgrade available. Then we can reach a suitable compromise or creative solution based on the weight of all factors. Our decisions have both positive and negative effects. How can we make these choices? Let’s take some guidance from others.

TANSTAAFL

Science fiction author Robert Heinlein penned TANSTAAFL in his 1966 novel The Moon Is a Harsh Mistress. This term (pronounced “t-ahn-st-ah-ful”) is an acronym for “There ain’t no such thing as a free lunch.” The concept is applicable to decision-making in all walks of life. Every decision comes with a cost, some form of tradeoff. A simple illustration: two overlapping speakers in a point source array. Increased overlap yields higher power capability but raises the ripple variance. Decreased overlap yields less power addition and minimizes ripple variance. The free-lunch option (maximum addition with minimum variance) is not on the menu. Take the power option for AC/DC and opt for minimum variance for Les Mis. Sales and marketing departments are trained to suppress our consideration of TANSTAAFL. Each day an assortment of free offers is presented to us. All have strings attached. If it seems too good to be true, it is.

Acoustic triage

The medical community uses a resource allocation and prioritization system under emergency conditions known as “triage.” Resources are distributed based on a combination of probability of success and degree of need. Triage prevents excessive resource allocation for lost causes, and sends those with minor problems to the waiting room. It’s pointless to use the entire blood bank for a single doomed patient when this limited resource could save hundreds who have a fighting chance. It would be equally irresponsible to treat those with bruised toes ahead of those in critical, but recoverable, condition. Our acoustic triage model identifies areas that require special care. We allocate resources in search of solutions for one area that don’t create problems for others. We use triage to help our decision-making when conflicting solutions arise. An example: A small distant balcony area needs coverage. The main cluster can be aimed upward, resulting in excellent imaging for the balcony seats. Did I mention the 10 meters of mirrored wall directly above the top row? Optimal imaging for the upper seats results in strong reflections for everybody else. Triage principles direct us to abandon this strategy and prioritize clarity for many over image for a few. Overbalcony delay speakers can cover this area with minimal risk to others. The principles of minimum variance, TANSTAAFL and triage are the philosophical underpinnings of our specification strategies. They combine to create an approach that seeks the maximum benefit for the majority, is realistic about the tradeoffs required and is committed to allocation of resources to the areas where tangible benefits can be assured.

11.1 System Subdivision

Transmission subdivision begins with channels and passes through arrays and single speakers to the listener. Channels are waveform streams and the speakers are the delivery system.

11.1.1 Channels

The left channel is a unique waveform stream sent to all of the “left channel” speakers. Members of the left channel maintain a correlated, stable summation relationship with each other. The same goes for all right channel speakers, but not so for the relationship between left and right. Their relationship is uncorrelated and subject to change without notice. Therefore we design and optimize our systems on a per-channel basis. Everything left links to left and so on. The relationships between left and right, music and vocal systems, side and rear surrounds are placed under artistic control. We need to agree on the macro allocation of channels and design the system around that. Coverage of the space is divided between the mains and fills carrying the same channel. This serves as a guideline but is not a rigid rule. The frontfill, for example, could be linked to the mains even if it receives a matrix feed that contains less than the full mix. The system is designed for minimum-variance transmission for correlated signals, even if the actual mix deviates from this. The modern sound system has lots of opportunities for cross-pollination in the matrix. The system is calibrated to achieve minimum variance with a unity matrix. Channels can play an infinite number of roles but the usual suspects are pretty straightforward.

Typical Channel Types
■ Main: left/right or mono mix of the complete show.
■ Voice: separate channel for voice only. Typical of musical theater systems.
■ Music: left/right system for music only (no voice). Musical theater again.
■ Subwoofer: the subwoofers driven as an independent channel, because . . . subwoofer.
■ LFE (low-frequency effect): the cinema name for subwoofer, the “point one” in 5.1.
■ Surround: sound image sourcing on the sides, rear and overhead and spatial immersion.
■ Special: effect sources to localize a sound cue to a particular location.

11.1.2 Systems and subsystems

A channel stream can be divided into subsystems to manage different speakers within the transmission chain. The purpose of subdivision is to tailor the response to create the desired coverage shape. Simple shapes covered by a single system require no subdivision. More complex shapes require subdivision to customize the fit. It’s not always necessary to subdivide to the component level. The additional expense of subdivision is not required when all elements are identically driven. The subdivision decision rests on symmetry. Asymmetric speaker groupings require subdivision, whereas symmetric ones don’t. System subdivision concerns four differentiation layers: speaker type, level, delay and equalization. Processor function subdivision is mandatory when different speaker models are used. Subdivision has merit when matched speakers need to cover different shapes or have asymmetric splay or spacing (Fig. 11.1). Subdivision is a countermeasure to asymmetry and complexity. How much asymmetry is required before subdivision is merited? Let’s be practical and apply TANSTAAFL. Every subdivision costs material, installation, ongoing maintenance and calibration time, and opens the door for human error. A range ratio of 1.4:1 (3 dB) is a practical threshold for subdivision decisions. We will again use alphabetical hierarchy to denote subdivision levels. Two matched speakers covering different depths would form an AB subdivision. A three-element combination could use AAA, AAB or ABC depending on the range ratios faced.
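The 1.4:1 rule of thumb reduces to a one-line inverse-square calculation. This is a sketch of that threshold test only, with the 3 dB trip point as an assumption drawn from the text:

```python
import math

def range_ratio_db(far_m, near_m):
    """Level difference implied by two coverage depths (inverse square law)."""
    return 20 * math.log10(far_m / near_m)

def needs_subdivision(far_m, near_m, threshold_db=3.0):
    """Practical threshold from the text: consider an AB split once the
    range ratio reaches about 1.4:1 (~3 dB)."""
    return range_ratio_db(far_m, near_m) >= threshold_db
```

Two matched speakers covering 15 m and 10 m (1.5:1, about 3.5 dB) cross the threshold and merit an AB subdivision; 11 m against 10 m (under 1 dB) does not.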

FIGURE 11.1 System subdivision strategy detailing the number of processing channels and calibration mic positions

Once level asymmetry is introduced, delay and equalization follow suit. We must assume a speaker driven at reduced level is operating at closer range. Why else reduce it? Such elements are EQ’d separately because they encounter different local acoustical environments. Level asymmetry forces the spatial crossover off the equidistant geometric center to a point closer to the lower-level element. Delay can be used to phase align the crossover, in proportion to the level offset and displacement. Such delay may be deemed impractical with small-level offsets and/or when the speaker geometry maintains extremely low displacement (such as third-order proportional beamwidth units). In such cases the potential benefits may not be worth the expense, trouble and (most important) risk of error.

11.1.3 Subsystem relationships

Let’s divide the subsystem family tree into three principal parts: mains, fills and low-frequency systems (subwoofers). Each has sibling relationships within its group as well as relationships to the larger groups. Members of a main array are a tightly knit family. They stick together through their entire range to make a composite main element. Main systems such as left and right relate on equal terms. Frontfills and underbalcony delay elements also relate as peers, but they are an extended family spread over the space. They also join the mains, but not as equals. Mains are the dominant players over the various fill types. The LF devices can be left on their own because there is safety in numbers. They do better in packs (coupled arrays), which affects our strategies for combining them with the mains. Every relationship between speaker elements can be characterized as one of the previously studied arrays. We have elements within arrays, and arrays as elements within arrays. For example, a coupled point source main and uncoupled point source frontfill can join together as an uncoupled point destination.

Subsystem Families
■ Mains: principal sources to cover the majority of the room. Single speakers or coupled arrays.
■ Coupled fills: radial extenders coupled to the mains (sidefills, downfills and rearfill).
■ Uncoupled fills: auxiliary sources to cover leftover areas. Single speakers or uncoupled arrays.
■ Subwoofer: spectral extension to the mains. Coupled arrays.

11.2 Power Scaling

We need to set a scale for the system level. Are we doing rock and roll in an arena or a modern worship service? Sorry, those are both at the same level. Try again. Is it poetry readings or electronic dance music? We set the size from the top down: mains to fills and mains to subs. We are applying the principle of the analog transmission pipeline. We need to scale the pipeline for the program material and for its relative distance.

11.2.1 Main systems

How much power do we need? The answer is more. We previously classified speaker systems by power (section 2.7.6). There we determined four general orders of magnitude for systems in 10 dB steps (Class 4 = 140 dB SPL @ 1 m, Class 3 = 130 dB SPL and so on). We also discussed the expected SPL levels for various program materials (Fig. 2.24). This leads to a fundamental choice in the system design: the power scale of main system elements. There are no exact answers here so we have to make do with guidelines based around “typical” program material and venue size. Speaker element classes are in 10 dB steps, which correspond to 10× multiples of power (recall that power is a 10 log10 formulation). We can scale crowd size for equivalent acoustic power (Fig. 11.2). A given program material needs a certain class of element for a given venue size, and rescales from there for different venues. For example, standard pop in a 5000-seat venue would do well with Class 4 elements but could use Class 3 for a 500-seat venue. The coverage needs of the main system can play a role in the element selection. Let’s say our budget will allow 8/side speakers of one type or 12/side of another. The former has 3 dB more SPL capability and 3 dB more per-element cost. Each group will generate the same SPL and debt payment. The best choice depends on coverage spread and the range ratio. Quantity is our friend when we need to cover a highly asymmetric and/or wide shape. The advantage moves to fewer higher-power elements when the spread is small and/or the range ratio is low. We will dig deeper into this soon. For now, we have a basic plan to choose a main speaker element.
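The 8-per-side vs. 12-per-side comparison can be rough-checked with a simple model. This sketch assumes fully coupled (coherent) summation, where each doubling of elements adds 6 dB; the 133 dB and 130 dB element ratings are hypothetical stand-ins for the two speaker types:

```python
import math

def array_spl(element_spl_db, quantity):
    """Far-field SPL of a fully coupled array (coherent addition assumed):
    total = element SPL + 20*log10(N), i.e. +6 dB per doubling."""
    return element_spl_db + 20 * math.log10(quantity)

# 8 boxes of a 133 dB element vs. 12 boxes of a 130 dB element (hypothetical):
big_rig = array_spl(133, 8)     # ~151.1 dB
small_rig = array_spl(130, 12)  # ~151.6 dB
```

The two rigs land within about half a dB of each other, consistent with the "same SPL and debt payment" claim above. Coherent addition is an idealization; real arrays fall short of it as frequency rises and elements isolate.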

FIGURE 11.2 Power scaling recommendations for main systems. Table is arranged by program material for audience sizes ranging from 1 to 100,000.

FIGURE 11.3 Power scaling recommendations for fill systems relative to the mains. Axial differences are not included here.

11.2.2 Fill systems

We can move on to fill speakers once we have established the power scale for the mains. The smaller fills need to keep up with the mains. They do it by operating at a shorter range and/or by letting the mains do half the work. The range ratio gives us the answer. If the fill speakers are 10 dB closer (range ratio ≈ 3:1), we can safely use a model that’s 10 dB less powerful (e.g. Class 4 mains with a Class 3 fill). Higher range ratios allow for larger drops in power and so on (Fig. 11.3). SPL calculations for combined arrays in the far field are very complicated. SPL ratings represent the entire frequency range with a single number. Such simplification serves us poorly when comparing two very different animals, e.g. off-axis, coupled mains against on-axis, uncoupled fill. Such frequency responses are highly unmatched (mains are heavily pink-shifted while the fills are flat). The fills will never keep up with the mains’ LF response (and we don’t need them to). It’s a merger of equals only in the isolated HF range, while the fills happily freeload off the mains in the ranges below. We can ease the mental burden by focusing on the mains/fill meeting point: XOVR. It is here that the isolated range of the fills must keep up with the mains. It comes down to a one-on-one relationship, e.g. the bottom element of the mains and a frontfill. The calculation factors reduce to three items: difference in SPL capability, range and axial orientation. Let’s start with max SPL and range first and add the axial part later. If the main element has twice the SPL (+6 dB) and has to go twice the distance (-6 dB), then we are all even (as long as both speakers are on axis at XOVR).

Mains and Fill Power Scaling Example
1. Mains element (A) maximum SPL is 132 dB SPL @ 1 m (Class 3).
2. Propagation distances to XOVR AB are 12 m (main) and 3 m (fill), i.e. range ratio = 4:1 (12 dB).
3. Fill could be a Class 2 speaker (120 dB SPL) because 132 − 12 = 120.

The third factor, axial orientation, plays out differently with each pairing. No additional calculations are required if the mains and fills have axial equality (ONAX–ONAX or OFFAX–OFFAX). Main and underbalcony speakers aim into the same area, an ONAX–ONAX pairing. Main + coupled sidefill combinations target different areas but join their radial edges (OFFAX–OFFAX). The main + frontfill combination joins the main’s off-axis bottom edge (VBOT) to the frontfill’s ONAX. This gives the frontfills a 6 dB advantage (maximum), which is added to the range ratio used for power scaling.

Main and Frontfill Power Scaling Example (Includes Axial Effects)
1. Mains element (A) maximum SPL is 138 dB SPL @ 1 m (Class 3).
2. Propagation distances to XOVR AB are 12 m (main) and 3 m (fill), i.e. range ratio = 4:1 (12 dB).
3. Add the frontfill’s 6 dB axial advantage (ONAX–OFFAX) to make the total 18 dB.
4. Frontfill could be a Class 2 speaker (120 dB SPL) because 138 − 18 = 120.

Three main possibilities are found for main/fill pairings: ONAX–ONAX, OFFAX–OFFAX and ONAX–OFFAX. Some typical configurations are shown below.

Level Scaling Effects of Axial Orientation of Mains and Fills (Typical)
■ Main + frontfill: OFFAX main (vert) and ONAX frontfill. ≤6 dB advantage for frontfill.
■ Main + centerfill: OFFAX main (hor) and ONAX centerfill. ≤6 dB advantage for centerfill.
■ Main + downfill: OFFAX main (vert) and OFFAX (vert) downfill. No advantage.
■ Main + sidefill: OFFAX main (hor) and OFFAX (hor) sidefill. No advantage.
■ Main + infill: OFFAX main (hor) and OFFAX (hor) infill. No advantage.
■ Main + delay: ONAX main and ONAX delay. No advantage.
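The fill scaling procedure above is simple enough to sketch in code. This follows the worked examples directly (Class n = 100 + 10n dB SPL @ 1 m is an assumption extrapolated from the Class 4 = 140, Class 3 = 130 steps); note that a 4:1 range ratio is really 12.04 dB, so the round numbers land a hair under the class boundary:

```python
import math

def class_spl(n):
    """Max SPL @ 1 m by class, per the 10 dB steps (Class 4 = 140 dB)."""
    return 100 + 10 * n

def minimum_fill_class(main_spl_db, main_range_m, fill_range_m,
                       axial_adv_db=0.0):
    """Smallest speaker class that keeps up with the mains at XOVR.

    axial_adv_db: 0 for ONAX-ONAX or OFFAX-OFFAX pairings, up to 6 dB when
    the fill is ONAX against the mains' off-axis edge (e.g. frontfill)."""
    range_ratio_db = 20 * math.log10(main_range_m / fill_range_m)
    needed_spl = main_spl_db - range_ratio_db - axial_adv_db
    return math.ceil((needed_spl - 100) / 10)
```

A 132 dB main at 12 m against a fill at 3 m yields Class 2, and a 138 dB main with the frontfill's 6 dB axial advantage also yields Class 2, matching the two examples above.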

11.2.3 Low-frequency systems (subwoofers)

The LF power scaling must keep parity with (or exceed) the mains. This is as much about quantity as the particular element. Small main systems in small spaces can use baby subwoofers with 10” and 12” drivers, but the majority of applications select from a pool of 2 × 15” and 2 × 18” elements. This is not to say that there is equivalence between Brand X and Brand Y, or even Model 2–18 and Model 18–2 by the same maker. Linearity, maximum levels, frequency response and harmonic distortion are all over the map. You must conduct your own evaluations. The choice between 15” and 18” seems to be driven more by program material and acoustics than by SPL. I’ve been told that 15” drivers are “punchier” and have more . . . (hits self on chest). Such descriptions may be realistic, but I’ve never been able to verify these qualities in a prediction plot or analyzer display. In general, smaller drivers don’t go quite as low as the larger ones, which may be desirable for certain music and/or when the LF reverberation is long. We previously evaluated fill system scaling on a one-to-one relationship with a main element at XOVR. The mains + LF junction is the polar opposite: all the mains join all the subs in all the room. Conventional SPL data is a very poor fit for comparison here. SPL is a single-number answer to a spread spectrum, and these two devices are spread over vastly different spectrums. They only overlap for one octave (at most) and far less for lovers of double black diamond crossover slopes. We need the subs to keep up with the mains at the crossover frequency and take it down from there. The MAPP program calculates the spectrum of mains and subs at their maximum SPL capability. This can be used to evaluate the LF-to-main ratio in a familiar three-step process: look at A (main) solo, B (sub) solo and A + B. The combined response in the LF range should comfortably exceed the base level for the midrange (+6 to 10 dB). An example is shown in Fig. 11.4.

FIGURE 11.4 Power scaling example for an LF system relative to the mains

11.3 Single Speaker Coverage and Aim

Our speaker has scale. Now it needs shape. How wide do we need? This depends on our target shape and our perspective toward it. Where do we aim it? Same answer.

11.3.1 Horizontal coverage angle

Let’s start with a speaker on the centerline of a rectangular room (Fig. 11.5). There are three logical break points from which to evaluate coverage: start, middle and end. If we specify a speaker wide enough to cover the earliest rows, it’s too wide for every other one. This provides minimum level variance for all rows but steadily rising ripple variance as we go back (if the room happens to have side walls). TANSTAAFL and triage. The opposite extreme is found by using the last row as the coverage target. We get just enough there and not enough for every other row, a steady rise in level variance as we move forward. Ripple variance is minimized, but tell that to the half of the audience without coverage. The choices so far are perfect front with over-coverage or perfect rear with under-coverage. We can even out the under- and over-coverage errors by designing around the mid-point width. The minimum-variance coverage angle is found by matching the speaker’s FAR value to the depth/width ratio of the shape. The result is 6 dB level and spectral variance across the mid-point depth of the room. The front half is under-covered and the back half over-covered in equal proportion when the target is a simple rectangle. Notice how the area of the FAR shape is equal to the area of the room shape (the missing coverage in front is exactly the same shape as the overage in the rear). Ripple risk rises in the rear while gap risk rises in front. The front corner gaps can be reduced by raising the speaker (which widens the effective coverage width by the lateral shape). Alternatively we can add fill speakers to plug the front corner gaps. Both the underage and overage errors are reduced if the room has expanding splay walls.

Horizontal Coverage Angle Determination (Single Speaker)
■ Find the front/back depth along the centerline (speaker to last row).
■ Find the width along the mid-point depth.
■ Depth/width is the FAR. Convert FAR to angle (again Fig. 11.5).

Front/Back Asymmetric Shapes

Not all listening spaces are rectangles. Even so, the best fit for coverage angle links the mid-point depth and width. Horizontally symmetric shapes with matched depth and width will need the same coverage angle despite differences in details. Many performance spaces are wider in back than front. We only have one speaker to cover this shape so we can’t sweat the details. Some typical shapes are shown in Fig. 11.6.
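The FAR-to-angle conversion is read from the chart in Fig. 11.5, but a closed form can sketch it. The formula below (FAR = 1/sin(angle/2)) is an assumption chosen because it reproduces the FAR 1.55 ↔ 80° pairing used later in this chapter; defer to the book's chart where they differ:

```python
import math

def far_from_angle(angle_deg):
    """Forward aspect ratio (depth/width) for a given coverage angle.
    Assumes FAR = 1 / sin(angle/2); check against the Fig. 11.5 chart."""
    return 1.0 / math.sin(math.radians(angle_deg / 2))

def angle_from_far(far):
    """Inverse: minimum-variance coverage angle for a depth/width ratio."""
    return 2.0 * math.degrees(math.asin(1.0 / far))
```

For example, a room 24 m deep with a 15.5 m mid-point width has a FAR of about 1.55, calling for roughly an 80° speaker; a 90° speaker maps to a FAR near 1.4.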

FIGURE 11.5 Single-speaker coverage in the horizontal plane (fully symmetric). (A1) The required coverage angle for this narrow rectangle varies from 100° to 22° depending on the depth used for calculation. (A2) The FAR method is applied to the same shape to find the average coverage angle of 45°. (B) The FAR method is applied to a wider rectangle.

FIGURE 11.6 Single-speaker coverage in the horizontal plane (front/back asymmetric). (A) 120° trapezoidal shape with mild asymmetry. (B) 100° trapezoidal shape where the rectangulation is clearly wider in front and narrower in back.

Left/Right Asymmetry

Left/right asymmetry adds another layer to the search for the middle/middle. Examples include an off-center speaker location into a symmetric shape, or a centered speaker into an asymmetric space. The concept is still the same: aim the speaker through the middle of the middle and find the FAR from the depth/width ratio. The key to maintaining minimum level variance is to match symmetry with symmetry, or to compensate asymmetry with a complementary asymmetry. The speaker placed in the corner of a rectangle is aimed at the opposite corner to maintain symmetric balance.

FIGURE 11.7 Horizontal aim from an off-center location in a symmetric shape

11.3.2 Horizontal aim

The shape is defined as depth (from speaker to last seat) and width (outermost seat to seat at the mid-point depth). The aim target is defined as the middle/middle (middle seat at the mid-point depth). A centered speaker obviously aims along the front/back centerline, which includes the mid-point center. Off-center speakers aim through the mid-point center to offset their asymmetric orientation (Fig. 11.7).

11.3.3 Vertical coverage angle

There’s no point in discussing symmetry for vertical plane coverage until they start selling seats on the ceiling. The vertical plane is the inherently asymmetric plane, and it’s only one seat deep. It’s a single line of coverage, not depth and width. Coverage angle is determined by the difference between the uppermost seat (VTOP) and the lowest/nearest (VBOT). Minimum-variance coverage angle specifications incorporate both the angular and range differences between these locations. Let’s use 50° as our example angular spread from VTOP to VBOT (Fig. 11.8). Our job is done if the two locations are equidistant (range ratio = 1). A centered 50° speaker fills the symmetric shape from top to center to bottom with -6 dB, 0 dB and -6 dB respectively. The situation changes radically when the range ratio rises to 2:1. VTOP is now twice as far, so we lose another 6 dB in transit. The response is -12 dB at VTOP and still -6 dB at VBOT. We can get 6 dB back at VTOP if we aim the speaker upward, but then we fall below -6 dB at VBOT. We need twice the speaker (100°) aimed directly at VTOP to maintain minimum variance. It’s no coincidence that range ratio doubling is compensated by coverage angle doubling. The minimum-variance coverage angle for the vertical plane is found by multiplying the angular spread by the range ratio. We have just met the outermost extremes: range ratios of 1:1 and 2:1 yielding coverage angles of 50° and 100°. A range ratio of 1.1:1 (+1 dB) moves the minimum coverage angle upward by a factor of 1.1 (+10%) to 55°. The speaker is aimed above the vertical mid-point, therefore extra coverage is needed to reach the bottom. Speaker coverage angle rises proportionally with range ratio until the limit is reached at 2:1 (100° speaker aimed at VTOP). Both level and spectral variance are minimized while the potential for ripple variance from surfaces above the target is maximized. A single-speaker approach should be abandoned in favor of an array if the ripple variance is too great or the range ratio exceeds 2:1. It may help to visualize this as a waste/efficiency model. Filling a 1:1 shape allows for maximum efficiency. At the other extreme is the 2:1 shape, which requires twice as much speaker as we use for actual coverage. That 50% waste product is going to cost us ripple variance, etc. It’s time to make an AB array when the waste gets too high.

11.3.4 Vertical aim

Aim is found by the range ratio compensated coverage method (Fig. 11.9). Let’s standardize VTOP as 0° (relative). We will use 50° as an example coverage target, so the range is from 0° to -50° (mid-point of -25°). A 1:1 range ratio leaves the aim point at the vertical center (-25°). The aim point rises proportionally toward VTOP with range ratio. A range ratio of 1.1 (1 dB) moves the aim point upward by 10% to -22.5°. Increasing the ratio raises the aim point farther until the limit is reached at 2:1 (speaker is aiming at 0°, the top of the target).
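The coverage angle and aim rules of the last two subsections can be combined into one small sketch. The linear interpolation used for the aim is an assumption consistent with the endpoints (25° down at 1:1, 0° at 2:1) and the 1.1:1 example above; Fig. 11.9 holds the book's reference chart:

```python
def vertical_spec(spread_deg, range_ratio):
    """Range-ratio-compensated vertical coverage angle and aim.

    spread_deg: angular spread from VTOP to VBOT.
    range_ratio: VTOP distance / VBOT distance, clamped to the 1:1 to 2:1
    window discussed above (beyond 2:1, switch to an array)."""
    rr = max(1.0, min(2.0, range_ratio))
    coverage = spread_deg * rr                     # e.g. 50 deg x 2 = 100 deg
    aim_below_vtop = (spread_deg / 2) * (2 - rr)   # 25 deg at 1:1, 0 at 2:1
    return coverage, aim_below_vtop
```

For the 50° example: a 1:1 ratio returns (50°, 25° below VTOP), 1.1:1 returns (55°, 22.5°) and 2:1 returns (100°, 0°), i.e. a doubled speaker aimed straight at VTOP.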

FIGURE 11.8 Compensated unity aim in the vertical plane. (A) The speaker is aimed to compensate for three levels of asymmetry but the coverage angle is uncompensated, resulting in up to 10 dB variance for the 2:1 shape. (B) Both aim and coverage angle are compensated for the three shapes, reducing variance to around 3 dB for the 2:1 shape.

FIGURE 11.9 Vertical aim and speaker coverage reference chart for target angles between 20–90°. The chart shows the range ratio compensated aim (down from VTOP) and speaker coverage angle (compensated) required to fill the shape.

11.3.5 Horizontal/vertical coverage width

The width of effective coverage across the room for a given speaker is a function of both the vertical and horizontal planes (Fig. 11.10). We just determined the horizontal coverage angle and now we can see whether we need fill speakers in the front.

Effective Coverage Width Determination
1. Section view: Find the range to the closest seat (coverage start).
2. Find the lateral coverage width for that range from the lateral aspect ratio (Fig. 3.41).
3. Plan view: Draw a line of the determined coverage width along the closest row.

Fill speakers are needed if the coverage width line does not reach across the shape.
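The text takes the lateral width from the lateral aspect ratio chart (Fig. 3.41). As a simplified stand-in, the sketch below treats the -6 dB pattern edges as straight lines, so the width at a given range is 2 × range × tan(angle/2); this wedge geometry is an assumption, not the book's chart:

```python
import math

def coverage_width(range_m, coverage_deg):
    """Approximate -6 dB coverage width at a given range, treating the
    pattern edges as straight lines (stand-in for the Fig. 3.41 chart)."""
    return 2.0 * range_m * math.tan(math.radians(coverage_deg / 2))

def needs_fills(room_width_m, first_row_range_m, coverage_deg):
    """Fills are needed if the width line can't reach across the first row."""
    return coverage_width(first_row_range_m, coverage_deg) < room_width_m
```

A 90° main 6 m from the first row spans about 12 m there; if the room is 20 m wide at that row, the front corners need fill speakers.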

FIGURE 11.10 Example application for coverage width determination. The question is whether the center main can cover the whole width or if it needs sidefills. (A) Start of coverage is found and width is calculated. (B) Width is not long enough and areas needing fill are defined. (C) The required width and depth of the sidefill is found. (D and E) Sidefills are added.

11.4 Coupled Array Coverage, Aim and Splay

11.4.1 Coupled point source: horizontal plane

We can look at filling these same shapes with more than a single speaker. Coverage subdivision allows for greater flexibility and detail. The coupled point source is the array of choice here. Its inverted form, the coupled point destination, is specified only under duress because of some physical requirement. The use of multiple elements makes coverage angle and aim interdependent variables. The symmetric coupled point source should be aimed at the middle of a symmetric coverage shape. If the coverage shape is asymmetric then the array should be aimed at . . . no wait, the array should be asymmetric. Once we’ve gone asymmetric we have multiple aim points, which are analyzed on a case-by-case basis depending on levels, order and splay.

Symmetric

Symmetric versions subdivide evenly. An example 80° horizontal coverage target (FAR = 1.55) can be covered with 2 × 40° elements, 40 × 2° elements or any combination between that multiplies to 80°. Symmetric subdivision transforms the coverage shape from the single-speaker rectangle to the radial fan (in the HF at least). The fully symmetric coupled point source is a one-trick pony. It improves the fit into radial fan-shaped rooms, but worsens it for rectangular rooms. The symmetric radial array spreads an MV (minimum variance) line across the middle depth. This is ideal for a fan-shaped room because it effectively spreads an MV line at all depths. Rectangular rooms face the under/over-coverage tradeoff we previously solved for a solo speaker with the mid-point compromise. An 80° symmetric point source has the same area of under-coverage as the single speaker, but a lot more over-coverage. The single speaker did not point at the side walls. Many, if not all, of the elements of the coupled point source are aimed at the side walls, substantially raising the ripple risk. This does not eliminate the coupled point source for rectangular horizontal shapes. We just need to use the asymmetric version. The combined coverage angle can be approximated as splay angle × quantity. The approximation is closest when angular isolation is dominant or when the quantities are high enough to force the response into the shape enclosed by the total angular splay (Figs 11.11 and 11.12).

Horizontal coverage angle determination (symmetric)

■ Find the mid-point depth along the centerline (mid-point from speaker to last row).
■ Find the radial angle that best follows the mid-point depth from edge to edge. This can be used for a rectangle, a fan-shaped room or even a complete circle.
■ Divide the coverage angle by element quantity (or multiply the elements until they fill the coverage).

Asymmetric

We can customize the shape with asymmetric levels or elements. We’ll use the compensated unity splay angle as the shaping agent. There is no “correct” element with which to start. The process begins with the “A” element, the speaker in charge of the largest area. The next element covers the majority of the remainder and proceeds from there until the shape is filled or other systems take over.

FIGURE 11.11 Symmetric horizontal coverage examples. A single 80° element (A) is compared with various arrays that create 80° of radial coverage. (B–H) Arrays ranging from 2 elements @40° to 40 elements @2° are shown. Both the FAR shape and radial minimum variance lines are shown to emphasize the transformation from single speaker (rectangular) to coupled point source (radial shaping). Rising quantity and overlap along with falling element coverage angle increase the uniformity along the radial line and sharpen the edges.

FIGURE 11.12 Example application for the symmetric coupled point source design procedure: (A) coverage target is defined, (B, C) the design process for a narrow element, (D, E) the same process using wider elements

FIGURE 11.13 Asymmetric coupled point source compensated unity splay design reference

Asymmetric coupled point source design procedure (Fig. 11.13)

1. Coverage shape is defined by angular spread and range ratio. Ratio ≥2:1 is assumed.
2. Aim A toward the farthest area. A should have the narrowest coverage angle in the array.
3. Select a coverage angle for A that fits the local area.
4. Estimate location for XOVR AB (the transition into the next element). Start with the unity splay location and compensate for shorter ranges if applicable. The B element continues coverage from here.
5. Select coverage angle of B as needed to continue (or finish) coverage. Calculate the compensated unity splay if the range for B < A ((coverage A + coverage B)/2 × range ratio).
6. Position B at the compensated angle and appropriately scaled range. Assess and fine-tune as required.
7. Continue the process with the third element (C) and so on until the shape is filled.

One additional consideration is the physical size of the array. The boxes take up space. An array gets closer to people as it gets larger, which affects both the range ratio and the coverage target. Doubling the element size, or quantity, will require a revisit to the design, a classic “chicken and egg” problem. All I can suggest is to be as realistic as possible during the design process about the physical size and reassess as needed when things change a great deal.

11.4.2 Coupled point source: vertical plane (constant beamwidth elements)

The asymmetric coupled point source lays a line of coverage over the vertical plane. Range ratio and angular spread between VTOP and VBOT remain the primary design factors. The process is similar to the horizontal plane, but typically with higher range ratios and narrower elements. Range ratio, our asymmetry gauge, provides a convenient indicator of when to subdivide. Just look at the number. A 2:1 ratio needs two sections (AB), 3:1 needs three (ABC) and so on (Fig. 11.14). The process is top/down, with the narrowest element covering the longest distance and the widest covering the shortest. An AB array will have a single partition located well past the mid-point depth. Putting the break around 2/3 depth allocates the resources fairly evenly because the upper systems have farther distance to cover. The coverage angle of the A element should also be narrower than B, also helping to level out the resource allocation. Recall that we needed a 100° speaker to cover a 50° angular spread with a 2:1 range ratio. An AB array can cover this same shape with 75° of speaker (A = 25° and B = 50°). The AB approach doubles the number of devices and reduces the waste, both of which result in more SPL at the seats and proportionally less on the ceiling (in this case just 12.5° above VTOP instead of 50°). We can subdivide further, use smaller elements and trim down even more of the overage. Higher range ratios require more subdivision. There are more partitions and a wider range of individual element angles, but the principle remains the same. Element A has the longest throw, narrowest coverage and smallest depth. Then comes B, then comes C and so on.
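The subdivision rule of thumb (the range ratio, rounded upward, sets the section count) is easily mechanized. This sketch is my own, not the book's:

```python
import math

# Sketch of the subdivision rule of thumb (mine, not the book's): the
# section count at least matches the range ratio, rounded upward.
def sections(range_ratio):
    count = math.ceil(range_ratio)
    return "".join(chr(ord("A") + i) for i in range(count))

print(sections(2))    # AB
print(sections(3))    # ABC
print(sections(2.5))  # ABC (round upward)
```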

FIGURE 11.14 Example application for AB and ABC vertical combinations of the asymmetric coupled point source

11.4.3 Coupled point source: vertical plane (proportional beamwidth elements)

We don’t have to stop at ABC. Proportional beamwidth elements allow us to run the entire alphabet (if we want to). These elements provide increased flexibility in shaping because we can custom mix angular overlap and isolation to our needs, even in the face of very high range ratios. The target coverage shape is characterized by its angular spread and range ratio, e.g. 20° of coverage with a range ratio of 2:1 (6 dB). We can fill the shape with speakers, dividing the coverage into finer slices as quantity rises. There is a minimum number required, which depends on the element and target shape. The maximum is limited only by budget and rigging standards. We can find the absolute minimum quantity required once we know (a) the element maximum splay, (b) target angle and (c) range ratio. Let’s start by filling a simple 60° shape with a 1:1 range ratio. Our element is 10° so we need at least six boxes (6 × 10° = 60°). Take note of the fact that the “average” splay angle for this array is 10°. That seems silly now but its importance will become clear soon. We can add boxes and reduce the splay proportionally (e.g. 12 @5°, 15 @4° and so on, as long as they multiply to 60°). The average splay angle falls as quantity rises. The design process is simple with a 1:1 range ratio: quantity × splay = coverage angle. Now let’s double the range ratio (2:1, 6 dB) and see how this affects the minimum box count for our 60° example. We get the array to follow the 2:1 shape by having more overlap in the upper boxes (i.e. reduced splay). We can’t get to the bottom with six boxes now because we can’t splay beyond the element limit of 10° without gapping. How many more do we need? The answer is eight boxes, which will require some explanation. We filled the 1:1 shape with matched elements at matched level and splay. The 2:1 shape can be made from matched elements, matched level and unmatched splay. We need to see at least a 1:2 ratio between the narrowest and widest splay angles. If we stick with six boxes we still have an average splay of 10° with narrower splays above and wider below.
We could solve this with angles ranging from 7.5° (top) through 10° (middle) to 15° (bottom) except for one small problem: our 10° element leaves gaps in coverage beyond 10° of splay. The maximum splay at the bottom is therefore 10°, and we can achieve our desired splay ratio with 5° at the top. The average splay, then, is 7.5°, which gives the eight-box count (60°/7.5° = 8). The intermediate splays fill in between the milestone values, e.g. 5°–5°–5°–7.5°–10°–10°–10° or 5°–6°–7.5°–7.5°–9°–9°–10°. We can add more boxes and keep the same shape by maintaining the same top/average/bottom ratios. This doesn’t mean a 2:1 range ratio always requires at least eight boxes. If we reduce the target angle to 30° we need three boxes at 1:1 (10°–10°) and four boxes at 2:1 (5°–7.5°–10°). A target angle of 15° needs two boxes at 1:1 (7.5°) and three for 2:1 (3.75°–7.5°). Three rules are followed in all cases here: (a) quantity × average splay = coverage angle, (b) splay ratio is the inverse of range ratio and (c) the widest splay does not exceed the element limit. Let’s return to the 60° target and extend the range ratio to 3:1 (10 dB). We need a splay ratio under 1:3. A minimum splay of 3.3°, maximum 10° and average 6.6° satisfies our needs with a box count of ten (60°/6.6°). Reduce the coverage target angle by half and the box count does the same. Reduce the element maximum by half (to 5° maximum splay) and the box count doubles.
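The three rules can be bundled into a minimum-quantity calculator. This is an illustrative sketch of the arithmetic only (the function name and the use of ceiling rounding are mine):

```python
import math

# Illustrative sketch of the three rules (function name and ceiling
# rounding are mine): the element limit sets the maximum splay, the
# range ratio sets the minimum, and their average sets the quantity.
def min_quantity(target_angle, max_splay, range_ratio):
    min_splay = max_splay / range_ratio          # rule (b): inverse ratio
    avg_splay = (max_splay + min_splay) / 2.0
    return math.ceil(target_angle / avg_splay)   # rule (a): qty x avg = angle

print(min_quantity(60, 10, 1))  # 6 boxes at 1:1
print(min_quantity(60, 10, 2))  # 8 boxes at 2:1
print(min_quantity(30, 10, 2))  # 4 boxes at 2:1
# Note: at 3:1 exact arithmetic gives nine; the text rounds the average
# splay to 6.6 degrees and arrives at ten.
```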

The saga continues with other coverage targets, range ratios and element limits, but this is the basic outline. Note that this represents the minimum box count to create these shapes without help from level tapering. Results in the field will vary from model to model depending upon the quality of the beamwidth shaping in the elements. It is not necessary to drive every element individually. The array can be broken into sections (ABC etc.) that contain multiple elements. The channel quantity should at least match the range ratio (rounded upward). As a general rule, the upper sections should contain the most elements and gradually fewer as we go down. This mimics the ABC approach we established for single elements and allows for more level tapering at the bottom if needed. The choices for where to segment the array will be strongly influenced by the room geometry, such as a balcony or a change in the seating rake. We have now constructed a segmented asymmetric point source (elements ABC etc.) out of unlimited quantities of single-speaker sub-elements, which puts it into the realm of a practical, tunable system (Fig. 11.15).

11.4.4 Asymmetric-composite coupled point source: vertical plane

At the end of the day we are going to need to place mics in the hall and optimize the array. We have previously established a systematic approach with the key locations: ONAX, OFFAX and XOVR. These locations are easily found for single elements. Things get more complex when the element “A” is a composite made up of four speakers. Finding the center of a composite is not as simple, especially if there are three different splay angles between the four boxes. Let’s look a bit deeper.

ONAX and ONAX-ish

FIGURE 11.15 Composite point source element examples. (A) High overlap and low quantity can create a narrowed composite, i.e. 5 × 2° would be expected to create 10° but instead creates 8°. (B–C–D) Progressively wider splays proportionally expand the composite coverage angle and soften the edges. (E) Excessive splay leads to gaps within the composite. (F) Increased quantity adds enough isolation to overcome the narrowing of highly overlapped elements, i.e. 10 × 2° = 20° (compare with panel A).

On axis is important because it’s the key reference point for predictable spatial behavior. We literally cannot find off axis if we haven’t found on axis first. A splayed pair (a two-element composite) has three on-axis points (one for each box and the center of the pair). We create the composite using the individual element on axis locations. The composite center then becomes the reference for aiming the combined assembly and combining it with other elements. The composite’s ONAX location is clearly identifiable as long as all the relationships inside it are symmetrical. We can keep adding elements but the center remains the center for the full frequency range and everything is referenced from there. ONAX is king of the hill with the familiar progression toward OFFAX in either direction.

The center cannot hold once asymmetry in level, spacing, splay or phase is introduced. The asymmetry initiates a steering force that moves things off center. The greater the asymmetry, the stronger the steering. The situation is further complicated by the fact that the steering is likely frequency dependent, i.e. the lows, mids and highs are moved by different amounts (and to different places). Our central area has changed from ONAX to ONAX-ish (Fig. 11.16).

Let’s consider a few scenarios to bring the point home. Would you be comfortable driving two different speaker models with the same processor channel? Of course not. We all accept that different speakers require different tunings. How about a single matched pair? Fine. A matched trio? Still fine. What exactly is a composite of five matched boxes with unmatched splay angles? It’s kind of matched and most definitely not. Where is the middle of a composite with 1°–2°–3°–4° splays? We can find ONAX (for all frequencies) of a symmetric composite with 5 × 2°. Not so for the previous. The HF is steered more upward with the tight angles and the LF doesn’t notice the difference between any of the splays and heads to the middle.
We are back to ONAX-ish again. An ONAX-ish location doesn’t make optimization impossible. It just makes it more challenging, because there is less certainty that the mic position is the best representative for the composite response. Asymmetric composite elements have approximated centers providing approximated answers. The more asymmetric, the more approximate (and the greater the margin for error). Tuning work is difficult enough already without wondering if we are in the middle or fringe of the element. We want the clearest viewpoint because a processing channel affects the entire composite. A symmetric composite has a 100% identifiable center (the loudest, flattest location) and a predictable progression of responses from there. We can place mics knowing A, B, C and D are directly in our sights and then combine them to uniformity. We are building an asymmetric coupled point source macro array out of symmetric elements (just as we would with single speakers).

Defining the Composite Element

FIGURE 11.16 Symmetric and asymmetric composite elements. Notice that the symmetric version (A) remains centered. The asymmetric version has a moving center and highly unpredictable sides.

A symmetric composite can be defined by quantity × splay = composite angle, e.g. 6 × 10° = 60°. We can describe the eight-element AB array from earlier as A: 4 × 5° (20°) over B: 4 × 10° (40°). Notice that A + B adds up to 60°. Also note that 7.5° (the average angle in the middle) is missing in action. The seven actual splay angles are 5°–5°–5°–7.5°–10°–10°–10°. Remember that splay angles don’t make sound. Speakers do (there are eight of them). We hear four elements splayed at 5° when we turn on the A section. Think of each speaker as a 5° element, i.e. ±2.5° from center. Their combined 20° coverage results from 3 × 5° splays and 2.5° above and below the outer elements. The B section gets its 40° from 3 × 10° splays and a 5° remainder on the outer edges. The transition splay angle (7.5°) connects the bottom of A (2.5° underneath) and the top of B (5° overhead). All told the array spans 60° of coverage from +2.5° to -57.5°. You might wonder how the same speaker could transform from ±2.5° to ±5°. It can’t as a soloist, but it can’t resist in the face of all the overlap with its neighbors. It’s array groupthink. Overlap sharpens the edges (recall Figs 11.11 and 11.15). Finally, notice that the center of A (-7.5°) is 30° away from the center of B (-37.5°). I guess it’s just a coincidence that this is exactly the unity splay angle for pairing a single 20° and 40° speaker.
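The tally above can be written out as a small function. This framing, where each box's effective coverage equals its adjacent splay and the outer boxes add half a coverage beyond the edges, is my own sketch of the arithmetic, not a formula from the text:

```python
# My own framing of the tally above, not a formula from the text: take
# each box's effective coverage as equal to its adjacent splay, so the
# composite spans the sum of the splays plus half a box beyond each edge.
def composite_coverage(splays):
    return sum(splays) + splays[0] / 2.0 + splays[-1] / 2.0

# The eight-element AB array (seven splays):
print(composite_coverage([5, 5, 5, 7.5, 10, 10, 10]))  # 60.0
# The A section alone (four boxes at 5 degrees, three splays):
print(composite_coverage([5, 5, 5]))  # 20.0
```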

Asymmetric composite coupled point source design procedure (Figs 11.17 to 11.18)

1. Define target top and bottom (VTOP and VBOT) as angle and range (e.g. -5° @20 m).
2. Define the target shape from VTOP to VBOT as coverage angle and range ratio (e.g. 30° @2:1).
3. Define maximum splay (depends on element) and minimum (max/range ratio).
4. Calculate the average splay angle ((max + min)/2), e.g. (10° + 5°)/2 = 7.5°.
5. Calculate minimum element quantity (target coverage angle/average splay).
6. Subdivide the array into composite segments with splay ratios scaled to the shape.

Preventing and Inducing Shock

FIGURE 11.17 Composite builder examples showing 60° and 30° total coverage at 1:1 to 3:1 range ratios. Element limit is given as 10°. (A) Fully symmetric, minimum quantity. (B) Failed due to insufficient quantity to cover the target coverage angle. (C) Failed due to over-splay between elements (15° splay for a 10° element) and shock (splay ratio between adjacent composites exceeds 2:1). (D) Minimum quantity for 2:1 range ratio at 60°. (E–H) Examples of 30° coverage at 3:1 range ratio with various quantities.

Splay angle asymmetry is the main ingredient of this type of array, but too much, too quick can shock the system. It is important to remember that we are combining sets of proportionally narrowing elements. Differences of a few degrees are a big deal to the highest frequencies and then provide proportionally less impact as we go down. The key to splay angle change is to think proportionally, i.e. in ratios. Simply put, a 3° change can be major or minor, depending on what it changes from. Transitioning from 1° to 4° splay is major, whereas moving from 9° to 12° may be barely detectable. The former changes by a factor of 3.0, whereas the latter only differs by 0.3. We are seeking to make a flattened beamwidth from proportional elements. If the transition is too rapid we will “shock” the combined response with oversteered HF and with lower ranges that fail to follow. This can be prevented by keeping the transitions gradual (as ratios). Let’s set a goal of never exceeding a 2:1 change and see how that plays out. From a 2° splay we can go down to 1° or up to 4°. From 4° we can go up to 8°. Those are ratio changes and represent, in my experience, the absolute upper limit. My personal approach favors limiting changes to

FIGURE 11.18A Composite combination examples to create 30° coverage at 2:1 range ratio with various element quantities

FIGURE 11.18B Composite combination examples to create 30° coverage at 3:1 range ratio with various element quantities

FIGURE 11.19 Shock induction and prevention. Incremental splay ratio changes will minimize risk of shock. Large changes can induce shock for good (balcony avoidance) or bad (hot spots and gaps).

≤1.5:1 for adjacent splays and 2:1 changes for composite splays. The following sequence illustrates this approach for an eight-element ABC array: 2°–2°–3°–4°–4°–6°–8°. That’s 3 × 2° (6°), 3 × 4° (12°), 2 × 8° (16°). No adjacent splay change exceeds 1.5:1 and the composite to composite increments move at a 2:1 rate from 2°–4°–8°. You will see variations of this approach over and over again as the example arrays are shown here (Fig. 11.19).

On the other hand, we can choose to purposefully induce shock in order to gap the response and avoid balcony fronts and other undesirables. Once again it is the change in splay ratio (both above and below) that is decisive.
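The splay-ratio guideline can be checked programmatically. This sketch (mine, not the book's) reports the largest adjacent splay change, as a ratio, in a splay sequence:

```python
# Sketch of the shock guideline (mine, not the book's): report the
# largest adjacent splay change, as a ratio, in a splay sequence.
# The text's limits: 2:1 absolute, <= 1.5:1 preferred between neighbors.
def max_adjacent_ratio(splays):
    return max(max(a, b) / min(a, b) for a, b in zip(splays, splays[1:]))

gentle = [2, 2, 3, 4, 4, 6, 8]        # the eight-element ABC example
print(max_adjacent_ratio(gentle))      # 1.5 -> within the preferred limit
print(max_adjacent_ratio([1, 4, 8]))   # 4.0 -> shock risk
```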

11.5 Uncoupled Array Coverage, Spacing and Splay Uncoupled arrays are range limited regardless of propagation plane. Therefore the specification process does not differ between the planes in the room, but rather the planes of the arrays. We focus on two coverage line milestones: unity and limit (Figs 11.20 and 11.21). Note: Daniel Lundberg created a program that performs these calculations. It’s available at

11.5.1 Uncoupled line source (symmetric) Used for multiple mains, overheads, surrounds and various fills (frontfill, underbalcony, etc.).

Symmetric Uncoupled Line Source Design Procedure

1. The target shape is a straight line of coverage with matched starting depth (the unity line).
2. Define range to start of coverage: unity line depth, e.g. 3 m.
3. Define spacing: element lateral aspect ratio × unity depth, e.g. 1.25 (80°) × 3 m ≈ 4 m spacing.
4. Define quantity: total coverage length/spacing, e.g. 14 m length/4 m spacing = quantity of 4.
5. Define limit depth: limit depth for an uncoupled line source = 2× the unity depth.
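The procedure's arithmetic can be sketched as below, using its example numbers. The function name is mine; the 1.25 lateral aspect ratio for an 80° element is the text's (note that 1.25 × 3 m is 3.75 m, which the text rounds to 4 m):

```python
import math

# Sketch of the symmetric uncoupled line source procedure (function name
# is mine; the 1.25 lateral aspect ratio for an 80-degree element is the
# text's). Note 1.25 x 3 m = 3.75 m, which the text rounds to 4 m.
def uncoupled_line_source(unity_depth, lateral_aspect, total_length):
    spacing = lateral_aspect * unity_depth        # unity spacing
    quantity = math.ceil(total_length / spacing)  # elements along the line
    limit_depth = 2 * unity_depth                 # coverage ends at 2x unity
    return spacing, quantity, limit_depth

spacing, qty, limit = uncoupled_line_source(3, 1.25, 14)
print(spacing, qty, limit)  # 3.75 4 6
```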

FIGURE 11.20 Symmetric uncoupled line source design reference

11.5.2 Uncoupled line source (asymmetric) Used for multiple mains, overheads, surrounds and various fills (frontfill, underbalcony, etc.).

Asymmetric Uncoupled Line Source Design Procedure

1. The target shape is a bendable line of coverage with variable starting depths (the unity line).
2. Select the longest-range element (the highest-order element if different models are used).
3. Calculate the element coverage by its lateral width. This is the unity spacing between elements.
4. The lateral width lines of attenuated elements must be scaled down proportionally (compensated unity).
5. Place additional elements at the compensated unity spacing until the line length is filled.
6. The limit line is found at twice the various unity line ranges.

11.5.3 Uncoupled point source (symmetric) Used for multiple mains and various fills (frontfill, underbalcony, sidefill, rearfill, etc.).

Symmetric Uncoupled Point Source Design Procedure (Fig. 11.22)

1. The target shape is an arc of a given radius with matched starting depth (the unity line).
2. Select an element and place it in the central area of the arc.
3. Calculate the element coverage by its lateral width. This is the unity spacing between elements.
4. Space/splay additional elements so that the lateral width lines connect radially until the arc is filled.
5. The limit line range is highly variable. Low angular overlap extends the limit line.

11.5.4 Uncoupled point source (asymmetric) Used for multiple mains and various fills (frontfill, underbalcony, sidefill, rearfill, etc.).

Asymmetric Uncoupled Point Source Design Procedure

1. The target shape is an elliptical segment with a given radius and unmatched starting depth (unity line).
2. Select an element and place it in the deepest area of the arc.
3. Calculate the element coverage by its lateral width. This is the unity spacing between elements.
4. The lateral width lines of attenuated elements must be scaled down proportionally (compensated unity).
5. Space/splay additional elements so that the lateral width lines connect radially until the shape is filled.
6. The limit line range is highly variable. Low angular overlap extends the limit line.

11.5.5 Uncoupled point destination (symmetric) Standard applications: infills, monitor sidefills.

Symmetric Uncoupled Point Destination Design Procedure

1. Target shape resembles a pair of rectangles connected with a hinge, or a boomerang. For real.
2. Unity line is at half the distance to the location where the ONAX aim points cross.
3. Select an element for its ability to cover its local area as a soloist.
4. Calculate the element coverage by its lateral width.
5. The inner edges of the lateral shape must meet at center (unless covered by others such as frontfill).
6. The limit line is the point where the ONAX aim points cross.

11.5.6 Uncoupled point destination (asymmetric) Standard applications: combinations of arrays (mains + delays, mains + frontfill, etc.).

FIGURE 11.21 Symmetric uncoupled line source design procedure. Example frontfill and underbalcony applications.

FIGURE 11.22 Uncoupled point source design reference with example frontfill application

Asymmetric Uncoupled Point Destination Design Procedure

1. The target is an overlap region of main and fills. Its shape is already pre-formed by the elements.
2. The unity line is XOVR main–fill. Its location depends on the fill level in the overlap region.
3. Aim the fill system to maximize forward coverage extension while remaining linked to the main.
4. The fill system must have enough coverage and power scale to match the main at XOVR.

11.6 Main Systems Main systems range from multi-element stadium arrays to single ceiling speakers and every size and quantity between. If it’s at the top of the food chain in the venue, it’s the main.

11.6.1 Center main (C) Center main systems are the principal element for mono systems and the voice channel for L/C/R systems. The coverage requirements are the same in both cases but the power scale changes (the mono main needs more power because it will have music and voice).

Standard Features of the Center Main

■ Elements: (H) first, second order; (V) mostly third or second order. First order only as a soloist.
■ Array type: (H) solo, coupled point source; (V) asymmetric (composite) coupled point source.
■ Coverage range: unlimited.
■ Sonic image: centered, but usually high because of stage sightline limits and gain before feedback.
■ Timing: arrives very late to the floor. High location requires anti-delay to sync to analog live sources.
■ Range ratio: low(er). High location keeps the closest listeners fairly far away (compared with L/R).
■ XOVR to frontfill: reduces image distortion in early rows and leakage onto stage.
■ XOVR to sidefill: typically L/R deck speakers to fill coverage gaps in the near side areas.
■ XOVR to underbalcony delays: high main position means we often lose sightline under the balcony.
■ Form before function: architects want to hide our unsightly speakers. Don’t fence me in!
■ Not popular: nobody wants to mix on them. Specify only when the L/R option is unavailable.

The mono center main is your grandfather’s array. There is little arguing that it’s optimized for maximum vocal intelligibility. This may be the best choice for voice-only applications, but not necessarily. Intelligibility is a very high priority but feedback always comes before it. And we are in big trouble if feedback comes before we have enough gain. A high center cluster with less-than-stellar directivity poses a clear and present danger to acoustic gain. Our triage rating can evaluate the L/R signal degradation against the potential for higher gain and rental charges when the band arrives.

Typical Reasons to Specify a Center Main

■ Part of an L/C/R system.
■ 360° seating or extreme wraparound balconies that leave no place for an L/R system.
■ Heritage hall that won’t allow L/R hang points.
■ Someone has fond memories of Altec speakers.

The height of a center system is sometimes an open variable. In most cases our answer is “as low as possible,” usually limited by follow spot sightlines or fashionistas. I suppose it’s possible for a center cluster to be too low, but I don’t think I’ve ever seen it. This leaves us to specify coverage, aim, splay and power scale. Refer to Fig. 11.23 as we step through the design process.

Horizontal Aim

1. Multiply the compensated unity FAR by the minimum-variance combing zone ratio.
2. Aim it at center.

Horizontal Coverage Angle

Use the single-speaker coverage method (section 11.3.1) for any single-wide horizontal configuration (e.g. stripe of line array speakers). Use the symmetric or asymmetric point source methods (section 11.4.1) for horizontal arrays. Coverage is analyzed at mid-point depth. Variance between ONAX and OFFAX should be ≤6 dB; if it exceeds 6 dB, specify a wider main or supplement with sidefill.

Vertical Aim and Coverage

Use the single-speaker (section 11.3.3), asymmetric point source (section 11.4.3) or composite point source method (section 11.4.4).

Horizontal Coverage Width

We will need to calculate the coverage width on the floor under the array to see how many seats must be covered by the sidefill deck system. Use the lateral coverage width method (section 11.3.5) from the bottom of the array to the frontfill/main XOVR.

11.6.2 Left/right mains (L/R) Room shape plays a small part in the choice of L/R but a big part in who hears both of them. The basic rectangle is the most friendly and the wide fan, the least. Wide fans often require sidefill, centerfill and/or infills to extend coverage and close gaps.

Standard Features of the Left/Right Mains

■ Elements, array type and coverage range: same as the mono main.
■ Sonic image: localizes strongly to each side due to precedence effect (except for a small center area).
■ Timing: can arrive early to the floor if hung low. Add delay to sync to analog live sources.
■ Range ratio: variable. Can be high if mains are low or low(er) if they are hung high(er). Really.
■ XOVR to frontfill: same as center main above.
■ XOVR to sidefill: can be flown (coupled) or deck speakers (uncoupled), if needed to extend coverage.
■ XOVR to centerfill: flown above center stage to fill the gap between L/R and frontfill.
■ XOVR to underbalcony delays: low main position can ease the need for underbalcony systems.
■ Popular: nobody gets fired because they had the crazy idea of spec’ing L/R.

FIGURE 11.23 Center main system design examples for the horizontal plane

Mains Height

Left/right mains have a wider range of height options than the center. Proscenium stages present the opportunity to move them lower. Therefore this is the time to discuss how high to hang the mains. The mains height has a range of design implications. There are tradeoffs at every turn. Low mains have advantages in sonic image, but potentially high range ratios that make it loud in front. High mains have image issues and can be late to the party, but have a big advantage in the level and spectral variance categories. Gain before feedback can go either way. In the ancient days of sound reinforcement our choices were image-killing center clusters or ear-killing L/R mains on the deck. Modern systems can pick and choose the optimal compromise altitude. The room shape plays a part, of course. Finding the middle ground in this case means finding the middle height. We seek to spread the vertical coverage evenly. It’s not as simple as shooting from the middle height because the upper-level seats are very often farther than the lower-level ones. This leads us upward, which turns out to be a safer and more reliable position to go for the long throw. Let’s compare the two extremes before we work our way to the middle: speaker at the height of the top row versus speaker on the deck. Listeners below versus above the speakers. The high main option has a lower range ratio, better defense against ceiling reflections, worse imaging, is late on the floor and risks spilling onto the stage. The low main (no, it’s not for lunch) has to kill the floor to get to the back, seriously risks lighting up the ceiling (if it can actually cover up there), but has great imaging.

Smiles and frowns

A speaker with a 40° vertical pattern can give or take a few degrees of its pattern without getting emotional. The same cannot be said about today’s super-directional proportional beamwidth speakers. If we change them a few degrees they get happy or sad. Their patterns can either frown or smile, depending on their orientation to the shape. This is 3-D geometry so let’s take it step-by-step (Fig. 11.24). The vertical ONAX aim of a speaker is flat across the horizontal plane (unless it’s a wacko horn). Put a speaker on a turntable and spin it, and ONAX circles the room at the speaker height (if the speaker is not angled up or down). That’s a big “if” there. An upward-angled speaker will circle the room above the speaker height (the easy part) but it may be at different heights all over the room (the tricky part). A speaker in the center of a circular room draws 360° at the same height for a given tilt. But the height is going to vary if the speaker is not centered, and/or the walls are different distances from it. The aim angle is constant but the height above the speaker accrues with distance (e.g. 1 m rise for every 10 m distance). An up-aimed speaker that hits seats at 10 and 12 m ranges arrives at heights of 1 and 1.2 m. How does that happen? The left main is closer to the leftmost seat in the last row than the middle seat. The seats are the same height above the speaker but not the same angle above the speaker. We now see that ONAX is a constant angle but not a constant elevation. Smile. Increased up-tilt and/or horizontal range ratio make the smile even bigger. Down-tilt makes it frown and a flat orientation leaves a blank stare. This has substantial practical implications for speaker aiming, whether the intent is coverage or gapping (e.g. avoiding the balcony). We might not hit what we aim for, and we might not miss what we hope to. A center speaker can be up-tilted into a fan-shaped room (or balcony) and maintain a constant ONAX on the listener plane. ONAX rises if the back wall flattens. L/R mains are virtually assured of smiling when up-tilted, unless the back wall is convex (i.e. deeper on the sides than the center). Yes. That would be crazy. Balcony avoidance strategies have the same issues; it’s just that we are aiming XOVR (the gap) at the balcony. Our balcony avoidance becomes audience avoidance if the gap frowns or smiles.
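The path-length geometry behind the smile can be verified with a few lines (an illustrative sketch; the constant-tilt assumption and function name are mine):

```python
import math

# Sketch of the "smile" geometry (constant-tilt assumption and function
# name are mine): at a fixed up-tilt the beam's height above the speaker
# grows with horizontal range, so seats at unequal ranges are hit at
# unequal heights. Numbers follow the 1 m rise per 10 m example.
def beam_height(tilt_deg, horizontal_range):
    return horizontal_range * math.tan(math.radians(tilt_deg))

tilt = math.degrees(math.atan(1 / 10))  # ~5.7 degrees of up-tilt
print(round(beam_height(tilt, 10), 2))  # 1.0 m at the middle seat
print(round(beam_height(tilt, 12), 2))  # 1.2 m at the corner seat: smile
```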

Speaker height effects (smiles and frowns)
■ Flat: Speaker is on the listener plane, regardless of depth.
■ Smile: Speaker is below the listener plane (up-tilted). Listeners are at different depths (from speaker point-of-view).
■ Frown: Speaker is above the listener plane (down-tilted). Listeners are at different depths (from speaker point-of-view).

FIGURE 11.24 Smiles and frowns. Pattern bending due to differences in path length.

Our designs need to see this coming. It’s far too easy to specify surgical-level shaping on a 2-D section view that will end up smiling its way to places unknown in the venue. We can’t stop it, but sometimes we can minimize it by limiting the tilt on speakers throwing the longest distances (with the narrowest angles). Mains height has the strongest effect on the bending errors. The smile/frown geometry is symmetric, but the risks are not. A smile above the last row poses more danger than some frowning down in front. The rear is where a degree or two matters due to the long throw and narrow speakers. Minimizing the mains up-tilt reduces the risk of over- or under-coverage. This potential benefit can be factored into cluster height decisions. Coupled main systems that want to gap around the balcony should seek a height that places the gap in the array around the balcony height. This ensures that the gap stays on the balcony. Gapping will be less effective (and potentially degrading) if the gap angle has to tilt up or down to the target.

Prove it yourself:
1. Take a laser pointer and aim it flat onto a wall. Swivel side to side. The pattern is a flat line.
2. Return to center. Aim up at an identifiable line, such as the wall/ceiling joint. Swivel again, being careful to maintain the same angle on the laser. Notice that the light does not stay on the joint as you swivel horizontally. It smiles upward. It’s not you, dear. It’s geometry.
3. Next aim at a line below you and repeat the process. You made it frown now, you meanie.

Horizontal Aim
Left covers left and right covers right. We have previously discussed the sad reality of how well stereo works in concert applications (section 5.4). Therefore, we will bisect the room and aim each main through its respective middle-middle. This approach creates a balanced ratio of under- and overcoverage from front to back. In this case there are approximately equal amounts of actual wall reflection and virtual wall reflection (the leakage from the other side). The sources of horizontal ripple variance are balanced.
There is a justifiable fear of excess horizontal coverage splashing onto the side walls. This can lead designers of L/R systems to turn inward to avoid them. TANSTAAFL shows us that we just added certain inward leakage in exchange for (potential) outward leakage, i.e. the wall reflection. We must ask the question: Which is worse? It’s a real reflection against a faux-flection (the late arrival from the opposite side). Let’s tally things up. There is HF air loss in all paths so that evens out. Is the side wall absorptive at all? Will we hear its reflection directly or have to wait until it comes off the back wall? We have little to fear if the HF is absorbed and we don’t hear it until it’s come off the back wall. The “wall” for the faux-flection is the centerline of the room. It’s a perfect mirror, absorbing nothing. The more we turn L and R inward, the more we are sending on-axis HF into the mirror. As I said before: potential from the wall versus certainty from the opposite side? Neither wall is going away. The middle-middle approach minimizes the horizontal level/spectral variance and balances the horizontal sources of ripple variance (Fig. 11.25).

Horizontal Coverage Angle
Coverage is analyzed on the bisected space (as above). Left: Use the single speaker coverage method (11.3.1) for a single stripe of line array speakers (first-order). Use the symmetric or asymmetric point source methods (11.4.1) for horizontal arrays. Coverage is analyzed at mid-point depth. Variance between OFFAX L, ONAX L and XOVR LR should be […] 2:1). Coverage can be divided between the two slopes.

We start with a pair of similar slopes, stacked directly on top of each other (Fig. 11.29 (B)). Each has a 20° spread and 2:1 range ratio. Our speaker is in the middle so it covers from VTOP1 (+20°) to VBOT2 (-20°), with a 2.8:1 range ratio (9 dB). In between are VBOT1 (+0°) to VTOP2 (-0°), which has a 1.4:1 range ratio. We know how to solve a 6 dB range ratio spread over a 40° angular spread. How do we solve a 3 dB change that happens in a 0° spread? OK. It can’t be 0° because the balcony has to be thick enough to hold people, but it can be very, very small.

Return Ratio
Listen up, main. Stick together. Here is our mission: Go deep then gradually come closer for 20° and then instantly go deep again and repeat. If you are not convinced yet that this is mission impossible then add range ratio until you surrender. A wider balcony front gives more angle to work with, but with friends like this, who needs enemies? Let’s make a single modification to the shape and do the exercise again (Fig. 11.29 (A)). Angle the lower floor upwards so its top aligns with the upper floor’s bottom. What’s different? VBOT1 and VTOP2 are still both at 0° but they now have a 1:1 range ratio. There is no longer a zig-zag in the middle. We would surely cover this with a single main. It’s also not a balcony any more but it reveals the mechanism, the return ratio, the primary indicator for splitting the array. Every inch we slide the upper floor forward increases the discontinuity between VBOT1 and VTOP2. Such sharp turns in coverage require angular isolation and we don’t have it. Return ratio (in dB) quantifies the level difference the balcony forces us to overcome. We can saw a line of best fit through a shallow balcony with a small return ratio and keep the array together. Return ratios of 6 dB or more cannot be smoothed over (Fig. 11.29 (C)).

Secondary Options for Upper/Lower Mains
There are still options short of breaking up. We can outsource coverage to others, specifically the underbalcony area, which can be covered by delays. The area covered by the delay is taken off the custody requirements of the main. VTOP2 moves closer, reducing the range ratio and opening up angle, a double bonus. If the delays can bring the return ratio in bounds we may be able to tough it out as a single main (Fig. 11.29 (D)).
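The return ratio lends itself to a quick calculation. A sketch under free-field (inverse-square) assumptions, with names of my own choosing; the 6 dB split threshold is from the text:

```python
import math

def return_ratio_db(d_far_m, d_near_m):
    """Level difference (dB) between the far throw (e.g. main to last seat)
    and the near throw (e.g. main to balcony front), free-field loss."""
    return 20 * math.log10(d_far_m / d_near_m)

# A 2:1 range discontinuity is 6 dB, the text's threshold for splitting
rr = return_ratio_db(16, 8)
print(round(rr, 1))                            # -> 6.0
print("split" if rr >= 6 else "single main")   # -> split
```

At 6 dB and above, no single-slope line of best fit can be sawed through the balcony.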

FIGURE 11.29 Upper/lower decision examples. In all cases there is 20° of coverage required above and below the speaker location. (A) Single main can cover the continuous slope. (B) The 3 dB return ratio is low enough that splitting is not required. (C) The 6 dB return ratio indicates splitting is best. (D) Underbalcony speakers reduce the return ratio to 3 dB (no splitting required).

The mains height also plays a role. We have looked at mains in the middle. Going upward reduces the angular spread between VBOT1 and VTOP2 (as if it wasn’t small enough already). Going higher leads to occultation (the blocking of the sightline to the speaker) underneath, which reduces return ratio by coverage reduction. Delays have moved from optional to mandatory. Occultation seriously downgrades the underbalcony area and should not be considered fair trade for return ratio gains. Moving the mains under the balcony line ensures sightlines to the back and opens up the angular spread (the mains can see the underbalcony ceiling now). Return ratio shows no improvement and balcony coverage will become more challenging. The upper-level slope is flattening (from the mains POV). The upper level needs more severe shaping due to reduced angle and rising range ratio. Such severe asymmetry is difficult with a single slope. Asking the mains to do that upstairs and on the floor is a very tall order. There is one more height-related consideration: the coverage pattern bending we previously termed “smiles and frowns” (recall Fig. 11.24). This is relevant to high and low orientation to balcony fronts (and our attempts to avoid them). Only a centered main can be precisely steered to overcome a high return ratio. Upper or lower positions cause the coverage transition to appear at different heights across the room. Here is the consolation for this exercise. All is not lost when we raise the white flag and divide the array into upper and lower sections. Instead of trying to make one array do something it hates to do (double sloping) we get two arrays being what they love to be: asymmetric coupled point sources drawing a single slope. The ripple variance in the low end might be a very small price for this payoff. The design process is basically a two-layer cake. Upper and lower are separately analyzed and comparably power scaled. In some cases it’s possible to move the upper main deeper into the house (because it starts at the balcony). This is free money in terms of power and signal/noise ratio, as long as the image is not compromised (Fig. 11.30).

Balcony Fronts
There’s an old expression to sum up the sound engineer’s perspective on this: The only good balcony front is a dead balcony front. Balcophobia is a serious malady in the sound community. Designers go to great lengths to avoid what acousticians go to great lengths to install: lively balcony fronts. We’ve all been burned by this, so it’s worth a few paragraphs to put things in proper perspective. Some balconies are poisonous whereas others are harmless, but many of us run from both kinds. Bad balconies are tall, lively, featureless (single angle, not diffuse). Glass and steel: bad. They have bad angles with respect to our speakers, sending sound back on stage or onto paying patrons. Worst is the flat, curved balcony in a fan-shaped room with the stage as its focal point.
Been there, done that. Good balconies are short, dead, diffuse, filled with lighting gear, multi-angled and inclined to send our sound harmlessly into the open air such as the ceiling. Size up the live surfaces of your local balcony. It’s harmless below 500 Hz when it’s less than 0.5 m tall. The key is not to hurt the design over avoiding something that won’t hurt you.

Standard features of the upper/lower mains
■ Application: separate coverage of over- and underbalcony areas. Horizontal design is same as L/R.
■ Elements: (V) coupled point source (large), second order (small).
■ Array type: (V) uncoupled line or point source.
■ Coverage range: VTOP1 to ONAX1 to VBOT1 to VTOP2 to ONAX2 to VBOT2.
■ Sonic image: can be generally low for both levels.

11.6.6 Relay mains Relay mains are satellite power boosters for forward extension. They are powerful enough to take over from the mains, hence the term relay. There is no limit to the quantity or scale of relay mains. The limiting factor is the amount of destructive interference between all the various mains and subsystems.

FIGURE 11.30 Upper/lower-main design examples. (A) U-balc reduces the return ratio enough to use a single main. (B–D) Upper/lower mains are used.

The macro shape of relay mains is the same as delay fills (section 11.7.2), an uncoupled point source or line source in the horizontal plane. Maximum range is the top priority of relay mains, so the uncoupled point source is preferred for its limit line extension. Relay mains will ideally reconcile back to the stage mains as “spokes on a wheel.” Additional sets of relay towers will continue forward as spokes with the stage mains as the common hub.

Backflow is the limiting issue. Delay offsets between the stage mains and relay mains rise at the maximum rate behind the relay towers. There is no better application for cardioid low-frequency arrays and all manner of beamwidth control.

Elements Height and directional control in both planes are required to achieve the maximum B4MBNW4U ratio. Not familiar with this metric? It’s the “better for me but not worse for you” index. The AES standards committee is working on the details but I think it speaks for itself. Height gives us a running start by keeping the closest listeners (and fastest doubling distances) out of the coverage. Vertical control extends the coverage start and horizontal control extends the range to the stop. It’s easy to see that more is better for height and vertical control, within reason. We need more towers if we narrow the horizontal, whereas overly wide towers drive up the annoyance factor with overlapping arrivals that are 10s or 100s of milliseconds late. The point of diminishing returns is found by 90°, which means a typical single element is suitable, but doublewide arrays are trouble. We have only one delay setting available for these satellites, which means we are destined to fall out of time in the horizontal plane. The wider we go, the sooner the delay offsets tip the balance in favor of annoyance. The rearward path has the fastest rate of time offset change. Anything we can do to dampen the back lobe is a plus. Acoustic absorption, cardioid steering and active noise cancelling are all options worth exploring.
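The backflow timing problem can be illustrated with a one-dimensional sketch (the geometry, positions and speed of sound are my own example values, not from the text): the relay is delayed to align with the stage main at the tower, so the offset stays near zero along the forward path but grows at roughly 5.8 ms per meter moving rearward, because the two arrivals travel in opposite directions behind the tower.

```python
C = 343.0  # speed of sound in m/s (illustrative round number)

def offset_ms(listener_x, main_x=0.0, relay_x=70.0):
    """Arrival-time offset (relay minus stage main) along the shared axis.
    The relay is delayed so both arrivals align at the relay position."""
    relay_delay = abs(relay_x - main_x) / C
    t_main = abs(listener_x - main_x) / C
    t_relay = relay_delay + abs(listener_x - relay_x) / C
    return (t_relay - t_main) * 1000.0

# Forward of the tower the offset stays ~0; behind it, each meter adds ~5.8 ms
print(round(offset_ms(90), 1))  # 20 m in front of the tower: in time
print(round(offset_ms(60), 1))  # 10 m behind: ~58 ms late
print(round(offset_ms(50), 1))  # 20 m behind: ~117 ms late
```

The rearward path is where milliseconds pile up fastest, which is why every dB of back-lobe damping pays off.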

Forward Spacing
Relays are power-boosting systems. The stage mains start to sag and Tower 1 pumps it up. The cycle repeats at Tower 2 but there’s a twist. The later links don’t have to go it alone. They are riding on the backs of the earlier mains, which have gone a longer way. This cuts both ways. Signal from the stage main has fallen the most, but its rate of loss is the slowest. This is a real-world application of staggered starts and their different doubling distances (Fig. 9.18). We don’t seek a unity line in this application. We are boosting. The residual level of the earlier systems allows us to taper the power scaling for relays as we move outward. We won’t bring it down to unity scaling (like our fill systems) but we won’t make every tower a clone of the stage. Dropping 3–6 dB per relay link takes advantage of the piggyback opportunity and reduces the risk of backflow. It is a proportional scaling, of course: each tower is scaled down in level, and the distance to the next is shortened in proportion. For example, a four-link relay chain downscaling in 3 dB increments can be spaced at 70, 49 and 35 meters (Fig. 11.31). The 6 dB alternative would be 70, 35 and 18 meters. The former has more boost and range, and the latter has less backflow.
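The proportional taper can be reproduced in a few lines (helper name is mine; inverse-square assumption, so a 3 dB step scales distance by 10^(-3/20) ≈ 0.71):

```python
def relay_ranges(first_m, links, step_db):
    """Throw of each successive relay link when every link is power-scaled
    down by step_db relative to the previous one (inverse-square law)."""
    ratio = 10 ** (-step_db / 20.0)
    return [round(first_m * ratio ** i, 1) for i in range(links)]

print(relay_ranges(70, 3, 3))  # ~the text's 70, 49 and 35 m (3 dB steps)
print(relay_ranges(70, 3, 6))  # ~the text's 70, 35 and 18 m (6 dB steps)
```

The same ratio governs level and spacing, so one parameter sets the whole chain.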

This is one area of system design where we must get specific about scale. We could hypothetically use big relays in big spaces and small relays in small spaces, but in practice we avoid relays in small spaces (by using big systems). In really big spaces, especially outdoors, the relay is mandatory. A large line array of Class 4 power-scaled elements should be viable up to 75 or 100 meters. We are at the mercy of air loss and weather. The weather is a liability in two ways: acoustical degradation and the safety limits of our speaker tower height. Sadly the latter issue has had devastating consequences when limits are exceeded. The relay towers (or crane lifts) will typically have lower height and weight limits than the mains. This is fine because the relays will be power scaled under the mains anyway (in combined quantity if not individual elements).

Standard features of the relay mains
■ Application: power-boosting forward extension.
■ Elements: (H) first, second order; (V) coupled point source (large), second order (small).
■ Array type: (H) uncoupled line or point source; (V) uncoupled point destination with stage mains.
■ Coverage range: unity line (ONAX to XOVR to ONAX) to limit line (triple speaker coverage or more).
■ Sonic image: You’re a giant tower in plain view. Just add 10 msec of extra delay and you’ll disappear.

11.6.7 Overhead mains (ceiling speakers)
It seems ridiculous to call a ceiling speaker a main system, but tell that to engineers in hotels, convention centers, theme parks, restaurants and retail. They are a bigger market than concerts, by the way. Overhead mains are typically a matrix of uncoupled line sources. A hallway might use a single line, whereas an open room needs a two-dimensional matrix of line sources. What should we call it, an uncoupled waffle source? A room with flat floor and ceiling can be designed with equally spaced and power-scaled elements. The spacing is found by the uncoupled line source calculator (Fig. 11.20) for sitting (or standing) head height. It is mandatory to use the lateral-width shape for overhead system spacing. The listener plane is a flat line of coverage, not radial, unless your audience is in a bowl. The lateral width is derated from the radial width (e.g. a 90° radial coverage yields 70° of lateral). We can use the derated lateral values and draw coverage lines from the speakers to the listener plane. Just stack the triangles side by side until the space is full. The spacing and power scaling must adapt to the depth changes in rooms with sloping floors or ceilings, balconies, etc. The example system shown in Fig. 11.32 is the real-world application of the asymmetric uncoupled line source with unity-compensated spacing shown back in Fig. 9.19.
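The spacing triangles for a flat ceiling can be sketched as follows (my own helper, not the book's calculator; it takes the text's example derating of 90° radial to 70° lateral as given, and measures clearance from the ceiling speaker down to head height):

```python
import math

def unity_spacing(lateral_deg, clearance_m):
    """Center-to-center spacing at which adjacent lateral coverage edges
    meet at head height for downward-aimed ceiling speakers."""
    return 2 * clearance_m * math.tan(math.radians(lateral_deg / 2))

# 90 deg radial derates to ~70 deg lateral (per the text's example)
print(round(unity_spacing(70, 3.0), 1))  # 3 m above heads -> ~4.2 m spacing
```

Stack these triangles side by side until the room is full; re-run with the local clearance wherever the floor or ceiling changes depth.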

Standard features of overhead mains
■ Element: first order only except for very high ceilings.
■ Array: uncoupled line source.
■ Range: listener head height.

FIGURE 11.31 Relay mains design examples. The range between successive relays falls incrementally.

FIGURE 11.32 Overhead main design examples. Spacing and power scaling adapt to the ceiling height.

Overhead Mains Specification
■ Spacing: compensated unity spacing using the lateral aspect ratio reference (Fig. 3.41).
■ Power scaling: application dependent but rarely used for high-power systems.

11.7 Fill Systems
Fill systems supplement the mains. They have less power, cover fewer people and yield to the needs of the mains. Where do we draw the line between mains and fills? If I turn the lower speaker in my main array down 1 dB is it now a downfill? Hardly. Recall that the range ratio breaks for main-array subdivision were 6 dB, so this gives us a good guideline. We call it mains and sidefills when our side array only goes half as far as the front array. Likewise for a downfill or rearfill. All of the other fill types are uncoupled from the mains, so no incremental differentiation is required. Frontfills are fills no matter what you try to do with them. We specify these systems to fill the gaps. We define their start and stop points, which gives us the range info we need to power scale them to keep up with the mains. There are lots of ways to fill the shapes. We will select the element coverage angle and quantity to get the combined shape we need.

11.7.1 Frontfill (FF)
Frontfills are largely one-dimensional, making them the easiest fills to specify. The horizontal plane is where the work is done. The mixture of element horizontal angle, spacing and splay determines the start and stop of optimal coverage. There is only a single vertical plane element and its coverage angle barely matters. The speaker is aimed point blank into the faces of listeners on a flat plane. It’s hard to miss as long as we don’t go too narrow. There is little to gain and much to lose from narrow vertical frontfills. The vertical spill from wide frontfills goes harmlessly into the laps of the front row and the air above. We should be OK as long as the speakers can see the last row of seats they cover. Speakers placed so low that they can’t see past the first row are known as footfills.
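The horizontal knitting of a frontfill row can be sketched as triangle geometry (a simplified stand-in for the uncoupled array calculator in Fig. 11.20; names and numbers are mine): adjacent coverage edges meet at the unity line, and triple overlap (the limit line) arrives at twice that depth for a straight, evenly spaced, parallel-aimed row.

```python
import math

def unity_limit_depths(spacing_m, coverage_deg):
    """Unity-line and limit-line depths for an uncoupled line source of
    identical, parallel-aimed elements."""
    unity = (spacing_m / 2) / math.tan(math.radians(coverage_deg / 2))
    return unity, 2 * unity

u, lim = unity_limit_depths(2.0, 90.0)  # 90 deg frontfills spaced 2 m apart
print(round(u, 2), round(lim, 2))  # coverage knits at 1.0 m, limit at 2.0 m
```

Narrower elements or wider spacing push both lines deeper; the optimal coverage zone lives between them.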

Standard Features of Frontfill Systems
■ Application: coverage and image source for seating on the stage perimeter.
■ Elements: (H) first, second order; (V) all orders.
■ Array type: (H) solo speaker, uncoupled line or point source; (V) solo speaker.
■ Coverage range: unity line (ONAX to XOVR to ONAX) to limit line (triple speaker coverage or more).

We don’t really ever need frontfills, but they are so nice to have. The mains could cover the front but there are leakage risks. Frontfills are a coverage relief valve, providing a buffer zone between the mains and the open mics on stage. They also serve as image sources toward the stage. The default answer is yes, we need frontfills unless both coverage and imaging are handled by others, e.g. stage sources, actors, etc. The frontfills are redundant when abundant stage sound leaves us unconcerned about gain and image in front (Fig. 11.33).

Frontfill (FF) Specification Summary
■ Frontfill need: Yes, except for small venues with light reinforcement and high ratios of stage sound.
■ Coverage start: unity line at first row.
■ Horizontal width: OFFAX edge at first-row last seat. Sooner if other low sources cover outer areas.
■ Horizontal spacing/splay: unity line connecting the lateral width. Uncoupled array calculator (Fig. 11.20).
■ Power scaling: Find range ratio between mains (-6 dB) and FF at the coverage start, and derate the FF.
■ Vertical aim: Aim at head height of listeners at the limit line depth.
■ Vertical coverage: only a few degrees needed. Minimal benefit (and real risk) to using narrow elements.

There is an alternative frontfill solution for stages too low to give us the required sightline. Go up: frontfills on the ceiling aiming down at the front rows. This option applies to venues with low ceilings and low stages. The horizontal coverage/spacing is calculated the same way and vertical aiming is again at the limit line.

FIGURE 11.33 Frontfill design examples. Spacing and power scaling adapt to the starting depth.

11.7.2 Underbalcony (UB) and Overbalcony (OB) fill (delays)
There are innumerable possibilities for “delay” speakers, just add delay. The default meaning for “delays” in our industry (and this section) is a row of under- or overbalcony fills. The standard configuration is like frontfills moved into the house and attached to the ceiling. This adds the vertical plane into the equation. The unity and limit lines are still set by the mix of element angle, spacing and splay, but the height of the speakers above the listeners adds triangulation into the range ratio.

Needs Assessment
Do we really need these delays? The answer is easy if we can’t see the mains. Otherwise it’s a very gray area. It helps to remember the primary challenge for underbalcony areas: strong early reflections from multiple surfaces. Low-clearance underbalcony spaces will have stronger and earlier reflections than those with more air space overhead. Clearance is a key factor for evaluating delay necessity. Changing the direct sound transmission distance minimally affects the timing of reflections from nearby underbalcony surfaces. By contrast, transmission distance strongly affects the direct/reflected level relationships there. Propagation loss doubles with distance, including the reflection paths. More distant sources have longer doubling rates, so their reflections don’t lose as much level. The loss rate for reflections from close sources is much greater (the reason why delay speakers improve D/R ratio here). Sources close to the balcony front have more of the positive qualities that make underbalcony delays work. A distant source has a D/R ratio disadvantage underneath, even though the surfaces are the same. It is often mistakenly believed that underbalcony areas require restoration for HF loss. And yet direct sound transmission under a balcony is no different than outdoors (unless the path is blocked).
If being under a balcony changed the direct sound, then why wouldn’t over the balcony be the same? It’s actually LF range buildup and combing from strong early reflections. The HF is the least affected, not the most. The overall effect is a pink-shifted frequency response, hence the perception of HF loss. This can be fixed by a jackhammer removing the balcony. All of this discussion holds for overbalcony spaces as well and helps to evaluate delay for the uppermost areas of single-slope spaces. This is relevant wherever there’s a roof.

Clearance and Return

Two key variables have been revealed: underbalcony clearance and mains transmission range. High clearance means the mains can do it alone unless they are extremely far. Low clearance means trouble and we’ll need to bring the mains in close to overcome the damage. Our second round of balcony battles involves ratios as well. The primary indicator is the clearance ratio, the shape of the underbalcony air space (height/depth). The return ratio, which helped decide when to split the mains (section 11.6.5), returns in a secondary role here. The balcony range ratio is the difference between covered and uncovered transmission lengths. We need two lengths and one height: (a) main to balcony front, (b) balcony front to last row and (c) the average clearance above our ears under the balcony.

The Ratios Driving Delay Necessity
■ Clearance ratio: height/depth of the air space under the balcony (head height to ceiling / VBOT1–VTOP2).
■ Return ratio: main to last seat / main to balcony front (M–VTOP2 / M–VBOT1).

Even a shallow underbalcony area needs delays if the clearance is inches above our heads. We literally sense the need for more air under a low ceiling. We can go deeper without a delay if we get more breathing room. This is the clearance ratio: vertical air space to depth. Take a meter off the floor–ceiling height and you’ve got the seated head clearance. Underbalcony areas with clearance ratios below 50% (height/depth) are flagged as contenders for delays, sending us to the second round of qualification (the return ratio). We will need a final score above 50% to be exempt from delays. The return ratio raises the clearance score; the question is will it be enough to reach the combined threshold of 50%? We can reframe our decision as “can the main overcome the clearance challenge?” We look at the underbalcony area from the main’s perspective to assess the chances. The return ratio, which links inversely to D/R ratios in the underbalcony area, tells us a lot. A main speaker parked near the balcony front has a D/R ratio advantage over a distant source. It can penetrate the underbalcony space more effectively than one that’s traveled further. These two ratios combine to give us a composite quality score for the main system with a threshold at 50%. Note that this index is statistically based. “Needs delays!” is not likely to show up in our design data, but sure makes itself clear during optimization. The quality threshold is derived from some 100 case studies analyzed for their delay decision and the optimization field results. The delay decision results are sorted into three categories: needed and specified, not needed but specified, and needed but not specified (in best to worst order).
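The two ratios and the 50% threshold can be wired together in a few lines (function and variable names are mine; the geometry follows the field examples of Fig. 11.34):

```python
def composite_score(main_to_balc_m, ub_depth_m, clearance_m):
    """Clearance ratio x return ratio for the portion of the underbalcony
    depth left in the mains' custody."""
    clearance_ratio = clearance_m / ub_depth_m
    return_ratio = (main_to_balc_m + ub_depth_m) / main_to_balc_m
    return clearance_ratio * return_ratio

# Fig. 11.34 examples: 2 m clearance, 8 m deep underbalcony space
print(composite_score(8, 8, 2))  # (A) mains 8 m out: 0.5, at the threshold
print(composite_score(8, 4, 2))  # (B) delays take the last 4 m: 0.75
print(composite_score(4, 8, 2))  # (C) mains moved in to 4 m: 0.75
```

Scores above 0.5 exempt the area from delays per the text's threshold; scenarios (B) and (C) both clear it, matching the 75% figures in the field examples.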

Composite Score Results of the Case Studies
■ Below 50%: delays should be specified.
■ 50–70%: gray area, decided by the tertiary factors.
■ Above 70%: delays should not be specified (may end up turned off during optimization).

The custody of the underbalcony area is reallocated once delays are added. The area covered by the delays is removed from the mains scope of work and the quality score is recalculated. The underbalcony depth is effectively reduced, raising both ratios, hopefully combining to exceed the 50% threshold. If not, a second set of delays must be considered. The composite score also gives insight about delay placement and power scale. A score just under 50% indicates to us that we should place the delays at a depth appropriate to cover the last few rows. The answer “yes delays” does not mean the entire underbalcony area needs coverage from speakers mounted on the balcony, please! This all too common practice degrades seats that didn’t need help and is ineffective where needed most. It is worth noting the return ratio’s role in both balcony decisions (high return ratios lean us toward splitting the coverage in either case). Mains with a short throw to the balcony are pushed toward an upper/lower split (section 11.6.5), whereas distant mains are pushed toward adding delays.

FIGURE 11.34 Needs assessment for balcony fills. A variety of scenarios show the process of determining whether or not fills are needed over and under a balcony.

Tertiary Factors Affecting Delay Systems
We can also add some short subjects that influence the decision to lesser extents.

Tertiary considerations for composite scores in the gray zone (50–70%)
■ Main directivity: A highly controlled main improves the quality by reduced reflections.
■ Main height/aim: The most favorable orientation is a flat main skimming under the balcony ceiling.
■ Upper/lower mains: Split mains often have more favorable orientation to the underbalcony area.
■ Room acoustics: Strong room reverberation weighs in favor of adding delays (under and over).
■ Overhead angle: Lively back wall, down-angled balcony ceiling weigh in favor of adding delays.
■ Program material: Voice transmission weighs in favor of delays more than music (e.g. LCR systems).

Field Examples
Our first room (Fig. 11.34) has a single balcony with 2 m × 8 m air space. Delays are very likely with such a low clearance ratio (25%). We’ll start with the mains placed 8 m from the balcony front (Fig. 11.34 (A)). This yields a 2:1 return ratio (16 m/8 m). Multiply the ratios to get the quality composite, 0.25 × 2 = 0.5 (50%). We’ll add delays and recalculate the score based on the depth remaining in the main’s custody (Fig. 11.34 (B)). Only the first 4 m are left to the mains, which improves the clearance ratio to 50% (2 m/4 m). The return ratio falls to 1.5 (12 m/8 m). The composite is 75% (0.5 × 1.5) so we know we don’t need a second set of delays at the balcony front. Let’s reset and illustrate the other mechanism with another scenario. We’ll increase the return ratio by moving the main closer to the balcony (Fig. 11.34 (C)). Halving the distance to 4 m increases the return ratio to 3:1 (12 m/4 m). This brings the composite to 75% again (0.25 × 3), but this time without delays. The mains can overcome an unfavorable clearance ratio, but they have to be in your face to do it. The closer we get, the more we are taking on the same underbalcony perspective as a delay. Let’s reset again and move the speaker farther away, doubling the distances to the balcony front.
Delays are a certainty now; the only question is a second ring. The return ratio falls as the mains move further away, reducing their penetration depth under the balcony. Our example hits a tipping point at around 36 meters, and a forward set of underbalcony delays would be helpful at the very front of the balcony. Return again to the original placement and consider what would happen if we double the clearance under the balcony to 4 m. The clearance ratio doubles to 50%, raising the composite score to 100% (0.5 × 2), which makes the specification a no-brainer. We could move the mains out to 28 meters before we enter the gray zone under 70%.

FIGURE 11.35 Needs assessment for under/overbalcony fills for two sample halls. The rooms are evaluated before and after delays have been added.

This brings up an interesting aspect of our example application. The overbalcony area is the same depth and twice the height of the underbalcony. We can (and should) apply the same equations to the upside as the downside. As the saying goes, “One man’s ceiling is another man’s floor.” The clearance has doubled to 50%, raising the composite score to 100% (0.5 × 2), an easy decision. We could move the mains out to 28 meters before we enter the gray zone under 70% for the overbalcony area. The exact same thing would happen if we doubled the clearance under the balcony. Fig. 11.34 (D) shows a modified ceiling structure for the second floor that drops the average height to 2 m, making it functionally identical to the underbalcony area. Don’t think it hasn’t been done just because it’s a bad idea! The next hall is also two floors with a deep overbalcony (Fig. 11.35, upper). The composite scores for both the overbalcony (63%) and underbalcony (40%) areas call for delays. The low clearance above leaves us with a lot to cover, raising concerns about getting too loud in the closer rows. The overbalcony speakers are power scaled for much longer range than the underbalcony. The front half of the balcony now has a composite score of 120% so a second set is not needed. A third hall (Fig. 11.35, lower) has an easy call for the underbalcony system (47% composite) but the overbalcony area was just over the line at 75%. A marginal case like this can opt for a small-scale delay system to cover the last rows.

Features and Specifications

The horizontal spacing and splay of the delays works just like the frontfills, except that the unity line length is triangulated in the vertical plane. Once we have assessed the depth where coverage must start, the process is about placement and spacing. The delays are placed ahead of the unity line so they can be aimed favorably to reach the rear, keeping the unity line at 110 dB.

Test: latency
Source: noise or music
Units: T (ms)
Setup: #2A (dual)
Resolution: 0.02 ms or better
FFT window: Hann
Averaging: any
1: Drive DUT at nominal level with broadband signal.
2: View impulse response and compensate I/O time offset with analyzer delay.
3: Verify flat phase (therefore latency is frequency independent).
4: Result specified as time (x ms), e.g. 2.5 ms.
PASS/FAIL: latency budget is application dependent (e.g. stage monitors)

Test: amplitude response over frequency (gain, range and variance)
Source: noise or music
Units: ±dB @F
Setup: #2A (dual)
Resolution: ≥ 1/24
FFT window: Hann
Averaging: any
1: Drive DUT at nominal level with broadband signal.
2: XFR function amplitude measurement between input and output of DUT(s).
3: Gain result specified for nominal value in dB (+ or -).
4: Range result specified from F Lo to F Hi within 0 to -3 dB.
5: Variance result specified as range in ±dB within nominal 0 dB passband.
PASS/FAIL: electronic device should be flat ±0.25 dB, 20 Hz-20 kHz
FAIL: discrepancy between user interface and actual gain or features

Test: phase response over frequency (range and variance)
Source: noise or music
Units: phase°, T (ms) @F
Setup: #2A (dual)

Resolution: ≥ 1/24
FFT window: Hann
Averaging: any
1: Drive DUT at nominal level with broadband signal.
2: XFR function phase measurement between input and output of DUT(s).
3: Range result specified as phase deviation in passband (or specified frequency range).
4: Variance result specified as range in ± degrees within nominal passband.
PASS/FAIL: electronic device phase should be flat ±30°, 20 Hz-20 kHz

Test: phase delay over frequency
Source: noise or music
Units: T (ms) @F span
Setup: #2A (dual)
Resolution: ≥ 1/24
FFT window: Hann
Averaging: any
1: Drive DUT at nominal level with broadband signal. Compensate latency.
2: XFR function phase measurement between input and output of DUT(s).
3: Select frequency range of interest. Measure phase shift between F Hi and F Lo.
4: Apply phase delay formula (section 12.8.3) to convert to time.
5: Variance result specified as T (ms) within the specified frequency range.
PASS/FAIL: latency budget is application dependent (e.g. stage monitors)

Test: phase response over time (stability)
Source: noise or music
Setup: #2A (dual)
Resolution: ≥ 1/24
FFT window: Hann
Averaging: any
1: Drive DUT at nominal level with broadband signal. Compensate latency.
2: XFR function phase measurement between input and output of DUT(s).
3: Store and recall XFR phase trace as time/stability reference.
4: Phase trace should be time invariant unless there is clock drift or jitter.
5: Result specified as range and frequency of phase/clock cycle deviation.
PASS/FAIL: stability within ½ clock cycle in the phase response (pass) or beyond (fail)
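The conversion referenced in the phase delay test points to the formula in section 12.8.3. A minimal sketch of that standard relation, T = ΔΦ / (360 × ΔF), expressed in milliseconds:

```python
# Standard phase delay conversion: phase shift (degrees) across a frequency
# span (Hz) to time (ms). Matches the relation referenced in section 12.8.3.
def phase_delay_ms(delta_phase_deg: float, f_lo_hz: float, f_hi_hz: float) -> float:
    return delta_phase_deg / (360.0 * (f_hi_hz - f_lo_hz)) * 1000.0

# 90 degrees of phase shift measured across the 1-2 kHz span -> 0.25 ms
print(round(phase_delay_ms(90.0, 1000.0, 2000.0), 3))  # 0.25
```

The same 90° of accumulated phase over a 100 Hz span would indicate ten times the phase delay, which is why the frequency range of interest must be selected first.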

13.5 Acoustic Verification Procedures

Test: noise over frequency
Source: none
Units: volts (dB SPL)
Setup: #1B (single ch.)
Resolution: ≥ ⅓
FFT window: Hann
Averaging: 8 or more

1: Mute speaker to acquire baseline of noise in the room.
2: Measure, store and recall ambient noise floor with 1 ch. FFT.
3: Unmute speaker(s) and measure noise floor with 1 ch. FFT.
4: Result specified as dB NC (ambient) and dB SPL A weighted (speaker system).
PASS/FAIL: speaker noise should be < ambient noise in listening area (at all frequencies)

Test: hum over frequency
Source: none
Units: volts (dB SPL)
Setup: #1B (single ch.)
Resolution: ≥ 1/24
FFT window: flattop
Averaging: optional
1: Mute speaker to acquire baseline of noise in the room.
2: Measure, store and recall ambient noise floor with 1 ch. FFT.
3: Unmute speaker(s) and measure hum spectrum with 1 ch. FFT.
4: Result specified as dB NC (ambient) and dB SPL @F (speaker system).
PASS/FAIL: speaker hum should be < ambient noise in listening area (at all frequencies)

Test: THD + n over frequency
Source: pure sine tone
Units: %THD + n
Setup: #1B (single ch.)
Resolution: ≥ 1/24
FFT window: flattop
Averaging: optional
1: Drive DUT at level >20 dB below rated max, e.g. 100 dB SPL for 124 dB spkr.
2: Measure the fundamental frequency level and mark as reference.
3: Measure harmonic level relative to fundamental. Estimate THD (-20 dB = 10%, -40 dB = 1%, -60 dB = 0.1%, -80 dB = 0.01%).
4: Result specified as %THD @dB SPL @F, e.g. 1% THD @100 dB SPL @1 kHz.
PASS/FAIL: speaker THD + n should be within the specified limit

Test: maximum SPL capability (single frequency)
Source: pure sine tone
Units: dB SPL @F
Setup: #1B (single ch.)
Resolution: ≥ 1/24
FFT window: flattop
Averaging: optional
1: Drive DUT at low level (>20 dB below max rating), e.g. 80 dB SPL @1 kHz.
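The chart in the THD test's step 3 (-20 dB = 10%, -40 dB = 1%, and so on) is the standard dB-to-percent conversion, which a one-line helper reproduces:

```python
# Standard conversion from harmonic level (dB relative to the fundamental)
# to %THD, matching the test's chart: -20 dB = 10%, -40 dB = 1%, etc.
def thd_percent(harmonic_rel_db: float) -> float:
    return 10 ** (harmonic_rel_db / 20.0) * 100.0

print(round(thd_percent(-20), 3))  # 10.0
print(round(thd_percent(-40), 3))  # 1.0
print(round(thd_percent(-60), 3))  # 0.1
```

Each additional 20 dB of separation between fundamental and harmonic drops the distortion estimate by a factor of ten.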

2: XFR amplitude. Store and recall trace for reference value @F, e.g. 0 dB @1 kHz.
3: Increase drive level until XFR amplitude value begins to fall (limiting onset).
4: View output (single channel) to find onset level.
5: Prorate SPL value by distance to obtain 1 m equivalent (if desired).
6: Result is specified as max dB SPL @F @m, e.g. 120 dB SPL @1 kHz @1 m.
PASS/FAIL: speaker max capability @F should be within 20 dB of overall rating

Test: maximum SPL capability (full range)
Source: pink noise
Units: SPL (pk, cont, A, C, Z)
Setup: #1B (single ch.)
Resolution: ≥ 1/24
FFT window: flattop
Averaging: optional
1: Follow steps 1 to 5 of above test but use full-range source.
2: Result specified as max dB SPL (pk, cont, weight) @m, e.g. 120 dB SPL pk "Z" @1 m.
PASS/FAIL: speaker max capability should be within 3 dB of overall rating

Test: polarity (single driver)
Source: noise or music
Units: + or -
Setup: #2B (dual ch.)
Resolution: ≥ 1/24
FFT window: Hann
Averaging: optional
1: Drive DUT at nominal level with broadband signal.
2: XFR function measurement between input and output of DUT.
3: View impulse response and compensate I/O time offset with analyzer delay.
4: Outcome specified as + (impulse upward) or - (impulse downward).
5: Verify relatively flat phase along 0° or 180° line within speaker's operating range.
6: Polarity result specified as + (phase at 0°) or - (phase at 180°).
PASS: polarity consistently adheres to pin 2 hot standard (+)
FAIL: inconsistent or non-standard polarity

Test: polarity (two-way speaker)
Source: noise or music
Units: + or -
Setup: #2B (dual ch.)
Resolution: ≥ 1/24
FFT window: Hann
Averaging: optional
1: Determine HF driver polarity by single-driver method above.
2: XFR function phase measurement between input and output of DUT.
3: Store and recall solo HF driver amplitude and phase response.
4: Mute HF and measure solo LF. Compare phase responses in the XOVR region.
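Step 5's proration to a 1 m equivalent can be sketched with the free-field inverse-square relation (+20 log10(distance) dB). Simple point-source behavior is assumed here; near-field or line-array conditions would change the math.

```python
import math

# Prorate a measured SPL back to a 1 m equivalent using the free-field
# inverse-square relation. Assumes simple point-source behavior.
def spl_at_1m(spl_measured_db: float, distance_m: float) -> float:
    return spl_measured_db + 20.0 * math.log10(distance_m)

# 108 dB SPL measured at 4 m prorates to roughly 120 dB SPL at 1 m
print(round(spl_at_1m(108.0, 4.0)))  # 120
```

Each doubling of measurement distance adds about 6 dB to the 1 m equivalent, which is why the distance must be logged along with the level.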
5: Adjust delay and/or polarity for best phase correlation between HF and LF.
6: Result specified by maximum coupling and minimum phase delay through XOVR.
PASS: XOVR summation approximates unity
FAIL: XOVR summation < 3 dB

Test: amplitude response over frequency (gain, range and variance)
Source: noise or music

Units: ±dB @F

Setup: #2B (dual ch.)

Resolution: ≥ 1/24

FFT window: Hann

Averaging: optional

1: Drive DUT at nominal level with broadband signal. Compensate latency.
2: XFR function amplitude measurement between input and output of DUT(s).
3: Range result specified from F Lo to F Hi within 0 to -6 dB.
4: Variance result specified as range in ±dB within nominal 0 dB passband.
PASS/FAIL: (Range) Speaker should extend over published range (within ⅓ oct)
PASS/FAIL: (Variance) Speaker nominally covers published range (beware of reflections)
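The range and variance bookkeeping of steps 3 and 4 can be sketched as a small routine. The function name, sample data and simple bin scan below are illustrative only; a real analyzer interpolates between FFT bins and guards against out-of-band noise.

```python
# Illustrative range/variance extraction: find F Lo / F Hi where the
# response stays within the 0 to -6 dB window, and the +/- dB variance
# inside that span. Data and helper name are our own examples.
def passband_range_and_variance(freqs, mags_db, window_db=6.0):
    inside = [(f, m) for f, m in zip(freqs, mags_db) if m >= -window_db]
    f_lo = min(f for f, _ in inside)
    f_hi = max(f for f, _ in inside)
    levels = [m for _, m in inside]
    variance = (max(levels) - min(levels)) / 2.0  # expressed as +/- dB
    return f_lo, f_hi, variance

# A made-up octave-band response, flat through the mids, rolled off at the ends
freqs = [31, 63, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]
mags = [-10, -5, -2, 0, 1, 0.5, 0, -1, -4, -9]
print(passband_range_and_variance(freqs, mags))  # (63, 8000, 3.0)
```

The result would be reported as a range of 63 Hz to 8 kHz with ±3 dB variance in the passband.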

13.6 Additional Verification Options There is no limit to the level of verification we can perform on the system. It’s not practical or necessary to perform a full set of verifications each night for ongoing operations such as a repeating show or touring system. We can move safely forward with a reduced verification menu when the system has only been power cycled or reconnected, rather than rewired, since yesterday. We can always retreat to verification procedures to locate problems found in the calibration stage. A thorough verification stage provides the solid foundation required for the critical calibration decisions. It’s tempting to view the available time and conclude it would be better spent doing calibration rather than verification. Do so at your peril. There is no glory in a great verification, but plenty of shame in a bad one. In the end we each arrive at a verification level we can live with. For touring systems, a thorough verification in the shop reduces the requirements on the road. A permanent install has the highest requirements because of how many disciplines are involved, and the long-term impacts of any errors. Nobody wants to be the one who missed

FIGURE 13.18 Field example of on- and off-axis microphone responses

FIGURE 13.19 Example application of the post-calibration verification

FIGURE 13.20 Example application of the post-calibration verification

FIGURE 13.21 Example application of the post-calibration verification

the polarity reversal that gets discovered years later. Some examples of post-calibration verification in the field are found in Figs 13.19 to 13.21. Verification was explained to me like this on my first day in professional audio: “Assumption is the mother of f@$%-up” (Tony Griffin, 1976).

Chapter 14 Calibration The system is verified, speaker positions are roughed in and we are prepared to set signal processor parameters. The stage is set to complete the optimization process. Calibration operations progress from simple to complex. Each verified subsystem is individually calibrated and combined into larger subsystems, where secondary adjustments are made to compensate for the summation effects. The process is complete when all related subsystems are grouped into an aggregate combined system with minimum response variance throughout the listening space. calibrate v.t. find calibre of; calculate irregularities of before graduating it; graduate with allowance for irregularities; correlate readings (of instrument etc.) with a standard. Concise Oxford Dictionary

The calibration process, like verification, is a series of controlled procedures designed to give specific answers such as speaker position, delay time, equalization and level setting. Calibration answers are never as cut and dried as those found in verification. There is no one-size-fits-all, step-by-step set of procedures for all system designs. Each application has unique combinations of speakers, processing and room acoustics. We need an adaptable cookbook ready for the hundreds of contingencies that arise. This section contains the standard procedures for calibration and enough support and explanation to provide a firm but flexible foundation to adapt them to each application. Calibration is the process of proofing and fine-tuning. Parts of the design must be proven (such as aiming and splay angles) and others require on-site adjustment (such as delay and EQ). We expect an EQ to be properly wired (verification) but don't expect it to have our desired settings when unpacked (calibration). Speaker placement is a variation on this theme. Verification: checking the initial placement/aim against the drawings. Calibration: finding the best placement/aim. Calibration goals of uniformity, efficiency, clarity and plausibility are fairly universal. They are not unique or original to this text or to dual-channel FFT analysis. Calibration's challenges are also universal. We must control each speaker's direct sound distribution and its summation with others. We use summation to maximize the power addition and directional control, and seek to minimize its combing effects. Dynamic conditions require ongoing monitoring and active calibration to maintain consistency over time. Calibration inevitably boils down to a series of decisions and signal processing settings. A monkey

can place the speakers, turn the knobs on the processor and somebody can still mix a show on it. The monkey's settings are unlikely to create uniform response over the space. It takes very evolved techniques to create uniformity. We are well on the way if we have adhered to the minimum-variance principles during the design and verification stages. Calibration is the final push in the process of achieving uniformity.

Calibration goals
■ Minimum variance: level, spectral and ripple (same sound everywhere).
■ Maximum coherence: highest intelligibility, direct/reverberant ratio, clarity, etc.
■ Maximum efficiency: full use of coupling capability for maximum power addition.
■ Sonic image control: the sound image appears where we want it.

Calibration challenges
■ Direct sound distribution: even level of direct sound everywhere.
■ Speaker/speaker summation: maximum coupling and minimum combing.
■ Speaker/room summation: minimize reflections except constructive LF coupling.
■ Dynamic conditions: wind, changing humidity, temperature and room empty to full.

Calibration strategies
■ Philosophy: guiding principles for gray-area decisions (choosing the winner when all win-win options have been exhausted).
■ Data access: the critical transmission path probe points required to make informed decisions: console out, signal processor out and the speaker's response in the room.
■ System subdivision: appropriate separation of the signal path into the optimal number of processing channels and speaker subsystems. We need enough flexibility to independently set parameters such as equalization, level, delay, etc., without wasting money on redundant channels.
■ Methodology: a set of procedures, playbook or roadmap. The methods for reaching the goals can be reduced to a series of specific tests for finding the relevant answers.
■ Context: data for any given point is neither good nor bad in its own right. Context gives us expectations, an adjustable standard to judge the result. Is this the expected response? For example, extensive pink shift is expected at OFFAX, but unexpected at ONAX.

Calibration decisions
■ Single speaker (or system) aim: optimal vertical and horizontal focus.
■ Array splay angle: optimal angular orientation between elements.
■ Array spacing: optimal spacing between elements.
■ Acoustical modification: identify and treat acoustic problems (i.e. reflections).
■ Level setting: scale the relative levels to create minimum-level variance.
■ Delay setting: phase align crossovers to minimize ripple variance.
■ Solo EQ: minimize spectral variance with a matched standard response.
■ Combined EQ: compensate for LF coupling to minimize spectral variance.

Calibration subdivision
■ Source: everything before the speaker system processing. The art side of the system.
■ Processor: the signal processing used for calibrating the sound system.
■ Speaker/room: the sound system in its acoustic environment.

FIGURE 14.1 Calibration test reference

14.1 Approaches to Calibration We interrupt this exercise in objectivity and science with a philosophical discussion. Calibration includes many extremely gray, “lesser evil” decisions. Verification, by contrast, had clear-cut answers. We fix a speaker with reverse polarity or distortion, knowing everybody benefits. Calibration adjustments that help one location often degrade it at others. Sadly, this is virtually assured with EQ, level and delay setting, the cornerstones of calibration. This presents an ethical dilemma: How do we decide the winners and losers? We strive toward universal solutions but inevitably the win–win options are exhausted and we face win/break-even/lose. Let’s interpret our ethical dilemma in sociopolitical terms and see which model provides the best decision-making directives.

Anarchy The anarchy political model is structured on a total lack of structure. Governing authority is absent and every individual is essentially self-governing, with no common goals. Whoever grabs the controls can calibrate the system for their location, which just might happen to be the mix position. There's no need for an acoustical analyzer's objectivity because it's every man for himself. This would be comical if it weren't so often true.

Monarchy In this model, a single party makes decisions without having to answer to facts and science, and little or no regard for its effects outside the royal circle. The mix position is the castle and the inner circle includes the mix engineer, band manager and court jesters. The mix area is monitored with the finest analyzer available and calibrations can ensure the maximum power concentration there. A regal decree states that all seats in the house benefit from concentrating all resources at the castle.

Capitalism Capitalism promotes the idea that quality should be commensurate with price (the "money" seats over the "cheap" seats). It's easy because the expensive seats are closer, with higher levels and direct-to-reverberant ratios. The "cheap" seats are disadvantaged in both categories but there are a whole lot more of them (which adds up to a lot of money). Distant seats will almost always be at a sonic and visual disadvantage but we don't have to settle for widespread income inequality. A more uniform sound redistribution strategy can reduce inequality without compromising the highly advantaged seats.

Democracy In the democratic model each seating area is given equal representation. When decisions benefit one seating area above the next, the effects are evaluated on a majority basis. Two principal factors to consider: quantity affected (large or small) and symmetry of effect (positive vs. negative). We go forward if quantities are equal and positive effects outweigh the negative. The majority rules when effects are equal. Otherwise we use the “triage” method. We don’t give up a 3 dB improvement for 1000 seats because it creates a 20 dB hole in ten. Conversely, we don’t make ten seats perfect if that screws up 1000 (see monarchy above). We view the population asymmetry vs. that of positive and negative effects. Democracy seems the best model for design and optimization strategies. To implement this requires us to do more than measure one position and issue proclamations. We must survey every population sector. It’s not practical to measure all 12,000 seats in an arena. Certain seats are carefully selected to be local area representatives. We’ll meet them later.
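The majority-weighted evaluation described above can be given a crude numerical sketch. The linear seats × dB weighting is our own simplification, not the book's method; in practice the triage approach can still veto a change that does severe local damage regardless of the arithmetic.

```python
# Crude sketch of the "democratic" weighing of a proposed calibration
# change: benefit and harm weighted by seat count. This linear weighting
# is an illustrative simplification only.
def triage_verdict(gain_db, seats_gaining, loss_db, seats_losing):
    upside = gain_db * seats_gaining
    downside = loss_db * seats_losing
    return "go forward" if upside > downside else "hold back"

# The passage's two examples: a 3 dB improvement for 1000 seats vs. a
# 20 dB hole in ten, and the reverse trade
print(triage_verdict(3, 1000, 20, 10))   # go forward
print(triage_verdict(20, 10, 3, 1000))   # hold back
```

The point is that both the size of the affected population and the severity of the effect enter the decision, which is why single-position measurement cannot settle it.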

TANSTAAFL and triage TANSTAAFL (“There ain’t no such thing as a free lunch”) and the decision-making structure of acoustic triage, introduced in Chapter 11, are equally applicable to calibration. TANSTAAFL comes into play at each calibration decision: There are no effects without side effects. A setting that improves one location must affect others. Perhaps it’s a big upside/small downside, vice versa or a stalemate. It’s very tempting to delude ourselves that a single-position solution has a global benefit. TANSTAAFL underscores the need to measure in multiple locations to monitor changes. Remember: If something seems too good to be true, it probably is. The solution for one area may simply be transporting the problem to a new location, which sounds like a break-even proposition but not necessarily. Taking out the garbage does not eliminate it, but moves it to a far preferable location. Solutions for highly populated areas may outweigh the minority area side effects. For example, the high ripple variance near a spatial crossover can be placed down the length of an aisle. We move the problem to the ushers instead of paying customers. TANSTAAFL keeps us on the lookout for side effects. Acoustic triage helps us decide what to do about them.

14.2 Measurement Access 14.2.1 Access points: console/processor/mic The signal flows serially through three distinct sections: source, signal processing and the speaker system in the room. Signal is created at the source, the processing manages it and the speaker system delivers it. We need an access point at the output of each section.

FIGURE 14.2 Flow block of the electronic and acoustic measurement access points (console, processor and microphone) and the three transfer function responses (room/ processor/result)

Access points for measurement
■ Console output: the art/science reference standard. Connection point is post-matrix and pre-speaker-system processor. Parallel feeds are sent into the analyzer and processor.
■ Processor output: dual-use test point, serving as processor output and speaker system reference signal. Connection point is post-processor and pre-crossover. Reference signals must be full range. Parallel feeds are sent into the analyzer and speaker system.
■ Microphone: the surrogate ear at the end of the measurement chain. Used to monitor the speaker in the room. Can be compared to console output or processor output.

14.2.2 Three transfer functions: room/processor/result The three access points yield three distinct two-point transfer function results. Their roles in verification and calibration are outlined in Fig. 14.3.

The Three Transfer Functions
1. Room/speaker: processor output (speaker system in) vs. mic (speaker system out).
2. Processor (EQ): source (processor in) vs. processor output.
3. Result: source vs. mic (speaker system out).
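The relationship between the three transfer functions can be sketched numerically. The signals below are synthetic stand-ins (noise, gain and delay), and the single-frame ratio estimate is illustrative only; a real analyzer uses averaged cross-spectra, windowing and coherence weighting.

```python
import numpy as np

# Synthetic stand-ins for the three access points: console out, processor
# out (gain + latency) and mic (further delay and level loss).
rng = np.random.default_rng(0)
console = rng.standard_normal(4096)
processor = 0.5 * np.roll(console, 8)
mic = 0.25 * np.roll(processor, 32)

def xfr(x, y):
    """Single-frame transfer function Y/X (no averaging, illustrative)."""
    return np.fft.rfft(y) / np.fft.rfft(x)

room = xfr(processor, mic)    # 1. room/speaker: processor out vs. mic
eq = xfr(console, processor)  # 2. processor: console out vs. processor out
result = xfr(console, mic)    # 3. result: console out vs. mic

# The result response is the product of the two stages in the chain
print(np.allclose(result, eq * room))  # True
```

This is why the three access points suffice: any one transfer function can be cross-checked against the product of the other two.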

14.2.3 Alternative access options The best-case scenario is direct physical access to the processor input and output connections. Analog transmission allows a parallel split but not so in digital. Next best is a substitute test point that delivers a copy of the desired signal, such as an unused output channel, network port or other surrogate. Naturally we want this to track as closely as possible to the signal flowing into the speaker system.

FIGURE 14.3 Flow block of measurement access points of the three transfer functions and their roles in the equalization process

FIGURE 14.4 Measurement access points for various system configurations. Access to processor input and processor output measurement points are indicated by “Y” (parallel analog), “2” duplicate AES output/network port or “A” dedicated pre-crossover access point.

14.3 Microphone Placement Let’s begin the hunt for the perfect mic placement. It will be harder to find than a unicorn (at least we know what a unicorn looks like). What would a perfect mic placement even look like? It would be one that accurately represents the system response over the whole room. But one position can only speak for all when the sound is the same everywhere (so every position is perfect). Even the sound of a perfect speaker in a perfect room differs at almost every location (so any mic position is imperfect). Maybe it doesn’t matter. It does, because the differences over location aren’t random. They follow patterns. Mic positions sort out the patterns and connect them together. Each mic position has specific roles to play with purposeful (not random) placement. There are best locations for equalization, best for speaker positioning and best for delay setting, but they are not the same. The approach proposed here uses individual responses with clearly predefined context. Instead of seeking to find a common average response, we seek out the differences between the key locations that characterize the expected system behavior. The mic positions are at the critical milestones of the standard variance and summation progressions. We can interpolate the response between the milestones based on the known progressions.

14.3.1 Placement classification
There are six classifications of calibration mic positions (Figure 14.5). These positions have specific locations and roles in the calibration procedures.
■ ONAX: located "on axis" to the speaker. ONAX is used for equalization, level setting, architectural modification and speaker position. ONAX is found at the point of maximum isolation from neighboring speaker elements. The mic will be most literally "on axis" to the element when the speaker has a symmetrical orientation to the space. When asymmetrically oriented, the ONAX mic is located at the mid-point between the coverage edges, rather than the speaker's on-axis focus point. ONAX locations for arrays are on a per-element basis. Spatial averaging techniques can be used for multiple positions inside the general ONAX area, or for multiple ONAX locations in a symmetrical array. In order to be classified as ONAX, a mic position needs significant isolation between its target and other elements in the array.
■ OFFAX: located at the intended horizontal coverage edge and/or "off axis" to the speaker. The position is defined by the listening space shape (e.g. the last seat in the row), not the speaker. This may or may not be at the actual edge of the speaker's coverage. OFFAX positions are analyzed in relation to ONAX data. Our OFFAX goal is minimal variation from the ONAX response.

If phase offset exceeds 120° then combined EQ should not be used.
■ Best-case MV scenario: low overlap, low range ratio and low phase offset.
■ Worst-case MV scenario: high overlap, high range ratio and high phase offset.
■ Minimum benefit: combined response at ONAX A matches A solo (target curve).
■ Maximum benefit: combined responses at ONAX A and ONAX B match the solo reference.

Let's start with one extreme: no isolation. If the speaker is 100% omnidirectional then it's 0 dB down at all angles. Splay angle doesn't matter if the two patterns are circles. There is no isolation, so the closer you are, the louder it will be.
It will be louder at the B speaker in direct scaling with the range ratio. Filters on either speaker will bring down the levels at both (by the same amount). The only thing to be done is to restore the response at ONAX A to the target curve (and let B go above that). See Fig. 14.21. Let's go to an opposite extreme: high isolation (12 dB). The response can be restored to the original target curve in both locations at all range ratios.

Outcomes and actions:
If OFFAX1 > OFFAX2 then aim speaker more toward OFFAX2
If OFFAX1 < OFFAX2 then aim speaker more toward OFFAX1
PASS/FAIL: Fill speaker required if OFFAX1-ONAX-OFFAX2 variance >6 dB

FIGURE 14.31 Single speaker aim (horizontal)

FIGURE 14.32 Left/right speaker aim (horizontal)

Procedure 14.2: Aiming the left (or right) mains (horizontal) Goal: Left main is aimed for minimum variance in the horizontal plane (same for right). The procedure is similar to procedure 14.1 (aiming a single speaker) but the room center is considered a virtual wall (Fig. 14.32). Procedure 14.2: Aim the left main speaker (horizontal)

Speaker elements: left solo
Mic #1 (OFFAX L): last seat on the left side (mid-point depth)
Mic #2 (ONAX L): middle seat between center and outermost left (mid-point depth)
Mic #3 (XLR): room center (mid-point depth). XOVR for left/right
Procedure:
1. Define L/R mains coverage depth (same method as 14.1) and place mics.
2. @ONAX L: solo EQ (not required, but helps finding the off-axis edge).
3. Compare the ONAX L response to OFFAX L and XLR.
4. Aim speaker to minimize variance between ONAX L, OFFAX L and XLR.
Outcomes and actions:
If OFFAX L > XLR then aim speaker toward XLR (inward)
If OFFAX L < XLR then aim speaker toward OFFAX L (outward)
PASS/FAIL: Fill speaker required if OFFAX L-ONAX-XLR variance >6 dB

Procedure 14.3: Aiming a solo element (vertical) Goal: Solo speaker vertically aimed for minimum variance (e.g. solo main, surround) (Fig. 14.33). Procedure 14.3: Aim a single speaker (vertical)

Speaker elements: a solo
Mic #1 (VTOP): uppermost coverage area (on axis horizontally)
Mic #2 (ONAX A): mid-point depth (on axis horizontally)
Mic #3 (VBOT): bottom of coverage (on axis horizontally)
Procedure:
1. Define the intended vertical coverage limits and place mics (e.g. from the third-row frontfill XOVR to the twenty-third-row overbalcony delay XOVR).
2. @ONAX: solo EQ (optional, but eases evaluation of other responses).
3. Compare the ONAX response to VTOP and VBOT.
4. Aim speaker to minimize variance between ONAX, VTOP and VBOT.
Outcomes and actions:
If VTOP > VBOT then aim speaker down
If VTOP < VBOT then aim speaker up
PASS/FAIL: Fill speaker required if VTOP-ONAX-VBOT variance >6 dB

FIGURE 14.33 Single speaker aim (vertical)

Procedure 14.4: Aiming the top element (A) of an AB array (vertical) Goal: Upper speaker of a coupled point source array is vertically aimed for minimum variance in the upper portion and preparation for connection to the B speaker below it (Fig. 14.34). Procedure 14.4: Aim the top element of an AB array (vertical)

Speaker elements: speaker A of array AB, or ABC, etc.
Mic #1 (VTOP): uppermost coverage area (on axis horizontally)
Mic #2 (ONAX A): mid-point depth (on axis horizontally)
Mic #3 (XAB): bottom of A coverage, future connection point to B
Procedure:
1. Define the uppermost vertical coverage limit and place mics.
2. @ONAX A: solo EQ (optional, but eases evaluation of other responses).
3. Compare the ONAX response to VTOP.
4. Aim speaker to minimize variance between ONAX and VTOP.
5. Move XAB mic until -6 dB point is found. Will be used for aiming B.
Outcomes and actions:
If VTOP > ONAX A then aim speaker down
If VTOP = ONAX A then minimum variance has been achieved
PASS/FAIL: Delay fill speaker required in rear if VTOP-ONAX variance >6 dB
PASS/FAIL: If ripple variance from above is too strong then maximize ONAX A-VTOP variance (6 dB) by aiming speaker down

FIGURE 14.34 Compensated unity splay (AB vertical)

Procedure 14.5: Find the unity splay angle (AA symmetric) Goal: Find the unity splay angle for minimum variance of a symmetric pair, trio, etc. Array elements cover the same depth and are driven at the same level (an A1 + A2 combination). The procedure assumes the horizontal plane but works for either. The goal is a radial minimum variation line between ONAX A1, XAA and ONAX A2. It’s not always practical to solo the individual elements because they may be wired in parallel etc. In such cases the responses are only seen in combined form. Both are covered here (Fig. 14.35). Procedure 14.5: Find unity splay angle (AA symmetric)

Speaker elements: A1 and A2 (matched) speakers
Mic #1 (ONAX A1): on axis to A1 at mid-point depth
Mic #2 (XAA): radial mid-point between A1 and A2 (equidistant to ONAX A1)
Mic #3 (ONAX A2): on axis to A2 at mid-point depth (can be a SYM verification)
Procedure (if individual elements can be muted):
1. Drive A1 solo.
2. @ONAX A1: A1 solo EQ (optional, but eases evaluation).
3. @XAA: Move mic until A1 solo @XAA is -6 dB from A1 solo @ONAX A1.
4. @XAA: Mute A1 and drive A2 solo.
5. @XAA: Adjust splay until A2 solo = A1 solo @XAA (-6 dB re. ONAX A1).
Procedure (if individual muting is not possible):
1. All speakers driven. Place mics as described above.
2. @ONAX A1: solo EQ A1A2 (speakers are combined but 1 channel of EQ).
3. @XAA: Compare A1A2 @XAA to A1A2 @ONAX A1.
4. @XAA: Adjust splay until A1A2 @XAA = A1A2 @ONAX A1.
Note: Gross adjustments may require updating mic positions. Keep XAA, ONAX A1 and A2 on their centers to maintain accuracy.
Outcomes and actions:
If XAA > ONAX A then increase splay
If XAA < ONAX A then decrease splay
PASS/FAIL: If splay is too narrow but overall array is too wide and hits reflective walls, then compromise is required (a ripple variance tradeoff). Reduce splay until the array overlap is proportional to room/speaker overlap
PASS/FAIL: If splay is unity but overall array is too narrow (OFFAX A is down more than 6 dB) then fills are required
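The -6 dB meeting point is what makes the splay "unity": two matched, phase-aligned arrivals at -6 dB each sum back to approximately the 0 dB ONAX level. A quick check of that summation math:

```python
import math

# Coherent (in-phase) pressure summation of arrivals given in dB re. ONAX.
def db_sum_coherent(levels_db):
    total = sum(10 ** (db / 20.0) for db in levels_db)
    return 20.0 * math.log10(total)

# Two -6 dB arrivals at the spatial crossover restore roughly unity (0 dB)
print(round(db_sum_coherent([-6.0, -6.0]), 2))  # 0.02
```

The tiny 0.02 dB remainder exists because exact unity would require -6.02 dB points; for practical purposes the crossover matches the ONAX level, provided the arrivals are phase aligned.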

FIGURE 14.35 Unity splay (AA symmetric)

Procedure 14.6: Find the unity splay angle (AB asymmetric) Goal: Find the compensated unity splay angle for an asymmetric pair, trio, etc. Array elements cover different depths and are driven at different levels (A + B combination). The procedure assumes the horizontal plane but works for either. The splay angle goal is a minimum variation line between ONAX A, XAB and ONAX B (Fig. 14.36). Procedure 14.6: Find unity splay angle (AB asymmetric)

Speaker elements: A and B (A is the longer throw system)
Processing: channels A and B for EQ, level and delay
Mic #1 (ONAX A): on axis to A at mid-point depth
Mic #2 (XAB): mid-point between A and B (exact location TBD)
Mic #3 (ONAX B): on axis to B at mid-point depth of its coverage
Procedure (if individual elements can be muted):
1. Prerequisite: A and B are level set and EQ'd to match in their zones.
2. @ONAX A: Measure A solo for reference.
3. @XAB: Compare A solo here to A solo @ONAX A.
4. @XAB: Move mic until A solo response = -6 dB.
5. @XAB: Mute speaker A and drive speaker B solo.
6. @XAB: Adjust splay until B = A solo response (both -6 dB re. ONAX A).
7. @XAB: Delay the B speaker (if needed) to phase align B to A.
Outcomes and actions:
PASS: If XAB = ONAX A then splay is optimized
FAIL: If XAB > ONAX A (or B) then increase splay. If less, decrease splay
PASS/FAIL: Ripple variance tradeoffs as described above in the symmetric version
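Step 7's phase alignment begins with matching first arrivals at XAB, which can be sketched by converting the path-length difference to time. The 343 m/s speed of sound (roughly 20°C) is an assumption, and real alignment is then refined by the phase traces rather than geometry alone.

```python
# First-arrival alignment at XAB: delay the closer B speaker by the
# path-length difference divided by the speed of sound (assumed 343 m/s).
SPEED_OF_SOUND_M_S = 343.0

def align_delay_ms(dist_a_m: float, dist_b_m: float) -> float:
    """Delay to add to the closer B speaker so both arrive together."""
    return (dist_a_m - dist_b_m) / SPEED_OF_SOUND_M_S * 1000.0

# A is 12 m and B is 5 m from the XAB mic: delay B by about 20.4 ms
print(round(align_delay_ms(12.0, 5.0), 1))  # 20.4
```

A useful rule of thumb embedded here: roughly 3 ms per meter of path difference.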

FIGURE 14.36 Compensated unity splay (AB horizontal)

Procedure 14.7: Unity gain uncoupled array spacing (AA symmetric) Goal: Find the unity spacing for minimum variance of a symmetric pair, trio, etc. Array elements cover the same depth and are driven at the same level (an A1 + A2 combination). The procedure assumes the horizontal plane but works for either. The goal is a linear minimum variation line between ONAX A1, XAA and ONAX A2. This procedure is functionally analogous to the symmetric unity splay (procedure 14.5), with linear spacing providing the isolation instead of splay angle. Differences are shown here for brevity (Fig. 14.37). Procedure 14.7: Find unity spacing (AA symmetric)

Speaker elements: A1 and A2 (matched) speakers
Mic #1 (ONAX A1): on axis to A1 at intended unity coverage line
Mic #2 (XAA): linear mid-point between A1 and A2 (unity line depth)
Mic #3 (ONAX A2): on axis to A2 at unity line depth (can be a SYM verification)
Procedure (if individual elements can be muted):
1. Follow steps 1-5 of procedure 14.5 (unity splay AA, solo muting).
2. @XAA: Adjust spacing until A2 solo = A1 solo @XAA (-6 dB re. ONAX A1).
Procedure (if individual muting is not possible):
1. Follow steps 1-3 of procedure 14.5 (unity splay AA, no solo muting).
2. @XAA: Adjust spacing until A1 + A2 @XAA = A1 + A2 @ONAX A1.
Note: Gross adjustments require updated mic positions. Keep XAA, ONAX A1 and ONAX A2 on their centers for best accuracy.
Outcomes and actions:
PASS: If XAA = ONAX A then spacing is optimized.
FAIL: If XAA > ONAX A then increase spacing. If < ONAX A then decrease spacing.
PASS/FAIL: If spacing is unity but the overall array is too narrow (OFFAX A is down more than 6 dB) then more elements are required.

FIGURE 14.37 Unity spacing for the symmetric uncoupled line source (AA)

Procedure 14.8: Find the compensated unity spacing (AB asymmetric) Goal: Find the compensated unity spacing for an asymmetric pair, trio, etc. The array elements cover different depths and are driven at different levels (an A + B combination). The procedure works for either plane. The spacing goal is a custom minimum-variance line between ONAX A, XAB and ONAX B (Fig. 14.38). Procedure 14.8: Find unity spacing (AB asymmetric)

Speaker elements: A and B (A is the longer throw system)
Processing: channels A and B for EQ, level and delay
Mic #1 (ONAX A): on axis to A at mid-point depth
Mic #2 (XAB): mid-point between A and B (exact location TBD)
Mic #3 (ONAX B): on axis to B at mid-point depth
Procedure:
1. Follow steps 1-3 of procedure 14.6 (compensated unity splay AB).
2. @XAB: Adjust spacing until B solo = A solo @XAB (-6 dB re. ONAX A).
3. @XAB: Delay the B speaker (if needed) to phase align B to A.
Outcomes and actions:
PASS: If XAB = ONAX A then spacing is optimized.
FAIL: If XAB > ONAX A then increase spacing. If < ONAX A then decrease spacing.

FIGURE 14.38 Compensated unity spacing for the asymmetric uncoupled line source (AB)

Procedure 14.9: Unity spectral crossover: main + sub (A + S) Goal: Join main element(s) and subwoofer(s). It is assumed that the elements would overlap in frequency if left unfiltered. The responses are filtered to create a unity gain combination in the crossover region. Mic location is usually ONAX to the mains. Ground plane placement helps clarify the phase response without compromising the procedure (Fig. 14.39). Procedure 14.9: Unity spectral crossover (A + S)

Speaker elements: Main (A, HF) + Subs (S, LF)
Signal processing: LF and HF channels for LPF, HPF, level and delay
Mic #1: ONAX, mid-point depth, on axis horizontally to mains. Can be ground plane.
Procedure:
1. Prerequisite: Solo EQ is complete on both LF and main systems. If the crossover frequency is already decided then go to step 5.
2. Drive main (A) solo. Determine the approximate LF cutoff frequency (where the response starts to slope downward).
3. Drive subwoofer (S) solo. Determine the approximate HF cutoff frequency.
4. Select the crossover frequency (FX), which must be inside the overlapped frequency range, preferably near the mid-point (e.g. if overlapping from 60-120 Hz then the mid-point frequency would be around 90 Hz).
5. Adjust levels to match the individual systems in the crossover frequency range.
6. Drive main (A) solo and adjust the processor HPF until the FX response is -6 dB.
7. Drive subs (S) solo and adjust the processor LPF until the FX response is -6 dB.
8. Phase align the crossover. Determine which arrives first (the one with less downward phase slope) by comparing the solo phase responses.
9. Delay the leading system until phase matched in the crossover range.
10. Combine the systems and verify coupling.
Outcomes and actions:
PASS: Crossover is optimized when the combination adds to unity.
FAIL: If the combination rises above unity then the corner frequencies are too close together. Lower the LPF corner frequency and/or raise the HPF corner, then re-align phase. If below unity then vice versa.

FIGURE 14.39 Unity spectral crossover (LF + Mains @100 Hz)
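Steps 6 and 7 place each element at -6 dB at FX so the phase-aligned sum lands at unity. Linkwitz-Riley alignments are a common way to get there precisely because they sit at -6 dB at their corner frequency; a sketch using scipy (the 4th-order topology and the 90 Hz FX are illustrative assumptions, not prescribed by the procedure):

```python
import numpy as np
from scipy import signal

fx = 90.0  # chosen crossover frequency (Hz), from the overlap mid-point

# 4th-order Linkwitz-Riley low-pass = two cascaded 2nd-order Butterworth
# sections, which lands exactly at -6 dB at the corner frequency.
b_lp, a_lp = signal.butter(2, 2 * np.pi * fx, btype="low", analog=True)
w, h = signal.freqs(b_lp, a_lp, worN=[2 * np.pi * fx])
lr4_db = 2 * 20 * np.log10(abs(h[0]))  # cascading doubles the dB value
print(round(float(lr4_db), 2))  # -6.02 dB at FX, per the unity target
```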

Procedure 14.10: Overlap spectral crossover: main + sub (A + S) Goal: Join main element(s) and subwoofer(s) with overlap in the crossover range. Most aspects of the previous main/sub combination apply (Fig. 14.40). Procedure 14.10: Overlap spectral crossover (A + S)

Procedure:
1. Perform steps 1-5 of the unity crossover procedure (14.9).
2. Phase align the crossover. Determine which arrives first (the one with less downward phase slope) by comparing the solo phase responses.
3. Combine the systems and observe the approximately 6 dB peak at FX. The bandwidth depends on the size of the overlap range.
4. Add combined EQ to both channels to flatten the peak (6 dB cut at FX).
Outcomes and actions:
PASS: Crossover is optimized when the equalized combination adds to unity.
FAIL: If the combination has peaks off center from FX then the EQ filter bandwidth is too narrow. If the combination has dips, then the BW is too wide.

FIGURE 14.40 Overlap spectral crossover (LF + mains @100 Hz)

Procedure 14.11: Relative-level setting of isolated systems (A vs. B) Goal: Set the unity level for the lesser system (B) of an asymmetric pair. The B element covers a different depth and/or may be a different model or quantity, and therefore may need drive-level adjustment to scale its level with the A system. The procedure works for either plane. The level-setting goal is a unity link between ONAX A and ONAX B (Fig. 14.41). Procedure 14.11: Level setting of isolated systems

Speaker elements: A and B (B is the minority system)
Processing: channels A and B for level
Mic #1 (ONAX A): on axis to A at mid-point depth
Mic #2 (ONAX B): on axis to B at mid-point depth
Procedure:
1. @ONAX A: solo EQ speaker A.
2. @ONAX B: solo EQ speaker B.
3. Compare the B solo response to the A solo response.
4. @ONAX B: adjust level until B solo @ONAX B = A solo @ONAX A.
Outcomes and actions:
PASS: Level is optimized when ONAX A (A solo) = ONAX B (B solo).
FAIL: If processor levels appear wrongly scaled for the distances/devices then check for wiring errors (unbalanced lines) or physical issues (blockage, poor aiming, etc.).

FIGURE 14.41 Level setting of isolated systems (AB)
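The expected scale of the drive-level trim can be sanity-checked against inverse-square distance loss. A sketch with hypothetical throw distances (the 30 m and 10 m values are illustrative, not from the procedure):

```python
import math

dist_a = 30.0  # throw to ONAX A in meters (hypothetical)
dist_b = 10.0  # throw to ONAX B in meters (hypothetical)

# Free-field inverse-square law: level falls by 20*log10(d2/d1).
# The shorter-throw B system must be turned DOWN by this amount
# so that B solo @ONAX B matches A solo @ONAX A.
trim_b = -20 * math.log10(dist_a / dist_b)
print(round(trim_b, 1))  # -9.5 dB for a 3:1 distance ratio
```

A processor trim wildly different from this estimate is the "wrongly scaled" FAIL condition above, pointing toward wiring or aim problems rather than a level decision.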

Procedure 14.12: Relative-level setting of non-isolated systems (A vs. B) Goal: Set the unity level for a fill system (B) that supplements coverage of a dominant system. The goal is that the combined level of A + B in the fill area (ONAX B) will equal the solo response of speaker A at ONAX A. ONAX B functions also as XAB because of the high overlap. This contrasts with the isolated approach (procedure 14.11 above) where A and B each reached unity as soloists at their ONAX locations. The combined response of A + B has very little effect in the ONAX A area and a very large effect in the ONAX B/XAB area (e.g. we expect to hear the mains under the balcony but not the underbalcony speakers out in the center of the main system’s coverage). This procedure works for either plane (Fig. 14.42).

FIGURE 14.42 Level setting for non-isolated (overlapping) systems

Procedure 14.12: Level setting of overlapped systems

Speaker elements: A and B (B is the fill system)
Processing: channel B for level and delay (A level is already set)
Mic #1 (ONAX A): on axis to A at mid-point depth (for reference only)
Mic #2 (ONAX B): on axis to B at mid-point depth (also functions as XAB)
Procedure:
1. Prerequisite 1: The main system (A) has been previously level set and EQ'd.
2. Prerequisite 2: If the fill system is an uncoupled array, the element spacing/splay has already been set.
3. @ONAX B: Measure B solo, EQ. Set level to match the A solo reference.
4. @ONAX B: Compare A solo here to A solo @ONAX A. This tells us how much fill we need (e.g. -6 dB from A needs -6 dB from B to reach 0 dB, -3 dB from A needs -10 dB from B, etc.).
5. @ONAX B: Set delay on speaker B to synchronize A and B (procedure 14.13).
6. @ONAX B: Combine A + B.
7. @ONAX B: Adjust B level until A + B @ONAX B = A solo @ONAX A.
Outcomes and actions:
PASS: Level is optimized when ONAX A (A solo) = ONAX B (A + B combined).
PASS/FAIL: The coherence response @ONAX B (A + B) should be a large improvement over A solo there, and should be comparable to the coherence @ONAX A.
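The fill-level arithmetic in step 4 follows from coherent addition: B must supply whatever linear amplitude remains after A's contribution to reach unity. A sketch of that calculation (illustrative only):

```python
import math

def fill_level_needed(a_db):
    """Level (dB re. the target) that speaker B must contribute so that
    A + B sums coherently back to 0 dB at ONAX B."""
    remainder = 1.0 - 10 ** (a_db / 20)  # linear amplitude still missing
    return 20 * math.log10(remainder)

# Matches the examples in step 4:
print(round(fill_level_needed(-6.0), 1))  # -6.0 dB: equal partners
print(round(fill_level_needed(-3.0), 1))  # -10.7 dB: A is already dominant
```

Note how quickly the required fill level drops as A's contribution rises, which is why a dominant main barely changes in its own coverage when the fill is added.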

Procedure 14.13: Delay setting for the spatial crossover (A + B) Goal: Synchronize arrivals of two system elements covering approximately the same frequency range. Applicable on multiple levels, from individual speakers to large arrays. Intended to create the most efficient and least detectable transition between the main (A) and fill (B) systems at their crossover (Fig. 14.43). Procedure 14.13: Delay setting of spatial crossovers

Speaker elements: A and B (A is the longer throw system)
Processing: channels A and B for delay
Mic #1 (XAB): mid-point between A and B (exact location TBD)
Procedure:
1. Prerequisite: A and B are level matched and EQ'd in their respective zones.
2. @XAB: Measure the A solo impulse response. The peak shows the arrival of the bulk of the HF range. This is the "0 ms" relative target response.
3. @XAB: Measure B solo. Compare the B and A arrival times here.
4. @XAB: Adjust the B delay to match the B solo impulse to A solo.
5. @XAB: Measure the A and B solo frequency responses.
6. @XAB: Combine A + B and compare with the A and B solo responses. The combined response should increase +6 dB over the soloists.
7. @XAB: Fine-adjust the B speaker delay (if needed) to phase align B to A.
Outcomes and actions:
PASS: Delay is optimized when the arrivals match and coupling extends full range.
PASS/FAIL: If the phase responses do not match over the full range (e.g. different speaker models) then a single delay time cannot sync all frequencies.

FIGURE 14.43 Delay setting for spatial crossover alignment
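A starting value for the delay in step 4 can be estimated from the path-length difference and the speed of sound before fine-tuning on the analyzer. A sketch with hypothetical distances:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def fill_delay_ms(dist_main_m, dist_fill_m):
    """Delay (ms) to add to the fill (B) so its arrival at XAB
    matches the longer-throw main (A)."""
    return (dist_main_m - dist_fill_m) / SPEED_OF_SOUND * 1000

# Hypothetical geometry: main is 34.3 m from XAB, fill is 6.86 m away.
print(round(fill_delay_ms(34.3, 6.86), 1))  # 80.0 ms
```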

Procedure 14.14: Phase align a spectral crossover (HF + LF) Goal: Synchronize a particular frequency range shared by two elements. There are two basic versions that differ in element spacing and mic placement.
The "coupled" spectral crossover combines elements within the same enclosure, or in close proximity. This can be repeated for three-way (or four-way) systems. The goal is the most efficient and least detectable transition between the HF and LF elements at their FX. The mic position should be near field (1-2 meter depth typical). The mic is located at the X point: the mid-point between the drivers in the plane in which the drivers are offset, and on axis in the shared plane. Minimize reflections, as they complicate the phase alignment process. If a near-field position is not possible, the closest ONAX position is the best alternative.
The uncoupled version synchronizes a frequency range shared by two elements in different enclosures. The primary application for modern systems is subwoofers separated from the mains. A common example from ancient times is the flying junkyard of horns separated from woofer(s). Our goal is to centralize the synchronization in the room and thereby distribute the errors most evenly over the space. Therefore the mic is placed in the far field (mid-point depth is typical). We seek to distribute the errors as evenly as possible because an uncoupled crossover cannot remain synchronized over the whole room. It helps to minimize reflections in the measurement as they complicate the phase alignment process. Ground plane placement can be a good option for subwoofer-to-mains alignment (Fig. 14.44).
Procedure 14.14: Phase align a coupled crossover

Speaker elements: LF and HF (or MF and HF, etc.)
Processing: LF and HF channels for delay
Mic #1 (ONAX): coupled version: mid-point between HF and LF (near field)
Mic #1 (ONAX): uncoupled version: mid-point depth of room
Procedure:
1. Measure the HF solo phase response. Set the analyzer internal delay for a clear view of the phase slope in the acoustic crossover range (FX). This is the "0°" relative target response. Do not reset the internal delay during this procedure.
2. Measure LF solo. Compare the LF solo phase to the HF solo phase.
3. Determine which channel arrives earlier at FX (the one with less downward phase slope).
4. Delay the earlier channel (on the processor) to phase match LF and HF solo.
5. Combine LF + HF and compare with the solo responses. The combined response should couple in the crossover and add up to 6 dB over the soloists.
6. Coupled: Move the mic in the offset plane to observe the response over angle.
7. Uncoupled: Move the mic around the room to observe the response over space.
Outcomes and actions:

PASS: Delay is optimized when the arrivals match and coupling extends through the crossover.
FAIL (coupled): If the phase responses cannot stay within the coupling zone (>3 ms apart) then reduce spacing and restart.
PASS: Delay is optimized when the arrivals match and coupling extends full range.
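The "which arrives earlier" call reads directly off the phase traces: the element whose phase falls faster with frequency has more group delay and arrives later. A sketch of the slope-to-time conversion (the frequency span and phase values are hypothetical examples, not measured data):

```python
def group_delay_ms(phase_deg_1, phase_deg_2, f1_hz, f2_hz):
    """Estimate arrival time from the phase slope between two frequencies.
    A steeper downward slope means more group delay (a later arrival)."""
    return -(phase_deg_2 - phase_deg_1) / (360.0 * (f2_hz - f1_hz)) * 1000

# Example: LF phase drops 90 degrees between 80 and 120 Hz around FX,
# while HF drops only 36 degrees over the same span.
lf = group_delay_ms(0.0, -90.0, 80.0, 120.0)
hf = group_delay_ms(0.0, -36.0, 80.0, 120.0)
print(round(lf, 2), round(hf, 2))  # 6.25 ms vs 2.5 ms: HF arrives earlier
```

Here the HF channel (shallower slope) would be the one to delay in step 4, by roughly the difference between the two estimates.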

Procedure 14.20: The staggered four-element end-fire array The tuning of the staggered end-fire array is step-by-step identical to the four-element standard. The difference is mic placement, which moves off the central axis to a location approximately 45° off the centerline of the array. The mic needs to be in the far field (10 m from the array is sufficient).

Procedure 14.21: Ongoing equalization of a combined system (A + B) Goal: Compensate for the frequency response effects of audience presence, room acoustics and weather during live performance to match the pre-show response. This is third-stage equalization: compensating for changes in a live performance. The changes occur as layers: empty stage vs. band on stage, empty house vs. full house, temperature and humidity variations. In practice these may be presented in sequence or all at the same time. The sound check allows us to isolate the first change layer. It is difficult to isolate the other layers because audience presence, temperature and humidity changes happen together. Mic location is critical to maintaining continuity between the original calibration data and the ongoing equalization. In the ideal world we would keep mics in the same spots as when the room was empty, but this is extremely rare. The fallback positions require some key features to be of substantial help during the show. Each mic position must be clearly within the coverage of a subsystem. We can’t establish cause or make clear decisions from ambiguous positions. If we see a change in the area between A and B, which system do we modify? Without an independent look at A and B we are guessing. Procedure 14.21: Ongoing equalization of a combined system (A + B)

Speaker elements: all (all speakers on at all times)
Processing: all
Mic #1, 2, 3 ... (SHOW): as many as possible, as close to ONAX locations as possible
Procedure:
1. Prerequisite: a fully calibrated system.
2. Store the full system combined response with mics in the original ONAX positions.
3. Move mics to the best available SHOW positions and store the full system response.
4. Compare data from the pre-show ONAX positions with the closest comparable SHOW positions dedicated to the same subsystems. If the divergence is too large then consider repositioning or deleting the particular SHOW mic.
5. Store the frequency response of the final SHOW positions with all speakers on. These become the mono reference (REF) traces.
6. During the soundcheck compare the live responses to the REF traces. Stable changes may be the result of busing changes (such as stereo or matrix sends). Unstable changes may be the result of leakage from the stage or busing.
7. Store the most stable final REF traces during the soundcheck.
8. Showtime: Compare the live SHOW traces to the REF traces.
Outcomes and actions:
PASS: Equalization is optimized if it maintains minimum variance to the target curve.
PASS/FAIL: Only take action if the readings show plausible, physically possible changes due to new conditions. If the changes are out of scale then consider all possibilities for error before taking action.

14.8 Finishing the Process How can we end an iterative process that approaches perfection but can never reach it? If the fully combined system is not acceptable, we can disassemble the pieces, adjust them and recombine them again. Alternative approaches might have equal merit. We can first try dividing the space and combining the systems in one way, and then try the other. Nothing ends a discussion better than “Let’s test both theories.” Here is how it ends: We run out of time. Seriously. As long as there is time, there is more that can be learned. Anything we learn today is applicable in advance for tomorrow.

Listening Our ears are trained professional audio tools but unlike the FFT analyzer they are neither stable nor objective. Each ear (and set of ears) has inherent physiological differences. They are not stable physically over our lifespan or over the short term. We need time to adjust to the local atmospheric pressure when we fly into town for a gig. Our dynamic thresholds are temporarily shifted when exposed to high stage levels. Long-term high-level exposure can cause permanent shifts in both dynamic and spectral responses. Those lucky enough to grow old lose dynamic and spectral range in the normal course of aging. These factors and others make the ear a subjective partner in the optimization process. The ears connect to our brain, which contains our unique personal reference library of accrued aural experience. Our expectations about how a violin should sound come from the thousands of references to violins in our aural library. We accrue a map of the expected response of a single violin, sets of violins, and details such as how they sound when plucked or bowed. We use this ear training to perform moment-to-moment comparisons, internal “transfer functions” as it were, against our memory maps and make conclusions. The question then becomes how closely it matches the expected response. The ear/brain system also brings context into the equation. We evaluate a sound with respect to the surroundings. Is this normal for this distance? Is the reverberation level in scale with this space’s size and surfaces? Our trained ear/brain system can make contextually derived conclusions such as “acceptable for a basketball arena’s back rows.” Our personal “reference” program material is the ultimate sonic memory map. We have heard it so many times in so many different places and systems that it’s permanently burned into our brains. The newly tuned system must satisfy our ears against this standard. Ear/brain training is a lifelong endeavor.
The complex audio analyzer can be a great aid to this process through ear/eye training. Linking what we see on the screen to our perception of a known source, such as our reference tracks, closes the loop on the learning process. If something looks really good on the analyzer (or really bad, strange or unexpected), go there, look and listen. Correlate what is seen on the analyzer, seen in the room and heard. Conversely, if we hear something “strange” somewhere, we can move a mic there and see what “strange” looks like. This helps us learn to read traces in context and identify transitional trends in the room. At the end of the optimization process I have often made adjustments to the sound system based on the walkthrough while listening to reference material. In most cases the adjustments are minor, but in all cases I try to find what I missed in the data interpretation. The answer is there, somewhere in the data. This becomes part of the learning process we carry forward to the next job.

14.9 Ongoing Optimization 14.9.1 Using program material as the source A powerful feature of dual-channel FFT analysis is the capability to perform transfer function measurements using unknown source material. We are “source independent” (the origin of the acronym SIM™), which means we can continue to analyze the response in an occupied hall during performance. The independence has its limits. The source must excite all frequencies, eventually. We’ll have to wait a long time to get data if the source has low spectral density. If the data is dense we can move almost as quickly as with pink noise. The result is an ongoing optimization process that provides continual unobtrusive monitoring of the system response. Once the sound system is in operation, the control is in the hands of the artistic sector: the mix engineer. Ongoing optimization allows objectivity to remain in place to aid the artistic process in a number of ways. The first is the detection of changes in the response. Restorative action can be taken in some cases, which minimizes the remixing required for the show in progress. When remedial options are limited it can be helpful to keep the mixer informed of differences so this can be considered in the mix. Much can change between the setup and showtime, and even over the course of a performance. Only a portion of those changes is detectable with our analyzer, and a portion of those, in turn, will be treatable with ongoing optimization.
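The source-independent measurement described here is a dual-channel transfer function: the cross-spectrum of the reference (console output) and measurement (mic) signals divided by the reference autospectrum, with coherence as the quality gauge. A minimal sketch using scipy (the simulated 5-sample delay and added noise are stand-ins for a real measurement path):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 48_000
x = rng.standard_normal(fs * 2)  # "console" reference: 2 s of noise-like program

# Simulated acoustic path: a small delay plus uncorrelated room noise
y = np.roll(x, 5) + 0.1 * rng.standard_normal(x.size)

# Transfer function (H1 estimator): cross-spectrum over input autospectrum.
# Because x appears in both terms, the program material itself cancels out --
# this is what makes the measurement "source independent".
f, pxy = signal.csd(x, y, fs=fs, nperseg=4096)
_, pxx = signal.welch(x, fs=fs, nperseg=4096)
h_mag_db = 20 * np.log10(np.abs(pxy / pxx))

# Coherence: how much of y is linearly caused by x (our quality "polygraph")
_, coh = signal.coherence(x, y, fs=fs, nperseg=4096)
print(round(float(np.median(h_mag_db)), 1), round(float(np.median(coh)), 2))
```

With a unity-gain simulated path the magnitude sits near 0 dB and coherence near 1; leakage or audience noise in a real hall shows up as coherence loss rather than a magnitude change.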

Changes From Setup to Showtime For a Single Program Channel ■ Dynamic room acoustics: absorption/reflection changes due to the audience presence. ■ Dynamic transmission and summation: changes due to temperature and humidity. ■ Leakage from stage emission sources: the band etc. ■ Leakage from stage transmission sources: stage monitors etc. ■ Re-entry summation: the speaker system returning into the stage mics. ■ Duplicate entry summation: source leakage into multiple stage mics.

14.9.2 Audience presence We know that adding live bodies to a lively concert hall can deaden the acoustics, but adding dead bodies to a dead hall has not proven to liven up the room. The audience absorption acoustical effects are not evenly distributed. Floor reflections undergo the largest change, and ceilings the least. The changes largely depend upon the seating area prior to occupancy. The most extreme case is the transition from a flat unfurnished hard floor to a densely packed standing audience (e.g. disco, rave, arena floor). The least effect will be found with cushioned seats. In any case the strongest local effects are modifications of the early reflections. This changes the local ripple variance, which may require an equalization adjustment. Reduction of the late reflections also affects the local ripple variance structures and signal/noise ratio. The frequency range of practical equalization shrinks downward as time offset rises, but the upper frequency ranges are likewise affected. Increased absorption corresponds to reduced fine-grain ripple and improved coherence as frequency rises. Reflections that arrive too late for our FFT time window, those seen as noise, are reduced in the room, creating the rise in signal/noise ratio (coherence). Audiences create noise of their own: screaming fans, singing congregations and the mandatory coughing during classical music all degrade our data quality. Generally speaking the louder the audience gets, the less it matters what our analyzer says. If they are screaming for the band we are probably OK. If they are screaming at us, then looking at the analyzer might not be the wisest course.

14.9.3 Temperature and humidity Temperature and humidity changes affect both the direct and reflected paths. Temperature rise changes the reflection structure as if the room has shrunk. Direct sound arrives faster to the listener, as do the reflections (see Fig. 4.65). A temperature change of 5.6°C rescales each path by 1%, which results in a different amount of time for each. The deck is thereby reshuffled for all path length-based ripple variance, which is driven by time offset as an absolute number, NOT a percentage. The timing relationship between mains and delay speakers changes with transmission speed (the larger the difference in paths, the greater the time offset with temperature). Speaker delay offset over temperature is more easily visualized than reflection changes, even though they are the same mechanism. A delayed system changes from synchronized to early (or late). A reflection changes from late to not as late (or later). Think of it this way: The main/underbalcony timing relationship will still change with temperature even if we forget to set the delay in the first place. Expect the analyzer to see the following response differences: a redistribution of ripple structure center frequencies, a change in the amplitude range of some portions of the ripple variance, modified coherence and delay system responses that need some fine-tuning. Humidity change acts like a moving HF filter. The changes scale with transmission distance to each local area. Longer throws have proportionally stronger air absorption effects, and so the changes are more severe over distance. A short throw speaker sees only minimal differences even with large relative humidity variance. This precludes the option of a global master filter to compensate for the complete system. Corrective measures must take the relative distance into account. Expect to see the following response differences on the analyzer: a change in the HF response.
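The 1% scaling per 5.6°C can be checked against the standard speed-of-sound approximation. A sketch with hypothetical main and underbalcony path lengths, showing how their relative timing drifts with temperature:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in air (m/s)."""
    return 331.4 + 0.607 * temp_c

main_to_seat = 40.0   # m, hypothetical main speaker path to an underbalcony seat
delay_to_seat = 5.0   # m, hypothetical underbalcony speaker path to that seat
path_diff = main_to_seat - delay_to_seat  # 35 m of path the delay must cover

offsets = []
for t in (15.0, 20.6):  # a 5.6 degree C rise, i.e. roughly a 1% change in c
    offset_ms = path_diff / speed_of_sound(t) * 1000
    offsets.append(offset_ms)
    print(t, round(offset_ms, 2))  # 15.0 -> 102.79 ms, 20.6 -> 101.77 ms
```

A delay time set at 15°C is now about 1 ms early at 20.6°C: an absolute shift, which is why the ripple deck gets reshuffled rather than uniformly scaled.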

14.9.4 Stage leakage Stage performers present a leakage path into the sound system coverage area (section 6.1.3) that changes on a moment-to-moment basis. Listeners may have difficulty telling whether they are hearing stage leakage or the sound system (a desirable outcome when sound image is of high importance). Alternatively, out-of-control band gear and stage monitors are the mix engineer’s worst nightmare. We can mute the mic in front of the Marshall but it is still our fault that the guitar is too loud (somewhere) and too soft (somewhere else). The mix engineer can’t mix what’s not in the system. Stage leakage seriously compromises the ability to get accurate measurement data for the same reasons. Our electronic reference is what’s in the mix. If it came from Planet Claire (and not from our system) we fail the correlation test and the response on our analyzer is no longer stable and repeatable. Leakage contaminates our data and reduces our reliability and treatment options. If the show begins and a peak appears in some frequency range, we are put on high alert. But it’s a false alarm if the peak is due to leakage from stage sources. An inverse filter won’t remove it if it was sent by something that doesn’t pass through the equalizer. To make matters worse, our polygraph detector (the coherence response) may be fooled by the leakage. The waveform contained in the leaked acoustical signal will be present in our electrical response from the console. How did it get there? It leaked into the microphones. Since the waveform is recognized, the coherence can still remain high, and therefore the peak has the appearance of a treatable response modification, except that it will not go away, no matter how much equalization we throw at it. There is no single means of detecting stage leakage. One is to try to equalize it. If it responds, we are fine. If not, it might be leakage. 
Fishing expeditions like this, however, are not appreciated by the mixer, and should be a measure of last resort. The first consideration is the obvious: What do our ears and eyes tell us? We are in a small club and the guitarist has four Marshall stacks. This is going to be a leakage issue. The next item to consider is plausibility. What mechanism could cause a 10 dB peak to arise between setup and showtime? If this was the result of the audience presence, then why do the peaks keep changing shape? Before we grab an equalizer knob or touch a delay we must consider how the change we see could be attributed to the changes in transmission and summation due to the audience presence and environmental conditions. Large-scale changes and song-to-song instability point toward the band. Less dramatic and more stable changes point to the sound system in the room. Those are the ones worth venturing after.

14.9.5 Stage microphone summation A well-tuned system can sound great in every seat when we listen to our reference track, and take on a completely different character when the band comes on stage. We just discussed stage source leakage into the house, but there is an even more insidious source of trouble: re-entry and duplicate entry mic summation (section 6.1.3). Leakage from stage sources, monitor speakers or main speakers back into the stage mics becomes part of the sound system’s source signal. Massive comb filtering can be introduced into the mixed signal by the stage mics. We can all hear it but our analyzer cannot see it. Why? Because our measurement reference point begins at the console output and the damage happens in (or even before) the mix console. This is best detected by logical deduction: The analyzer doesn’t see it but we can hear it. Therefore it must be upstream. Recall the ripple variance progression in the room. If we stay put, the ripple stays put. If we move, it moves. The opposite can be true when we have mic/mic (duplicate) or speaker/mic (re-entry) summation. The ripple progression upstream is governed by stage relationships. If stage sources move, the ripple moves. If we’re sitting still and yet the ripple is moving, the source of the combing is upstream of the sound system (unless you feel the wind blowing). The most effective means of isolating this effect is to keep good records. Store the data at a given location before the performance (or even the sound check). If the transfer function response remains stable while the sound is changing, there is strong evidence that the solution lies on the upstream side of the art/science line. Letting the mix engineer know that we cannot solve this should initiate a search on the front end for the cause and possible solutions. Leakage in a primary mic channel (like the lead singer) can make the entire PA sound like it’s underwater but can’t be solved by a bilge pump on the sound system. It must be plugged at the source or the ship will continue to sink. We have to use our ears to do this, because the analyzer can’t see it. Using our ears is not a problem, but tuning the whole PA around a combing stage mic is. Don’t do it.
Every other source into the sound system will be detuned. We have joined the artistic staff when we’re using our ears to make changes not indicated by the analyzer. We’ll need to inform the mix engineer that we are also mixing the show (which may or may not be welcome news). We should always use our ears, but must be able to discern which side of the art/science line we are hearing. Our side of the line is primarily concerned with audible changes over the spatial geometry of the room. If it sounds bad, somewhere, fix it. If it sounds equally bad everywhere, don’t fix it. Tell the mixer to fix it.

Ear-Eye Training Tips for Finding Upstream/Downstream Problems ■ Live audio is radically different from playback: problem and solution are upstream. ■ Sound is changing but measurement is not: problem and solution are upstream. ■ Measurement is changing but sound is not: cross-channel leakage (such as stereo). ■ Sound and measurement change: problem and (possible) solution are downstream. ■ Modulating combing: If you are moving, the measurement mic is moving, the PA is moving, the wind is blowing or there’s a strong HVAC vent in front of the speakers then it’s probably downstream. Hint: It shows up on the analyzer. If there are moving stage mics, actors/musicians or stage monitors leaking into mics then it is probably upstream. This will not show up on the analyzer.

14.9.6 Feedback The worst-case scenario of re-entry summation into the microphones is feedback. Feedback is very difficult to detect in a transfer function measurement (it’s present in both the electrical reference and acoustic signals). The only hint is that the frequency in question may suddenly have perfect coherence. Feedback detection can be conducted in the FFT analyzer in single-channel mode and time spectrograph, where we are looking at the spectrum in absolute terms. The feedback frequency rises above the crowd and can be identified after it is too late. The time spectrograph display is the most effective analysis display for feedback detection because it can see ringing over time that has not fully developed into a full-blown howl.
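As a rough illustration of why the time spectrograph catches ringing before it becomes a full-blown howl, here is a minimal sketch (this is not SIM3's algorithm; the function name, thresholds and data layout are invented for illustration). It flags spectrogram bins that are simultaneously steady over time and well above the broadband level — the signature of a frequency "rising above the crowd":

```python
import numpy as np

def ringing_candidates(spectrogram_db, persistence_db=3.0, prominence_db=12.0):
    """Flag frequency bins whose level is both steady over time and well
    above the broadband average -- a crude signature of ring-before-feedback.

    spectrogram_db: 2-D array, shape (n_bins, n_frames), magnitude in dB.
    Returns indices of candidate bins.
    """
    spectrogram_db = np.asarray(spectrogram_db, dtype=float)
    # Program material fluctuates frame to frame; a ringing bin barely moves.
    span = spectrogram_db.max(axis=1) - spectrogram_db.min(axis=1)
    mean_level = spectrogram_db.mean(axis=1)
    floor = np.median(mean_level)  # crude estimate of the "crowd" level
    steady = span <= persistence_db
    loud = mean_level >= floor + prominence_db
    return np.where(steady & loud)[0]
```

A real analyzer watches this evolve over time; the point of the sketch is simply that persistence plus prominence, not absolute level alone, is what separates ringing from program material.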

14.9.7 Multichannel program material Everything just discussed pertains to a single channel of program. Stereo and other multichannel formats mix the sound in the acoustic space. As if it were not already difficult enough to get clear and stable data, we now add the complication of leakage between sound system channels. Our electrical reference is one particular isolated channel, whereas our acoustical data includes related (our channel) and unrelated material (other channels). Stereo Stereo is a changing mix of related and unrelated signals. It’s a wonderfully desirable listening experience. Unfortunately it makes for a very challenging optimization environment. The stereo dilemma: Which channel do we use as the reference? ■ Left channel: Measure as far to the left as possible to maximize isolation from right. Data will never stabilize if the mic is placed near or at center. Coherence can be fooled by opposite-side panned signals (the analyzer recognizes the waveform even though most of it arrives from a system it can’t control). EQ changes will not respond linearly if based on panned signals. ■ Right channel: analogous and opposite to left. ■ Mono sum of L/R: stable at the exact centerpoint between the systems. Cannot tell left from right. Systems must be perfectly matched, the mic must be perfectly centered and all actions must be taken symmetrically on L and R. Unstable at all positions off center. The relevant question is: What can we do with unstable data? Should we be turning equalizer and level controls to stabilize it? Of course not. It’s stereo. It’s supposed to be changing. A mic position has very limited use (and can lead to wild goose chases) if it cannot provide a stable frequency response. Maximum stability is found with a centered mic (XOVR LR) and mono sum reference. The secondary route is ONAX L (as far off center as reasonable) with L as the reference (or vice versa). Multichannel Systems with highly overlapped multichannel interaction are the utmost challenge for in-show measurement. Musical theater often uses separate music and vocal systems. The signals don’t share an electrical reference but are mixed acoustically and cannot be separated without stopping the show. The overture of a musical is likely to be the last shot at measuring the music system. Dialog portions without music would be the best chance for the vocals.

14.9.8 Show mic positions Most of the optimal mic positions will be unavailable when the hall is occupied (I suspect it has something to do with people sitting there). This greatly reduces the quantity and quality of data available during performance. Fortunately we need fewer mic locations. We’re finished with OFFAX, XOVR, VTOP and VBOT because we won’t be adjusting speaker positions or adding acoustic treatment during the show. The principal task of ongoing optimization is monitoring the effects of audience presence and changing temperature/humidity on EQ and delay settings. The best-case scenario for mic placement is an ONAX (A, B, C, etc.) mic position that’s highly isolated from other subsystems and other channels of sound. Local response changes can be detected and acted on by adjusting the affected subsystem without risking the whole system tuning.

Show Mic Position Priority ■ ONAX A: the most important position for EQ decisions of the majority system. The most global adjustments can be made based on this location. ■ ONAX B: important position with semi-isolated EQ capability. Depends on how much isolation (over frequency) this system has from the A system. Often the mix location. ■ ONAX D (delays): useful for delay time adjustment. Insufficient isolation for EQ. ■ ONAX C or fills: Nice to have but minimal isolation leaves few options for EQ. ■ WHATVR: A mic placed somewhere in the room with no particular system in control of its coverage. Useful for feedback detection and spectral viewing. Otherwise unusable.

The empty room response at each usable ONAX position is stored for reference during soundcheck. Performance data is compared with this and changes made if necessary to restore the response. We use fallback positions when we can’t get an isolated ONAX. The first level is a less isolated “ONAXish” position. This might have substantial overlap from a related subsystem (e.g. between the ideal ONAX A and XAB). Adjustments made here will have only a partial effect on the combined response due to leakage from the nearby system. XOVR positions are the least useable because they are far too volatile to serve an EQ function during the show, just as they were not used for EQ during setup. Lesser Subsystems This is a show. There will be no speaker muting to observe a particular subsystem. It’s a major challenge to measure the main (A) system with the other subsystems adding in. The challenge level goes up tremendously for all of the lesser subsystems (B, C, delays and fills) because of their lack of isolation from the main (A) and other subsystems up the chain. Even the best ONAX position for a lesser subsystem has a very limited range of use in show conditions. Only the frequency range where that system enjoys dominance will respond to independent adjustment (i.e. a 3 dB cut on a filter reduces the acoustic response by 3 dB). The farther down the hierarchy that our subsystems fall, the less we can do for them in a show context. The LF is the least likely range for the subsystem to maintain local control. We cannot expect an LF filter change in the subsystem equalizer to exert much control over the combined response. The VHF is the most likely range with enough isolation to use a local filter. Delayed systems can be viewed during performances in the hope of maintaining synchronicity under changing environmental conditions. This requires a mic positioned at the spatial XOVR for the delay and the mains. Such mics are more useful for delay monitoring than equalization. 
Time offsets between the main and delay can be detected and remedied, but EQ is difficult to conclusively adjust because the response in the fill area is a combination of the two subsystems. It is inadvisable to adjust the main system equalization to compensate for changes in the fill area. If any adjustments are to be made here they should be done sparingly and exclusively on the fill system. Detecting time offset errors between the speakers without muting either system can be difficult in the field. It is easiest when the main and delay have closely matched level and HF content. The impulse response is much more sensitive to HF content, so if the HF is strongly rolled off in the mains then it can be difficult to differentiate the two impulses during the show. There is an alternative method for situations where the time offset cannot be discerned or where a show mic is not practical in the delay area: Estimate the time offset based on a known sound speed change. Make note of the propagation delays before the show at a representative location. Compare this with the current propagation delay. Compute the percentage of time change. Modify the delay lines by the percentage change.

Measurement Mic at the Mix Position The location with the highest probability of showtime access, the mix position, is also one of the most challenged. In stereo systems a central mix position lacks L/R channel isolation and will therefore be hard-pressed to establish a stable response. Positions on the outermost horizontal edges of the mix position can find a more isolated response and provide a clearer picture of “before and after” responses. These side positions will also include the opposite-side arrival but, if the system has limited stereo overlap, some degree of control can be established. None of these is perfect, and often the best we can do is monitor a response solely in relative terms (house empty, house full).
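The percentage-change delay correction described above can be sketched numerically. The temperature formula is the common dry-air approximation (c ≈ 331.4 + 0.607T m/s); the function names are mine, not from the text:

```python
def speed_of_sound_ms(temp_c):
    """Speed of sound in dry air (m/s), common linear approximation."""
    return 331.4 + 0.607 * temp_c

def rescale_delays_ms(delay_lines_ms, prop_before_ms, prop_now_ms):
    """Apply the measured percentage change in propagation time
    (before vs. now, at a representative location) to every delay line."""
    ratio = prop_now_ms / prop_before_ms
    return [d * ratio for d in delay_lines_ms]

def rescale_delay_for_temp(delay_ms, temp_before_c, temp_now_c):
    """Estimate the same correction from a known temperature change:
    propagation time scales inversely with the speed of sound."""
    ratio = speed_of_sound_ms(temp_before_c) / speed_of_sound_ms(temp_now_c)
    return delay_ms * ratio
```

For example, a 100 ms delay tuned at 20 °C shrinks by roughly 0.9% (to about 99.1 ms) when the room warms to 25 °C — small, but audible as image smear at the spatial crossover.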

Key considerations for show measurement microphone positions ■ Actionable intelligence: Is the mic in a location where we can get meaningful data? ■ Scope of work: probably just ongoing EQ and delay adjustments. ■ Security: Will the mics be stolen or tampered with? ■ Orientation: Is the mic aimed at the speakers or hanging down (affects the HF)? ■ Plausibility: Does the data confirm the laws of physics or is it showing magic? ■ Local/global: How much of the room is changing in the same way as we see here?

14.9.9 Subwoofers as a program channel There is no point in arguing with somebody who wants to run subs on an aux. However, if we are tasked with ongoing optimization of the system during the concert we have a duty to explain that the frequency range below 150 Hz will not be analyzable. The range covered by the aux subs cannot be reliably measured with a transfer function analyzer because there is no electrical reference that tracks the full range of acoustical response. Like the music/voice systems discussed earlier, the subwoofer range cannot be untangled from the other signals. A transfer function using the mains as a source sees the subs as an unstable, uncontrolled contamination. If the subwoofer send is used as the source the opposite occurs. In either case the electrical reference extends to the full range, so there is no way to separate the acoustic responses.

If Using the Sub Aux Feed as the Transfer Function Reference ■ Usable data: only range with LF ≥10 dB louder than mains (isolation zone). ■ Unstable, unusable data: all frequencies above or equal to the spectral XOVR. ■ Coherence is invalid: Can be fooled by correlated data arriving from other speakers. ■ Non-linear EQ tracking: EQ changes in the LF band only track in the isolated range.

If Using the Main Channel Feed as the Transfer Function Reference ■ Usable data: only frequency range ≥10 dB louder than subs (isolation zone). ■ Unstable, unusable data: all frequencies below or equal to the spectral XOVR. ■ Coherence is invalid: Can be fooled by correlated data arriving from other speakers. ■ Non-linear EQ tracking: EQ changes only track in the isolated range.
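The ≥10 dB isolation-zone rule in the lists above can be expressed as a simple band-by-band check (the function name and example band levels are illustrative, not from the text):

```python
def usable_bands(ref_levels_db, other_levels_db, isolation_db=10.0):
    """True for each band where the referenced source is at least
    isolation_db louder than the competing source (the isolation zone).
    Bands outside the isolation zone yield unstable, unusable data."""
    return [r - o >= isolation_db
            for r, o in zip(ref_levels_db, other_levels_db)]

# Hypothetical octave-band levels at the mic, sub aux used as reference:
subs  = [100, 96, 80, 60]   # e.g. 31 Hz, 63 Hz, 125 Hz, 250 Hz
mains = [70, 88, 95, 98]
print(usable_bands(subs, mains))  # only the lowest band is isolated
```

Everything outside the True bands should be ignored, no matter how plausible the trace looks.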

Chapter 15 Application Congratulations and thanks for making it to this point. This chapter is the summation of all our efforts so far. Our goal is to couple all of the various information sources together for maximum power and coherence. We will explore the design and tuning of eighteen sound systems for sixteen halls using the principles and procedures outlined previously. Although this is not a complete library of every application you might encounter, a wide variety is represented here. The general flow is toward increasing size and complexity. Each application has unique aspects similar to those you may encounter in the field. application n. 1. putting of one thing to another. 2. employment of means. apply v. make use of; put to practical use. Concise Oxford Dictionary

Eight of these application examples include field data from the SIM3 audio analyzer captured during optimization. There are over 340 frequency and phase response measurements compiled into 96 screen shots. The measurements are by no means a complete record of the data taken at any of the venues, but provide some insight into the process. This chapter differs from the previous ones in that it purposefully details my personal perspective and decision-making process. The figures for this chapter are packed with a tremendous amount of technical information. The text expands upon these figures and provides context and background. So much of the text in each application is linked to its related figures that virtually every sentence could include “as shown in panel x of Fig. 15.x.” Read the text and keep an eye on the figures and it should hold together. For the most part, when I do refer to a figure it will be one from a previous chapter. Be warned that there is repetition here and much of it is done on purpose. That’s in the nature of the work. Every system has frontfills. We don’t need all the details for eighteen of them, but we will do enough variations on the themes to make these processes clear enough to assist you in the field. There is a huge body of work in this chapter, much of it provided by others. I am grateful for all the support provided by sound designers, mix engineers, system techs, integrators, acousticians, riggers, digital network experts and more who made these designs and optimizations possible. I have attempted to use consistent formatting, conventions, symbol set and acronyms for this chapter in order to pack the maximum amount of information into the figures. The figures are a series of panels designated with letters and numbers. The letters show a common link, whereas the numbers designate a series of steps. For example panels A1 and A2 might be plan and section views for mains speaker aiming whereas B1 and B2 provide the same function for frontfills. The MAPP Online™ plots are 3 dB/color @4 kHz, 1 octave unless otherwise stated. Speaker locations are labeled by function with a simple rectangle. Frontfills (FF) are the same size as subwoofers (LF) and so on. A muted speaker is shown with its name grayed out. The speaker symbols are not scaled but the drawings are. Plan and section views are almost always identically scaled and placed so that they line up. A speaker or mic placed in plan lines up directly with itself in section. Most of the section views are complete from floor to ceiling. By contrast, most of the plan views are cropped to include just over half of the hall. This saves an enormous amount of layout space and reduces redundancy. Obviously all operations mirror to the opposite side. The mic locations shown correspond to our placement strategy for calibration. All of the SIM3 traces have the same vertical scale for amplitude (±30 dB) and ±180° for phase. Leaving these off the graphs saves valuable space for traces. I realize that the vertical scale may appear compressed to you. It is. I show it this way here because that is how I view it to make decisions in the field. In my experience it is far too easy to get lost in small details. In practice I look for the trends I can see at this resolution. If you see it here, you can hear it there and it is fair game for action. Note that the EQ trace is sometimes normal, and other times shown inverted. The coherence scale is 0 to 1 (as always). The top of the graphs is always 1 but the zero is not consistent in the SIM3 trace set (sorry). Sometimes coherence uses half the vertical height and other times 1/3. You may have been wondering whether or not anyone actually places mics at all these locations and follows the methodical procedures outlined in this book.
Can it really be done this way or do we just wing it once the first shot of pink noise has been fired? The answer is in the SIM3 traces. One final note, before we begin. This chapter reads like a cypher code if you skipped the rest of the book to get to the really good stuff here. The decryption key for this chapter starts on page one of the book.

15.1 Venue #1: Small Hall We start simple with a small hall and small budget. The system is L/R mains, an LF system and frontfills.

15.1.1 Mains The horizontal shape is nearly square, just barely wider than deep. The FAR value is slightly below 1, which would require more than the widest possible speaker (>180°) to cover the whole room from center. That is a definite no-go, leaving the option for a coupled point source of two or more elements at center, if desired. Instead an L/R system is used, so we bisect the room and find the FAR value for each side (1.8), which means we need at least a 70° speaker/side. The available positions are offstage in the corners so the speakers are aimed inward to form a symmetric uncoupled point destination. We follow the horizontal aim method (middle/middle), which yields 25° inward. The horizontal aim for the left main can be verified by measuring its custody edges at mid-depth (OFFAX L and XLR). The L main solo response should be equal at these locations. The horizontal coverage can be verified by comparing ONAX L to the previous measurements. The speaker is not wide enough if the HF range at OFFAX L and XLR is more than 6 dB down from ONAX L. In this case we needed a minimum of 70° horizontal coverage and specified an 80° model, so we have a little bit of extra width. The vertical shape is a simple rake with a 2:1 range ratio, the limit of coverage for a single speaker. Mains vertical coverage begins at the third row and continues to the last. We need 25° of vertical coverage so the compensated coverage angle is 50° (25° × 2:1 range ratio). The speaker is aimed at VTOP (-11°), which we do any time the range ratio is ≥2:1. The vertical aim for the left main can be verified by measuring its upper and lower custody edges (VTOP and VBOT). The L main solo response should be equal at these locations. Vertical coverage can be verified by comparing ONAX L with the previous measurements. The speaker is not wide enough if the HF range at VTOP and VBOT is more than 6 dB down from ONAX L.
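The FAR-to-coverage-angle relation implied by these numbers (FAR 1.8 → roughly 68°, hence a 70° minimum; FAR below 1 → beyond 180°) can be sketched as follows. The 1/sin(θ/2) form is my reconstruction from the paired values cited in the text (e.g. FAR 2 ↔ 60°), not a formula quoted from it:

```python
import math

def min_coverage_angle_deg(far):
    """Minimum coverage angle (degrees) for a coverage shape with the
    given forward aspect ratio (FAR = depth/width of the shape).
    Assumes FAR = 1 / sin(angle/2), consistent with the pairings in
    the text (FAR 1.4 ~ 90 deg, FAR 2 ~ 60 deg, FAR 3 ~ 40 deg)."""
    if far < 1.0:
        # Wider than deep: even a 180 deg speaker at center can't make
        # this shape, so no single-element answer exists.
        return float('inf')
    return 2.0 * math.degrees(math.asin(1.0 / far))

# Bisected room, FAR 1.8 per side:
print(round(min_coverage_angle_deg(1.8), 1))  # ~67.5 -> spec a 70 deg model
```

Round up to the next standard coverage width; a little extra (here the 80° model) buys margin at the custody edges.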

FIGURE 15.1A Design process for Venue #1: Coverage partition strategy, solo mains and frontfills

Solo equalization is performed around the ONAX L location, which is on the horizontal axis between the mid-point and VTOP. This then becomes the GOLD standard.

15.1.2 Frontfills (FF) The horizontal shape is a slightly rounded stage but the seating is on a different arc, making the outermost seats closer. The difference is 2 dB (2.5 m/2.0 m) so we will compensate for this with level and spacing. The outers are turned down 2 dB and their spacing is reduced to 80% of normal. The spacing math works out such that four frontfills can just barely cover the space. The outermost and innermost seats are both at the coverage edge (-6 dB). This means that we will have to get exactly the positions we need when construction occurs, which is as probable as winning the lottery. The four-box scenario leaves a gap at the center aisle (which is OK) but is risky because we are a long way from the L/R mains. I chose to add a fifth box on the centerline, which assures we have coverage where we most need it (near the center). All the spacings are reduced proportionally, which fills the entire row and leaves some room for real-world positions. Field verification of the spacing is performed along the first row (the unity line) with mics placed at the ONAX FF and XFF positions. Level setting and EQ are performed at the ONAX locations. The spacing is optimized when the combined responses at ONAX and XFF are equal. The vertical aspect should be a non-event. Fight for the highest possible location (footfill prevention) and minimize the risk of HF missing the ears by using first- or second-order speakers. Equalization for frontfill systems is usually carried out as a combined block (not as soloists). This is simply a practical matter, and could be done in two stages (solo then combined) if time permits. EQ is done at the ONAX locations, not the volatile XOVR areas. You may wish to high-pass the frontfills to reduce leakage and maximize headroom. As a general rule I roll them off gently (second order @160 Hz), which simulates a unity-class spectral crossover to the subwoofers. Excessive range reduction can increase the risk of spectral sonic image separation (section
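The level and spacing compensation for the closer outer seats is simple inverse-square arithmetic; a sketch (function names are mine):

```python
import math

def level_trim_db(far_dist_m, near_dist_m):
    """Level offset from the path-length ratio (inverse-square law)."""
    return 20.0 * math.log10(far_dist_m / near_dist_m)

def spacing_factor(near_dist_m, far_dist_m):
    """Reduce spacing in proportion to the shorter throw."""
    return near_dist_m / far_dist_m

# First-row distances for this venue: inner seats 2.5 m, outer seats 2.0 m
print(round(level_trim_db(2.5, 2.0), 2))   # ~1.94 dB -> the "2 dB" trim
print(spacing_factor(2.0, 2.5))            # 0.8 -> spacing at 80% of normal
```

Both corrections come from the same 2.5 m/2.0 m ratio, which is why they travel together.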

15.1.3 Low-frequency system (LF) We have enough room at overhead center to implement an in-line gradient cardioid subwoofer array. This takes only two speakers, so it is the same money as L/R but yields superior uniformity and reduced leakage. Field calibration is done by measuring behind the array (procedure 14.17).

15.1.4 Combined systems The frontfills are combined with the mains along the third row. Timing is set to sync the two systems there and then the level is set to sum together to the unity GOLD standard. We can separately delay each frontfill to the main on their side if we have independent channels (center would be the longest). Use the middle frontfills (FF2) if there is only one processing channel (delay dilemmas, section ). The LF will meet the mains on the floor. We will reuse the ONAX L location and sync the systems and set level there. Use the phase delay method for the spectral crossover calibration (procedures 14.9 and 14.10). Note that this location is the L main center (not the room center). This prevents three sources from sync’ing at center (section ). The mains will probably have to be delayed because they would likely arrive first. If the amount of delay to sync is too high then we may decline and let the subs fall behind rather than having the whole system do so (TANSTAAFL/triage). Let’s consider three options for main + LF spectral crossover: overlap, low-order unity or high-order unity. Overlap has the highest amount of ripple variance between the uncoupled sources, but has the highest immunity to spectral separation (i.e. distinctly localizing the LF and HF as discussed in section ). High-order unity has the best immunity from ripple variance, but is the most vulnerable to spectral splitting. A low-order unity is the middle ground in both categories. The risk of spectral separation is very high here because the sources are uncoupled in both the vertical and horizontal planes, which would lean me toward overlap (procedure 14.10). Room reflections in such a small space will likely have a stronger effect in the overlapping frequency range than the speaker/speaker interaction.

The combined equalization will be very minor. There is nothing to do with L + R. The main + LF crossover may require some EQ to reduce the bump in the overlap region. Frontfills are too small a contributor to create a combined response worth treating.

FIGURE 15.1B Design process for Venue #1: main + frontfills, cardioid subs and main + subs

15.2 Venue #2: Small Rectangle, AB Main Array This venue requires vertical coverage beyond the capabilities of a single element. The 3:1 shape would do well with a line array of third-order proportional elements but this is not in the budget. Two elements/side is the lower limit for quantity and the upper limit for budget, so that’s what it’s going to be. The system design consists of L/R mains (A), L/R downfills (B), frontfills and subwoofers. Our study will be limited to the first three.

15.2.1 L/R mains (AB) The room is narrow enough to cover with a 90° center cluster (with frontfills and some small sidefills). The system design will be L/R mains because (a) that is what engineers want and (b) L/R is not impossible. Just your daily reminder that center clusters are great when no other option is possible, but otherwise we default to L/R. Each main will be a single horizontal element and an AB-type asymmetric coupled point source in the vertical. The mains horizontal aim is calculated the same way as it would be without a downfill. Its propagation begins at the speaker and it has to make it to the end of the room. The mid-depth calculation yields an inward aim of -16°. We only need 45° of coverage/side. We have the option of cutting it close with 50° speakers or allowing more overlap with an 80° model. In this case the walls are covered with heavy drapes so we are not afraid of them and will go with 80°. This is a TANSTAAFL/triage decision that gains more spatiality at a cost of ripple variance. The non-hostile walls tipped the scales in favor of added spatiality.

FIGURE 15.2A Design process for Venue #2: coverage partition strategy, solo mains (A)

The vertical plan is a steady incline that yields a 3:1 range ratio from the minimum allowable height. The coverage strategy is to make the mains carry the coverage from the rear to the point where their underside gets too much spectral tilt. The upper main (A) is aimed at the rear seating (ONAX A) and then we start down from there. A very wide vertical speaker could almost reach the front in the HF range because the coverage target keeps getting closer underneath. The stopper is spectral variance, because the LF range will be much louder in front than back. Therefore a 50° model was chosen, which leaves us with 25° underneath before calling for backup. The total range to be covered is 45° so we will be leaving the bottom 20° for the downfill to connect to the frontfill. The downfill (B) will be aimed at the off-axis edge of the main (A), which means that ONAX B and XAB are the same location. This is typical of highly asymmetric combinations (which is the case here). Element A is aimed at -5°, which puts XAB at -30°. Element B is a 40° device that can make it to -50° (that’s 20° below its aim at -30°). We can conclude from this that we have enough vertical coverage for the shape.

FIGURE 15.2B Design process for Venue #2: solo mains (B), solo frontfill, solo and combined mains AB + FF

The B element does not need to copy the horizontal aim of the unit above. Its range is limited to the front half of the room so we should truncate the shape and recalculate the horizontal aim based on the reduced depth. This leads to a 30° inward tilt. The coverage angle is also reconsidered here because the FAR shape is different (same width but smaller depth). Therefore a wider speaker is warranted here (a 50° model is definitely not an option here) and a 90° speaker was selected. The last item is power scaling. Element B is covering around half the range of A (6 dB) and is aiming its on-axis response into the off-axis edge of element A (another 6 dB advantage). It would be a waste of money to use the same element for A and B because the B element can be 6–10 dB less powerful without worries.
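The power-scaling argument can be checked with a couple of lines (the function is illustrative; note the text specs B only 6–10 dB down, keeping some of the computed advantage as margin):

```python
import math

def b_element_advantage_db(range_a_m, range_b_m, axial_advantage_db=6.0):
    """Combined level advantage of the B (downfill) element: inverse-square
    range advantage plus the benefit of firing on-axis into A's -6 dB edge."""
    range_db = 20.0 * math.log10(range_a_m / range_b_m)
    return range_db + axial_advantage_db

# B covers about half the range of A (hypothetical 20 m vs 10 m throws):
print(round(b_element_advantage_db(20.0, 10.0), 1))  # ~12 dB total advantage
```

With roughly 12 dB of combined advantage, a B element 6–10 dB less powerful than A still has headroom to spare.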

15.2.2 Frontfill (FF) The frontfill spacing is set by the distance to the front row. The stage is a straight line so the calculation is easy. The elements are 80° so the spacing is 1.25 × the distance to the first row.

15.2.3 Combined systems The upper and lower systems cross at XAB. Element B will be delayed to sync with A here and its level adjusted to combine to approximate the GOLD reference (the element A response in its prime area). We are combining a highly tilted response (A solo) with a flat one (B solo), so the combined response will likely be somewhat louder than the GOLD response. The combination effects will be felt more strongly below XAB than above due to the level dominance of the A element. Asymmetric EQ may improve things by reducing the B element’s LF contribution with shelving filters. The frontfills join the AB mains at XB–FF, which is at the second or third row (depends on blockage). The frontfills are delayed to the arrival of mains element B and level set to combine to match (or slightly exceed) the GOLD reference.

15.3 Venue #3: Small Shoebox, L/R, Downfill and Delays This is such an extreme version of the shoebox hall that it can only fit a single shoe. The horizontal shape has an FAR value of 3:1, which means a 40° center cluster could cover the room. The vertical plane is a steady rake with a flat ceiling. The ceiling height is very low by the time we reach the back of the hall. This type of vertical shape is beautifully solved with the modern line array (asymmetric composite coupled point source). The client wanted an L/R system, so a standard line array seems like the standard solution, except for one small thing: These are first-order systems in the horizontal plane, typically around 90° of coverage. That’s 180° of speaker in a 40° room. The video system also had a strong impact, as it wiped out the standard sound reinforcement locations. The mains were given the uppermost corners, with not even enough room to hang a downfill (the downfills were uncoupled from the mains and hung several meters below).

15.3.1 Mains (L/R, AA) Because our main speakers were coupled to the sidewalls we wanted to ensure they didn’t spill a large amount of their pattern outward. An array comprised of two second-order elements with 20° of horizontal coverage provided a sharp edge in the horizontal plane. The L and R mains each had 40° of horizontal coverage so the overage was perfectly reasonable. Because the hall is so skinny there is actually a sizable percentage of listeners within the “stereo possible” window, so our overlap should work well. The L/R main was designed as an AB pair but the calibration settings turned out to be identical so it’s now an AA symmetric pair. Flexibility is better to have and not need, than need and not have. The mains have a wide vertical coverage (60°) and cover the majority of the depth. Delays finish coverage at the top. L/R downfills and stage lip frontfills finish coverage at the bottom.

15.3.2 Downfill (DF) The horizontal aim for the downfills was evaluated differently than the mains as was done previously in Venue #2. The coverage depth was much shallower than the mains so the elements were turned inward 25°. The vertical aim was directed at XAB where the flat response of the downfills meets the spectrally tilted response of the main’s underside. Delay and level are set to achieve a combined response that matches GOLD as much as possible.

FIGURE 15.3A Design process for Venue #3: coverage partition strategy, solo mains (A)

15.3.3 Frontfill (FF) The horizontal spacing was determined through the uncoupled line source design reference (Fig. 11.20). The unity line was the first row. Unfortunately this also turned out to be the limit line because the speakers were mounted too low to reach the second row. Fortunately the downfill system was able to begin its coverage there. The frontfills were timed to sync to the downfills. The outer frontfills were closer to the downfill so they received a shorter time than the inners. This is another case of delay dilemma, which weighs in favor of the larger number of people affected (only one seat/side is in the FF1–FF2 crossover range). You might ask why we didn’t time things to a fictitious stage source (another player in the delay dilemma game). The digital age has largely handled this concern for us. Console and signal processor latency give the stage a healthy head start. By the time the downfills come down and in to meet the frontfill they are definitely behind a nominal stage source (which is a guesstimate at best). Therefore this dilemma solves itself and leaves us to set the delay to the fixed and stable relationship between DF and FF.

15.3.4 Delays (DEL) Once again the horizontal spacing was determined through the uncoupled line source design reference (Fig. 11.20). The unity line was designed to fall at an advantageous location: the first row following a cross-aisle. This put the most volatile part of the combination (the underside of the delays) at a location where no listeners will sit. The vertical aim was the last row (VTOP) because the range ratio is >2:1. Timing and level were set at ONAX DEL, which also served as XL–DEL.

15.3.5 Combined systems The most challenging aspect of the combination was fitting the downfills into the box between the mains and the frontfills. The mains overreach their coverage, so the downfills muscle their way in to stop the spectral tilt. The result was minimum spectral variance with a small cost in level variance (a rise of about 2 dB in the DF seating area). This is a trade I will take, especially in a small hall where the stage sound is a strong presence (it will be louder in front as well). On the other end the frontfills did not reach their original meeting point. We needed all of the downfill’s 90° × 40° coverage to meet with the inner frontfills.

FIGURE 15.3B Design process for Venue #3: solo mains (B), solo frontfill and solo delay

FIGURE 15.3C Design process for Venue #3: combined mains AB + FF + delay, side and rear surrounds

15.3.6 SIM3 optimization data Here are some details on the SIM3 traces acquired during the tuning. Refer to Fig. 15.3D. Panel A1 shows the responses at ONAX A1 and A2 in the approximate positions shown in Fig. 15.3A(B2). The responses are spectrally matched but A1 is 2 dB below A2 (the traces are shown offset by 2 dB). Recall that the upper main system (A) covers most of the room. Therefore, multiple mics were used to ensure the aim was correct. We expanded our perspective downward by including mics A3 and A4 (Fig. 15.3D, panel A2). The responses were still spectrally matched, which gave confidence to the EQ settings. The minimal ripple variance (and very high coherence) occurs because the room was acoustically optimized for amplified sound reinforcement (highly absorptive) and was designed from the start to utilize Meyer Sound's variable acoustic Constellation™ system. This is the best possible environment for system optimization. The total level variance was 3 dB from A1–A3. ONAX A3 became the GOLD reference for the rest of the tuning. Panels B1–B4 show the process of adding the downfill. Variations of this sequence are repeated for many of the remaining examples, so I will detail it fully here and be briefer as we go on. Do we need the downfill? This is assessed in panel B1 by seeing what we have without it and comparing that to GOLD. We are comparing the underside of the mains with its ONAX response. Do we need it? For level variance, no (we are even). For spectral variance, yes. The reality here is that it's going to get louder in front because we have a large range ratio (10 dB) and limited vertical steering. We can, and will, make it spectrally consistent. Panel B2 shows what will be combined: the spectrally tilted underside of A with the flat on-axis signal from B. This reverses the spectral variance down there. Panel B3 shows the AB combination, which reveals a rise in the HF with negligible effects elsewhere, i.e. spectral un-tilting.
The final step is comparison to the GOLD standard, which reveals that we have minimized the spectral variance. The level variance is 3 dB (the XAB trace is shown offset -3 dB). We tackle the frontfills in Fig. 15.3E, panels C1–C3. We know we need them so we move on to preparing for combination. Panel C1 includes the phase data of the two speakers to be combined (after the delay has been set). Are they phase compatible? Yes, extremely. Don’t take this for granted. Not all speakers are created equal (or compatible). Panel C2 shows the three parts of the combination: B solo, FF solo and B + FF. Panel C3 verifies the combination by comparing it with GOLD. We maintained a matched spectrum to GOLD and added no more level variance. The combined level here is +3 dB over GOLD (same as XAB). Our final stop is up top where we will add the delays. The process is the same (surprise!): assess, prepare, combine and verify. These are laid out in panels

FIGURE 15.3D SIM3 optimization data for Venue #3

FIGURE 15.3E SIM3 optimization data for Venue #3

D1–D3. The equalization that was applied is shown in panel D1 (a single filter in the VHF range). This is a case where we can't improve it much so just don't &*%# it up. The needs assessment shows that the mains are 6 dB down in the rear rows compared with GOLD. The coherence is also low there, so we have double confirmation that we should use the delays. The final panel verifies that we have achieved minimum spectral, level and ripple variance between the last rows and GOLD. This family of SIM3 traces shows an unbroken line of minimum variance through four subsystems (DEL–A–B–FF) from the last to the first row.

15.4 Venue #4: Medium Hall, Two Levels, L/C/R with Fills This is a theater with two sound system designs in the same room: L/R and C. The L/R system covers the complete room on its own, as does the center system. The application is musical theater, an environment that can take full advantage of an L/C/R system. The L/R system is placed in proscenium towers. It's an asymmetric point source in both planes: uppers and lowers, each of which comprises inners and outers. Subwoofers are also designed and tuned as part of the L/R system. The center main system is a composite point source over the proscenium along with sidefills down low in the towers and underbalcony delays.

15.4.1 Left/right upper/lower mains (L/R, Up, Lo) We've got inners and outers, uppers and lowers. Where do we start? The order of operations favors coupled pairings over uncoupled ones. Inners and outers are directly coupled. Uppers and lowers are separated by a few meters within the towers. Therefore we marry inner and outer on each floor before connecting them together at the balcony front.

FIGURE 15.4A Design process for Venue #4: coverage partition strategy (L/R), solo mains (upper)

The inner/outer connection is in the horizontal plane. The outer speaker sees the two levels quite similarly because they are nearly the same depth. The inners see totally different things: people on the floor and air upstairs. The lower inners were level tapered to prevent overheating the center. If allowed to remain at full level, the center seating area could easily rise 10 dB above GOLD. It's a very square room so the center is much closer to the speakers than the rear. The inner speakers of both L and R cover the center seating, whereas the outer pretty much goes it alone in its coverage area. Therefore a center-panned signal will overload the center area if we don't take steps (i.e. taper down the inner level 3 dB). The splay angle was adjusted by the compensated unity splay method to 35° (50° elements × 70%). The combined shape squares off the coverage to minimize the level variance in the center. Upstairs we can see that the inner speaker covers the opposite side as much as its own. Its role is largely spatial enhancement, which is acceptable for this application because the L/R system is for music reinforcement, not voice. We could have concerns if the walls were highly reflective, but we knew they were acoustically absorptive. An alternative option would have been to reduce the splay, which would increase the overlap, add ripple variance and couple for more power. TANSTAAFL would have leaned this way if the room were reflective and the client were Megadeth. The vertical roles of the upper and lower systems are clear-cut with a balcony to divide them. The upper/lower range ratio was 2 dB, which was level compensated (lowers turned down 2 dB). The vertical aim requires minimal calculation for either system. The lower's range ratio exceeds 2:1 and the upper is facing almost straight up the rake. The conclusion is the same: aim them at their last rows.
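The compensated unity splay arithmetic above (50° elements scaled by the 70% compensation factor to reach 35°) can be sketched as a quick check — a minimal illustration with my own helper name, not the author's software:

```python
def compensated_unity_splay(element_coverage_deg, compensation_factor):
    """Unity splay reduced by a compensation factor to account for
    asymmetry between the paired elements (70% per the text)."""
    return element_coverage_deg * compensation_factor

# 50 degree elements at 70% compensation -> 35 degree splay for the inner/outer pair
splay = compensated_unity_splay(50, 0.70)
```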

15.4.2 Center main (ABC) The center main is a single asymmetric composite coupled point source. The overall vertical coverage spans 52° with a 2.5:1 range ratio (8 dB). We would love to avoid the balcony front but we can't not get there from here (a twist on an old phrase). Balcony avoidance is unworkable here due to the smiles and frowns effect (Fig. 11.24). We are high above a balcony that's almost straight, which makes our speaker very sad. Instead the strategy is to approach the vertical plane as a single slope. It's not perfect, but it's not a foolish overreach for something made of unobtanium. We have ten boxes so our average splay is 5° (52°/10) and the splay ratio is 2.5 (9°/4°), a match for the range ratio. Our vertical coverage extends from the top to the third row, where we cross over to the frontfills. The underbalcony area is shadowed from the mains, so mini-mains are mandatory to cover it.
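The center-main numbers above are simple ratios; a quick check (Python, helper name is mine — the quoted figures are rounded in the text):

```python
import math

def ratio_to_db(range_ratio):
    """Express a distance (range) ratio as an inverse-square level difference in dB."""
    return 20 * math.log10(range_ratio)

avg_splay = 52 / 10          # ten boxes across 52 degrees -> 5.2, quoted as ~5 degrees
splay_ratio = 9 / 4          # widest/tightest splay, quoted as ~2.5
range_db = ratio_to_db(2.5)  # 2.5:1 range ratio -> ~8 dB
```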

FIGURE 15.4B Design process for Venue #4: solo mains (lower), combined mains inner/outer + upper/lower

FIGURE 15.4C Design process for Venue #4: coverage partition strategy for center (C), solo mains (ABC)

FIGURE 15.4D Design process for Venue #4: sidefill + frontfill, solo u-balc, mains + sidefill + frontfill + u-balc

15.4.3 Center channel sidefills (SF) The outer corners near the front need help to counter level, spectral and sonic image variance. The sidefills cover this area and improve all three scores. Sidefill level depends on the depth of the coverage gap, which is determined by the lateral width of the center mains on the floor. This hall was used as an example application for the technique back in Fig. 11.10. In this case we can finish the sidefill coverage at the cross-aisle, which provides a convenient exit point. The vertical aim is at the desired end point, which ensures that the level and spectrum blend gracefully and prevents it from getting too loud in front. The safe approach to these types of fill applications is to use low-order speakers rather than attempt precise surgical maneuvers with sharp edges. Proper level, aim and delay can soften the blends and prevent shocks to the sonic image, spectrum or level.

15.4.4 Underbalcony delays (UB) The mix position is in the occulted area under the balcony. The most critical audio signal (the vocal center channel) will be mixed in a bunker on a tiny fill speaker used as a main. Nice. Spacing and aim are set as usual, but we can make allowances here to ensure the mix position is as close as possible to GOLD. Underbalcony occultation begs the question of where to set the delay. There really is no crossing in the crossover, a whole new category of delay dilemma. I don't have a right answer (except a jackhammer) but will tell you what I do in the field: move the mic out to the last seat before the blockage and set the time there. Level and EQ are set back at ONAX UB.

15.4.5 Combined systems The sidefills and frontfills are all that remain for us to combine for the center channel system. The SF level is set to match GOLD in its isolated area of coverage (ONAX SF). Timing and splay angle are adjusted at XC–SF to achieve a combined response that matches GOLD. Frontfill level and delay are set at the crossover XC–FF to achieve a combined level matched to GOLD. The L/R system has already combined inner and outer, upper and lower. All that is left for it is the subwoofers. They are just a pair on each side, located very close to the lower system. Essentially we have a coupled crossover to the lowers and uncoupled to the uppers, so the timing choice highly favors going with the lowers.

15.5 Venue #5: Medium Fan Shape, Single Level, L/C/R The venue is a simple fan shape with proscenium stage. The program material is theatrical/showroom productions that will run for an extended period of time. This makes an L/C/R system a viable alternative because the shows will have ample time to properly matrix the channels. In this case the principal channels would be center for vocals and L/R for music. The main center cluster required two elements in the horizontal plane because a single unit could not fill the entire fan. Therefore the main center array is a type AA symmetric coupled point source (horizontal) and a type ABCD asymmetric composite coupled point source (vertical). The center channel also includes uncoupled sidefills near the stage level, which provided coverage and reduced vertical image distortion in the near outer seats.

15.5.1 Left/right mains (ABCD) The horizontal aim strategy is the same as in previous examples. We need at least 70° of coverage so the 90° model chosen will have only small amounts of overlap onto the sides and across the center. The vertical target is a steady rake from a low position that yields a 5:1 range ratio over 36°. That's a lot of range but fortunately not a lot of angle. We have eleven boxes broken into successive splay doublings of 1.5°, 3° and 6°, finishing at 9°. This gives us a 6:1 splay ratio to counter the 5:1 range ratio, so we are in good shape.
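One plausible way the eleven-box doubling sequence can fill the 36° span — the 1.5°/3°/6°/9° values and the 6:1 splay ratio come from the text, but the per-box counts below are my own assumption, chosen so the angles sum to the vertical target:

```python
# Eleven boxes give ten inter-box splay angles. A hypothetical distribution
# built from the stated doublings that sums to the 36 degree target:
splays = [1.5] * 4 + [3.0] * 3 + [6.0] * 2 + [9.0]

total_coverage = sum(splays)              # 36.0 degrees
splay_ratio = max(splays) / min(splays)   # 9/1.5 = 6, countering the 5:1 range ratio
```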

FIGURE 15.5A Design process for Venue #5: L/R coverage partition strategy, combined L/R main (ABCD) + frontfill

15.5.2 Center main (ABC) The horizontal splay between the two 90° elements of the center cluster is a compromise value. The unity splay of 90° gives more coverage than needed and splashes too much on the side walls. We also knew we had the center channel sidefills to cover the outer extremes so we did not need to go all the way out with the center cluster. Reducing the splay to 60° balanced the overage between the center and the outside edges. The center’s vertical coverage target had the same VTOP and VBOT as the L/R mains but from a very different perspective. The high location cut the range ratio in half but increased the coverage spread from 36° to 60°. Again the solution was a steady progression of increasing splay from A to D, but with wider elements at top and bottom and a reduced splay ratio.

15.5.3 Center channel sidefills (SF) This is a wide hall and we did not want to have to splay the center mains out so far that they faced into the side walls. A powerful sidefill deck system with strong horizontal control would ensure that we could penetrate vocal content along the outer edges of the fan with minimal risk. Arrays with 3 × second-order 20° elements were used that had level taper flexibility to custom shape as needed on site. The area they need to cover could be found by the lateral width method (shown in Fig. 15.5B (D1)).

15.5.4 Combined systems The L/R and center mains are combined as ABCD coupled arrays. Each layer adds some low-frequency buildup to the combination, which can be incrementally compensated. The center differs from the L/R arrays in that the combination of its two sections is a coupled, fully correlated (i.e. mono) AA combination, so its combined spectral effects are compensated. This contrasts with the L/R clusters, which are both uncoupled and semi-correlated (i.e. stereo), and therefore are left with their combined effects uncompensated.

FIGURE 15.5B Design process for Venue #5: center coverage partition strategy, solo center (C) mains (AA, ABCD)

The L/R system connects to frontfills and subwoofers while the center cluster connects to frontfills and sidefills. Obviously the optimal delay time for the frontfills would be different for the L/R system and the center, so which do we choose? The center system gets priority because it is carrying the vocals. I am a big fan of intelligible tuba but vocals take priority when I have to choose. The grayest decision in the combination set is the merger between the center cluster and its sidefills. The sidefill level setting has a strong effect on the crossover location, which determines the delay setting. This is best determined on site during calibration by searching around to find the locations where the center mains drop in level or become excessively tilted. The crossover is expected to lie between the house center and the on-axis line of the sidefills (typically closer to the sidefills than house center). This is an uncoupled combination so there is probably very little (if any) combined equalization. This is a musical theater showroom with in-house productions. These types of applications almost always drive their subwoofers from an auxiliary feed. In such cases I just time the subs for mid-depth and set a level that would meet the target curve if they sent a unity level signal through the aux send. In such cases I leave the mains and subs with an overlapping crossover.

15.5.5 SIM3 optimization data

Panels A1–A3 of Fig. 15.5D show the family of EQ curves used for the L/R mains (ABCD). SIM3 has three transfer functions: room (EQ out vs. mic), EQ (EQ in vs. out) and result (EQ in vs. mic). These responses are shown for each composite element (solo) at its ONAX location. The electrical responses are shown in panel A2. The VHF range is different for each, with A getting a boost and D getting a cut. This is because the longest-throw system (A) has the most air loss. The EQ in the LF range also varies over the set of four composites but the result (panel A3) shows very well-matched solo spectrums.

FIGURE 15.5C Design process for Venue #5: solo sidefill, combined center (C) mains + sidefill

FIGURE 15.5D SIM3 optimization data for Venue #5

The next step is combination, which begins with A + B and continues from there. Panel B1 shows the beginning of the process by showing the spectral tilt that happens at ONAX A when we add the B section. The combined EQ will seek to control the tilt. The next panel (B2) shows the tilt of the full ABCD combination at various positions. The final panel in the series (B3) shows the final result after combined EQ. Notice that the response has more tilt overall than the equalized solo responses (which were fairly flat). My EQ strategy here was to keep the solo responses flat and then let the combined response tilt us into the “target curve” area for this application. The combination tilted more than desired (shown in B2) and then was brought back down to the target (+6 to 10 dB in the LF range) with combined EQ. The tilt is greatest in the middle of the vertical coverage (ONAX B and C). This results from the beam concentration behavior described in an earlier section. The level variance is 1 dB (it's louder at the bottom), which is notable in light of the fact that the range ratio from ONAX A (92 ms) to ONAX D (23 ms) is 4:1 (12 dB). I have included the phase traces here for those who might fear that unmatched EQ would cause them some problems. Notice that the combined phase traces are well matched. The second set of traces (Fig. 15.5E) shows the calibration process for the center mains. This array has less range ratio and more angular spread than the L/R mains. Panels C1–C4 detail the combination effects at a single position (ONAX A) as elements are combined (without combined EQ applied). It begins with A solo, adds B, then C and D, and then finally the entire other half of the AA point source (horizontal) array (ABCD). The strongest effects are from the closest neighbors (A + B vertically and A + A horizontally). The folks near ONAX are only minimally affected by the C and D sections of the array. The final set (D1–2) shows the combined EQ from top to bottom and the end result.
A total of seven mic positions span the two drawings (GOLD appears on both for reference). The amount of equalization is substantial because the boxes start with individually flat responses. A large quantity of boxes at wide splay angles sets the spectral tilt mechanism to maximum. The combined spectral variance is very small after EQ. Our target curve is flatter in this array because this will be the vocal system. The combined level variance is just 1 dB over the 9 dB range ratio between ONAX A (92 ms) and VBOT (37 ms).
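The millisecond figures quoted above convert directly to range ratios and dB, since propagation time is proportional to distance. A quick check (helper name is mine):

```python
import math

def time_ratio_db(far_ms, near_ms):
    """Propagation times are proportional to distance, so their ratio
    doubles as a range ratio; express it as dB of inverse-square loss."""
    return 20 * math.log10(far_ms / near_ms)

# ONAX A at 92 ms vs ONAX D at 23 ms: a 4:1 range ratio, ~12 dB
lr_mains_db = time_ratio_db(92, 23)
```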

FIGURE 15.5E SIM3 optimization data for Venue #5

15.6 Venue #6: Medium Hall, L/R with Centerfill This is another fan-shaped venue but the application is a rental with a simple high-power L/R system. In this case the design begins with “we have twelve boxes/side” and we go from there. The range ratio is fairly high (7:1, around 15 dB) but the rake is a simple diagonal so we can approach this with a series of sequential doublings. The horizontal coverage expands with distance and our 70° speakers are wide enough to fill the width (we need at least 60°). There is a gap in the middle that will require centerfill, but none on the sides because we are at the outer edge of the fan. The remaining parts of the system are frontfills and subwoofers.

15.6.1 Left/right mains (ABCD) The horizontal aim follows the middle/middle guidelines shown in Fig. 11.25. Coverage and aim can be verified by measuring across the width of the room at mid-depth. The vertical approach is to create a diagonal line of minimum variance, which is accomplished by progressive splay doubling. We are seeking to counter a 7:1 range ratio in twelve boxes so we have to start from a small angle (1°) and work up to 7° at the bottom. The upper composite elements have more overlap and more quantity, which maximizes our ability to cut the diagonal contour.

15.6.2 Centerfill (CF) The gap area that needs centerfill is a triangle whose bottom touches the frontfills and top meets L and R. The centerfill is needed most at its bottom (which is closest) and least at its top, where we will see it fade away under the mains. The centerfill cluster is also a line array (an AB composite) whose more important member is the smaller one (B). We don’t need or want this array to lay a surgical unity line from front to back. The sharp edges that we can make with tight splay angles are a disadvantage here. Help is on the way as we move back, so we can allow this array to gracefully retreat at the rear. This is why the splay ratio is less than 2:1 between the elements.

FIGURE 15.6A Design process for Venue #6: coverage partition strategy, solo L/R main (ABCD)

FIGURE 15.6B Design process for Venue #6: solo L/R downfill (D), solo centerfill, main + centerfill

15.6.3 Combined systems The low-frequency range rises as the composite elements of the L/R mains are added together. In this case I EQ'd the individual composites fairly flat and then let the combined response provide the desired spectral tilt of the “target curve.” The LF array for this application was a simple L/R pair of gradient-inverted stack arrays. They were in close proximity to the mains and timed to sync with them at the middle depth. The crossover was a second-order unity type because there were no worries about image separation.

15.6.4 SIM3 optimization data We are going to move right to the combined EQ because we have just analyzed a similar L/R main array in the previous venue. This one has an even greater range ratio challenge (101 ms vs. 18 ms, i.e. around 15 dB). The biggest difference in EQ between the five subsystems is found in the VHF region again to counter the air loss. There are also variations in the mid-high EQ to maintain a constant level in that range from front to back. The unequalized response (room) on Panel A1 shows how far apart the mid-high and VHF responses were to start with. The equalized response (result) shows how close together we got them after asymmetric EQ. Panel A2 compares top with bottom. The 15 dB range ratio is reduced to a 2 dB level variance with minimum spectral variance. Now on to the really glamorous part: frontfills (panels B1–3). The notable feature in panel B1 is the filter at 16 kHz that cuts down a perfectly innocent VHF response in the solo frontfill. This is done because our mission is to match (not exceed) GOLD. The frontfill has a tiny tweeter and only 2 meters of air to travel through. GOLD (and everywhere else) can't talk to the dogs so we have to penalize the frontfill and limit its range. If you don't think to do this here (and underbalconies) you will get to experience “bacon fills,” the sizzling sound of speakers playing in the VHF range that the mix engineer doesn't hear in the mains. The centerfill is calibrated in Fig. 15.6D. Panel C1 assesses the situation at crossover XL–CF. This should be the edge of the mains and hopefully a healthy area for the centerfill. The mains appear edgy, but not so much because of level or spectrum. There is energy there; it's just lumpy and low in coherence. Ripple variance is what tips the scale here. Next we prepare for combination by setting delay and examining the amplitude and phase. The amplitude gives us the flat CF vs. the tilted mains that we were looking for. Are they phase compatible?
Above 500 Hz, yes. Below, not so much. This will not be a big issue because we need only a minimal-level contribution from the

FIGURE 15.6C SIM3 optimization data for Venue #6

FIGURE 15.6D SIM3 optimization data for Venue #6

centerfill. Remember that we are looking for some fresh, direct sound, not a power boost. The combined response (main + CF) is shown in C3. There is modest addition above 1 kHz (desired) and we’ve smoothed out the ripple (desired) and only lost a dB or 2 at 500 Hz, which we can live with. The final panel verifies that the centerfill + mains @XL–CF are matched in level, spectrum and ripple variance to GOLD.

15.7 Venue #7: Medium Hall, L/R with Many Fills This hall looks deceptively simple but required a large number of coverage partitions. The architectural team originally insisted on hiding the speakers in soffit panels far offstage. In fact, the proposed positions were so far offstage and so deeply recessed that they would not have line of sight to many of the seats on their own side. Listeners at the back of the house would hear the opposite side louder than their own. The fight for a usable position was hugely important for this venue. A compromise position was granted: far offstage but in plain view with unbroken sight lines. The mains cover most of the room but needed help in the center, nearby sides and under the balcony. A centerfill and a gradient inverted-stack cardioid array are located above the proscenium. These are hidden behind scrim, which is no problem for their performance. The coverage partitions are shown in Fig. 15.7A (A1 and A2). The L/R mains cover the largest shares overall, but only a small portion of the front area (which is divided among the frontfill, centerfill and sidefill). We can visualize the mains' vertical coverage as an ABXCDE array configuration because it's split in the middle to minimize the balcony front reflection.

15.7.1 Left/right mains (ABCDE)

FIGURE 15.7A Design process for Venue #7: coverage partition strategy, solo L/R mains (ABCDE)

The L/R mains need to provide at least 60° of horizontal coverage (FAR 2.0), once we subtract the areas to be covered by the sidefills. The preferred speaker model is 90° for power budget reasons, so we have more than the minimum (which means it won't be 6 dB down at the outer edges). The extra coverage is distributed to the outer walls and across the centerline. The overage effects are largely favorable (reduced level variance vs. increased ripple variance) because the walls are fairly HF absorptive. We were given no restrictions on the vertical height so we looked to balance three factors: level variance (move it up), sonic image variance (move it down) and balcony front ripple variance (keep it in the middle). Recall that gapping the balcony requires a fairly straight shot to minimize the smiles and frowns effect (Fig. 11.24). The compromise middle placement provided reasonable solutions for all three. One of the initial design decisions was between coupled vs. uncoupled (i.e. upper/lower) mains. The return ratio without underbalcony speakers was high, almost 2:1 (23 m/13 m). This falls to 3 dB once we factor in the underbalcony speakers (18 m/13 m), which lessens the case for splitting. The decision was made to keep the mains coupled because we were able to place the array at the most favorable location for coupling (i.e. the middle of the array at the balcony level). The vertical shape is now defined in three parts: upper (8° @1.8 range ratio), gap (10°) and lower (32° @4:1 range ratio). The upper section is covered by a two-element composite (AB), whereas the lower section is covered in three elements (CDE). We had a budget of ten elements/side for the mains, which were apportioned as four upper and six lower. The lower section was given more elements even though it was a shorter throw because it had both a wider angular spread and a greater range ratio.
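The coupled-vs-uncoupled decision above rests on the return ratio; those numbers check out as follows (helper name is mine):

```python
import math

def return_ratio_db(rear_throw_m, ref_throw_m):
    """Return ratio (rear-seat throw vs. reference throw) expressed in dB."""
    return 20 * math.log10(rear_throw_m / ref_throw_m)

without_ub = return_ratio_db(23, 13)   # ~5 dB, i.e. "almost 2:1"
with_ub = return_ratio_db(18, 13)      # ~2.8 dB, quoted as 3 dB
```

Adding the underbalcony speakers cuts the effective rear throw from 23 m to 18 m, which is what weakens the case for splitting the mains into upper/lower systems.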

15.7.2 Centerfill (CF) The offstage locations for the mains made it a sure thing that we would need a centerfill, the question being how many (one or two). The gap area is a triangle that connects from the frontfill limit line to the off-axis edges of the L/R systems. This hall was used as the example for centerfill design so refer to Fig. 11.40 for more details. The only available position was above the proscenium and the chosen speaker was 80 H × 50 V aimed very steeply down. We didn't need quite all of the 80° of horizontal coverage but my preference is towards soft edges in these types of transitions. The reasons are (a) the coverage shapes come together like a twisted-up pretzel so it's safer to overlap a bit than leave potential for gaps, and (b) the level of the centerfill can be set low enough to fill the needed areas and then serve as an image aid as it fades out. The power scale for the centerfill was nearly comparable to the mains because its high hang position put it at a comparable range to them.

15.7.3 Sidefill (SF)

The need for sidefills was assessed during the design process using the lateral width evaluation. This hall was used as the example application (Figs 11.38 and 11.39) so we won’t repeat it here. The advantage in this instance is that the sidefill can cover the outer area to a limited depth and be finished before the wall turns inward. The mains could not cover this area without over-coverage as side walls closed in. The advantage also translates in the vertical plane because an outward turn of the mains would bring the entire horizontal coverage outward just to help this small outcropping of seats. This can be viewed through the triage perspective as a ripple variance tradeoff: Aiming the mains outward adds ripple variance for the whole room versus the uncoupled sidefill adding ripple variance in only its crossover range with the mains. That’s an easy choice.

15.7.4 Underbalcony delays (UB) The needs assessment for underbalcony speakers was evaluated as described in section 11.7.2. The clearance ratio is 25% (2.25 m/9 m) and the return ratio is 1.65 (23 m/14 m), yielding a composite value of 40%. Therefore underbalcony speakers were specified. The selected speaker was 80 H × 50 V. The vertical aim point was the last row (since the range ratio ≥2:1), which is -15°. This sets the coverage start at -40°, the off-axis edge of the vertical pattern (-25° from the vertical aim). The coverage start (the unity line) sets the horizontal spacing by the uncoupled line/point source spacing method (Figs 11.20 and 11.22). The 80° speaker (1.25 × lateral multiplier) has a vector range to the unity line of 3.5 m. This would yield a 4.5 m spacing on a straight line, but is slightly reduced by the 6° splay that follows the arc of the balcony front (net spacing is 4.2 m).
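The needs-assessment numbers above multiply out as follows. I am assuming the composite value is the product of the clearance and return ratios, which reproduces the quoted ~40% (the method itself is detailed in the book's section 11.7.2):

```python
clearance_ratio = 2.25 / 9.0   # underbalcony clearance vs. depth: 25%
return_ratio = 23 / 14         # rear throw vs. balcony-front throw: ~1.64

# Assumption: composite value = clearance ratio x return ratio,
# which matches the ~40% quoted in the text
composite = clearance_ratio * return_ratio   # ~0.41
```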

FIGURE 15.7B Design process for Venue #7: solo frontfill, solo underbalcony, solo centerfill

15.7.5 Low-frequency system (LF) What happens if we place subwoofers at left and right with the mains? The upside is coupling with the mains. We get low image variance (+) and horrendous level variance (-) if we place them on the deck. Fly them at the top of the mains and the level variance improves at a cost of image. The downside is ripple variance from the uncoupled L/R relationship, also known as “power alley.” A center location provides the opportunity for minimum level variance front to back and side to side. We used a five-element cardioid configuration (gradient inverted stack) centered over the proscenium. The array is aimed downward so it's centered on the shape in both planes. The result is minimum level and ripple variance in exchange for a high sonic image (a trade I will usually take). The worst-case sonic image and level variance would be found at the near seats on the sides (the exact same situation we find with the center channel of L/C/R systems). This could be remedied by “sidefill subs” placed on the deck if the budget and aesthetic concerns allow. This subwoofer subsystem (ba-daboom) must be operated with proper level scaling so it doesn't get blown up trying to keep up with the center LF system. Adult supervision is required.

15.7.6 Combined systems There are a lot of pieces to put together in this system. Each requires proper aim and level to correctly partition the coverage, and delay settings to minimize ripple variance around the crossovers. Combined EQ for the mains follows the pattern we have seen in previous applications. The underbalcony delays are timed and level set to combine with the mains to unity (GOLD). There is very little combined EQ here, other than possibly reducing the LF response of the delays. The sidefills have isolated dominance within their coverage (unlike the UB system) and therefore level-setting and delay-setting operations are separated. Level setting is based on the ONAX SF response whereas splay and timing refer to the crossover (XL–SF). The intent is to match GOLD in both the isolated ONAX SF and XL–SF locations. The centerfill (CF) bridges the gap between frontfills, L and R. It's a triangle with crossovers on all three sides and ONAX in the middle (XFF–CF, XL–CF and XR–CF respectively). Its level is set at ONAX CF (to match GOLD) and aim can be adjusted until the most uniform level distribution is found between these locations. Timing is the final step. Centerfill gets delayed to the mains at XL–CF (if it's not already too late) and then frontfills (at least the inner ones) are delayed to CF at XFF–CF.

15.7.7 SIM3 optimization data This is the “room of many fills” and we will get a look at all of them here. The mains come first in

Fig. 15.7E (A1–4). This is an ABCDE mains, so we have done several variations on this theme already. This time I will show the combined traces in different locations as a one-on-one series of comparisons with GOLD. This is how the work is done in the field. The multi-trace pileups are only done after we have sorted through them one-on-one like this. These plots are pretty self-explanatory but I will add the following notes. Notice that the room curve (the unequalized response) strongly differs for each but the result (post-EQ) hangs closely with GOLD in each position. The VHF extension is stronger at the bottom (E) but that is one of the hazards of battling a 4:1 range ratio on the floor with only six boxes (the other four are upstairs). ONAX C shows the most spectral tilt. Is this normal? Yes, for two reasons: (a) it’s under the balcony, (b) it’s the middle of the array. I would be concerned if it didn’t have the most tilt. Before we get to the fills we will verify the horizontal aim for the mains (Fig. 15.7E). Panel B compares the outermost seat (OFFAX L) and center of the hall (XLR) at mid-depth. This process has been a part of all the optimizations but here at last we can see one. The responses are matched in the highs and mids, indicating a proper aim. The low mids are stronger (and have more ripple) at the OFFAX edge near the physical wall. The XLR location will get its share of that when the other side is turned on. The underbalcony fill is seen in the next three panels (C1–3). We see the parties to be joined in (C1). The mains solo response is a typical underbalcony response: strong spectral tilt and compromised coherence (ripple variance).

FIGURE 15.7C Design process for Venue #7: solo sidefill, solo cardioid subwoofers

FIGURE 15.7D SIM3 optimization data for Venue #7

FIGURE 15.7E SIM3 optimization data for Venue #7

The underbalcony fill can bring flat, coherent sound to counter both effects. Panel C2 shows how we balance the spectrum above 400 Hz with negligible addition below. Note that this is done without strangling the delay, i.e. cutting its lows and low-mids a million dB. It’s all about level, aim, delay and placing the delay so its coverage starts at the right place. The final step is (you guessed it) verification that the combined response is a match for GOLD (panel C3).

FIGURE 15.7F SIM3 optimization data for Venue #7

Let’s move on to the sidefills (Panels D1–2). It’s the same process of course, and we’ve done it enough times now to start to abbreviate. Panel D1 shows the parties to be joined at crossover and D2 verifies that the combination matches GOLD. The centerfill is next (E1–2). Panel E1 compares the combined response at crossover with the solo centerfill. In E2 we again verify that the combination equals GOLD. The frontfills are handled in Fig. 15.7F. Panels F1–3 tell the story. The solo EQ (including a second-order HPF) is shown in F1. Also notable is the bacon prevention filter at 18 kHz. Panel F2 shows a three-way crossover combination: mains, centerfill and frontfill. We finish with GOLD verification in F3. The final set in the series is the tuning of the gradient cardioid subwoofer array. This was done up in the fly space above the proscenium, which is just as much fun as it sounds. The delay is set to match the front and back response, the rear driver polarity is reversed and then we get the heck out of there.
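The gradient cardioid trick just described can be sketched numerically: the rear-facing element is delayed by the element spacing divided by the speed of sound and polarity-inverted, which cancels behind the array and sums in front. This is an idealized two-element model with made-up spacing, not the actual five-element rig:

```python
import cmath
import math

C = 343.0  # speed of sound, m/s

def gradient_pair_level(freq_hz, spacing_m, behind):
    # Relative pressure magnitude of an idealized two-element gradient pair.
    # The rear element is polarity-inverted and delayed by spacing / c.
    k = 2.0 * math.pi * freq_hz / C      # wavenumber
    delay_phase = k * spacing_m          # phase of the electronic delay
    if behind:
        front = cmath.exp(-1j * k * spacing_m)        # extra acoustic path
        rear = -cmath.exp(-1j * delay_phase)          # inverted + delayed
    else:
        front = 1.0 + 0j
        rear = -cmath.exp(-1j * (k * spacing_m + delay_phase))
    return abs(front + rear)

# Behind the array the two arrivals align and the inversion cancels them.
null = gradient_pair_level(60.0, 1.0, behind=True)    # deep null
# In front they sum; at quarter-wavelength spacing the sum is 2x (+6 dB).
boost = gradient_pair_level(60.0, C / (4 * 60.0), behind=False)
```

The real array departs from this ideal (finite sources, proscenium loading, the gap issue described in Venue #11), but the polarity-plus-delay mechanism is the same.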

15.8 Venue #8: Medium Hall, L/R, Variable Acoustics This is a multi-purpose hall designed for everything from symphonic to rock and roll. Multi-purpose is a term that has gained a justifiable reputation for making everybody miserable, but this hall proves that we need to update the file. The physical acoustics are extremely dead: RT60 of 700 ms in a hall that holds 1760 people. Nonetheless the reverberation can be extended to achieve the response needed for unamplified symphonic and beyond (RT60 > 3 seconds). The physical acoustics were designed from the start to include Meyer Sound’s Constellation system to provide the reverberation enhancement (variable acoustic systems were described in section 6.3.3). The result is an optimized environment for purely amplified sound (rock, EDM), reinforced sound (theater, jazz), as well as Bach. Engineers can add in the appropriate amount of reverberation character for the application. The huge advantage here is that we can get the upside of room reverberation (spatial envelopment) without the tonal coloration and coherence loss that comes from hard walls near our speakers. The SIM3 plots of the optimization show very high coherence compared with other halls at comparable depths. The tonal coloration from the room was very minor, which left us with minimal EQ requirements.

15.8.1 Left/right mains (ABCD) The optimal horizontal aim follows the middle–middle strategy (again). The room is a trapezoid: a fan in front and a rectangle in the rear. The horizontal width is calculated from the middle depth. We need at least 60° of coverage. The chosen main system is 90° so we will have some overage in the rear. The trapezoidal shape minimizes the need for sidefills, because the hall widens with depth along with the speaker coverage. We can also breathe easy knowing that the walls are highly absorptive, thereby reducing our concerns about overcoverage.

FIGURE 15.8A Design process for Venue #8: coverage partition strategy, solo L/R mains (ABCDE)

15.8.2 Centerfill (CF) We have done several centerfills already and this one follows a similar pattern. The challenge here was that we had very little margin for error in the mains–CF connection. The arrays ended up lower than originally planned, which shrinks the lateral width on the sides, and increases the central gap area to be covered by the centerfill. The centerfill is hidden in the proscenium at a fixed height, so we can’t raise it up to expand its width (we can only tilt it up or down). The connection was made successfully here but I am passing this on as a cautionary tale. It’s safer to plan for some overage in these areas and use speakers with relatively soft edges for centerfill and sidefill. We want to leave the mains with the flexibility to move up or down to get their best shot without worrying about opening up a gap on the floor.

15.8.3 Low frequency (LF) Once again we meet a center subwoofer array: a gradient inverted stack in the proscenium. The vertical aim is downward because (a) that’s where the room is and (b) we can.

15.8.4 Combined systems The main array combined EQ sequence followed the process of previous examples. The only fill

systems here were small and uncoupled from the mains and therefore required minimal combined EQ effort.

15.8.5 SIM3 optimization data In many ways this room is similar to the last venue (#7). Its macro shape is fairly similar but its details let us keep the fills to a minimum. Frontfills, centerfill and done. The mains are the same speaker model, nearly the same quantity and we will again gap the array around the balcony. There is one very notable difference between the rooms: this one has far less (or far more) reverberation than the other. This room was built with the optimal physical acoustical properties to utilize Meyer Sound’s Constellation variable acoustic system, i.e. very dead. The contrast between traces taken at comparable distances with matched speakers in the two halls is striking. Figure 15.8C shows the fully combined main system (ABCDE) from top to bottom in six steps. Notice that the coherence between 125 Hz and 2 kHz hugs the top of the screen for the entire run from top to bottom. Compare this with Fig. 15.7E, which is a very typical room in acoustical terms.

FIGURE 15.8B Design process for Venue #8: solo centerfill, solo cardioid subwoofers

FIGURE 15.8C SIM3 optimization data for Venue #8

FIGURE 15.8D SIM3 optimization data for Venue #8

Needless to say, this made for a very easy tuning and very consistent response, as can be seen in the traces. Constellation can then add the appropriate reverberation as needed for the program material (even unamplified symphonic). Let’s get back to the tuning. The B1 panel of Fig. 15.8C shows the horizontal aiming process again (OFFAX L vs. XLR). Panel B2 shows the process of coverage verification, i.e. the answer to the question: Is the speaker wide enough? This is done by comparing ONAX L with OFFAX L and XLR. We know our speaker is too narrow if OFFAX and XLR fall behind ONAX by more than 6 dB. It’s too late, of course, because the speakers are already hung, but isn’t it good to know for the next time?

The remaining traces for this venue are for the centerfill (Fig. 15.8D). The needs assessment (C1) shows only minimal help is necessary at crossover XMn–CF. Panel C2 shows the solo EQ, which includes a mild HPF and some VHF range matching. Panel C3 verifies that the equalized response at the crossover matches GOLD. Next we prepare for combination by setting delay, level and checking for phase compatibility. The phase compatibility answer is yes, no, maybe. This is not what we want but we have to look at the situation and be realistic. The downlobe from the mains is the product of a line of speakers at different distances from here. The centerfill is a single element facing right here. Expecting perfect phase compatibility is not realistic (and wouldn’t hold for more than one seat anyway if we did have it). We combine them in panel C5, where they are compared with the downlobe alone. The final verification is found in panel C6, where the combined response matches GOLD.

15.9 Venue #9: Large Hall, Upper/Lower Mains This rental system was loaded into a historic hall. As with most historic halls we are facing highly reflective surfaces with limited options for speaker placement, particularly in the underbalcony area. The hall shape is more like two halls stacked on top of each other: wide, deep, tall and steep (above) and wide, deep, low and flat (below). A tall room on top of a ballroom. This hall is the poster child for upper/lower mains and is used for the example in Fig. 11.30D.

15.9.1 Upper mains (ABCD) The horizontal coverage was determined by the FAR method and aimed by the standard middle depth method. The vertical shape is a perfect diagonal line, which can be covered with sequential doublings of the composite elements (4 × 1°, 4 × 2°, etc.). In this case we were given a quantity of sixteen elements/side for the upper system. With this quantity we could have started the doublings at 0.5° splays but I chose not to attempt such precise laser targeting due to the smiles and frowns effect (Fig. 11.24) that will curl the pattern up into the plaster at the top. Our upper mains were firing upward into the rake, which means the longest paths (rear center and rear opposite side) would have a beam climbing up the very lively plaster back wall and ceiling. We can’t make the smile go away without raising the cluster. Sometimes it’s better to resist the urge to use a super-precision laser beam. This is a perfect case since our upper speakers are throwing curveballs up there. It’s better to open up our splay and, with it, our margin for error. Here we started with two sets of 4 × 1° composites, which created a gentler nose along the rear. The key consideration here is that smiles and frowns must be seriously considered before setting extremely narrow splay angles.
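The sequential-doubling idea above is easy to tabulate. A sketch of the scheme (the four-composite plan below is illustrative; the actual splay schedule for this rig isn’t listed in the text):

```python
def doubling_splay_plan(base_splay_deg, per_composite, n_composites):
    # Each composite keeps one splay angle; each successive composite
    # doubles it (e.g. 4 x 1, 4 x 2, 4 x 4, 4 x 8 degrees).
    plan, splay = [], base_splay_deg
    for _ in range(n_composites):
        plan.append((per_composite, splay))
        splay *= 2.0
    total_deg = sum(n * s for n, s in plan)
    return plan, total_deg

# Sixteen elements per side as four composites of four, starting at 1 degree:
plan, total = doubling_splay_plan(1.0, 4, 4)  # total vertical splay: 60 degrees
```

Starting at 0.5° instead (as the element count would have allowed) halves every entry and the total, which shows why the narrower start buys precision at the cost of margin for error.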

FIGURE 15.9A Design process for Venue #9: coverage partition strategy for upper and lower levels

The frown side of the equation also comes into play in this case. The array aims down at the front of the balcony, which is curved. If the mains were a center cluster, the curve would follow the frown and the effects would be minimal. The mains, however, are off to the side (closest to the balcony there), and farthest from the balcony at center. A laser line of coverage at the bottom of the array will fall lower at center than at the side, so we can see that a single angular solution cannot work for all. There will be some leakage to the floor when the coverage falls short but the center balcony seats still get covered (by the next box above). By contrast, the seats on the near side can have the coverage fly over them without anybody to step in. We added an upper sidefill speaker to cover this area, which reduced the vertical (and horizontal) load for the upper mains.

15.9.2 Lower mains (ABC) The room is slightly shallower on the lower level, which means we could optimize the horizontal aim by turning it inward a few degrees. I did the calculation and determined it was not enough to make it worthwhile, especially since we had a substantial centerfill cluster. This can be a worthwhile consideration in some venues, especially when the depths of upper and lower systems vary greatly (such as Venue #3 in this chapter). The vertical coverage has an extreme challenge in terms of an unbeatable range ratio dilemma. A low location with a flat top aim is the most advantageous for underbalcony penetration and separation from the uppers. A flat shot at our VTOP allows us to narrow the splay without fear of smiles or frowns.

The cost is level variance (too loud in front). Raising the mains upward causes underbalcony seats to lose sight of the mains and rely solely on fill speakers as their mains. The historic building aspect now comes into the picture because there are only limited locations for underbalcony speakers, leaving spotty coverage at best. Did I mention that the mix position was deep under the balcony? This is a case where we have to go to TANSTAAFL and triage to aid the decision process. There are a lot more people affected by the underbalcony limitations than the hot spot area near the front. The low position wins and the underbalcony speakers will only be used as needed. A secondary aspect is that the lower system comprised elements whose individual responses were narrower (vertically) and higher powered than the upper system’s. This allowed us to penetrate better underneath with fewer boxes and still remain properly power scaled.

15.9.3 Combined systems There is surprisingly little to say about the biggest combination (upper + lower mains) because (a) the hall so thoroughly separates them and (b) both systems had enough control to render the combination effects barely noticeable. The crossover between the systems was at one of my favorite positions (the balcony front), which allows us to hide the largest mold mark in the system design. The frontfill system was set to cover the first three rows. Its timing was set to sync to the strongest local system at its limit depth: the lower L/R mains on the sides and the centerfill system in the middle. The centerfill system was intended to provide unity level (match the GOLD response) in its most isolated area (the center of the triangular gap) as well as the crossover areas to the L/R mains. The timing is set to sync to the mains where the inner edge of L and R meets the outer edge of CF (the same as previously shown in Venue #7) and CF level is set to raise the combined CF + L (or R) level to match GOLD.

FIGURE 15.9B Design process for Venue #9: solo upper (ABCD) and lower (ABC) L/R mains, upper + lower

15.9.4 SIM3 optimization data The mains are a variation on the familiar themes. Actually two variations (upper and lower). We focus here on the solo EQ process for the A, B, C and D sections of the upper array. These are found in Fig. 15.9C where the room, inverse EQ and result are shown for each. I show the EQ inverted because that is the way I have done solo EQ since 1984. Viewing the EQ inverted makes it easier to complement the unequalized response (i.e. matching the shape rather than mirroring it). Panel B1 of Fig. 15.9D shows the same four solo EQ responses overlaid. The matched results are clear in B1, as are the unmatched EQs in B2. I included the phase response in B2 to allay the fears of the phase police. No phase demons were released by this equalization. The most probable cause of phase problems during equalization is wrong amplitude settings (most likely narrow, deep filters). Seen any here? The last panels show the combined response. Panel C1 shows what was done when A was added to B. The low mids came up and we brought them down with filters in both A and B. The complete combined EQ family is shown in C2, which shows a level variance of 2 dB over a range ratio of 9 dB (144 ms to 56 ms). We move on to the lower mains in Fig. 15.9E. We must maintain a link with the GOLD standard set earlier for the upper system (upper ONAX B). We will also use ONAX B on the lower system for consistency and because ONAX A is so deep under the balcony. Panel D1 shows a comparison of the upper and lower systems, and verifies our link. This is very important in this case because the mix

position is deep under the balcony and they will have no idea what is happening upstairs. This trace shows they need not worry. We will stick with this same reference point (lower ONAX B) and check out this array from back to front. Panel D2 compares this to the response near the back at ONAX A. We had lost 1 dB back there but our coherence is still holding up extremely well. Our aiming strategy seems to be providing good penetration under the balcony. We move forward to ONAX C (Panel D3), where we have gained 2 dB of level. Our level variance is only 3 dB so far even though we are spanning 10 dB of range ratio. VBOT (panel D4) is found in the fourth row (17 ms). It has lost the 2 dB we had gained earlier and returns to unity level with ONAX B and GOLD. We have overcome over 15 dB of range ratio but there is a price to be paid for steering over the heads of these near seats. Look at the HF coherence for TANSTAAFL in action. We are at the fourth row though, so help is nearby in the form of the frontfills.
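Range ratio figures like these come straight from the far/near ratio expressed in dB, with propagation time serving as a stand-in for throw distance (inverse-square behavior for a point source). A quick sketch with hypothetical spans, not this venue’s measurements:

```python
import math

def range_ratio_db(far, near):
    # Range ratio in dB between the farthest and nearest coverage edges.
    # far/near can be metres or, equivalently, propagation milliseconds.
    return 20.0 * math.log10(far / near)

# Hypothetical: back of hall at 40 m, front edge of coverage at 10 m.
ratio_db = range_ratio_db(40.0, 10.0)  # 4:1 ratio, ~12 dB
```

The level variance the array actually delivers over that span (2–3 dB in the traces above) is the measure of how well the asymmetric splay schedule compensated for the geometry.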

FIGURE 15.9C SIM3 optimization data for Venue #9

FIGURE 15.9D SIM3 optimization data for Venue #9

The LF array for this application was a simple L/R pair of gradient inverted-stack arrays. They were in close proximity to the lower mains, so they were timed to be in sync with them. The crossover was a second-order unity type because there were no worries about image separation.

FIGURE 15.9E SIM3 optimization data for Venue #9

15.10 Venue #10: Wide Fan Shape with Multiple Mains Fan-shaped rooms are very popular venues for houses of worship. The most pressing sound design questions are often about the video: the locations left for us to fly speakers and the amount of money left in the budget. This leaves us to figure out how many mains are needed to fill the shape. Lower clusters, wider fans and extended stages lead us to more mains, while higher trims, narrow fans and shallow stages bring the number down. We get bonus points if we can bring the number down to two because the client feels like they have stereo, which is highly valued. Once we go beyond two clusters we need to be prepared to discuss why they can’t have stereo (the real answer is because they wanted the big fan and the huge stage and giant video, but I will leave the diplomacy on that up to you). This is followed by discussions of why we don’t just try cross-matrix delays and DSP magic and every kind of trick to make stereo out of three or four or more clusters. The correct answer is TANSTAAFL: because it should be more important for people to understand the spoken word than to be entertained with panning effects of musical instruments. OK. Time for me to stop preaching. The main cluster quantity can be found by using the lateral aspect ratio to find the coverage width required at the start of coverage, e.g. at the end of the frontfills. This application example shows two ways to solve the same shape: simple and complex. The simple solution is three clusters of mono. The complex version is L/R mains with centerfill and sidefills. Either will work; the choice depends entirely on the extent of the customer’s attachment to the L/R configuration. Both system options include the same frontfills and underbalcony fills.

15.10.1 Left/center/right mains option The height of the mains is set by the video projection and sightlines from the last row. This puts the bottom of our clusters at nearly the same height as the uppermost seat in the balcony. Needless to say this would not be our first choice. The height is 11 m, and the speaker is 90° (horizontal). Therefore the lateral width on the ground is 15 m (11 m × 1.4 ≈ 15 m).

FIGURE 15.10A Design process for Venue #10: Coverage partition strategy (L/C/R), solo mains (ABCD)

FIGURE 15.10B Design process for Venue #10: solo L/R mains (ABCD), main + centerfill + sidefill

We would have to walk 45 meters to go from end to end at the third row, our start of coverage. This is the coverage target’s lateral width, the minimum needed to make the connection. We need three clusters (45 m/15 m = 3) to cover the width. This venue was used as an example for multiple mains in Fig. 11.27 (A1–3) so you can review the process there.
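The cluster-count arithmetic above can be wrapped into a short sketch. The 1.4 lateral-width factor for a 90° element is taken from the text; the 11 m height and 45 m target width are the venue’s stated values:

```python
import math

def clusters_needed(throw_m, target_width_m, lateral_factor=1.4):
    # Lateral coverage width per cluster ~= throw distance x lateral factor
    # (about 1.4 for a 90 degree element, per the text).
    width_per_cluster = throw_m * lateral_factor
    return math.ceil(target_width_m / width_per_cluster), width_per_cluster

# 11 m trim height (the throw to the start of coverage), 45 m lateral target:
count, width = clusters_needed(11.0, 45.0)  # 3 clusters of ~15 m each
```

The ceiling rounding matters: 2.9 clusters is 3 clusters, and any fractional remainder either becomes overlap between neighbors or gets shaved off the outer edges.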

The vertical coverage has one notable feature. Notice that very little effort is made here to gap the balcony front. It’s a sad situation (i.e. it’s about the frown). The cluster is aimed steeply down, so there is no opportunity to precisely place the gap in the balcony front. The gap will slip below the balcony front at all but the centerline of the speaker’s horizontal aim.

15.10.2 Left/right mains option with sidefill and centerfill The L/R option is more complicated so we will break it down. First we have to look at the hall from an L/R perspective, which starts with re-aiming the mains. The wide spacing makes it a certainty that we can’t close the gap at center without a fill. The fan is too wide to cover inside and out so we can shave off the outer edges and rectangulate the macro shape. Notice that the hall depth gets shorter at the outer edges, which allows us to slightly reduce the throw for the sidefill system. It still takes a two-element coupled point source to reach our required depth.

15.11 Venue #11: Concert Hall, Three Levels This application features two approaches to the same hall (shape). There is a twist. Two highly reverberant halls were built from the same plans and renovated their systems with new mains at the same time. The leftovers from the existing systems differed, which opened the option to approach the design and optimization differently for each. Two plans were developed: one that gave the mains custody of all three floors (option 3F) and one that delegated the third floor to the overbalcony speakers alone (option 2F). The vertical splay angles within the array and the composite element subdivision were the primary differences between the two designs. The optimization proved that both approaches could provide minimum variance with only minor differences in power capability, ripple variance and sonic image. Option 2F had slightly more power capability (tighter splay angles), less ripple (less reach into the reflective upper level) and more image distortion (the sonic image was in the ceiling for the third-floor listeners). The coverage partition strategy for option 2F is shown in Fig. 15.11A (A1–A2). Notice how the two overbalcony delay systems (four and two elements respectively) provide exclusive coverage for the third floor. Option 3F is shown in 15.11C (A1–A2), where the A section of the mains covers the first half of the balcony by itself and then joins the delays in the rear. All other subsystems are functionally equivalent. The room is complicated and merits a brief note. The side seating rises up gradually in a “stairway to heaven” fashion while the central area has a low rake and jumps up vertically at the balcony. This gives us an impossible target shape for elements that are vertical lasers spread over 90° of horizontal. Two lines of vertical coverage are shown in different colors on the figures to denote the central and outer seating levels. 
The complete systems include cardioid subwoofers, sidefills, a centerfill array and frontfills, all of which change depending upon the stage apron settings. We have already done many of these subsystems so we will focus on the unique aspect here: the same hall with two different main system strategies.

15.11.1 Option 2F mains (ABCDE) The top element in the L/R main was aimed at 0°, the best position for precise horizontal coverage lines with no smiles or frowns. This allowed us to avoid the balcony front just above this line (one of the main reasons to implement option 2F). The range ratio is 2.6 (8 dB) and we were able to slightly exceed the minimum splay ratio with angles ranging from 2–7° (splay ratio of 3.5). The choice of eleven elements was budget-driven but was totally capable of spanning the coverage angle we needed over the given range ratio.
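The quoted numbers check out with a couple of lines of arithmetic, under the guideline that the splay ratio should at least match the range ratio for an asymmetric array:

```python
import math

range_ratio = 2.6          # farthest/nearest throw distance, from the text
splay_ratio = 7.0 / 2.0    # widest/narrowest splay angle (7 deg / 2 deg)

# 20 * log10(2.6) ~= 8.3 dB, matching the quoted "2.6 (8 dB)"
range_db = 20.0 * math.log10(range_ratio)
```

With a splay ratio of 3.5 against a range ratio of 2.6, the array spreads its level at least as asymmetrically as the room demands, which is what “slightly exceed the minimum” means here.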

FIGURE 15.11A Design process for Venue #11: coverage partition strategy (option 2F), solo main (ABCDE)

Horizontal aim was handled by the same method as in previous examples and the coverage needs were more than met with the 90° elements. The vertical progression is quite similar to many of the previous examples with the notable exception of the B section of the composite elements. This is a three-element composite with internal splay angles of 2° and 3°. Yes, I have broken my own guidelines. My preferred splay at this location was 2.5°, an option that was not physically available, leaving 2° or 3° as the options. I bent the rule (and the composite) because I wanted to center the element where I could place a mic. The 2° option compressed the coverage upward too much and the 3° option put ONAX B in the air below the balcony. The resulting compromise averages out to the 2.5° I wanted and got the mic where I could measure it without a ladder.

15.11.2 Option 2F delay 1 (ABBA) Yes, it’s an ABBA array but this is not Mamma Mia. It is fairly standard to break such an array into outers and inners for delay setting but in this case we had different vertical aims, depths and levels. The balcony is much deeper in the center section than the sides (eleven vs. five rows). The inners pass the fourth row and need to keep moving up to connect to the second wave at the tenth or eleventh row. The outers pass the fourth row and run smack into a solid wood reflector panel, thank you. ABBA: different aim, different level, EQ and delay. The spacing for these speakers was already set and not adjustable. We could, and did, adjust the splay

angles to find the optimal horizontal coverage and crossovers. I don’t have an exact number for you but it was found soon after I had said “too close,” “too far” and “split the difference.” The vertical aim required on-site measurement as well because we needed to ensure that the balcony front was fully covered (because we were not aiming the mains here). The first-wave delays needed to make it from the front row to the tenth or eleventh row where we wanted to hand over custody to the second-wave delays. This one’s not a delay dilemma. It’s a splay dilemma and we have to look at TANSTAAFL for guidance. The front row takes priority because there is virtually nothing from the mains for us there. At the upper end we can aim the second-wave delays down a bit and hopefully still make it close at the top. The top rows are less likely to be populated so they lose priority. As it turns out we made the connection without triage (but just barely).

15.11.3 Option 2F delay 2 (AA) The last rows are aided by an uncoupled pair of third-order line source elements (vertical). These are extremely narrow in the HF range and therefore only able to cover the last three rows. In the ideal world the second-wave delays would be riding on top of distant on-axis energy from the previous systems. In this case the mains are not even on the same floor and the first wave had to aim down enough to cover the front of the balcony. The role of these delay speakers was much more than direct/reverberant ratio helpers. They function here more like a distributed mini-main system, more relay than delay.

15.11.4 Option 2F combined systems The composite elements of the main arrays are added together and their combined EQ brings them to match the GOLD standard. The mains-to-delays crossover is on the balcony front (XA–D1). The delay timing is set in this gap area, above which the delay speakers move into level dominance. The level for the D1 area is set to match the GOLD standard as a soloist, because we are not counting on help from below. Therefore the combined response here was not much different than the solo response. There is certain to be low-frequency information coming from below but extensive LF reduction of the relay-delays increases their perceived disconnection from the rest of the system (not to mention that it does little to level the spectral tilt anyway). It is extremely rare for me personally to strangle speakers, which I define as high-passing above 200 Hz, but to each their own. We have now merged the main and first-wave delays so it’s on to the second wave. Many second-wave delay applications require an investigation as to whether to delay to the mains or first wave. This one was easy: a delay-to-delay connection at XD1–D2. Level setting was based on the combined response matching the GOLD standard.

We learned something interesting during the calibration of the gradient inverted-stack subwoofer array. The subs were rolled out on carts and put in place at the proscenium edge. We measured behind them to get the timing and saw strange readings. The rearward level of the front pair could not keep up with rearward energy from the single backwards box. The reason was that the proscenium was closing off too much of the rearward energy. Once we opened a 0.3 m gap between the speakers and the wall, we were fine. The lighting designer, not so much.

15.11.5 Option 3F mains (ABCDE) The top two elements in the L/R mains were aimed upward into the front of the third floor. It is unusual (for me) to create the composite module A with fewer elements than B or C. This case calls for it because we need to cover the front of the third floor and then gap the balcony before resuming the coverage area we met previously with option 2F. A coverage gap should always be at a composite transition (in this case A to B). From here we need to cover the identical shape as previously, but with nine elements instead of eleven. This is done by proportionally expanding all the angles by 20% (as close as possible using the actual available splays). This process was shown in Fig. 11.11 where a consistent 80° shape was created with various element quantities.
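The proportional expansion step can be sketched as a simple scaling pass. The splay values below are hypothetical (the text doesn’t list this array’s schedule), and the text’s “20%” is the rounded form of the 11/9 element-count ratio; real rigging frames then snap each result to the nearest available angle:

```python
def expand_splays(splays_deg, factor):
    # Scale every inter-element splay by the same factor so fewer
    # elements span the same total coverage shape.
    return [round(s * factor, 1) for s in splays_deg]

factor = 11.0 / 9.0  # eleven elements' coverage from nine: roughly 20 percent
expanded = expand_splays([2.0, 3.0, 5.0], factor)  # hypothetical splay set
```

The total angular span scales by the same factor, which is the whole point: the shape stays the same while each element works a little wider.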

15.11.6 Option 3F delay 1 (OBBO) This time it’s an OBBO array, which seems even stranger than ABBA. This is the official array designation for a four-element uncoupled line source where the outer elements got turned off and moved into “special event” storage. Option 3F made the outers redundant and just a source of ripple variance. The mains coverage there was a perfect match for GOLD, so thanks and goodbye.

FIGURE 15.11B Design process for Venue #11: solo o-balc delays, main + o-balc (option 2F)

This venue did not have the second-wave delays so we must make it all the way to the back with the two inner units of delay one (and only). This was doable because we did not need to cover the early balcony seating, instead being able to merge with the mains halfway up like a traditional delay. The delays could be aimed at the deepest spot and allow their underside to meet the mains topside and gently extend the coverage and minimize the image distortion. The crossover (XA–D1) is located higher (and lower) than those we used in option 2F. In this case the level setting provided restoration to GOLD as a combination rather than as a soloist.

FIGURE 15.11C Design process for Venue #11: coverage partition strategy (option 3F), solo main (ABCDE)

15.11.7 SIM3 optimization data Yes it’s time to tune yet another ABCDE main array. Is there another way to show the process without being insanely redundant? I’ll try. Panels A1 and A2 of Fig. 15.11E contrast the solo and combined EQs over the span of ONAX A, B, C and D. This is a reverberant hall so I knew I could count on the room for plenty of spectral tilt. Notice that all of the solo responses are brought to nearly flat (panel A1) and yet the combined responses create a “target curve” shape with around 10 dB of extra low end (panel A2). This was done by only partially compensating the combined system coupling. This is evident by contrasting the solo EQs (A1) with the combined EQs, which differ only in the very gentle reduction in the low end added during combination. Panels C1 and C2 break some new ground. These are traces from the game called “hotter or colder.” This is how we find ONAX in a composite element. Place the mic, take a trace of the solo element and then move to the next row. Hotter or colder? The center of the composite (the real ONAX) should have the strongest response in the VHF range (because this is where the phase offsets are smallest). We haven’t done a frontfill in a while (Fig. 15.11F). Panel D1 shows that we need this for more than imaging. The coherence above 4 kHz is terrible compared with GOLD. Actually it is just plain terrible. We are deep in the underbelly of the array in a reverberant hall so this is no surprise. We can make two conclusions from panel D2: (a) we definitely can help (FF has flat response and high coherence) and (b) phase compatibility is a very relative thing (multiple arrivals from the array and

reflection make the phase response very difficult to read (and hear)). We finish the job in panel D3 where we can see a closer matching to GOLD and improved coherence.

FIGURE 15.11D Design process for Venue #11: solo o-balc delays, main + o-balc (option 3F)

FIGURE 15.11E SIM3 optimization data for Venue #11

FIGURE 15.11F SIM3 optimization data for Venue #11

The final addition is the overbalcony system (E1–E3). The needs assessment (E1) shows the spectrum tilting, level falling and coherence dropping off. We prepare for combination in E2. Our overbalcony speaker can deliver a flat response to counter the spectral tilt and has an advantage in coherence due to its short(er) throw. It’s not exactly a short throw at 45 ms, but that’s much closer to the mains at 120 ms. The phase is easy to read and they are clearly compatible (probability is always better when the speakers are both aimed in your direction). The final step is verification (E3), which shows our combined response matches GOLD.

15.12 Venue #12: Arena Scoreboard This application calls for 360° of coverage from a central scoreboard. The room’s macro horizontal shape has a slightly rectangular 5:4 aspect ratio so the system will need only minor asymmetry to adapt to that shape. The intent is to have all sound (playback and announce only) come from this central point with no delays or fills. These applications often have limitations of various sorts that impact our available options. This one had a very compressed vertical space requirement, which eliminated the obvious choice of a modern line array. We need power, so we’ll have to spread horizontally. The solution is an asymmetric coupled point source in both the horizontal and vertical planes. The basic plan is three levels in the vertical plane (ABC) with decreasing power scale from top to bottom. The coverage partition plan is shown in Fig. 15.12A (A1, A2). The upper system covers the huge majority of the seating. The middle system extends the range to the floor and the lower system is only used when the central area of the floor is seated.

15.12.1 Horizontal plane (N/E/S/W, ABC) The upper array (A) is a circle in a square (or nearly a square). The array is comprised of a ring of 18 × 20° elements. There is 2 dB of level taper to rectangulate the response but otherwise it is simply a 360° continuous ring of sound. It helps that the room’s corners are rounded as these are otherwise the most distant and hardest seats to reach. The middle-level array (B) is 6 × 80° with slightly asymmetric splays to create the rectangulation. The bottom level covers the court and is a simple pair of speakers aimed to make a circle on the flat floor.

FIGURE 15.12A Design process for Venue #12: coverage partition strategy, E/W main A solo

15.12.2 East/west mains (E/W, ABC) East and west are the longer sides. The overall coverage target is 87° with a 3:1 (10 dB) range ratio. This is oversimplified because this shape is clearly double-sloped (section More than 90% of the listeners are in the upper slope (the raised seating area) while the remainder is found on the flat floor. The raised area is a simple target: 31° with a 1.6 range ratio (4 dB). This could be handled with a single element if we only needed to cover the raised seating, but the application calls for full floor coverage as well. We know that the upper element can cover this because it has 60° of vertical range (we need 48° (1.6 × 31°)). This 60° vertical coverage is more than we need but recall how the height limitations forced us to go horizontal. This speaker was chosen primarily for the horizontal power it could pack into the circle. The ±6° of extra coverage are a minor concern (and better than not enough). The vertical aim is found by the compensated unity aim method (Fig. 11.9). We have less than a 2:1 range ratio so we won’t be aiming at VTOP (-3°). The vertical center of the shape is 15° below this at -18°. The aim moves 6° down from VTOP to -9°. The best way to turn the corner on the double slope is to have the B element aimed at the transition point (-34°). This allows it to fade quickly above (moving off axis and away) and hold out for a long time on the floor (getting closer as it moves off axis). How can we make a circular pattern on the floor? We could face a 90° × 90° speaker straight down. An alternative is a pair of 45° by 90° speakers splayed at 45° and facing down. We almost have that (2 × 40° × 90°) to make an 80° × 90° spotlight on the floor.
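The compensated unity aim numbers quoted above can be reproduced with a small interpolation. The linear weighting between the shape's vertical center (1:1 range ratio) and VTOP (2:1 or more) is an assumption on my part, not the book's formula, but it lands exactly on the -3°/-18°/-9° figures in the text.

```python
def compensated_aim(vtop_deg, vcenter_deg, range_ratio):
    """Vertical aim interpolated between the shape's middle (range
    ratio 1:1) and VTOP (2:1 or more). Linear weighting is an
    assumption; it reproduces the numbers quoted for this venue."""
    w = min(max(range_ratio - 1.0, 0.0), 1.0)  # 0 at 1:1, 1 at 2:1
    return vcenter_deg + w * (vtop_deg - vcenter_deg)

# VTOP at -3°, vertical center at -18°, range ratio 1.6:1
print(round(compensated_aim(-3.0, -18.0, 1.6), 1))  # -> -9.0
```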

15.12.3 North/south mains (N/S, ABC) The short sides are only marginally shorter and require only minor adjustments to bring us to the same place. Both the A and B sections are aimed lower to achieve the same basic shape. The revised angles are seen in the figures.

FIGURE 15.12B Design process for Venue #12: E/W main B solo, A + B, A + B + C, N/S main A + B + C

15.12.4 Combined systems The most dramatic combination effects will be the summation of a circle of eighteen low-frequency devices. The effect will be largely symmetrical (because there is only 2 dB of level taper), so the changes will be uniform over the space and therefore highly treatable. The vertical combined response makes a clearly defined double slope that follows the raised and floor seating. Delay for the B section can be set at crossover XAB and the same follows for the C section at XBC.

15.13 Venue #13: Arena, L/R Mains We return to the same arena to do a concert. The system will be L/R mains, coupled sidefills, LF and frontfills. Our focus here is on the mains and coupled sidefill. The stage is placed at one end of the long side. The mains cover the central long shot and the sidefills handle the leftovers.

15.13.1 L/R mains (ABCD) The horizontal aim was calculated by the “we always do it this way” method, i.e. straight ahead. Yes it’s true that we would have higher uniformity if we splayed them outward, but I never said that every design decision in this chapter was mine to make. We move on to the vertical where we see the classic double slope of an arena floor. People need to see the game and athletes need a level playing field. The raised seating area is a coverage target of 15° at 1.4:1 range ratio (3 dB). It’s far, but it’s easy. The floor is 30° at 3.5:1 range ratio (11 dB). It’s far and close and definitely not easy. Overall we are facing around a 5:1 ratio that is reflected in our splay ratio spanning 9° to 2° (4.5×). Half of our sixteen boxes go to the upper shape and half to the lower. The upper set runs from 2° to 3°, a splay ratio of 1.5 to counter the range ratio of 1.4. The lower set expands the splays and finishes at the maximum allowable angle for the system.
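A splay schedule that counters a range ratio can be roughed out as a progression from narrow (far) to wide (near). The geometric progression below is purely illustrative; a real array snaps each value to the frame's available angles, and only the 2° to 9° span and the 1.5 upper splay ratio come from the text.

```python
def splay_schedule(n_splays, s_top, s_bottom):
    """Geometric progression of inter-box splays from the narrow top
    (far throw) to the wide bottom (near throw). Illustrative only;
    real arrays snap each value to the available frame angles."""
    r = (s_bottom / s_top) ** (1 / (n_splays - 1))
    return [s_top * r ** i for i in range(n_splays)]

# Upper half: 2° to 3° (splay ratio 1.5); lower half expands toward 9°
upper = splay_schedule(7, 2.0, 3.0)
lower = splay_schedule(7, 3.0, 9.0)
print([round(s, 1) for s in upper + lower])
```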

15.13.2 Coupled sidefills (ABCD) We will use the compensated unity splay calculation to find the horizontal aim for the coupled sidefill. The on-axis throw of the 90° horizontal mains is 70 m. The off-axis edge of the mains (45°) reaches its last seat at 50 m.

FIGURE 15.13A Design process for Venue #13: L/R mains coverage partition strategy, main A solo, B solo, A + B, A + B + C + D + E + FF

FIGURE 15.13B Design process for Venue #13: sidefill coverage partition strategy, sidefill A solo, A + B + C + FF, main + SF

This OFFAX position is down 6 dB (by angle) and up 3 dB (by distance) for a net -3 dB below ONAX A. This is 3 dB of level variance, but still 6 dB of spectral variance. We will use the sidefill to decrease them both. We will use the 3 dB range ratio (70%) between ONAX and OFFAX to set the compensated splay angle. Both speakers are 90° so the result is 90° × 0.7 = 63°. Now we know where the sidefill is aimed. To set its level we look to see how far it has to go, which is 37 m along its on-axis line. The ONAX A to ONAX B range ratio is 1.9:1 (5.5 dB), which gives us the sidefill's level setting.

The vertical target for the coupled sidefills is basically the upper portion of the mains double-slope coverage without the flat floor. This can be approached as a single slope of 52° at 2.2:1 range ratio (7 dB). We have nine boxes with an average splay of 6°, and a splay ratio of 2.5 (9° to 4°).

15.13.3 Combined systems The mains and sidefills are coupled in the low-mid range so we can expect to see some addition in both directions. The interaction is asymmetric because the sidefills are set at a lower level than the mains (3 dB shorter throw). Therefore the low-mid buildup is likely to gravitate toward the sidefills, which can reduce their low-mid response by more than the mains (Fig. 14.22 provides guidance regarding asymmetric EQ). The mains–sidefill crossover should be phase aligned at XMn–SF. Typically the sidefills are delayed.

15.14 Venue #14: Large Tent, Multiple Delays The venue is an 80-meter-deep tent with weight restrictions and limited placement options. The design featured L/R mains with centerfill, sidefill and frontfill. The mains were hung from a ground-based frame while the delays were hung from the ribs of the tent. The tent's low clearance and the client's requirement for high vocal intelligibility led us to use two levels of delays. We have already done lots of centerfills, sidefills and frontfills so the focus here will be on the two levels of delays. Each of the delay rings is comprised of three delay clusters arrayed as an uncoupled point source.

Another unique aspect of the tent shape is that it's much taller in the center than the sides. Sightline considerations forced the center cluster up high so we couldn't hang L/C and R at the same height. We could aim the centers down more steeply, but this is poor for imaging and shortens the usable range (see Fig. 9.48). The better option was to move the centers back to the previous truss, which kept their aim angle consistent with the outers. This made the coverage start and stop points close to the same, i.e. a straight unity line depth.

The timing sequence is clear for the first delay ring, i.e. they are sync'd at their start of coverage to the L/R mains. The second set faces the delay dilemma as to whether to delay them to the mains or the first delay ring. I don't remember which we did in this case, but I can tell you how I determined it on site: measure at the coverage start location for delay 2 and sync to whoever is stronger (mains or delay 1).

The delay ring levels are set in two stages. First make the solo responses of the three systems in each ring (L, C and R) land at the same level in their respective zones. Each of these is an AB asymmetric composite coupled point source so the A and B levels can be adjusted for best uniformity over their range.
Then set their overall level to sum with the mains to match the GOLD reference level. The process is repeated on the second ring but now the level is set so that the summation of main, delay 1 and delay 2 matches GOLD.
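The on-site decision rule for the second ring can be written down directly. The function name and the analyzer readings below are hypothetical; the rule itself (sync to whichever arrival is stronger at delay 2's coverage start) is straight from the text.

```python
def sync_source(main_db, delay1_db):
    """On-site rule from the text: measure at delay 2's start of
    coverage and sync the delay to whichever arrival is stronger.
    Ties go to the mains (the earlier arrival)."""
    return "mains" if main_db >= delay1_db else "delay 1"

# Hypothetical analyzer readings at delay 2's start of coverage:
print(sync_source(-9.0, -6.0))   # -> delay 1
print(sync_source(-5.0, -6.0))   # -> mains
```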

15.14.1 Mains (ABCD) We have done enough ABCD main systems to limit our discussion to the unique aspects of this one. It’s too low, for starters. This is hardly unique for our field of work, but we have yet to face this in this application series. This is totally standard for outdoor applications such as festivals where trim height is limited by outdoor staging realities, crane capacity, weather safety concerns and more. We have similar limitations inside this tent as well as traffic from video and staging. Too low means we have to give in to the limits of our throw capacity, and tolerate more level variance than we would from a higher origin. In short: It’s going to get loud in front. The GOLD reference for most venues in this chapter is the ONAX A position. In this case we used ONAX B because the ONAX A location

sees a combined response of the mains and delays. We still have a fully dominant main system at ONAX B, which makes it the better choice for GOLD. Closer areas (e.g. ONAX C) are calibrated to match GOLD as part of the ABC coupled array. Areas beyond ONAX B are calibrated to match GOLD as a combination of the ABC mains and delays. Recall the driving force behind this: Mains are too low to reach anywhere near the back without delays.

FIGURE 15.14A Design process for Venue #14: coverage partition strategy, L/R main solo, delay 1

The next unique aspect is that there is very little (if any) listening area where our long-throw system (A) will enjoy strong level dominance. We are basically throwing the top of the system out there as a coverage starter for the delays to ride along with. The mains and both delay systems all have something in common: They are aiming their top boxes to the very back of the room. The combined result is improved level uniformity and reduced sonic image variance over the alternative. The possible downside is increased ripple variance, but that is why we are using an extremely well-controlled system at small splay angles. So what's the alternative mentioned above? It's a dive and relay strategy. Angle the main systems down and let them die into the floor. Just make sure that the coverage is picked up by the relay system before it falls too low. The relay strategy can reduce the ripple off the sides and ceiling but the level variance suffers as each system goes through the startup of its propagation-doubling distances.

15.14.2 Delay 1 (D1, LCR, AB) This is the largest-scale delay system we have seen here, and the first composite delay array(s). The

design factors were described above so let's concentrate on the tuning. The A and B sections are coupled composite elements. We can apply asymmetric EQ and level as needed to minimize variance between A and B but only a single delay time and overall level will join the system with the mains. Internal level and EQ operations use the ONAX A and B positions but the global level and delay setting is done at crossover (XL–D1). The choice for crossover depth is more complicated for a delay array than a solo delay speaker (in the vertical plane). There are five milestone locations in this type AB delay array that we could choose for global delay and level setting (front to back): VBOT, ONAX B, XAB, ONAX A and VTOP. We can eliminate VBOT and VTOP because the combination will end up too loud in the range between ONAX A and B. The delay system (solo) should have constant level from ONAX B through to ONAX A. We can maintain a nearly equal combined level between ONAX B and ONAX A because the loss rate for the mains is very low (long doubling distance) and the loss rate for the delays is zero. Therefore any location between ONAX B and A can get an acceptable result. Using ONAX B as the crossover to the mains (sync'd with level set to combine to match GOLD) keeps the combined level equal to or less than GOLD. The delay level would be set higher if ONAX A is used, which would make the combined level at ONAX B greater than GOLD. My strategy here is delay (not relay) so I look to ensure that distant areas don't exceed the GOLD level. The choices are quieter, earlier delays (ONAX B) or louder, later ones (ONAX A). The net effect on imaging should be a wash. Or you can compromise at XAB. What did I do? ONAX B.
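The "nearly equal combined level" claim can be checked numerically. The levels below are hypothetical; the point is that when the mains lose only ~1 dB across the ONAX B to ONAX A span and the delay array holds constant level, the power-sum combination barely moves.

```python
import math

def combined_db(main_db, delay_db):
    """Power sum of two arrivals (a simplification; coherent addition
    right at the crossover can reach as much as +6 dB)."""
    return 10 * math.log10(10 ** (main_db / 10) + 10 ** (delay_db / 10))

# Hypothetical levels across the ONAX B -> ONAX A span: mains drift
# from -6 to -7 dB (long doubling distance); delay holds at -7.3 dB.
for main in (-6.0, -6.5, -7.0):
    print(round(combined_db(main, -7.3), 1))
# The combined spread stays around half a dB across the span.
```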

FIGURE 15.14B Design process for Venue #14: main + delay 1, main + delay 1 + delay 2

15.14.3 Delay 2 (D2, LCR, AB) We previously discussed a second delay ring in Venue #11. In this case we are in the coverage of all

three systems (mains, delay 1 and delay 2). The crossover is set at delay 2's ONAX B (XL–D1–D2). Do we delay to the mains or to delay 1? We will have to measure and see who is strongest.

15.15 Venue #15: Stadium Scoreboard, Single Level We now move outdoors to a large football stadium. The main system will live in the scoreboard and cover all the seating except a few pockets behind glass or blocked from sight. The shape is quite irregular, especially from the speaker system's point of view. House left is a single high-raked level. House right is two levels and more than double the height. Coverage in the center area doesn't start till we have crossed over 150 meters of open field and is extremely short in height. Scoreboards look huge from a distance but the space left for us is amazingly small after video and various sponsors have taken their places. It's not necessarily shaped with speaker array-friendly openings, and it's full of solid steel beams. Not little ones. The "what the &%$# happened to my horn?" type beams. Coverage division is shown in Fig. 15.15A (A), which clarifies the minority of seats covered by the sidefills and the long gap before beginning centerfill coverage. The horizontal aim for the L/R mains highlights the asymmetric aiming strategy shown back in Figs 11.7 and 11.25. The approach here is to aim at the far corner of the shape (ONAX). On the outside edge we find the far audience getting closer as we move off axis. This keeps the line of minimum variance moving in the right direction as we move off axis. We need the SF system at the near outer edges because we are too far off axis (excess spectral variance). The opposite occurs as we move from ONAX toward center. We are getting farther and more off axis so the dropoff is rapid. This is why we add the centerfill there, but only a small vertical slice because the stands are very low there. Aiming the L/R mains toward the corners allowed us to get the longest throw from the mains while wasting the least of it over the heads of folks in the low center area.

15.15.1 Right mains (R, ABCDEF) This system had to cover upper and lower levels, while (hopefully) avoiding the line of glass boxes in between. The goal was to position the array so that we could gap the coverage along the glass box line. This is only viable if the array is placed at the right height to avoid smiles and frowns. There was not enough vertical room in the scoreboard for the full array so we had to break it in half and hang them side by side. This turned out to be a favorable event because it allowed us to position the lower main at the best height to skim across the lower level and gap the glass (Fig. 15.15A (C2)). Coverage moves gradually down into the bowl from the sharp edge at the top of the array. The upper mains were essentially a five-element symmetric composite configuration (5 × 1.5°). The fifth box (B) was on a separate channel just in case level taper was warranted (it wasn’t). The angles were kept constant and symmetric because of the expected smile effects from our upwardly aimed array as the coverage spins outward (and the stands get closer).

15.15.2 Centerfill (CF, AB) The centerfill system is a parabolic dish loudspeaker with an extremely tight pattern (6°) and flat beamwidth over most of its range. The speakers don’t go as low in the spectrum as the mains (250 Hz and up) but we knew we would get plenty of low-mid help from the mains (whether we wanted it or not). The dish speakers form a two-dimensional symmetric coupled point source: two rows of 6 × 6° (Fig. 15.15B).

FIGURE 15.15A Design process for Venue #15: coverage partition strategy, R main AB solo, CD solo, ABCDE

FIGURE 15.15B Design process for Venue #15: centerfill A solo, B solo, A + B, L + R + center

FIGURE 15.15C Design process for Venue #15: solo sidefill, solo cardioid subs, solo left main, L + C + R + sidefills

15.15.3 Sidefills (SF, AB) The sidefills were also different on left and right. The right side had to cover upward as well. The throw distance to the upper area was still quite long so the same speaker model was used here as in

the mains. The area covered by the lower system was close enough to merit a power scale reduction.

15.15.4 Subwoofers (LF, +-) A cardioid subwoofer array was employed here, and the neighbors behind the scoreboard are grateful. Cardioid steering also reduces the LF level near the scoreboard, which helps the overall LF level variance.

15.15.5 Left mains (L, ABC) The left mains were essentially a copy of the lower section of the right mains. They also face press boxes and employed the same coverage avoidance strategy as the other side.

15.16 Venue #16: Concert Hall, Center, 360°, Multiple Levels Congratulations for making it this far. The ultimate test of your endurance is at hand. If M.C. Escher had ever designed a concert hall, this would be it. This is the most complicated, mind-bending design and tuning I have ever faced in my thirty-one years of design and optimization. It begins with a highly reverberant room with 360° of multiple seating levels that stagger vertically and horizontally, guarded by highly reflective surfaces over, under and between them. The main system will be a center cluster with extensive limitations on height, width and weight. The front of house is covered on three vertical levels that wrap around with 180° of horizontal coverage. The sides are covered with two vertical levels and the rear with a single one. The remaining subsystems are sidefills and subwoofers on the deck and underbalcony speakers on the first floor. The twist here is that the hall is twisted, which means we will need the speakers to be as well. Simple solutions go out the window once we see that the height changes so much that seats on the third-floor sides are the same elevation as those at the center of the second floor. The balcony fronts are large and reflective, as are the surfaces behind them. Speaker coverage tends to run in straight horizontal lines. Our approach must find a way to bend the coverage down as we move around the front of the house. The coverage partition plan is shown in Fig. 15.16A (A1–A5). This hall takes five panels whereas all the others have needed only two. Take your time with that and come back when you're ready to start.

15.16.1 Upper mains (A) The upper mains array cannot be a modern line array. It won’t fit (too long) and it creates straight, thin, vertical lines. And we’d need two of them splayed apart horizontally to make the 180° radial shape. Instead we will go with fine slices in the horizontal plane (20°) that can be staggered vertically as we spin around the room. We got lucky because most of the seating sections were 20° slices, which allowed us to drop the vertical aim in sync with the seating sections. There is 15° of vertical stagger along the nine elements, which makes a huge difference in regards to getting the sound onto the seats and off the reflective surfaces. The elements have sharp horizontal edges, and soft vertical (60°). The combined EQ had to factor in the low-mid addition as cabinets were added, but isolation was dominant in the upper ranges. Minor level tapering was done in the horizontal plane because the side speakers had a shorter throw than the center (around 2 dB). Delays were set to compensate for the stagger between elements at different vertical aims (the lower aimed boxes leaned forward). These may seem like small details but they all add up in the combined system.

The vertical aim(s) follow the aiming guidelines (Figs 11.8 and 11.9). The shape is fairly symmetric because we are covering two levels so the aim is not far above the middle.

15.16.2 Middle mains (B) The upper mains covered the top two floors. The middle system (B) lands the response on the ground and moves it most of the way to the stage. These are wider elements (80° H) with a slightly reduced power scale that can cover the

FIGURE 15.16A Design process for Venue #16: coverage partition strategy (upper, lower, section, transverse)

FIGURE 15.16B Design process for Venue #16: coverage overview (upper, lower, section, transverse)

FIGURE 15.16C Design process for Venue #16: (horizontal) upper mains (A), sides and rears (solo and combined)

180° target in three slices (splayed at 60°). The vertical shape is another double slope, in this case a fairly straight upper slope and angled lower slope. The vertical aim of the B element links its coverage to the VBOT edge of mains A. The underbalcony speakers are located at the joint between the slopes, which helps to soften the transition. The horizontal assembly for both the A and B sections

was used for the symmetric point source example in Fig. 11.12.

15.16.3 Lower mains (C) The lower section of the first floor is covered by a single speaker (90° × 40°) at the bottom, which connects to the frontfill and sidefill speakers below. We have now connected a coverage line from VTOP A on the third floor to the frontfill.

15.16.4 Deck sidefills (SF) Let’s finish the floor while we are here. The deck fills function as a sonic image anchor and coverage extension at the outer near edges. These speakers could be power scaled to fill only their local area, which would be less than or equal to the lowest section (C) of the mains. In this case, however, the L/R sidefills were actually scaled at equal level to the upper mains (A), far more than the minimum. The reason is acute “monophobia,” i.e. the addiction to mixing on L/R mains even if it means it will be 12 dB louder in the front rows. A mono-center permanent install that services a wide range of visiting acts must build in the capability to provide boom-boom on the floor or face an endless demand for rental equipment.

15.16.5 Upper and lower side rearfills (USR and LSR) Our next destination is the side area along the depth of the stage. There are two levels here and they are quite close and only a few rows deep. These are the kind of shapes we normally like to cover with uncoupled elements, but we are all hanging out together at the mother of all clusters. It takes a surprising amount of vertical and horizontal coverage to hit these targets, which calls for precise aiming in both planes. The upper and lower side rearfills connect horizontally to the outermost elements of main A and B respectively. Splay is verified and delays are set at XA–USR and XB–LSR respectively. The vertical crossover is found at XUSR–LSR, which is on a balcony front. A delay dilemma arises here. Do we sync horizontally or vertically if there is a conflict? The vertical crossover is on a balcony front where nobody sits (for long). That's the one we can let slide.

FIGURE 15.16D Design process for Venue #16: (horizontal) middle mains (B), sides and rear, deck sidefill, frontfill

15.16.6 Rearfills (RF) The last stop is directly behind the stage. There is only one speaker here, so no delay dilemmas or question about horizontal aim. Vertical aim follows the usual guidelines. Level is set at ONAX RF to match GOLD. The delay is set at the crossover to the lower-side rearfill (XRF–LSR). It might be tempting to delay the rearfills to a fictitious stage source. This would be viable if the rearfill was uncoupled from the sides and front system (it’s not). The coupled array that delays together, stays together.

15.16.7 Combined systems We have three levels of various speaker models covering 360° at different heights and depths hung together as close as Geppetto the puppeteer can rig them. Combined system EQ should be a breeze! It can be if we are careful at each step of the way to set the proper aim, solo EQ, level and delay. The predictable low-mid addition can be monitored and compensated globally. The key to the process is a proper order of operations. First we linked all the horizontal elements of the A system (A1–A2–A3). Horizontal for B was next (B1–B2) and only then did A and B come together in the vertical plane (XAB). The C section follows and we have completed the front system. Our focus moves to the side where the upper side rears (USR) join the mains horizontally at XA3–USR. The lower side rears (LSR) have to fit like a puzzle piece: horizontally joined to the mains B2 at XB2–LSR and vertically

joined to USR at XUSR–LSR. Still with me? The circle is completed when LSR meets the rearfills at XLSR–RF. Of course we still have the frontfills, deck sidefills and UB delays to add in but those seem like the easiest things in the world right now, eh?

FIGURE 15.16E Design process for Venue #16: Vertical, main (ABC) sides, rears, frontfill, deck and u-balc

15.17 Venue #17: Medium Hall, Single Level We just completed the most complicated design. And now for something completely different: total simplicity (and not in a good way). Old folks like me remember when line arrays hit full stride in the marketplace and they were the cure for everything audio. Anyone caught using those old-fashioned speakers must have been sleeping under a rock for the last twenty months! This application (or better named mis-application) includes a five-element/side line array in a room with a 2.1 m (7’2”) ceiling at its highest point. One of the selling points was that they won’t need delays in the 14 m deep room because line arrays can throw so well. The clients were using the side and rear surrounds as delays by the time I was called in to attempt to tune it. Obviously the solution was 2.7 ms of delay and -4 dB at 2 kHz. The primary program material was spoken-word reinforcement with tons of headroom so they can get it as loud as possible. Live music and playback held the next level of importance there.

15.17.1 Original system The main systems were hung from the ceiling. We can't just duct tape the top speaker to the ceiling. We need to tie in to the beam and then shackle down to the hanging grid, and only then does the sound-making part begin. In this case the uppermost speaker was at standing head level and the array finished at the knees. Did I forget to mention that the listeners (or should I say attempted listeners) like to stand when the system is in use? In the best case we have two usable elements/side when people sit down. When they stand up the sound stops at the third row. There is no vertical plane to discuss. Closer listeners are louder by the inverse square law. In this case the range ratio exceeded 20 dB, as did the level variance. The system was not optimizable as such, and we worked out a plan to go back to the dinosaur era, use the old-school speakers and try again a year later.

15.17.2 Revised mains (L/R, A/B) Priority was given to ensuring that a speaker could be mounted horn up as close to the ceiling as possible. This left some clearance all the way to the back of house even with standing patrons. It was also determined to partition the room with two levels of delay, which meant the mains would hand over the baton after 6.5 meters. The intent is for the front mains to remain a partial presence all the way back to minimize image distortion and hold together as a system, rather than a scatter of ceiling speakers.

FIGURE 15.17A Design process for Venue #17: coverage overview (original vs. revised)

The frontline consisted of L/R and a centerfill, which was delayed back to the podium. The mains were set up as inner/outer coupled point sources to allow for maximum flexibility and level tapering to maximize gain before feedback. We could have used a single 80° speaker but the pair of 50° units offered sharper edges and the option of asymmetric shaping. The inners were turned down 3 dB (due to their merger with the centerfill) and therefore the compensated unity splay angle of 35° was used (50° × 70% = 35°). The coupled crossover is asymmetric, so we need to delay the inners (because they were turned down). The vertical aim for the mains was toward the rear of the room (>2:1 range ratio). The equalized response at ONAX A becomes the GOLD reference for the rest of the calibration.
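The compensated-splay arithmetic above (50° × 70% = 35°) can be encoded as a small helper. This sketch assumes the roughly-10%-per-dB splay reduction implied by the example (3 dB offset → 30% reduction); the function name and the factor are illustrative rules of thumb, not universal constants:

```python
def compensated_unity_splay(unity_splay_deg: float, level_offset_db: float,
                            reduction_per_db: float = 0.10) -> float:
    """Reduce the unity splay between asymmetric-level elements.
    The default 10%-per-dB factor reproduces the text's
    50 deg x 70% = 35 deg example for a 3 dB level taper."""
    return unity_splay_deg * (1.0 - reduction_per_db * level_offset_db)

# Outer/inner 50 deg elements, inners turned down 3 dB:
print(round(compensated_unity_splay(50.0, 3.0), 1))  # 35.0 degrees
```

With a 0 dB offset the function returns the plain unity splay, as expected for a symmetric pair.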

15.17.3 Revised delays (D1, D2) The first ring of delays was placed 4 meters ahead of the mains and scheduled to land a unity line at 6 m (a 2.2 m vector distance from delay ring 1). The fixed height and known unity line set the spacing as follows: 2.2 m × 1.25 (the lateral spacing factor for an 80° element) = 2.75 m. This gave us a quantity of six elements to extend the lateral line across the room. The vertical aim was also set to hit the rear of the room (>2:1 range ratio). The second delay ring followed a similar process to the first in terms of spacing, quantity and aim. The floor rises slightly as we go back in the room, so the clearances get a bit smaller. The difference in calculated spacing was small enough that it was not worth implementing in the install. Instead, both delay rings follow the same model, spacing and quantity. Vertical aim is at the last seat.
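The spacing and quantity arithmetic can be sketched as follows. The 2.2 m vector distance and the 1.25 lateral factor for an 80° element come from the text; the 16.5 m room width is a hypothetical value chosen so that six elements span it, matching the quantity stated above:

```python
import math

def delay_spacing(vector_distance_m: float, lateral_factor: float) -> float:
    """Lateral spacing for a delay ring: vector distance to the unity
    line times the element's lateral spacing factor (1.25 for the
    80-degree elements used here, per the text)."""
    return vector_distance_m * lateral_factor

def quantity_for_width(room_width_m: float, spacing_m: float) -> int:
    """Elements needed to extend a lateral unity line across the room."""
    return math.ceil(room_width_m / spacing_m)

spacing = delay_spacing(2.2, 1.25)          # 2.75 m, as in the text
print(spacing, quantity_for_width(16.5, spacing))
```

Because the floor rise only changes the vector distance slightly, rerunning this with the second ring's clearances yields nearly the same spacing, which is why both rings share one model, spacing and quantity.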

FIGURE 15.17B SIM3 optimization data for Venue #17

FIGURE 15.17C SIM3 optimization data for Venue #17

15.17.4 Revised combined systems We have already combined the inner/outer mains (AB). The next step is to join them to the centerfill by phase aligning the systems at XL–CF (delay whoever gets there first). This completes the first wave. The second wave (delay ring one) is timed to the first wave and the level is set to make the combined response of the first and second waves match GOLD at XL–D1. The third wave (delay ring two) will join the combined energy of the first two. The delay is set to the stronger of the two at XL–D1–D2. Is it the left main or delay 1? It was the left main in this case.
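The "delay whoever gets there first" step is just a path-length difference divided by the speed of sound. A minimal sketch; the 7.0 m and 4.5 m distances to the phase-align mic position are hypothetical stand-ins:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def sync_delay_ms(main_distance_m: float, fill_distance_m: float) -> float:
    """Delay (ms) to add to whichever arrival is earlier so the fill
    and main arrive together at the phase-align mic position."""
    return abs(main_distance_m - fill_distance_m) / SPEED_OF_SOUND * 1000.0

# Hypothetical geometry: main 7.0 m and centerfill 4.5 m from the mic.
print(round(sync_delay_ms(7.0, 4.5), 2))  # delay the centerfill by ~7.29 ms
```

The same helper applies at XL–D1 and XL–D1–D2; the only question, as noted above, is which earlier arrival (main or delay ring) you sync against.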

15.17.5 SIM3 data

The calibration data from the original installation was extremely minimal. The trace shown in panel A1 of Fig. 15.17B gives an optimistic view of the situation. The hall was unoccupied, which meant the speaker transmission blockers, also known as “audience,” were not in place. The “front” and “back” mic positions in my measurements were not all the way at the front or the back. Nonetheless the traces show more than 10 dB of level variance. The revised system at comparable locations is shown in panel A2. This is recognizable as a member of our minimum-variance family.

Let’s tune the revised system. The L/R mains are AB coupled point sources. The process of setting their splay is shown in panels B1–2. Here we see GOLD (ONAX A) compared to crossover XAB (with the B speaker off). The beamwidth plateau is visible 6 dB down from ONAX. The response restores to 0 dB (panel B2) when the B speaker is splayed properly to create unity gain.

The first set of delays is described in panels C1–4. The first panel shows the equalized solo delay. It matches GOLD in spectrum and coherence. Do we need delays? I think that question was answered a year earlier, but we will prove it again (panel C2). The mains are not getting it done out here: they are 6 dB down and losing coherence. We prepare for combination in panel C3, which shows the familiar pairing of tilted main vs. flat fill. Delay is set, and the level is set to create combined unity. The last panel verifies that the crossover connection has matched the response to GOLD.

Onward and outward to delay ring 2 (Fig. 15.17C). The needs assessment is crystal clear (because the sound coming from the mains and delay 1 isn’t). This brings us to the delay dilemma: do we sync to the mains or to delay 1? We will ride with the strongest. Panel D2 shows the results: mains by a landslide. The final data (D3) verifies that we have arrived at the end of the room and still match GOLD.
You have arrived at the end of this book, and I sincerely hope you still match GOLD.

Afterword

This concludes my transmission, for the moment. The cycle of design and optimization has been and will continue to be a learning experience. This is still a young field and the potential for growth is huge. For me personally, I feel that each day has the potential for some new discovery. Rare is the day when that potential is not reached. Over thirty years later, “still learning” is the order of the day. I understand why doctors refer to themselves as “practicing” medicine.

The most common question I receive goes something like this: “Unlike you, I don’t have perfect clients with unlimited budgets, unlimited tools, and unlimited time and support for system tuning. If we can’t do it all, then what should we do?” First of all, I have never had this client, but would love to. Jobs happen in the real world, and require prioritization and “triage.” I have never had the opportunity, on any single job, to perform all of the design and optimization steps shown in this book. I have, however, used all of them over the course of time. They are all in my playbook, ready to use as the situation requires. Coaches are not required to use all the players on their team. They must, however, be prepared to read the situation on the field and be ready to deal with whatever contingencies arise.

This book strives to bring into focus the nature of the forces at work in the practical world of our sound reinforcement environment. This knowledge alone is a powerful ally, even without an analyzer. The end product we provide to our clients is a complete combined system. If we must streamline the process to fit the time allowed, we should do so consciously. A skipped step is a leap of faith and a calculated gamble. We must maintain clear knowledge of where the leaps are, lest they come back to haunt us.

What is the most important? This is a matter of opinion. For me personally, it is like food. We need a variety of the highest-quality ingredients. Good speakers and signal processing.
The next level is like real estate: location, location, location. Good placement, good angles, good spacing and good architecture. Level, delay and EQ setting are the finishing processes. Even more important, however, is maintaining perspective of our role in the big picture. We are members of a multifaceted team. We are there to provide a service to the clients on many levels. The importance of personal relations cannot be overstated. In some cases we are our own clients, stepping out of our lab coats, putting on the artist’s beret and mixing the show. The meeting point between the scientific and artistic sides of our world is the optimized design.

Glossary

Absorption coefficient: A specification to indicate the sound energy lost during the transition at a surface. The range runs from a maximum of 1.00 (an open window) to 0.00 (100% reflection).
Active balanced interconnection: A balanced-line connection to or from a powered (active) input or output device.
Active electronic device: An audio device that receives power from an external source (or battery) in order to carry out its signal-processing functions. An active device is capable of providing amplification of the signal.
AES/EBU: The professional standard protocol for two-channel serial digital audio transmission (also known as AES3).
Air absorption loss: High-frequency attenuation that accrues over transmission distance in air. The humidity, ambient temperature and atmospheric pressure all play a part in the parameters of this filter function.
Amplifier (power): An active electronic transmission device with line-level input and speaker-level output. The power amplifier has sufficient voltage and current gain to drive a loudspeaker.
Amplitude: The level component of the audio waveform, also referred to as magnitude. Amplitude can be expressed in absolute or relative terms.
Amplitude threshold: An optional feature of transfer analyzers that allows the analysis to be suspended when insufficient data are presented at the analyzer inputs.
Array: A configuration of sound sources defined by their element quantity, displacement and angular orientation.
Aspect ratio: A common term in architecture to describe a space as a ratio of length vs. width (or height). This term is also applied to the coverage shape of speakers (interchangeably termed the forward aspect ratio here).
Asymmetric: Having dissimilar response characteristics in either direction from a defined center line.
Averaging (optical): The finding of a representative response over an area by viewing the individual responses from various locations.

Averaging (signal): A mathematical process of complex audio analyzers that takes multiple data samples and performs complex division to acquire a statistically more accurate calculation of the response.
Averaging (spatial): The finding of a representative response over an area by averaging the individual responses from various locations into a single response.
Balanced: The standard two-conductor audio signal transmission configuration chosen for its noise immunity. This is suitable for long distances.
Bandwidth: Describes the frequency span of a filter function (in Hz).

Beam concentration: The behavior of speaker array elements when they have a high proportion of overlap. Beam concentration is characterized by a narrowing of the coverage area with maximum power addition.
Beam spreading: The behavior of speaker array elements when they have a high proportion of isolation. Beam spreading is characterized by a widening of the coverage area with minimal power addition.
Beam steering: A technique of asymmetric delay tapering used in subwoofer arrays to steer the coverage pattern.
Beamwidth: A characterization of speaker directional response over frequency. The beamwidth plot shows coverage angle (-6 dB) over frequency.
Binaural localization: Horizontal localization mechanism driven by the arrival difference between the two ears.
Bit depth: The resolution of the digital quantization, which sets the dynamic range of the system, approximately equal to 6 dB × number of bits.
Cancellation zone: The inverse of the coupling zone. The combination is subtractive only. Phase offset must be between 120° and 180° to prevent addition.
Cardioid (microphones): Unidirectional microphones commonly used on stage. The cardioid action is the result of cancellation zone summation behind the microphone, derived from the combination of forward and rear entry of sound at the diaphragm.
Cardioid (subwoofers and arrays): A configuration of standard low-frequency elements (separated or within an enclosure) configured and aligned to create a cardioid pattern.
Channel: A distinct audio waveform source, such as left and right, surrounds or a special source effect. Each channel must be optimized separately.
Clipping: An audio waveform distortion that occurs when the signal is driven beyond its linear operating range.
Coherence: A measure of the ratio of signal to noise in an FFT transfer function measurement.
Combing zone: The summation zone having less than 4 dB of isolation and an unspecified amount of phase offset. Combing zone interaction has the highest ripple variance.
Combining zone: See Transition zone.
Compensated unity splay angle: The unity splay angle between array elements with asymmetric relative levels.
Complex audio analyzer: An analyzer that provides both amplitude and phase data.
Composite point source: The combination of multiple array elements into a virtual single symmetric array element. A symmetric composite point source element (matched levels and splays) will resemble the response of a single speaker.
Compression: A slow-acting reduction of audio signal dynamic range, typically to prevent clipping or protect drivers.
Constant bandwidth: A linear rendering of bandwidth, with each filter (or frequency spacing) having the same bandwidth expressed in Hz. The FFT calculates filters with constant bandwidth.
Constant percentage bandwidth: A logarithmic rendering of bandwidth, with each filter (or frequency spacing) having the same percentage bandwidth expressed in octaves, e.g. 1/3 octave. RTA filters are constant percentage bandwidth.
Coupled (arrays): Arrays with elements within close proximity, i.e. within a single wavelength over a majority of its operational frequency range.

Coupling zone: The summation zone where the combination of signals is additive only. Phase offset must be less than 120°.

Sound Systems - Design and Optimization (3rd Edition) - PDFCOFFEE.COM (2024)