Anyone who has been involved in sound reinforcement for any amount of time will agree that having a piece of equipment fail during a service is more of a 'when' question than an 'if' question. If you work in church audio long enough, sooner or later something will go wrong and you will find yourself in a room full of people turning around to look at you while you desperately wish you had been given the gift of invisibility.
The reality of this broken world is that things fail us, and they have a tendency to do so when we can least afford it, financially or operationally. Failures come in different shapes and sizes, ranging from unexpectedly dead batteries to consoles that decide to retire without any notice. Regardless of the form or severity of a malfunction, the one common thread to all failures is that we as techs are the first line of response. This is something many churches and tech leaders need to spend more time considering. In short, it requires advance planning.
Pick a random piece of equipment in your audio rig and consider this question: “If this piece of equipment died in the middle of the next service, what would I do?” Every one of us will likely approach this type of situation differently and arrive at a different answer. The most important thing here, however, is not how we answer the question but whether we actually have an answer in the first place.
Not too long ago we had a weekend service to honor a long-time member who was moving to another part of the country. This man is an incredible musician who had blessed us for years with his compositions and his masterful saxophone playing. His farewell to the church was a deeply moving performance of 'Alabaster Box' that was itself a poignant soliloquy of his own story. During the last service, the tech team was emotionally drained from the overall experience, keenly aware that this would be the last time this man would be with us. What a perfect time for our console to freeze, and so, that's exactly what it did. During the transition, with sixty seconds left in the introduction, the console locked us out of all control. The only line open was the announcer's mic, and the announcer was quickly coming to a close. As we said earlier, widgets will fail when we can least afford it. But what do you do? Who do you tell? What is the protocol?
I'd like to outline some thoughts on how you can approach this topic so that when this happens, not if, you are not working through everything 'on the fly'. Let's approach this from the perspective of the gear we use, the individual techs who work the service, and the teams of techs who work together. Each one represents a different scope of response and preparedness, and together they can form a plan.
Gear
If you have a PA system, you have gear. We all know this; after all, the gear is what attracted many of us to this area of service in the first place. In principle, each piece of gear has a specific purpose for being there. Let's think of our audio gear as falling into two separate categories: Critical and Less Critical.
The “Critical” list contains the pieces of equipment that are vital to the PA system's operation. If this equipment is lost, it can single-handedly jeopardize the entire service. Generally, this category covers your main signal chain from the console out to your drivers. It is also where we find the most expensive gear and the least redundancy. Failures here might lead straight to an acoustic service or even a cancelled service. This is where I would place the console failure we discussed above.
The “Less Critical” list is made up of equipment that could fail without causing such a drastic impact on our services. A failure here will generally create an awkward moment or a change to the service order, but it will not keep us from successfully holding a service. I would place in this category things like a damaged microphone cable, a toasted outboard unit, or a battery that dies before it should. In general, failures of this type force techs to do dreaded things like walking onto the platform to correct an issue, or losing the use of a nifty new effects unit for the rest of the service.
The important thing when looking at your gear with respect to future failures is to understand beforehand where your system components fall within these categories. The idea is that when you have a failure, you should already know what a realistic outcome looks like. For instance, if you lose your main processor, it might not be realistic to try to deal with it while the house is filled with hundreds or thousands of people waiting and watching.
I will note that once you have separated your gear into these lists, you might find the risk of failure for certain pieces of gear to be unacceptable. It is possible to move equipment from the critical list to the less-critical list through redundancy or some amount of redesign. Obviously, those types of mitigations quickly become financial or missional questions and will be evaluated differently by each organization.
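If it helps to make this concrete, here is a minimal sketch of what such an inventory might look like if you chose to keep it in a simple script or spreadsheet. The gear names, categories, and planned responses below are purely hypothetical examples, not a prescription for your system.

# Hypothetical gear inventory: each entry records where a piece of equipment
# falls on the critical / less-critical spectrum and what the planned response
# is if it fails mid-service. Items and responses are examples only.
GEAR_INVENTORY = [
    {"item": "FOH console",       "category": "critical",      "response": "Stop service, reboot, announce the delay"},
    {"item": "Main processor",    "category": "critical",      "response": "Switch to a backup signal path if one exists"},
    {"item": "Wireless handheld", "category": "less critical", "response": "Swap to the wired spare at the podium"},
    {"item": "Effects unit",      "category": "less critical", "response": "Mix dry for the remainder of the set"},
]

def contingency_report(inventory):
    """Print each item with its planned response, critical gear first."""
    for entry in sorted(inventory, key=lambda e: e["category"] != "critical"):
        print(f'{entry["category"].upper():>13}: {entry["item"]} -> {entry["response"]}')

if __name__ == "__main__":
    contingency_report(GEAR_INVENTORY)

The value is not in the script itself but in forcing yourself to write a realistic response next to every item before the failure happens.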
Techs
How well do your techs know the system they are operating? In an ideal world, every tech at FOH, in monitor world, or in a high-level support position should know enough about the system to troubleshoot it in real time. The implication of breaking our gear out into “Critical” and “Less Critical” lists is that the techs running the gear should know enough to isolate a fault to the appropriate part of the system, and therefore understand how critical that fault is.
However, while that is the ideal situation, most of us don't live there. Many of us have a mix of operators ranging from exceptionally experienced to just beginning, each with a different level of capability. Some will excel at systems knowledge, and troubleshooting will come easily. Others, however, won't know where to start. If you have a group of operators who are not yet able to reach a high level of systems knowledge, you can mitigate that with continued training and by adjusting what is on your critical and less-critical lists.
Tech training within the church environment is always a hard thing to do. When you get those rare opportunities to train, consider taking some time to talk through system failures. Better yet, simulate one and walk the team through troubleshooting and correction. Remember that when an audio failure happens during a service, everyone, including your senior pastor, will first look to your operator for an indication of the severity and recoverability of the situation. Make sure you equip them to convey the required information to your leadership.
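As one way to structure such a simulation, here is a rough sketch of a fault-isolation walk-through a trainer might script ahead of time. The stages listed are only assumptions about a typical signal chain; adapt them to your own rig.

# Hypothetical fault-isolation drill: walk the signal chain stage by stage,
# find the first point where signal is lost, and note how critical it is.
SIGNAL_CHAIN = [
    ("Source (mic / DI / playback)",      "less critical"),
    ("Stage snake / digital stage box",   "critical"),
    ("Console",                           "critical"),
    ("System processor",                  "critical"),
    ("Amplifiers / powered loudspeakers", "critical"),
]

def run_drill():
    """Prompt the operator at each stage and report the first failure found."""
    for stage, severity in SIGNAL_CHAIN:
        answer = input(f"Do you have signal at: {stage}? (y/n) ").strip().lower()
        if answer != "y":
            print(f"Fault isolated to: {stage} ({severity}).")
            print("Report the severity to leadership before attempting a fix.")
            return
    print("Signal confirmed through the whole chain; check the output side (monitors, broadcast feed).")

if __name__ == "__main__":
    run_drill()

Even operators who never touch a line of code can follow the same mental checklist: confirm signal at each stage, stop at the first break, and report whether that stage is on the critical list.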
Team
The last major piece of this puzzle is the team that surrounds the tech. While most churches schedule one or maybe two PA techs at a time, they are generally surrounded by a larger technical team that handles other aspects of the service such as video, lighting, and production. This team, as a whole, should be clued in to the idea of supporting each other during times of technical failure.
For instance, if we lose the PA during a video, it wouldn't make sense to keep the congregation sitting in the dark watching a silent movie. Or would it? What do we do? How do we convey to the platform when, or if, the video will be restarted? Is there a production component that needs to deal with live streams or off-site feeds?
At this level of cooperation, it is best to have some basic pre-planned responses, such as skipping a service segment, re-attempting a service segment, or even stopping the service mid-stream. I bring up the team-level component because it is easier to respond to failures if the team is already primed with the actions they might need to take, without a lot of prodding. For example, if the lead tech is focused on troubleshooting, the rest of the team needs to know automatically how to respond.
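One simple way to capture those pre-planned responses, if you want to write them down, is something like the sketch below. The scenarios, roles, and actions are hypothetical; they only illustrate the idea of a team playbook everyone has read before the failure happens.

# Hypothetical team playbook: maps a failure scenario to the agreed team actions.
# Scenarios, roles, and actions are examples only.
PLAYBOOK = {
    "PA lost during video": [
        ("Video operator",      "Pause the video at a clean point"),
        ("Lighting operator",   "Bring house lights up to half"),
        ("Production/stream",   "Hold the live stream on a title slate"),
        ("Service coordinator", "Tell the platform whether the segment will restart or be skipped"),
    ],
    "Console frozen": [
        ("Audio tech",          "Assess whether a reboot is required and estimate the downtime"),
        ("Service coordinator", "Decide whether to pause the service or move to an acoustic segment"),
        ("Production/stream",   "Announce the delay to off-site viewers"),
    ],
}

def print_playbook(scenario):
    """Print the agreed actions for a given failure scenario."""
    for role, action in PLAYBOOK.get(scenario, []):
        print(f"{role}: {action}")

if __name__ == "__main__":
    print_playbook("Console frozen")

However you record it, the point is that each role already knows its first move, so nobody is waiting on the lead tech for instructions while the clock runs.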
I was the audio tech in the situation I described earlier, when our console froze mid-service. In all honesty, we were less than prepared to deal with it. After I figured out what was going on, I came to the conclusion that the only way to recover the audio system was to stop the service and reboot. Without any forethought about this type of failure, it was an uneasy thing to decide and to convey to the rest of the team. To her credit, once our service coordinator understood the situation, she jumped right in and started to manage it. However, that experience drove home the idea that our team is responsible for gracefully shepherding the church through difficult situations created by the very technology we are responsible for.
We were blessed that in our situation God stepped in at the last moment and brought our console back to life before we actually stopped the service, thereby allowing us to deal with the failed gear after the service was over. The lesson, however, had been learned.
All of us, in some form or another, depend on audio equipment to achieve the missional purposes of our organizations. Nearly all church techs understand how their gear assists with those missions. Let's make sure we also take the time to understand how our gear can hinder those missions, and what we can do in response.