Inside an Immersive Theatre Workflow

As immersive audio gains traction in live theatre, designers are finding new ways to improve audience impact and vocal clarity. We spoke with Anthony Narciso, lead sound designer for City Springs Theatre Company’s The Wizard of Oz, about his approach, creative decisions, and advice for working with spatial audio.

Can you tell us what your role is and give us a quick overview of the production?

I am the lead sound designer for City Springs Theatre Company’s production of The Wizard of Oz. I’m also a member of United Scenic Artists, Local USA 829, IATSE, and a board member of the Theatrical Sound Designers and Composers Association.

City Springs produces four large Broadway-style musicals every year. It’s a local organization in Sandy Springs, Georgia, that brings shows in almost touring-style to a venue it rents, so it doesn’t own its own space. Conceptually, for us as a design team, it’s like we’re doing a tour: we can’t modify the building heavily the way a Broadway-level show would. We take the base structure of the performing arts center and enhance it for theatrical performances. That includes hanging additional speakers, running additional network infrastructure, deploying a spatial audio system with tracking to place voices in time and space, playing sound effects back through QLab, supporting the 12- to 20-piece orchestra in the pit, and adding infrastructure for backstage paging, lobby feeds, video feeds, and all the safety protocols.

What first drew you to sound design, and how did you find your way into immersive audio?

I went to the University of Central Florida for theater design, and I was able to work at the regional theaters, starting out with some of their youth productions and working my way up. Even while I was in school, I was designing main-stage shows at theaters around Orlando, Florida, and I’ve been designing ever since. I’ve been working in immersive audio since 2023, so this is my third season of using it almost exclusively in regional theater productions.

What has your experience been, and what benefits have you seen from using immersive audio in your productions?

I use the Dante-enabled TiMax SoundHub and D4 tracking system, and when you light up that system, it’s eye- and ear-opening. Hearing the audio sources as objects coming from the direction and the physical location of someone on stage, you wonder why we relied on stereo PA for so long. The precision is incredible—you hear every word.

One of the biggest benefits is intelligibility. Colleagues have told me our shows sound better than some Broadway productions, simply because spatial audio lets every seat get clear, consistent sound. You eliminate hotspots and off-axis drop-off because the central cluster isn’t the only vocal source. You also get a big increase in gain before feedback, often around 6 dB in real-world use. While a lot of spatialization leans toward a more intimate sound, that extra headroom makes a huge difference in musical theatre, where audiences have come to expect loud shows.

But spatial audio isn’t limited; it’s flexible. If I want something to sound like a traditional stereo PA, I can create an object in the software and mimic that classic sound. I use this for things like handheld mics, so they have that old-school PA character audiences expect.

The creative possibilities are enormous. TiMax lets us move audio, shape spaces with reverb, and route sound in ways we simply couldn’t before. Other systems exist, like Soundscape and L-ISA, but I stick with TiMax because I love working with it, the rental shops I use carry it, and I’ve built great relationships with the people behind the technology.

What was the reaction from audience members to the immersive approach?

I’m usually not around long enough after opening to hear audience reactions firsthand, but I can tell you this: in the company’s earlier years, sound was the area that needed the most improvement. Among several other improvements, putting the spatial system in place has meant that sound now gets frequent compliments from long-time patrons and new audience members alike.

We consistently get positive feedback on vocal clarity. In our last production especially, between a great A1, the full tracking system, and a fresh set of mics, there was a lot of praise for how clear everything sounded.

Why was Dante a good fit for this project?

Dante is the backbone of everything. If I’m honest, TiMax as a spatial processor really gives you two options: MADI or Dante. MADI isn’t super useful to me—I could use it from the DiGiCo console into TiMax, but it’s not a great option for getting to the amplifiers afterward. Since TiMax sits between the console and the amps, I wouldn’t choose MADI as my path to the amp rack.

Without Dante, we wouldn’t even be having this conversation. Spatial audio requires an individual channel for each speaker cabinet, and there aren’t many clean, efficient ways to get up to 64 channels from the TiMax processor to the amps. AES would mean 32 cables at two channels per run; analog would mean 64. Dante is simply the way to go for spatialized audio—and honestly, even without spatial, it’s still the best way to access a large number of amplifier channels.

Outside of the immersive signal flow, where else are you using Dante?

Dante has opened so many doors for us. We used to rent a Shure microphone package, and now the theater company is fortunate enough to own one. All the channels from the Shure Axient systems go over Dante to the console—no copper snakes involved. We also use a program called WAVETOOL, which takes a Dante split into a separate Mac. It lets us monitor the wireless mics, view audio signals in real time, and even record them, making it easy to troubleshoot issues like a mic dropping out or a connection problem. Gone are the days of analog mix splits; now you just open Dante Controller, patch it to the console and the A2, and you’re done.

Dante’s flexibility is what helps the most. As the theater added budget, we could afford to run two QLab computers with Dante Virtual Soundcard for playback redundancy, and setting that up was almost effortless. We can also move between plugin platforms depending on need—sometimes using Waves over its own network with the DiGiCo console, or with the SuperRack LiveBox that supports Dante and third-party plugins. We’ve also used LiveProfessor and RME’s Digiface Dante in the past for live plugin processing, since with hardware Dante interfaces the latency stays low. So if the console lacks a multiband compressor, we can simply add it digitally, with no D-to-A or A-to-D conversions involved, and the signal remains clean. That opens up a lot of creative flexibility.
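The interview doesn’t go into how the two playback machines are kept in step, but one common approach is to mirror every GO to both QLab computers over OSC. The sketch below, written in Python with the python-osc package, assumes both machines have OSC access enabled and listen on QLab’s default OSC port (53000); the IP addresses and cue number are placeholders for illustration, not values from this production.

```python
# Minimal sketch: mirror GO and cue-start commands to a main and a backup
# QLab machine over OSC so the two playback computers stay in sync.
# Assumes QLab's default OSC port (53000); IP addresses are placeholders.
from pythonosc.udp_client import SimpleUDPClient

QLAB_OSC_PORT = 53000
machines = [
    SimpleUDPClient("192.168.1.101", QLAB_OSC_PORT),  # main playback Mac
    SimpleUDPClient("192.168.1.102", QLAB_OSC_PORT),  # backup playback Mac
]

def fire_go():
    """Send /go to every playback machine so main and backup fire together."""
    for client in machines:
        client.send_message("/go", [])

def start_cue(cue_number: str):
    """Start a specific cue (e.g. a standalone effect) on both machines."""
    for client in machines:
        client.send_message(f"/cue/{cue_number}/start", [])

if __name__ == "__main__":
    fire_go()
```

In practice the mirroring is often done from the console, a show-control device, or QLab network cues rather than a script, but the idea is the same: one trigger feeding two identical playback machines.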

Dante also makes communication with the crew and backstage areas much easier. In this venue, the spotlight operators are outside PA coverage, but we don’t need to set up a monitor because Dante lets us send a program feed straight to the Clear-Com Arcadia base station. The operators just assign the channel to their belt packs and adjust the audio. Dante also handles distribution for backstage paging. Whether it’s a dressing room, a green room, or front-of-house, audio runs via different Dante channels we can easily assign.

What were some of the challenges you faced on this production and how did you work to overcome them?

The first challenge on this production was the schedule—we normally get 12–14 days to build a show, and this one had to go up in about a week while the theater was juggling holiday events. That meant the sound design had to be quickly deployed, with most of the creative and technical work done ahead of time.

The other challenge was balancing all the elements of The Wizard of Oz. The show is full of big orchestration, environmental effects, and story cues that overlap, so clarity becomes everything. Moving from Kansas to Oz, separating dialogue from things like tornado wind, monkeys, or stingers, and making practical effects feel larger-than-life all require careful blending. A lot of the work is making sense of those chaotic moments so every emotional and environmental cue still reads.

To handle both the tight schedule and the storytelling complexity, I do a lot of the work before we ever get into the theater. I build rehearsal cues in QLab and send them to stage management to use throughout the rehearsal process. The director and I refine timing, pitch, and overall feel through notes and conversations. QLab’s spatial tools—and TiMax when we use it—also let me pre-visualize movement and placement, drawing paths and shaping motion ahead of time. By the time we reach the space, most of the creative decisions are already dialed in; I just need to refine them and scale everything up to the theater.

How can new engineers start working with immersive audio and get buy-in from theater companies?

I’d start with something accessible like QLab. It’s easy to route Dante in and out, place speakers, and experiment with object movement. Begin in a small lab space or on a local theater show where you can explore object-based mixing instead of being stuck with the old method of stacking fade cues to shift audio between speakers. TiMax panLab is another great tool—it can control QLab, move cues in space, and even manage console outputs. Plus, it’s rentable, which helps keep costs down. Affordability matters; start with tools like QLab or panLab that don’t require spending thousands before you know how to use them.
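If it helps to see the underlying math while experimenting, here is a purely illustrative equal-power crossfade in Python. It is not how TiMax or QLab implement their panning; it is just the basic level relationship an object-based panner maintains as it moves a source between two adjacent speakers, the job the old stacked fade cues approximated by hand.

```python
# Illustrative only: the equal-power crossfade behind moving a source between
# two adjacent speakers. Real object-based systems (TiMax, QLab, etc.) do far
# more, including delay-based localisation, but the level math starts here.
import math

def equal_power_gains(position: float) -> tuple[float, float]:
    """position: 0.0 = fully in speaker A, 1.0 = fully in speaker B.
    Returns linear gains (gain_a, gain_b) that keep total power constant."""
    theta = position * math.pi / 2
    return math.cos(theta), math.sin(theta)

# Sweep a source from speaker A to speaker B in ten steps and print the levels.
for step in range(11):
    pos = step / 10
    g_a, g_b = equal_power_gains(pos)
    print(f"pos={pos:.1f}  A={20 * math.log10(max(g_a, 1e-6)):+7.1f} dB  "
          f"B={20 * math.log10(max(g_b, 1e-6)):+7.1f} dB")
```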

The bigger challenge is getting buy-in. Always lead with storytelling. Producers won’t approve immersive audio because it’s “cool”—they’ll approve it if it serves the show. I learned this from lighting designers: the ones who got moving lights into packages years ago didn’t pitch them as new toys; they tied them to a dramatic need. I approach immersive audio the same way. For example, I waited to introduce spatial audio until Fiddler on the Roof, where the director wanted intimacy and realism—voices from bodies, orchestra from the pit. That artistic reason made the case.

If you pitch immersive for a big, loud musical just because it sounds exciting, you’ll probably get turned down. Tie it to the text, the story, and what the director wants. That’s how you convince a theater to invest—not because it’s new tech, but because it makes the production better.

If you could shape the future of immersive audio and Dante, what changes or features would you want most?

On the immersive side, I’d like to see it in more venues, including smaller ones, as newer technologies like QLab’s new object-based audio offer a more affordable entry point. The more people using it and talking about it, the better.

While Dante is super standard almost everywhere I go, I would love to see deeper Dante integration and higher channel counts beyond 64×64 supported in more consoles and Dante cards. I hope more console manufacturers do what Yamaha and DiGiCo are doing, where virtual soundcheck (“copy audio” on the DiGiCo) is a one-click button that auto-patches to a designated laptop running DVS, without the need to go into Dante Controller and patch manually. It’s amazing that if the cast isn’t in for 30 minutes, we can hit the button, play back the show audio, and the A1 can be mixing while I do design and artistic notes on the sound.

Special thanks to the whole City Springs Theatre Company’s Wizard of Oz production team: Anthony Marshall, Tamir Eplan-Frankel, Alyssa Marrero, Allie Lourens, Sawyer Gray, and Charlie Taylor, for their contributions.

Looking for more resources on spatial audio in theater production? Check out https://www.nmhspatial.com/.

Want to learn more about Dante in live production? Visit our Live page.