How to Stream a Debate, Gaffe-Free
It seems like only yesterday that we said “goodbye” to the U.S. political conventions. During the dog days of summer, we had a chance to reflect on what makes live-streaming them so challenging. And as the delivery partner for a lot of broadcasters, with the third presidential debate and Election Day just around the corner, we wanted to offer some hard-won wisdom for others embarking on high-profile live events.
Managing partner-splitters: In politics, people who vote Republican for one race and Democratic for another are called “ticket splitters.” In video production, think of them as programmers who cross partner lines to work with separate vendors for encoding, CDN and network. There’s nothing inherently wrong with this. It can enable you to work with best-of-breed partners for each phase of workflow and delivery. But it’s not without opportunities for error.
Change you can believe in: For example, many broadcasters have policies in place that prevent changes during primetime or drive time. Believe it or not, that’s not always the case when it comes to live-event time. Actions on parts of your platform — or a vendor’s — that seem totally unrelated to the live event can wreak havoc on the event itself.
Consider a network-wide virus scan that compromises egress bandwidth. This seemingly internal spike in consumption can ripple down to external viewers. Suddenly, the pipe you thought was dedicated from encoder to CDN partner entry point looks a lot less dedicated.
We’ve found three best practices that can help: transparency, telemetry and talking.
The promise of radical transparency: In the months and weeks leading up to a major live event, adopt a default policy of “communicate and share” even at the risk of over-communicating. Ask questions of and offer answers to your vendors. “What does your upstream architecture look like? Where do your firewalls or DMZs live? Walk me through your network tiers. Where does media traffic live, or not?”
Your questions are designed to get at the “itch” spots — the parts of the architecture that are most likely to get rubbed, and fail.
Good telemetry trumps blame: Political conventions are often about blaming the other guy for today’s challenges. In live-event video delivery there can be plenty of finger-pointing too, especially in the multivendor setups we typically see.
Instead of playing the blame game, use good telemetry to isolate problems at their source. For example, set beacons throughout your workflow and tie them to your broadcast operations control center (BOCC). Then run regular traceroute tests to check latency, looking at the timing of egress from ingest to CDN, for example. Establish a baseline threshold; if latency spikes above it or shows a lot of variance, that should trigger an alert that leads to a proactive call from your workflow monitor.
Eliminating the finger-pointing phase should significantly reduce mean time to notify and mitigate. You can do the same with ingest errors by regularly polling entry points; a minimal sketch of both checks follows. Imagine if potholes could be filled that fast!
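Here is a minimal sketch of that kind of monitor, assuming a hypothetical beacon URL, a hypothetical entry-point URL and purely illustrative thresholds; in a real deployment the alerts would feed the BOCC’s own paging and ticketing tools rather than printing to a console.

```python
"""Latency and ingest monitor sketch. The URLs, thresholds and polling
cadence below are assumptions for illustration, not real endpoints."""
import statistics
import time
import urllib.request

BEACON_URL = "https://beacons.example.com/ingest-to-cdn"   # hypothetical
ENTRY_POINT_URL = "https://entry.example.com/live/stream"  # hypothetical
LATENCY_THRESHOLD_MS = 500   # agreed baseline plus headroom (assumption)
VARIANCE_THRESHOLD_MS = 150  # jitter tolerance (assumption)
WINDOW = 30                  # samples kept in the rolling window

def sample_latency_ms(url: str) -> float:
    """Time one beacon round trip as a stand-in for ingest-to-CDN timing."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        resp.read()
    return (time.monotonic() - start) * 1000.0

def entry_point_healthy(url: str) -> bool:
    """Poll the entry point; anything other than HTTP 200 counts as an ingest error."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

def alert(message: str) -> None:
    """Placeholder: wire this into the BOCC's alerting system."""
    print(f"ALERT: {message}")

def monitor() -> None:
    samples: list[float] = []
    while True:
        try:
            samples.append(sample_latency_ms(BEACON_URL))
        except OSError:
            alert("beacon unreachable")
        samples = samples[-WINDOW:]
        if len(samples) >= 5:
            mean = statistics.mean(samples)
            spread = statistics.stdev(samples)
            if mean > LATENCY_THRESHOLD_MS or spread > VARIANCE_THRESHOLD_MS:
                alert(f"latency mean {mean:.0f} ms, stdev {spread:.0f} ms")
        if not entry_point_healthy(ENTRY_POINT_URL):
            alert("entry point poll failed")
        time.sleep(10)  # polling cadence (assumption)

if __name__ == "__main__":
    monitor()
```

The rolling window keeps the variance check sensitive to recent jitter rather than averaging it away over the whole event.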
Know your moves — communication and operational awareness: Heading into any large live event, the content provider and the CDN should share a clear picture of which contingencies will be used in the event of quality degradation. What are the primary and backup paths for the first mile? Do you have backup streams or workflows configured to switch to in the event of unforeseen failures? What are the targeted regions for viewers, and which ISPs will serve the last mile? What is the security policy for the content, and who is entitled to receive it? Which device platforms are important for the event (e.g., Apple, Roku)? Capturing those answers somewhere both teams can read them, as sketched below, keeps the plan out of any one engineer’s head.
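One lightweight way to do that is to record the answers as data in a shared runbook. The sketch below is purely illustrative; every endpoint, ISP name and figure is a placeholder we invented, not a recommendation.

```python
"""Illustrative event runbook captured as data. All values are
hypothetical placeholders standing in for the real answers."""
EVENT_RUNBOOK = {
    "first_mile": {
        "primary_entry_point": "rtmp://ingest-east.example.com/live",  # hypothetical
        "backup_entry_point": "rtmp://ingest-west.example.com/live",   # hypothetical
        "failover_trigger": "manual switch, called out on the live audio bridge",
    },
    "backup_workflow": "secondary encoder and slate stream on standby",
    "audience": {
        "regions": ["US-East", "US-West", "EU"],
        "last_mile_isps": ["ISP-A", "ISP-B"],   # placeholders
        "expected_concurrency": 2_000_000,      # planning assumption
    },
    "security": {
        "policy": "token-authenticated URLs, geo-restricted to target regions",
        "entitled_audiences": ["subscribers", "authenticated app users"],
    },
    "device_platforms": ["Apple (HLS)", "Roku", "web"],
}
```

Walking through a file like this on the pre-event bridge is a quick way to confirm both sides are reading from the same plan.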
It’s important to set expectations and be prepared for the order of magnitude of concurrent users, from both a network-capacity and a tooling perspective. That preparation ensures proper throughput, alerting and data aggregation, since content and traffic can now be monitored in near real time.
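As a rough illustration of the capacity side of that expectation-setting, the back-of-envelope sketch below turns an assumed concurrency figure and blended bitrate into an egress number; the inputs are placeholders, not projections for any real event.

```python
"""Back-of-envelope egress capacity check. Viewer count, bitrate and
safety factor are assumptions chosen only to show the arithmetic."""
CONCURRENT_VIEWERS = 2_000_000   # planning assumption
AVERAGE_BITRATE_MBPS = 3.5       # assumed blended ABR bitrate per viewer
SAFETY_FACTOR = 1.5              # headroom for spikes and retries

required_gbps = CONCURRENT_VIEWERS * AVERAGE_BITRATE_MBPS / 1000
provisioned_gbps = required_gbps * SAFETY_FACTOR

print(f"Steady-state egress: {required_gbps:,.0f} Gbps")
print(f"Provision with headroom: {provisioned_gbps:,.0f} Gbps")
```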
Communication continues to be key during the event. For example, establish a live audio bridge between customer and CDN to facilitate any changes in production that may impact the streaming content. Check all available bitrates and latency for primary and backup feeds; a sketch of that check appears below. Just prior to the start of the event you should know how your content is being mapped at the entry point, and checking periodically throughout gives the customer and CDN assurance that a key event is executing properly. Calling out that status on the live bridge is good operational hygiene and a simple way to stay synchronized. A call-and-response protocol on an agreed-upon cadence is a simple concept, but it is not always applied at this stage of the content supply chain. We believe it should be the norm.
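As one example of that pre-event feed check, the sketch below pulls the advertised variant bitrates from hypothetical primary and backup HLS master playlists and times the manifest fetch as a crude latency signal; the playlist URLs are assumptions, and a real check would cover whatever formats and segment-level timing the event actually uses.

```python
"""Pre-event feed check sketch: list advertised bitrates and time the
manifest fetch for primary and backup HLS feeds. URLs are hypothetical."""
import re
import time
import urllib.request

FEEDS = {
    "primary": "https://primary.example.com/event/master.m3u8",  # hypothetical
    "backup": "https://backup.example.com/event/master.m3u8",    # hypothetical
}

def check_feed(name: str, url: str) -> None:
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=5) as resp:
        playlist = resp.read().decode("utf-8")
    fetch_ms = (time.monotonic() - start) * 1000.0
    # Pull BANDWIDTH (not AVERAGE-BANDWIDTH) attributes from the master playlist.
    bitrates_kbps = sorted(
        int(b) // 1000 for b in re.findall(r"(?<!-)BANDWIDTH=(\d+)", playlist)
    )
    print(f"{name}: manifest fetch {fetch_ms:.0f} ms, bitrates (kbps): {bitrates_kbps}")

if __name__ == "__main__":
    for feed_name, feed_url in FEEDS.items():
        check_feed(feed_name, feed_url)
```

Running it against both feeds just before air, and again on the agreed cadence, gives the live bridge a concrete status to call out.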
To win the viewer’s vote on where they go for OTT content, the quality must be there. We know from analytics that when rebuffering spikes, bit rates drop or picture quality is poor, people will leave the stream and go somewhere else. Therefore, it is imperative to provide proactive, real-time monitoring, communication and quality metrics, while ensuring broad interoperability across a fully managed, end-to-end, multivendor ecosystem.