Multicast, Unicast and the Super Bowl Problem
For bandwidth people, hardly anything is a surprise anymore. Not even that 50%-and-higher uptick in broadband usage, year after year after year, since about 2009.
Not even when they realized that nothing has ever grown that fast, and for that long, in the history of consumable goods.
It’s because of all of our Internet-connected stuff, of course, and how much we’re using those screens to create, ship, download and stream video.
Video is the undisputed fat man of the Internet.
Cisco reconfirmed the trend last week in the latest installment of its Visual Networking Index, a recurring study that tracks what’s going on with Internet usage. The report covers tons of ground, with numbers so big they’re hard to conjure.
Like this: “Internet video-to-TV traffic will increase nearly fivefold between 2012 (1.3 exabytes per month) and 2017 (6.5 exabytes per month).”
In the hierarchy of increasing numerical size, it goes: “kilo-,” “mega-,” “giga-,” “tera-,” “peta-,” then “exa-.” “Exa” is a quintillion. As in million, billion, trillion, quadrillion, quintillion. Sextillion, septillion, octillion, nonillion, decillion. OK, I’ll stop.
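To make “exa” a little less abstract, here’s a quick back-of-envelope calculation, sketched in Python, of what 6.5 exabytes per month works out to as a sustained data rate. The 30-day month and the decimal prefixes are simplifying assumptions on my part:

```python
# Back-of-envelope: 6.5 exabytes per month, expressed as a
# sustained data rate. Assumes a 30-day month and decimal (SI)
# prefixes, where one exabyte is 10**18 bytes.

EXABYTE = 10**18                    # bytes
SECONDS_PER_MONTH = 30 * 24 * 3600  # about 2.6 million seconds

monthly_bytes = 6.5 * EXABYTE
average_bps = monthly_bytes * 8 / SECONDS_PER_MONTH  # bytes -> bits

print(f"{average_bps / 10**12:.1f} terabits per second, around the clock")
# -> 20.1 terabits per second, around the clock
```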
But let’s get back to bandwidth people. Part of their work is to ensure that demand doesn’t outstrip supply. Which brings us to “the Super Bowl problem.”
The Super Bowl problem, from a bandwidth perspective, has two parts: One, what if everyone tunes in all at once? Two, what if everyone pauses all at once?
“Everyone,” when it comes to the Super Bowl, was 108 million people this year. They all saw the game on their small, medium and super-large screens. So what’s the problem?
It’s the “Internet-connected” part. Of the 108 million, only 3 million saw the game as an online video stream. That brings us to the lingo of video distribution, tweaked for online usage: Multicast and unicast.
Refresher: In the good old days (meaning today), to broadcast is to send one to many. Whether one person watches the game, or 108 million, doesn’t matter. In bandwidth terms, it’s all the same.
The Internet doesn’t work that way. It’s intrinsically many to many. When you stream House of Cards on Netflix, other people might be streaming it at the same time, sure. But how it works is called “unicast.” One stream unicast to me, another to you, another to Harry, another to Jane.
Nailing up enough capacity to unicast 108 million unique streams is both a horrific waste of bandwidth and a great way to buckle the system.
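To see why, here’s a minimal sketch of the arithmetic. The 108 million comes from the audience count above; the 5 Mbps per-stream bitrate is my own assumption, for illustration only:

```python
# Why unicasting the Super Bowl is scary: unicast bandwidth
# scales with the number of viewers; multicast doesn't.
# The 5 Mbps stream bitrate is an illustrative assumption.

VIEWERS = 108_000_000            # Super Bowl audience, per the column
STREAM_BITRATE_BPS = 5_000_000   # assumed HD stream, 5 megabits/second

# Unicast: every viewer gets a private copy of the stream.
unicast_total_bps = VIEWERS * STREAM_BITRATE_BPS

# Multicast (idealized): one copy leaves the source, no matter
# how many viewers join downstream.
multicast_source_bps = STREAM_BITRATE_BPS

print(f"Unicast:   {unicast_total_bps / 10**12:,.0f} terabits per second")
print(f"Multicast: {multicast_source_bps / 10**6:,.0f} megabits per second at the source")
# -> Unicast:   540 terabits per second
# -> Multicast: 5 megabits per second at the source
```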
So, there’s multicast. It’s the streaming equivalent of raising the red flag on your (physical) mailbox — not to say “there’s mail in here,” but to say, “gimme.” With multicast, if someone in your serving area is already watching what you want to watch, you “join that stream.”
Multicasting to 108 million screens is non-trivial, in engineering-speak. It’s computationally intensive. Getting there involves work on everything from the cloud to protocols to servers to routers to devices. It’s about making sure the video cloud has enough intelligence to recognize a URL with multicast headers, then making sure enough “multicast join” mechanisms are in place. And lots more.
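For the technically curious, here’s roughly what the simplest form of that “join” looks like from an ordinary computer, sketched with Python’s standard socket library. The group address and port are made-up examples; a real video service would advertise its own:

```python
# A minimal sketch of the "gimme" in action: joining a multicast
# group over UDP. The group address and port are hypothetical.

import socket
import struct

GROUP = "239.1.1.1"   # hypothetical multicast group for the game stream
PORT = 5004           # hypothetical UDP port for the video packets

# Open a UDP socket and listen on the group's port.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# The "gimme": an IGMP join. This tells the nearest router to add
# this host to the group. If a neighbor is already watching, the
# stream is already flowing nearby; the network just extends it here.
membership = struct.pack(
    "4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0")
)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

# From here, the video packets arrive like any other UDP datagrams.
packet, sender = sock.recvfrom(65535)
```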
Right now, the thinking is to multicast to the home, which is why you keep hearing about “gateways,” by the way. They’re the interim step to “all-IP.”
Once the multicast stream reaches the gateway, it can be unicast to the other devices in the house that want a look. That way, the buffers needed to pause live in the house, not in the network. Imagine building a network-side buffer big enough for 108 million simultaneous pauses!
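Here’s a rough sketch of that buffer math. The 60-second pause window and the 5 Mbps bitrate are, again, my own illustrative assumptions:

```python
# Where should the pause buffer live? Per home, it's trivial;
# aggregated in the network, it's enormous. The pause window and
# bitrate below are illustrative assumptions, not industry figures.

VIEWERS = 108_000_000            # Super Bowl audience, per the column
STREAM_BITRATE_BPS = 5_000_000   # assumed stream bitrate
PAUSE_SECONDS = 60               # assumed pause window per viewer

per_home_bytes = STREAM_BITRATE_BPS * PAUSE_SECONDS / 8  # bits -> bytes
network_bytes = per_home_bytes * VIEWERS

print(f"Per home: {per_home_bytes / 10**6:.1f} megabytes")
print(f"Network:  {network_bytes / 10**15:.1f} petabytes")
# -> Per home: 37.5 megabytes (pocket change for a gateway)
# -> Network:  4.0 petabytes (a data-center-sized headache)
```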
That’s the short version of broadcast, multicast, unicast, and the Super Bowl problem. Or you could just skip the game and go to Costco. No lines — or so I’ve heard.
Stumped by gibberish? Visit Leslie Ellis at translation-please.com or multichannel.com/blog.