Cloud, Baby Cloud! (Class Recap)
DENVER and SALT LAKE CITY — Last week saw the premiere of a video cloud class, titled “How Cable’s Video Cloud Works: From TV Everywhere to Internet Everywhere.” Creating it consumed my summer.
Aside from one review that blasted me for talking too fast (“sounds like babbling”), the reaction from three audiences (the course debuted in New York on Oct. 15) was gratifying.
Here’s the gist: Cloud is everywhere. Whether you’re designing a storeroom on a construction site, running a pet store or sending a file too big for an email mailbox, you’ve heard of or are actively using cloud technologies. The same is true for video and cloud. And, like anything anchored in IP (Internet protocol), it intersects with every step in the journey of a video asset from the time it’s created to the time it reaches the eyeballs.
The overarching premise of the class, developed for non-technologists, was this: If you accept that “the competition” is no longer “just” satellite and telco, and now includes over-the-top purveyors, then it’s time to also accept how the cloud changes the way people work, culturally.
It’s not better or worse — just different. It’s faster. It’s agile and collaborative, with a mantra of “continuous improvement.” (And yes, there is a persistent whiff of “kumbaya” in this part of any cloud discourse — but it’s allowable, because the results are measurable, by way of reversed subscriber churn and speed-to-market with new stuff that’s clearly working.)
From there, my partner Craig Leddy and I bucketed the video cloud into sections: Content culture. Distribution. Processing. Devices. Applications.
Distribution changes with the addition of WiFi as a tetherless conveyor built for content, and with the establishment of CDNs (content distribution networks) that centralize video storage, then organize it hierarchically. Intent: Put the most popular stuff closest to viewers (in lieu of “pitching” and “catching” video assets, via satellite, to a distributed storage footprint).
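To make the “most popular stuff closest to viewers” idea concrete, here’s a minimal sketch in Python of popularity-based placement across a cache hierarchy. The asset names, request counts and tier capacities are invented for illustration, not taken from any real CDN.

```python
# Hypothetical sketch: rank assets by how often viewers request them,
# keep the hits at the edge, a wider set at regional caches, and
# everything at the origin.
from collections import Counter

# Pretend request counts gathered from viewer logs (illustrative only).
request_counts = Counter({
    "hit_series_ep1": 90_000,
    "hit_series_ep2": 75_000,
    "local_news": 12_000,
    "niche_documentary": 800,
})

EDGE_CAPACITY = 2        # how many titles a (tiny) edge cache can hold
REGIONAL_CAPACITY = 3    # next tier up the hierarchy

def place_assets(counts, edge_slots, regional_slots):
    """Map each storage tier to the assets it should hold."""
    ranked = [title for title, _ in counts.most_common()]
    return {
        "edge": ranked[:edge_slots],          # closest to viewers
        "regional": ranked[:regional_slots],  # one hop back
        "origin": ranked,                     # everything lives here
    }

placement = place_assets(request_counts, EDGE_CAPACITY, REGIONAL_CAPACITY)
for tier, assets in placement.items():
    print(f"{tier}: {assets}")
```

The point of the sketch is the shape, not the numbers: the popular titles sit in the smallest, closest caches, and the long tail falls back to the origin, rather than every asset being pitched everywhere by satellite.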
Processing changes with the addition of adaptive bit rate streaming to “right-size” an asset for the screen that displays it, and with the delineation of things better done in the cloud than in the end device, such as encoding and transcoding. There’s also the preservation of “state,” which is software-speak for “pause/resume,” with “anywhere” added to the mix. (Pause in the living room, resume in a hotel somewhere.)
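For the “right-sizing” part, here’s a hedged sketch of how an adaptive bit rate player might pick a rendition. The rendition ladder, bandwidth figures and the 80% headroom factor are made up for illustration; real players use fancier logic.

```python
# Hypothetical sketch of adaptive bit rate (ABR) rendition selection:
# pick the highest-quality rendition the measured bandwidth can sustain,
# capped by what the display can actually show.

# (bitrate in kilobits/second, vertical resolution) -- invented ladder
RENDITION_LADDER = [
    (800, 360),
    (1_800, 480),
    (3_500, 720),
    (6_000, 1080),
]

def pick_rendition(measured_kbps, screen_height, headroom=0.8):
    """Choose the best rendition that fits both the network and the screen."""
    budget = measured_kbps * headroom   # leave margin so playback doesn't stall
    candidates = [
        r for r in RENDITION_LADDER
        if r[0] <= budget and r[1] <= screen_height
    ]
    return max(candidates) if candidates else RENDITION_LADDER[0]

# A phone on mediocre WiFi vs. a living-room TV on a fast connection.
print(pick_rendition(2_500, 720))    # -> (1800, 480)
print(pick_rendition(12_000, 1080))  # -> (6000, 1080)
```

The “state” part is simpler than it sounds: the cloud keeps a small record of which asset you were watching and where you paused, so any authenticated device, anywhere, can ask for it and resume.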
Devices change in a delicate interplay of what happens locally vs. in the cloud: the number of tuners per box, physical storage, and encoding/transcoding, specifically. Box people want these things to stay in the box; cloud people want the opposite. And the same vendors populate both sides, so, everybody wins?
I’m skipping a lot here — it’s a four-hour class. The good news is that the buckets worked, and it appears that we lifted at least some of the cloud’s … fogginess. Thanks again to CTAM and Rocky Mountain WICT for commissioning the cloud!
Stumped by gibberish? Visit Leslie Ellis at translation-please.com or multichannel.com/blog.