This weekend, broadband operators enter uncharted territory: an 11-hour live-streaming extravaganza for one of America's favorite pastimes, Super Bowl Sunday.
For the first time ever, the big game will not be available only to cable TV subscribers. It will be streamed live on NBC's website for anyone in the U.S. who tunes in, whether or not they pay for a monthly subscription. Viewers outside the U.S. can tune in as well by paying $9.99 to stream the game from NBC.
As a broadband operator, you’re probably already thinking it: Shut the front door, we’re screwed.
All that simultaneous bandwidth demand is going to put massive strain on provider networks, making bandwidth congestion management an equally massive concern for anyone involved in network operations.
So what has history shown us?
Well, this isn't the first time a big event has been streamed. Back in 2010, operators broadcasting the Vancouver Olympic Games over the Internet became vanguards of live event streaming, delivering multiple angles, multiple events, and on-demand replays. It was a great opportunity for delivery networks and service providers to learn geographic usage patterns and capacity demands.
Great. So all seems ok.
But then came the 2014 Oscars. You may remember the blackouts: due to high interest and the corresponding bandwidth demands, many broadband subscribers were unable to enjoy the event as operators struggled to cope with the load.
So, how will operators meet consumer expectations and deliver great service quality come Feb. 1st?
Besides the obvious investments in infrastructure and node splits, which can take more than two weeks to effect change, operators have many ways to manage bandwidth congestion. Fair usage policies are the most common approach, followed by bandwidth caps and proprietary OTT services. In Super Stream Sunday's case, newer streaming technologies like MPEG-DASH allow the stream to switch dynamically between bitrates, so everyone can view the event at a bitrate imposed by a policy engine without breaking the network.
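To make that concrete, here is a minimal sketch of the kind of rendition-selection logic a DASH-style client runs: pick the highest available bitrate that fits within a safety margin of measured throughput, clamped by any operator-imposed policy cap. The bitrate ladder, safety margin, and cap values below are illustrative assumptions, not NBC's or any vendor's actual configuration.

```python
# A hypothetical bitrate ladder (kbps) for an adaptive stream.
LADDER_KBPS = [400, 800, 1600, 3000, 5000]

def select_bitrate(measured_throughput_kbps, policy_cap_kbps=None, safety=0.8):
    """Pick the highest rendition that fits within a safety margin of
    measured throughput and any operator-imposed policy cap."""
    budget = measured_throughput_kbps * safety
    if policy_cap_kbps is not None:
        budget = min(budget, policy_cap_kbps)
    candidates = [r for r in LADDER_KBPS if r <= budget]
    # Fall back to the lowest rendition rather than stalling entirely.
    return candidates[-1] if candidates else LADDER_KBPS[0]

print(select_bitrate(4500))                        # healthy link -> 3000
print(select_bitrate(4500, policy_cap_kbps=1600))  # policy-capped -> 1600
```

The policy cap is the hook for the policy engine mentioned above: during a surge, an operator can lower the cap per region or per tier and every client gracefully drops a rendition instead of timing out.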
There are also new initiatives at several standards bodies to help operators add capacity, such as the roughly 10x throughput promised by DOCSIS 3.1, and advances in compression technologies such as HEVC, which requires about half the bandwidth of its predecessor for comparable quality.
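The combined effect of those two numbers is worth spelling out. Using illustrative figures (a 1 Gbps shared downstream and a 5 Mbps H.264 HD stream are assumptions for the sake of arithmetic, not plant measurements), the headroom multiplies:

```python
# Back-of-the-envelope capacity math for DOCSIS 3.1 + HEVC.
baseline_downstream_mbps = 1000   # assumed DOCSIS 3.0 shared downstream
stream_mbps_avc = 5.0             # assumed H.264 HD stream bitrate

docsis31_downstream = baseline_downstream_mbps * 10  # ~10x throughput
stream_mbps_hevc = stream_mbps_avc / 2               # HEVC ~half the bandwidth

concurrent_before = baseline_downstream_mbps / stream_mbps_avc
concurrent_after = docsis31_downstream / stream_mbps_hevc

print(int(concurrent_before))  # 200 concurrent HD streams today
print(int(concurrent_after))   # 4000 with both technologies deployed
```

A 20x gain in concurrent streams per serving group, if the assumed inputs hold, which is why both technologies figure so heavily in operator capacity planning.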
One thing is certain: operators need a platform. A platform that can identify usage patterns and trends from its own subscribers. A platform that will help providers run prediction simulations and determine where to support regional bandwidth requirements. A platform to help them identify where to perform node splits and allow for tailored policy enforcement. A platform that allows them to provide the best QoE for their subscribers. If they don’t have this ready, we may have another Oscar debacle on our hands.
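As a toy sketch of the kind of analysis such a platform might run, the snippet below aggregates per-node peak usage from (made-up) samples, projects event-day demand with an assumed surge multiplier, and flags nodes that would exceed capacity as node-split candidates. All node names, capacities, and figures here are hypothetical.

```python
from collections import defaultdict

NODE_CAPACITY_MBPS = 1000   # assumed per-node downstream capacity
EVENT_MULTIPLIER = 2.5      # assumed surge over a normal evening peak

# (node_id, observed peak Mbps) samples - entirely made up for illustration.
samples = [
    ("node-a", 300), ("node-a", 380),
    ("node-b", 700), ("node-b", 650),
]

# Keep the highest observed peak per node.
peaks = defaultdict(float)
for node, mbps in samples:
    peaks[node] = max(peaks[node], mbps)

# Project event-day load and flag nodes that would run out of headroom.
for node, peak in sorted(peaks.items()):
    projected = peak * EVENT_MULTIPLIER
    if projected > NODE_CAPACITY_MBPS:
        print(f"{node}: projected {projected:.0f} Mbps -> candidate for node split")
    else:
        print(f"{node}: projected {projected:.0f} Mbps -> within capacity")
```

A real platform would of course replace the static multiplier with per-region models trained on historical events, but the shape of the exercise is the same: measure, project, and act before Sunday, not during the fourth quarter.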
We’re just a few days away from this landmark event. So operators, are you ready? Do you have the tools to collect usage data or dynamically adjust usage limits on your network?
Let me know your tips and tricks for reducing network congestion during a major live streamed event by emailing firstname.lastname@example.org.