Published on 1 Jul 2014
The basic premise behind Big Data is very simple: storage is now almost free, so we can record everything as it happens. From consumer preferences to how subscribers use remote controls, you can record this information and analyze it later, because you never know when it might come in handy.
The trouble with this approach is that the sheer volume of data makes extracting meaning a challenge. The data often hides interesting trends that only become valuable when analyzed over long periods of time, across large population segments, or when cross-referenced with other data sets.
Despite these difficulties, Big Data gives us a way to examine trends on a wider scale and improve our understanding of a problem — or to discover whether a problem exists at all. For example, tracking all the button presses on a remote control will help designers create better interfaces in the future and simplify the remote. In the same way, collecting large amounts of information about the network and its traffic will help optimize routing, plan network upgrades, and predict congestion.
By cross-referencing data from individual subscribers with data from the macro network, service providers can gain a wealth of information about growth and trends. For instance, by correlating service quality data from customer premises equipment with network performance and congestion data, and comparing this information to churn rates, you could quantify how poor service quality leads to churn.
In the end, learning more about how subscribers use services and devices will be essential both for mapping network performance and for delivering the best possible user experience.