In the last part of my series on Enhancing Customer Care, I described the essential role of the customer call center, which directly affects the Customer Service and Retention stages of the customer lifecycle model. Both of these stages are immediately impactful to a subscriber’s quality of experience (QoE) whenever a service issue occurs.
Equally impactful to the customer experience is how customers use the services they purchase. For the operator, this means proactively ensuring that customers receive what they have paid for: the promised up/download speeds and bandwidth allotments, connection reliability, and predictable service.
As bandwidth consumption evolves and certain applications push demand higher, operators face a challenge: how and where to invest in the outside plant to meet today's needs and scale for tomorrow's. As the pace of evolution in access technologies accelerates, a just-in-time buildout strategy has become a best practice, but when, exactly, is just in time?
Ensuring that proper speeds are delivered to your customers is not only vital to the customer experience but also a leading indicator of your network's health. A single symptom, such as inaccurate delivered up/download speeds, can have multiple causes. Are issues occurring due to packet corruption, which might indicate failing hardware or noise in the outside plant? Are revenue losses due to provisioning errors or bandwidth theft? Or is the problem at the subscriber edge, where a subscriber accessing the infrastructure only over wireless connections is experiencing congestion or signal-to-noise ratio issues?
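To illustrate this kind of triage, here is a minimal sketch of how per-subscriber measurements might be classified into likely root causes. The field names and thresholds are entirely hypothetical and would need tuning against a real network:

```python
# Hypothetical triage of delivered-speed issues; field names and
# thresholds are illustrative, not taken from any real system.

def triage_speed_issue(m: dict) -> str:
    """Classify a likely root cause from subscriber-edge measurements."""
    # Heavy packet corruption suggests failing hardware or plant noise.
    if m["corrupted_packet_ratio"] > 0.01:
        return "outside-plant noise or failing hardware"
    # A provisioned tier below the billed tier points at provisioning.
    if m["provisioned_mbps"] < m["billed_mbps"]:
        return "provisioning error or bandwidth theft"
    # Clean wired metrics but poor WiFi SNR points at the subscriber edge.
    if m["wifi_snr_db"] < 20:
        return "subscriber-edge wireless congestion or SNR issue"
    return "no obvious cause; escalate for deeper analysis"

print(triage_speed_issue({
    "corrupted_packet_ratio": 0.002,
    "provisioned_mbps": 100,
    "billed_mbps": 100,
    "wifi_snr_db": 12,
}))  # subscriber-edge wireless congestion or SNR issue
```

In practice each branch would draw on different data sources (plant telemetry, the provisioning system, access-point metrics), but the principle is the same: one symptom, several distinguishable causes.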
At the highest level, operators need to ensure that network infrastructure and capacity can deliver the subscriber's expected quality of service. Even following the best practice of just-in-time infrastructure buildout, operators face many challenges in gathering accurate infrastructure statistics. Results from a 2017 survey on capacity planning in the broadband industry show that nearly one third of technical teams are unsatisfied with their current capacity planning processes, and the report revealed that a great deal of guesswork goes into planning for capacity requirements.
How can we improve customer care by ensuring proper usage and improving capacity planning? The answer begins with accurate analytics.
There are multiple sources of usage records. One commonly used lightweight source is the Internet Protocol Detail Record (IPDR). Using IPDR, operators can see a holistic picture of their network's usage, from both the network-wide and per-user perspective, in near real time as recorded by the actual headend equipment. This offers immense value to operational teams who need to match usage against a subscriber's contract as well as manage congestion on a routing element or node.
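Conceptually, this means rolling the raw records up along both axes, per subscriber and per network element. The sketch below assumes a simplified record shape (`subscriber_id`, `node`, `octets_down`, `octets_up`); real IPDR exports differ by collector and vendor:

```python
from collections import defaultdict

# Minimal sketch of aggregating IPDR-style usage records. The record
# fields used here are assumed for illustration only.

def aggregate_usage(records):
    """Sum octets per subscriber and per node from usage records."""
    per_subscriber = defaultdict(int)
    per_node = defaultdict(int)
    for r in records:
        total = r["octets_down"] + r["octets_up"]
        per_subscriber[r["subscriber_id"]] += total
        per_node[r["node"]] += total
    return per_subscriber, per_node

records = [
    {"subscriber_id": "sub-1", "node": "node-A", "octets_down": 5_000, "octets_up": 500},
    {"subscriber_id": "sub-2", "node": "node-A", "octets_down": 8_000, "octets_up": 1_000},
    {"subscriber_id": "sub-1", "node": "node-A", "octets_down": 2_000, "octets_up": 250},
]
subs, nodes = aggregate_usage(records)
print(subs["sub-1"])    # 7750
print(nodes["node-A"])  # 16750
```

The per-subscriber view supports contract and quota checks, while the per-node view feeds congestion management on the routing element.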
One of the more recent breakthroughs from obtaining accurate usage records throughout the day is that operators can now feed them into a predictive analytics platform. This helps infrastructure teams make more accurate capacity planning decisions by grounding projections of subscriber usage in real data.
By making data-driven decisions, the customer's experience as they use the services they have paid for is maximized for a given level of investment.
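As a toy example of what such a prediction might look like, the sketch below fits a straight line to monthly peak utilization on a node and estimates how many months remain until a planning threshold is crossed. The data, threshold, and linear model are all illustrative; a production platform would use richer models and per-node seasonality:

```python
# Trend-based capacity forecasting sketch: least-squares line through
# monthly peak utilization, projected forward to a planning threshold.
# All numbers here are illustrative.

def months_until_threshold(peaks, threshold):
    """Return months until the fitted trend crosses threshold, or None."""
    n = len(peaks)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(peaks) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, peaks)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    if slope <= 0:
        return None  # utilization flat or falling; no crossing projected
    # Crossing point on the fitted line, relative to the last sample.
    return (threshold - intercept) / slope - (n - 1)

# Monthly peak utilization (percent) on a hypothetical node:
peaks = [40, 44, 48, 52, 56, 60]
print(round(months_until_threshold(peaks, 80)))  # 5
```

A planner could run this across every node and rank the outputs, turning "guesswork" into a prioritized, just-in-time buildout list.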
As the operator’s footprint grows or densifies, the other major consideration is which access network will deliver the high amount of bandwidth required. These bandwidth requirements are continually growing. Over-the-top (OTT) services and content producers are plentiful in the modern era, and widespread use of personal Virtual Reality systems for live sports and gaming platforms is on the horizon. Unprecedented amounts of bandwidth will be required to maintain high quality experiences. Reliable, low latency systems will be essential for health and well-being when autonomous driving becomes a reality. Capacity planning departments must now weigh the pros and cons of FTTx, DOCSIS 3.1, G.Fast, 5G or a converged network approach to determine which access network fits their subscribers’ needs and the natural evolution of the existing network.
The immediate future of network access technologies includes fiber optics, for extending reach and for backhaul, regardless of whether the operator is fixed-line or wireless. To avoid building undesired overcapacity into the outside plant, and to understand how the infrastructure will scale out in the future, usage analytics provide insight into the decision-making process.
Many arguments can be made for each network access technology, and ultimately the best choice will be highly dependent on the operator, subscriber density, environment, topology, and importantly: use of the connectivity. For some operators, a converged network is best.
However, converged operators need a consistent abstracted view of the subscriber through the activation, orchestration and provisioning stack, regardless of access technology.
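One way to picture such an abstracted view is a single service model that stays constant while orchestration dispatches on the underlying access technology. This is only a sketch; the class and field names are invented for illustration:

```python
from dataclasses import dataclass

# Sketch of an access-technology-agnostic subscriber view, as a
# converged provisioning stack might expose. All names are illustrative.

@dataclass
class SubscriberService:
    subscriber_id: str
    access_technology: str   # e.g. "DOCSIS 3.1", "FTTH", "5G FWA"
    down_mbps: int
    up_mbps: int
    state: str               # e.g. "active", "suspended"

def provision(sub: SubscriberService) -> str:
    # Orchestration dispatches on technology underneath; the service
    # model above stays the same regardless of the access network.
    return (f"provision {sub.subscriber_id} at "
            f"{sub.down_mbps}/{sub.up_mbps} Mbps via {sub.access_technology}")

print(provision(SubscriberService("sub-42", "DOCSIS 3.1", 600, 50, "active")))
```

The benefit is that activation, billing, and care workflows operate on one model even as the subscriber migrates between access technologies.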
When one thinks of customer experience, often the first word that comes to mind is “assurance”. Assurance is part of the equation, but it is typically reactive, alerting only once service levels have not been achieved. As the saying goes, an ounce of prevention is worth a pound of cure. Most understand that customer experience means more than simply assuring that service is delivered to the gateway; it is the complete experience of the customer. This is where proactive care comes in.
Proactive care consists of automated activities that alert on, or take prescriptive action from, telemetry that projects issues before they affect the subscriber. A decade ago, CAT5e still dominated how most devices connected to broadband services; today, wireless connectivity, WiFi in particular, is how most residential and commercial devices communicate. At the end of the day, WiFi is part of the subscriber experience and must be a component of the proactive care equation, and understanding the wireless environment and topology at the edge is vital to ensuring a great customer experience. Modern access points provide a wealth of information for diagnosing problems and adapting to optimal settings at the subscriber edge, via traditional SNMP or push mechanisms such as TR-069: negotiated speeds, noise floor, packet loss, spectrum utilization, and even neighbouring access points.
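Once that telemetry has been collected (over SNMP or TR-069), a proactive-care rule engine can flag degrading conditions before the subscriber calls in. The sketch below uses hypothetical metric names and thresholds purely to illustrate the idea:

```python
# Hypothetical proactive-care checks over access-point telemetry.
# Metric names and thresholds are illustrative, not from any real AP.

def wifi_health_alerts(ap: dict) -> list:
    """Return human-readable alerts for degraded WiFi conditions."""
    alerts = []
    if ap["noise_floor_dbm"] > -85:
        alerts.append("high noise floor; check for interference")
    if ap["packet_loss_pct"] > 2.0:
        alerts.append("elevated packet loss at the subscriber edge")
    if ap["channel_utilization_pct"] > 70:
        alerts.append("channel congested; consider a channel change")
    if ap["neighbor_ap_count"] > 10:
        alerts.append("dense neighbor environment; tune power/channels")
    return alerts

print(wifi_health_alerts({
    "noise_floor_dbm": -80,
    "packet_loss_pct": 0.5,
    "channel_utilization_pct": 82,
    "neighbor_ap_count": 4,
}))
```

Run fleet-wide on a schedule, checks like these turn edge telemetry into tickets or automated remediation (for example, a managed channel change) before the customer notices.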
WiFi is a critical component in the customer experience. In a recent report by Incognito, 40% of respondents said that better WiFi hotspot coverage would be the service option they would most appreciate adding to their existing plans. How subscribers access and use broadband services has evolved, adding complexity to ensuring service levels are achieved.
In the first part of this series, I shared a statistic from an internal customer study which showed that regardless of a satisfactory conclusion to a customer call, there is a 37% probability that the customer will leave the service provider within three months of having an issue. What is the key to avoiding that attrition? Constructing a network where service issues become a rarity.
With intelligence at the edge, and data on how and when subscribers use the services they have paid for, operators now have the data and tools to ensure a healthy network and a happy subscriber, as well as a means to accurately plan and scale for network capacity requirements in the future.
In the next chapter of this series, I’ll be exploring the Bill and Pay stages of the Customer Lifecycle. We’ll shine a light on the importance of billing transparency and accuracy, how integrated B/OSS can help reduce errors and operational expenses, and how enabling self-service options for your subscribers improves the overall customer care experience. Stay tuned.