On a hot summer afternoon in 2003, an overloaded power line in Cleveland, Ohio sagged into the overgrown trees beneath it, tripping the high-voltage line and sending a surge of power back through the grid.
Earlier that day, a technician in the area had gone to lunch after repairing an alarm processor and neglected to turn the system back on. Without the proper safeguards in place, line after line throughout the region tripped. Neighboring grids started feeding power into northeastern Ohio, tripping still more lines until cities as far away as Detroit and New York were in the dark.
It was the largest blackout in American history. In New York, the subway system ground to a halt, stranding commuters and trains. Across the region, more than 50 million people lost power.
What Went Wrong
Looking back at the blackout, a few lessons stand out:
- The importance of real-time visibility: In 2003, it took more than half an hour (and multiple calls from customers and other sites) for back-office coordinators to realize the magnitude of the problem.
- The danger of unplanned downtime: In Cleveland, mission-critical infrastructure was not properly maintained, exposing the region to increased risk. In the years since, power companies have invested billions in upgrading computer systems and cutting back vegetation.
- The need for more stringent reporting and regulation: At the time, there were no systems in place robust enough to ensure that each task was completed safely and successfully. Today, once-voluntary reliability standards have become mandatory.
Asset Uptime for the Automation Age
Nearly two decades later, 5G is redrawing the lines of carrier dominance, and asset uptime has gone from an internal metric to a system-critical concern.
Once 5G networks are fully operational, they will support a wave of new technology that requires persistent uptime. Everything from smart cities to self-driving cars will rely on the speed and near-zero latency of these new networks.
When it comes to self-driving cars, latency (the time it takes for data to travel between two points) is the key to unlocking their full potential. In practical terms, it is the delay between a car’s sensors identifying a potential problem (e.g., another car, an object in the road) and the system determining what to do next.
With current networks, this process takes about 50 milliseconds. But with 5G networks, it will take as little as one millisecond, enabling real-time communication between self-driving vehicles and the infrastructure that supports them.
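To put those figures in perspective, here is a minimal sketch showing how far a vehicle travels before a round trip at each latency completes. Only the 50 ms and 1 ms figures come from the paragraph above; the 100 km/h speed is an assumption chosen for illustration.

```python
# Illustrative only: distance a vehicle covers during a network round trip.
# The speed value is an assumed input, not a figure from the article.

def distance_during_latency(speed_kmh: float, latency_ms: float) -> float:
    """Return metres travelled while waiting out a round trip of latency_ms."""
    speed_m_per_s = speed_kmh * 1000 / 3600
    return speed_m_per_s * (latency_ms / 1000)

for latency in (50, 1):  # today's ~50 ms vs. 5G's ~1 ms
    metres = distance_during_latency(speed_kmh=100, latency_ms=latency)
    print(f"{latency:>2} ms at 100 km/h -> {metres * 100:.0f} cm travelled")
# 50 ms -> ~139 cm of travel; 1 ms -> ~3 cm
```

At highway speed, the difference between the two latencies is more than a metre of travel before the vehicle can react, which is why near-real-time communication matters so much here.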
Almost everything that uses a wireless connection today will benefit from the deployment of tomorrow’s networks. However, all these advances come with an extra degree of responsibility: in the automation era, outages like the Northeastern Blackout will pose an even greater threat.
Understanding (and optimizing) the logistical puzzle of maintaining all this infrastructure will be one of the biggest challenges facing telecom leaders over the next five years. The service and maintenance demands of these massive new networks will force a reimagining of the entire field service industry, and that reimagining will involve more than just people.
How Do We Get There?
To keep pace with the deployment of 5G networks, telecoms will need to radically restructure the way they manage field service. With 5G, success will be measured by network density: for every traditional tower, field service teams will need to install 300 small-cell antennas. That is a task that simply cannot be accomplished with traditional workforces and legacy processes.
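A rough back-of-the-envelope sketch shows what that density means for maintenance workloads. Only the 300-to-1 ratio comes from the paragraph above; the tower count and visit cadence are hypothetical inputs.

```python
# Hypothetical illustration of the maintenance scale implied by 5G density.
# The 300 small cells per traditional tower ratio comes from the text above;
# the tower count and visits-per-year cadence are assumptions for the example.

SMALL_CELLS_PER_TOWER = 300

def assets_to_maintain(traditional_towers: int) -> int:
    """Total serviceable assets once small cells sit alongside every tower."""
    return traditional_towers + traditional_towers * SMALL_CELLS_PER_TOWER

towers = 2_000                    # assumed regional footprint
visits_per_asset_per_year = 2     # assumed preventive-maintenance cadence

total_assets = assets_to_maintain(towers)
print(f"{total_assets:,} assets -> "
      f"{total_assets * visits_per_asset_per_year:,} site visits per year")
# 602,000 assets -> 1,204,000 site visits per year
```

Even with conservative assumptions, the asset count grows by two orders of magnitude, which is the scale problem no manual dispatching model can absorb.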
To meet this scale, field service management must pivot to field service automation. IoT devices can support the shift by streaming real-time data and automating routine tasks, but it is field service automation as a whole that will provide the stability and consistent uptime the automation age requires.
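As a concrete illustration of that pivot, here is a minimal sketch of telemetry-driven triage: a reading from a small cell automatically becomes a work order when it crosses a threshold. All names (SmallCellReading, create_work_order, the thresholds) are hypothetical, not any specific vendor’s API.

```python
# A minimal sketch of field service automation driven by IoT telemetry.
# Every identifier and threshold here is a hypothetical example.

from dataclasses import dataclass

@dataclass
class SmallCellReading:
    site_id: str
    temperature_c: float
    uptime_pct: float  # rolling 24-hour availability

def create_work_order(site_id: str, issue: str) -> None:
    # In a real deployment this would call a field service management platform.
    print(f"Work order created for {site_id}: {issue}")

def evaluate(reading: SmallCellReading) -> None:
    """Automate the routine triage a dispatcher would otherwise do by hand."""
    if reading.uptime_pct < 99.9:
        create_work_order(reading.site_id, "availability below SLA")
    elif reading.temperature_c > 60:
        create_work_order(reading.site_id, "overheating, inspect cooling")

evaluate(SmallCellReading(site_id="cell-0421", temperature_c=63.0, uptime_pct=99.95))
# -> Work order created for cell-0421: overheating, inspect cooling
```

The point is not the specific rules but the pattern: the network reports its own condition, and routine decisions happen without waiting for a person to notice the problem.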