Opinion: The journey to Networking 3.0


In the last several decades, we have seen massive changes in networking and networking technology. From the hardware-dependent, scale-up networks of the past to the software-defined networks of today, service providers, hyperscalers and enterprises across the world have been on an exciting networking journey.

There have been three distinct eras in networking: Networking 1.0, 2.0 and 3.0. Each era was shaped by a killer application, which drove significant technological advancements that ultimately defined networking in that era. In each case, networking changed the world by connecting everything and empowering everyone.

Networking 1.0 – Connection-led

In the 80s and 90s, before the Internet was ubiquitous, the killer application of networking was connectivity. Whether it was telephony, private lines, virtual private networks or optical transport, Networking 1.0 was connection-led.

Connectivity as a service scales with the number of people or locations connected to each other. Fortunately, Moore’s law allowed network equipment capacity to double every 18 months, which more than kept up with the growing demand for connectivity. Network equipment increasingly turned into ‘God boxes’: refrigerator-sized chassis that provided switching, routing and multiplexing capabilities all in one device.

One advantage of the refrigerator chassis was that the amount of deployed network equipment didn’t have to grow much to support the growth of connectivity, as each chassis got a capacity boost every 18 months. A crew of expert network operators applied years of rigorous training to managing this equipment through painstakingly learned command-line interfaces (CLI) and network management systems (NMS). And so grew the bond between the human operators and their network equipment: each device was managed as a ‘pet’.

But there were issues. Implementations throughout the network were completely closed, and interoperability was not always guaranteed. It was difficult for network operators to integrate other systems into their own, and generally the only option for building a network was to buy what was available from the equipment providers of the day. There was no open or open-source networking ecosystem in which academics and innovators could rapidly innovate.

Then the Internet happened.

Networking 2.0 – Data-led

When the Internet started becoming an invaluable part of our daily lives in the late 1990s, everything changed. Multimedia content exploded, and with it the volume of data flowing through the network. Moore’s law was not keeping up with the explosion of network usage, and the Networking 1.0 strategy of refreshing network equipment every few years was running out of steam. Keeping up with the Internet required a completely different kind of networking strategy. I call this Networking 2.0. Networking 2.0 was data-led.

I joined Google as a Network Architect a few months after Google had acquired YouTube. I had a front-row seat as the Internet fueled the explosion of content and data all around us and traditional networks started bursting at the seams. Ten years later, over 1 billion hours of video are watched on YouTube each day! And the network kept up by adopting a completely new architecture: scale-out instead of scale-up.

Instead of building networks with refrigerator chassis, we started designing network fabrics using a leaf-spine topology, first in datacenters but soon in wide-area networks, network edges and content delivery networks.
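The structure of such a fabric is simple to sketch: every leaf switch connects to every spine switch, so any two leaves are exactly two hops apart, and capacity grows by adding elements rather than by replacing a chassis. Here is a minimal, illustrative Python sketch (the switch names and sizes are made up for the example):

```python
# Minimal sketch of a leaf-spine (Clos) fabric: a full mesh of links
# between the leaf layer and the spine layer. Scale-out means adding
# more leaves or spines instead of buying a bigger box.

def build_leaf_spine(num_leaves: int, num_spines: int) -> list[tuple[str, str]]:
    """Return every leaf-to-spine link in the fabric."""
    return [
        (f"leaf{l}", f"spine{s}")
        for l in range(num_leaves)
        for s in range(num_spines)
    ]

links = build_leaf_spine(num_leaves=4, num_spines=2)
# Each of the 4 leaves has one uplink to each of the 2 spines: 8 links.
print(len(links))   # 8
print(links[0])     # ('leaf0', 'spine0')
```

Note how the link count, and therefore the number of elements to manage, grows multiplicatively as the fabric scales out; this is exactly the operational pressure described below.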

As data flooded in, the fabrics kept expanding to accommodate it. One distinct downside of the scale-out architecture was that the number of network elements kept increasing along with network bandwidth. Managing and operating them as pets, with a human touch, became increasingly untenable. It became clear to us that network elements had to be treated like cattle: what mattered was the fabric, not the individual elements it was composed of.

With more devices came more network failures: the number of element failures grew proportionally with the number of network elements. Mitigating failures one element at a time, with operators individually responding to each event, was no longer a viable strategy. Instead, software had to make the system reliable. Enter software-defined networking.

For a network to be managed and operated by software, the underlying network elements and subsystems had to be programmable at every layer, with disaggregated data, control and management planes, sufficiently abstracted through data models and APIs. This became the mantra of Networking 2.0.
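The shift is from typing CLI commands box by box to expressing desired state as a data model that software can render and push through an API. The sketch below illustrates the idea only; the model and field names are invented for the example and do not correspond to any vendor's actual API or data model:

```python
# Hedged sketch of the Networking 2.0 idea: network state as a data
# model, rendered into a declarative blob that a controller could push
# to many elements at once, instead of per-box CLI sessions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Interface:
    name: str
    ip: str
    enabled: bool = True

def render_intent(interfaces: list[Interface]) -> dict:
    """Management plane: turn the abstract model into a config document."""
    return {
        "interfaces": [
            {"name": i.name, "ip": i.ip, "enabled": i.enabled}
            for i in interfaces
        ]
    }

intent = render_intent([
    Interface("eth0", "10.0.0.1/24"),
    Interface("eth1", "10.0.1.1/24", enabled=False),
])
print(intent["interfaces"][0]["name"])  # eth0
```

In real deployments this role is played by standardised data models and programmatic interfaces rather than hand-rolled dictionaries, but the principle is the same: the model, not the operator's keystrokes, is the source of truth.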

Automation became a matter of survival in this era, as it allowed continued scaling out without compromising reliability or efficiency.

Developments in SDN and scale-out network architecture, underpinned by automation technology, are leading us into what we are experiencing today and into the future: Networking 3.0.

Networking 3.0 – Application-led

Applications in the cloud world need to be ubiquitous, effortlessly spanning the boundaries of campus, branch, datacenter, private networks and the public internet. Whether we are at home, in the office, at a cafe, or traveling around the world, we want our favorite applications to just work. Such ubiquitous, reliable and secure application delivery is made possible by a new breed of network architecture. We call this Networking 3.0.

While we are still in the midst of realising the full capabilities of Networking 3.0, what we do know is that it is application-led.

Application-led networking inherits many of the good properties of Networking 2.0, but goes a step further by blurring the line between physical and virtual networks. Similarly, the lines between on-premises and off-premises, and between private and public networks, blur. Network underlays and overlays are utilised seamlessly in data center SDN, SD-WAN, SD-LAN, SD-WLAN and VPCs. For today’s networks this isn’t just a preference; it’s a requirement.

Through this journey, networks are slowly but surely marching towards a completely autonomous and self-driving future.

While Networking 1.0 was completely human-driven, software-defined networking enabled automated, workflow-driven operation in Networking 2.0. The next phase, in Networking 3.0, is event-driven networking: a closed-loop automated operation that listens for events and triggers automated workflows in response. Despite the loop being closed, such operations are still limited in their scope and sophistication.
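A closed loop of this kind can be sketched as a mapping from event types to pre-determined workflows, with anything unrecognised escalated to a human. The event names and remediation workflows below are invented for illustration; real systems consume telemetry streams and drive orchestration platforms:

```python
# Illustrative sketch of closed-loop, event-driven operation: listen for
# an event, look up the pre-determined automated response, run it.

def drain_link(event: dict) -> str:
    """Workflow: take a flapping link out of service."""
    return f"drained {event['link']}"

def reroute_traffic(event: dict) -> str:
    """Workflow: steer traffic around a failed link."""
    return f"rerouted around {event['link']}"

# The playbook is the "closed loop": each known event type triggers a
# fixed, automated workflow with no human in the path.
PLAYBOOK = {
    "link_flap": drain_link,
    "link_down": reroute_traffic,
}

def handle(event: dict) -> str:
    workflow = PLAYBOOK.get(event["type"])
    if workflow is None:
        # Events outside the playbook show the limit of this approach.
        return "escalate to human operator"
    return workflow(event)

print(handle({"type": "link_down", "link": "leaf0-spine1"}))
```

The hard-coded playbook is also why such loops are limited: they can only respond to events and remedies someone anticipated in advance, which is the gap that learning-based systems aim to close.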

Here is where Networking 3.0 is starting to take advantage of progress in artificial intelligence and machine learning. By learning from different events and the efficacy of the pre-determined automated responses, the system continuously adapts and improves with little-to-no human intervention. Eventually, networks will become entirely autonomous: self-analysing, self-discovering, self-configuring and self-correcting. This is the self-driving network, and it is the goal for the future of networking.

We have come a long way since the connection-led days of networking, but with the day-to-day running of networks soon to be entirely driven by machines, it is exciting to see what the next phase of this networking journey will be.

Source: datacenternews
