How to go from a stack of routers, firewalls and load balancers to servers virtualizing those functions.
It started decades ago: routers, firewalls, load balancers, and the like were, and still are, computers. They are more specialized computers, with FPGAs and other specialty hardware, but they are computers all the same.
The reason the network had to move to the current generation of purpose-built products was that traffic levels were pushing the limits of what could be done with generic machines. If I recall correctly, in the mid 90's the last of the minicomputer-based routers were removed from the peering exchanges and the era of Cisco/Juniper came to be.
Today the hardware available in servers (Ivy Bridge based processors, packet-offloading NICs and other enhancements) has brought back the possibility of handling high numbers of packets per second (PPS) and Gigabits per second (Gbps) on inexpensive hardware. Companies like Vyatta, LineRate Systems, Midokura and others are out to prove that you no longer need expensive, proprietary hardware to do network tasks.
In public, the ETSI Network Functions Virtualization (NFV) working group is pushing hard on vendors to provide virtualized versions of products and solutions. Even Cisco has released UCS-based network products that it claims can do up to 17 Gbps.
But there is an issue: how do you get from a Hardware Defined Data Center to a Software Defined one? My view is that we need to follow the Software-led path towards what we currently call Software Defined Networking (SDN).
How to transition from Proprietary Hardware to Commodity Hardware?
As SDN tends to be described as a separation of the Control Plane and the Forwarding Plane, one of the first steps is to implement a Control Plane that runs separately from the rest of the hardware. Eventually this Control Plane will become part of an Orchestration toolset. Once the Control Plane is in place, there are products that can be deployed to reduce costs while adding a minimum of complexity. OpenFlow-controlled switches can provide part of the solution.
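The Control Plane / Forwarding Plane split can be sketched in a few lines. This is a toy model, not a real OpenFlow library: the `Switch`, `FlowEntry` and `Controller` names, the prefix-string matching, and the port numbers are all illustrative assumptions. The point is only the division of labor: the controller computes where traffic should go and pushes entries down, while the switch does nothing but match packets against its installed table.

```python
from dataclasses import dataclass, field

@dataclass
class FlowEntry:
    match_dst: str      # destination prefix to match (simplified for the sketch)
    out_port: int       # action: forward out this port
    priority: int = 0   # higher priority wins, as in an OpenFlow flow table

@dataclass
class Switch:
    """Forwarding plane: only matches packets against installed entries."""
    name: str
    table: list = field(default_factory=list)

    def install(self, entry: FlowEntry):
        # Stands in for an OpenFlow FLOW_MOD sent by the controller.
        self.table.append(entry)
        self.table.sort(key=lambda e: -e.priority)

    def forward(self, dst: str):
        for entry in self.table:
            if dst.startswith(entry.match_dst):
                return entry.out_port
        return None  # table miss: would be punted up to the controller

class Controller:
    """Control plane: decides paths, pushes state down to the switches."""
    def __init__(self, switches):
        self.switches = {s.name: s for s in switches}

    def program_path(self, switch_name, prefix, port, priority=10):
        self.switches[switch_name].install(FlowEntry(prefix, port, priority))

s1 = Switch("edge-1")
ctrl = Controller([s1])
ctrl.program_path("edge-1", "10.1.", port=2, priority=10)  # more specific route
ctrl.program_path("edge-1", "10.",   port=1, priority=5)   # broader fallback

print(s1.forward("10.1.5.9"))  # matches the more specific, higher-priority entry: 2
print(s1.forward("10.9.0.1"))  # falls through to the broader entry: 1
```

Note that the switch never decides anything; swapping the `Controller` for an Orchestration toolset later would not touch the forwarding side at all, which is the property the transition depends on.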
Putting a few OpenFlow switches into the network to re-path some of the current traffic would be a good first step. The switches could be used to offload traffic from your current hardware path, freeing up resources on your routers, firewalls, load balancers, etc. The main hurdle here is having a system that optimizes which flows are put onto the OpenFlow switches.
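One simple way to make that offload decision is to install only the heaviest flows, since hardware flow tables hold a limited number of entries. The sketch below assumes made-up traffic counters and an arbitrary capacity figure; a real system would pull counters from the existing routers and refresh the selection periodically.

```python
def select_offload_flows(flow_bytes, table_capacity):
    """Return the heaviest flows, up to the switch's flow-table capacity.

    flow_bytes: dict mapping a flow id (e.g. a src->dst:port string) to the
    byte count observed on the current router/firewall path.
    """
    ranked = sorted(flow_bytes.items(), key=lambda kv: kv[1], reverse=True)
    return [flow for flow, _ in ranked[:table_capacity]]

# Hypothetical counters: two elephant flows worth offloading, two mice
# that can stay on the existing hardware path.
observed = {
    "10.0.0.1->10.0.9.9:80":  9_500_000,
    "10.0.0.2->10.0.9.9:443": 7_200_000,
    "10.0.0.3->10.0.9.9:53":      4_000,
    "10.0.0.4->10.0.9.9:123":     1_200,
}

print(select_offload_flows(observed, table_capacity=2))
# ['10.0.0.1->10.0.9.9:80', '10.0.0.2->10.0.9.9:443']
```

Top-N by volume is the crudest possible policy; the "more mature solutions" mentioned below would weigh entry churn, flow lifetime and table-miss cost as well, but the shape of the problem is the same.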
As more mature solutions come out to help manage flow tables, these same products could be used in concert with other network control products to orchestrate an NFV-based version of the infrastructure via Network APIs.
While current commodity hardware servers are not capable of pushing tens of millions of PPS or hundreds of Gbps, OpenFlow-based switches are. The orchestration of packet flow for both SDN and NFV will be very similar. While OpenFlow may not, in the end, be the Network API used to manage all of the hardware and vDevices in the network, the work of building the Control Plane, the OpenFlow integration and eventually the Orchestration management tool continues to provide value.
We are at the beginning of the Software-led Network Virtualization transition. It is important to look at what benefits are gained via Software-led designs and how you can take advantage of them now and in the future.