Deja Vu – Containers, Virtualization, and SDN

January 27th, 2015 by osborne

Everything old is new again. There really is a cyclic nature to many things in life, and technology is no different. It isn’t that what we were doing 5 years ago was wrong, so much as it was the best strategy we could take at the time with the technology available. When we look over the history of software, hardware, and the way we combine and use them, we notice some interesting patterns emerging.

Leveraging the best of the past for today and tomorrow

Many of the patterns in technology are surprisingly familiar, but at the same time they appear innovative because new use cases make them more viable solutions. Recall that the concept of virtualization and the logical segmenting of physical hardware has been with us since the days of the mainframe and the LPAR (Logical PARtition). The centralized compute model also stems from classic mainframe architecture, which was the basis for VDI and many other virtualization concepts.

Let’s take a look at some key technologies that are getting a lot of attention today and see how the past has helped shape them into what we now see as new, innovative, and disruptive products.

Containers – LXC reborn

Docker is getting much of the attention lately, but we have to look back to LXC (Linux Containers) to see where the Docker concept came from. This is a great example of the idea that everything old is new again. LXC has been available since 2008 and has enjoyed wide usage in the industry.

Where Docker shines over LXC is that it builds an application-centric runtime on top of containers. Effectively, it takes the container concept and extends it to run an application virtualization engine inside. Using containers as the base, Docker added extensions that moved the virtualization further up the stack. Docker also added cross-platform capability, which was limited with LXC. In fact, LXC was a challenge to run even across distributions because of subtle differences in the core of each Linux derivative.
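To ground the comparison, here is a minimal sketch (assuming Linux, Python 3, and root privileges) of the kernel primitive that both LXC and Docker ultimately build on: moving a process into its own namespace. It detaches the current process into a private UTS namespace so that a hostname change is invisible to the rest of the system; the hostname itself is just an illustrative value.

    # Minimal namespace demo: the isolation primitive under both LXC and Docker.
    # Assumes Linux, Python 3, and root; the hostname is purely illustrative.
    import ctypes
    import socket

    CLONE_NEWUTS = 0x04000000  # flag for a new UTS (hostname) namespace

    libc = ctypes.CDLL("libc.so.6", use_errno=True)
    if libc.unshare(CLONE_NEWUTS) != 0:
        raise OSError(ctypes.get_errno(), "unshare() failed (root is typically required)")

    # Inside the new namespace, the hostname change is local to this process.
    socket.sethostname("container-demo")
    print("hostname inside the new namespace:", socket.gethostname())

LXC and Docker layer mount, PID, network, and user namespaces, plus cgroups, on top of this same mechanism; the differences between them are largely in packaging, tooling, and how far up the stack the abstraction goes.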

As quickly as everyone ran to Docker, there were also challenges, which prompted the team at CoreOS to build Rocket. The Rocket platform is gearing up to provide a management framework for containers, filling a hole in the Docker ecosystem that some say is exactly where the commercial arm of Docker is looking to grow.

One thing that is for sure is that container concepts are growing in popularity and adoption. The versatility of containers for developing apps across platforms, and across physical and virtualized environments, is where they will continue to expand. This is a good example of a technology that was already present but needed more care and feeding to become a viable alternative for developers and operations teams.

Virtualization – LPAR++

As mentioned earlier, the hypervisor concept on x86 may be big, but its origins are rooted in classic mainframe concepts. Being able to logically separate hardware into virtual environments was a powerful concept, but the x86 architecture was different in that it wasn’t able to match the sheer compute power of a mainframe system.

Distributed computing was born and server sprawl began in a big way. The x86 architecture was ideal for running smaller environments across multiple machines, but then the hardware became more powerful, which opened the door to revisiting the model. As larger, more powerful x86 systems became widely available, we realized that a lot of compute cycles and memory were sitting unused. This was the dawn of x86 virtualization.

Linux also drove the adoption of x86 because it was a new alternative to the hardware-bound Unix platforms, which primarily ran on SPARC under SunOS. Many web properties were launched on that powerful hardware and software pairing, but the rise of x86 *nix alternatives showed they were equally versatile at a significantly lower cost.

VMware may have been a small player at one point, back when Citrix dominated the virtualization landscape with its popular platform for server, desktop, and application virtualization. VMware saw the opportunity to simplify deployment and increase the stability and versatility of the hypervisor as it developed the vSphere platform.

Microsoft made its foray into virtualization with Hyper-V, which took some time to stabilize and make big leaps in features, though those improvements have now been in place for a few years. While we once chuckled at the idea that Microsoft Hyper-V would be a major contender in virtualization, we are now seeing significant growth of the platform as a result of better features and stability, mostly since Windows Server 2008 R2.

KVM and Xen, the open source hypervisor alternatives, have also seen significant adoption in the last couple of years. Because KVM and Xen are free to use under open source licensing, they are particularly popular with companies that would rather put their resources toward engineering staff than toward hypervisor costs.

Many, if not most, organizations are running a hypervisor as their primary platform. Private and public cloud options have been built on these same technologies, driving density along with better self-service capabilities.

Software-Defined Networking – Flexibility and Process Enablement

The latest big trend in virtualization is the introduction of SDN platforms such as VMware NSX, Cisco ACI, and open source alternatives such as Open vSwitch. The idea behind SDN is to decouple the network from the hardware, moving the intelligence of routing and logical switching closer to the workload while also providing scale-out capability and the logical use of networks across physical boundaries.

Extending networks across physical boundaries has been done for a long time, but those layer 2 extensions required subnets to differ across environments in order to work around the limitations of WAN protocols. SDN introduced the separation of the control plane from the data plane and created an ideal platform for running layer 2 over layer 3 networks using encapsulation protocols like VXLAN.
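As a rough illustration, the sketch below (Python, not tied to any particular SDN product) shows the VXLAN encapsulation idea from RFC 7348: an inner layer 2 frame gets an 8-byte VXLAN header and rides inside an ordinary UDP/IP packet on the well-known port 4789, which is what lets a logical L2 segment stretch across L3 boundaries. The frame contents and the VTEP address are placeholders, not a working overlay.

    # Minimal VXLAN encapsulation sketch; frame bytes and VTEP address are placeholders.
    import socket
    import struct

    VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

    def vxlan_encapsulate(inner_frame: bytes, vni: int) -> bytes:
        """Prepend the 8-byte VXLAN header: I-flag set, 24-bit VNI, reserved bits zero."""
        header = struct.pack("!II", 0x08 << 24, (vni & 0xFFFFFF) << 8)
        return header + inner_frame

    inner = b"\x00" * 14 + b"payload"              # stand-in for a real Ethernet frame
    packet = vxlan_encapsulate(inner, vni=5001)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(packet, ("192.0.2.10", VXLAN_PORT))  # 192.0.2.10 is a placeholder VTEP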

Running routing software on commodity, vendor-agnostic hardware is also not a new concept. I was recently reminded of early work I did with LRP (the Linux Router Project), which used Linux as a base to run on just about any hardware and perform routing inside a lightweight software platform. That was certainly a long way from encapsulated L2 across L3 boundaries, but again, the point I’m making is that the concepts have been in place for quite some time; they just hadn’t matured enough to gain popular adoption.
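For what it’s worth, the heart of that old LRP-style setup is still a one-line kernel toggle. The sketch below (again assuming Linux, Python 3, and root) simply enables IPv4 forwarding, which is what turns a general-purpose Linux box into a packet router; a real deployment would add interfaces, routes, and filtering on top.

    # Turn a Linux host into a basic router by enabling IPv4 forwarding.
    # Assumes Linux and root; everything beyond the sysctl is left out.
    from pathlib import Path

    ip_forward = Path("/proc/sys/net/ipv4/ip_forward")
    ip_forward.write_text("1\n")
    print("IPv4 forwarding enabled:", ip_forward.read_text().strip() == "1")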

Everything old is new again

Just like the return of musical styles, concepts in technology are truly cyclic. Open source projects have opened the doors for many of these technologies to gain features and popularity, and commercial products are rising in adoption as well, delivering features that have long been sought after.

History repeats itself, but it also improves with each iteration. That is the important part of it all. As old concepts are revisited, new ideas and innovation are making these old methods more viable for today’s consumers.
