The Myths and Legends of Vendor Lock-in

April 2nd, 2015

There is an ongoing discussion (read: argument) around something that has become the Voldemort of technology: vendor lock-in.


Many acknowledge that it exists. Many deny that it exists. Both camps are right to a degree, depending on the environment. If asked directly, I have to admit that vendor lock-in does indeed exist. What is more important in my mind is that most vendor lock-in is unavoidable, and that's just fine.

I won’t go into specific hardware and software except to illustrate examples. The idea here is just to look at how vendor lock-in happened, or rather how it became noticeable.

Back in the day…

At one point in the past, and still in some organizations today, server and network infrastructure was commonly purchased or financed rather than leased. This was driven largely by the cost of the hardware itself and by the long lifecycle of applications that didn’t require changes.

Another driver for the longer life cycle was the longer innovation cycle for hardware. Operating systems lasted longer, firmware updates came out less often, and vulnerabilities were less prevalent because fewer applications were outward-facing.

Distributed computing became a mainstay for most organizations, which changed the way servers and networks were acquired and deployed. As leasing hardware and software platforms became more popular, the application lifecycle evolved with it.

Rapid application development and other new development practices came to the fore, and applications were popping up faster. They were also going stale at a faster rate, which meant that the upgrade cycle for software and applications became shorter than the lifecycle of the hardware.

Virtually Escaping the Physical Boundary

Virtualization came along and extended our ability to support these applications on both sides. Virtualized hardware for the guest instances meant we could support an application longer than the lease on the underlying hardware. Better deployment agility allowed us to adapt to the faster lifecycle of applications as well.

We now had more than one competing virtualization vendor, which introduced the problem of designing our guest instances for a single environment. Despite the air of portability, it was portability within the platform only. This is flexibility with an asterisk. Openness with conditions, so to speak.

Then along came the cloud, which many saw as the escape from vendor lock-in. Or so we have been told.

Cloudy Forecast at Hotel California

As The Eagles once wrote, “You can check out any time you like, but you can never leave,” which is a good way to describe what can happen with vendor lock-in. As you build your entire application platform around a specific infrastructure offering, you inevitably build in some tightly coupled hooks to take advantage of the best available tools within that infrastructure.

If you are running on AWS, you will build out an EC2 compute platform with some S3 storage, add in an Elastic Load Balancer, and layer on some queuing with SQS, for example. This is all very simple infrastructure to deploy and manage. It is all accessible from programmatic tools using publicly available APIs, and the inter-dependencies are easy to identify because they are all contained within the AWS cloud.
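
To make that coupling concrete, here is a minimal sketch of what those hooks tend to look like in application code. It assumes Python with the boto3 library, and the bucket and queue names are hypothetical. Every call goes through a public, well-documented API, and every call is also specific to AWS:

    # A minimal sketch, assuming Python with boto3 and hypothetical resource
    # names ("my-app-assets" bucket, "my-app-queue" queue), of how application
    # code ends up speaking AWS-specific dialects directly.
    import boto3

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")

    def store_asset(key: str, data: bytes) -> None:
        # Object storage call is S3-specific: bucket semantics, key naming,
        # and error handling all follow the S3 API.
        s3.put_object(Bucket="my-app-assets", Key=key, Body=data)

    def enqueue_job(payload: str) -> None:
        # Queue call is SQS-specific: queue URLs, visibility timeouts, and
        # message attributes have no direct equivalent on other platforms.
        queue_url = sqs.get_queue_url(QueueName="my-app-queue")["QueueUrl"]
        sqs.send_message(QueueUrl=queue_url, MessageBody=payload)

Swapping S3 for another object store or SQS for another queue means rewriting these calls, not just re-pointing an endpoint.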

All good, right? Well, let’s just say that something suddenly happens that requires you to migrate your application somewhere new. While you’ve done your best to make everything accessible via APIs, there are potentially still hooks that are specific to AWS. Migrating your data can be challenging and costly. It isn’t that it can’t be done, but it is a non-trivial process and can incur costs for data transfer and I/O.

So, the end result is that if you are moving to the cloud to embrace openness, be careful what you see as open. Amazon doesn’t really tout themselves as open, but they do toss around the idea of flexibility and ease-of-use. They really mean flexibility and ease-of-use within their platform.

Should I Worry?

Yes and no. We should be concerned about how we architect our platforms and infrastructure because of the medium- to long-term effects of having to evolve and migrate those environments. We should be prepared for the case where a service suddenly becomes unavailable. Not just a localized outage such as an AWS region going down, but imagine adopting a cloud provider who suddenly shuts its doors with limited notice.

It’s the same for our data center platforms. Even if you are planning to adopt OpenStack or CloudStack because they are open source, you may find that your workloads have dependencies on that open platform. While the platform itself is open, the images and applications you build for it may still be bound to platform-specific dependencies.
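
As a minimal sketch of that point, assume the openstacksdk Python library and a hypothetical clouds.yaml profile called "mycloud". The platform is open source, but the code still binds itself to that deployment's image names, flavors, and networks:

    # A minimal sketch, assuming the openstacksdk library and a hypothetical
    # clouds.yaml profile named "mycloud": even on an open platform, the code
    # still binds to that platform's notions of images, flavors, and networks.
    import openstack

    conn = openstack.connect(cloud="mycloud")

    # The image and flavor names are defined inside this particular OpenStack
    # deployment; moving elsewhere means remapping them.
    server = conn.create_server(
        name="app-server-01",
        image="ubuntu-14.04-cloudimage",  # hypothetical Glance image name
        flavor="m1.medium",               # hypothetical Nova flavor
        wait=True,
    )
    print(server.status)

The API surface is open, but the bindings are still bindings; a move to another OpenStack cloud, let alone another platform, still means remapping those names.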

We won’t all be writing scale-out, stateless applications for everything. Many organizations have massive amounts of legacy data that is locked in where it is. It may be somewhat flexible within its platform. Again, this is alright.

I prefer to think of it more as a vendor bear hug.
