Give Your Developers the Agility They Need.

August 26th, 2014

A recent TechRepublic article claims that analysts significantly underestimate public cloud usage. Just as you might hide guilty-pleasure spending from your significant other, developers aren’t divulging their use of public cloud resources, so businesses can’t measure their true cost.

Shadow IT practices have gone on for a while, resulting in organizations losing track of how many public cloud resources their employees may be consuming. One example is a recent report from the EPA – the U.S. Environmental Protection Agency – acknowledging that the agency had lost control over its usage of public cloud resources.

On the one hand, I hope that when IDC, Gartner, and Forrester create their estimates they look not only at demand but also at supply; that is, if they collect data from vendors, they should have a good sense of spend even when users don’t report it. On the other hand, as a former SaaS product manager, I know how tempting and easy it is to stop dealing with the internal friction of getting resources through formal channels and simply go to AWS or Azure.

In fact, convenience and flexibility are seen as the top reasons to go to the public cloud. It’s less about cost and more about getting up and running quickly. However, many organizations, as my former employer did, still allocate resources based on physical-world mental models in which departments get access to servers, not compute or storage capacity.

If you’ve already virtualized your data center, you may have started to leverage VMware’s Resource Pools capability as a way to give your organization the flexibility to logically separate compute resources by internal customer or department. Resource Pools start you down the path of standing up a private cloud with the flexibility to give developers dedicated resources. Or you may be further down this path and are leveraging a Cloud Management Platform like vCAC or OpenStack, which makes it even easier to provision resources and lets developers onboard new workloads.
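For the Resource Pools route, a minimal sketch with pyVmomi (VMware’s open-source Python SDK) shows what carving out a per-department slice looks like. The vCenter host, credentials, pool name, and reservation/limit numbers are all placeholder assumptions:

```python
# Carve out a per-department resource pool with pyVmomi.
# Host, credentials, pool name, and all numbers are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())  # lab only
content = si.RetrieveContent()

# Grab the first cluster in the inventory (assumes at least one exists).
cluster = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True).view[0]

def make_allocation(reservation, limit):
    alloc = vim.ResourceAllocationInfo()
    alloc.reservation = reservation        # guaranteed floor
    alloc.limit = limit                    # hard ceiling (-1 = unlimited)
    alloc.expandableReservation = True     # may borrow from the parent pool
    alloc.shares = vim.SharesInfo(level=vim.SharesInfo.Level.normal)
    return alloc

spec = vim.ResourceConfigSpec()
spec.cpuAllocation = make_allocation(4000, 8000)       # MHz
spec.memoryAllocation = make_allocation(8192, 16384)   # MB

# One logical slice of the cluster for the analytics dev team.
cluster.resourcePool.CreateResourcePool(name="analytics-dev", spec=spec)
Disconnect(si)
```

Setting expandableReservation lets the pool borrow from its parent when a reservation can’t be met locally, which is usually what a dev-team pool wants.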

But how do you determine how much compute to allocate, reserve, or limit when managing workloads in the cloud? Workloads are not static; their demand changes. This gets even more complex when you consider a typical three-tier application and the interdependence of its tiers. And how do you maintain control while giving your developers the agility that attracts them to AWS or Azure?
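A toy example (invented numbers, not VMTurbo’s analytics) makes the problem concrete: a three-tier app whose tiers peak together can fit a pool limit on average and still blow through it at peak:

```python
# Why static allocations are hard to get right: sampled hourly, the same
# three-tier app fits its pool limit on average but not at peak.
hourly_cpu_mhz = {
    "web": [400, 600, 1200, 900],   # front-end follows user traffic
    "app": [500, 800, 1600, 1100],  # app tier amplifies web demand
    "db":  [300, 400, 1000, 700],   # db peaks with the app tier
}
pool_limit_mhz = 3000

combined = [sum(tier[h] for tier in hourly_cpu_mhz.values())
            for h in range(4)]
avg, peak = sum(combined) / len(combined), max(combined)

print(f"average {avg:.0f} MHz, peak {peak} MHz, limit {pool_limit_mhz} MHz")
# average 2375 MHz, peak 3800 MHz, limit 3000 MHz
# Sized for the average, the pool throttles all three tiers at once at peak.
```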

VMTurbo Operations Manager with the Cloud Control Module manages Resource Pools, Tenants, and Virtual Data Centers. It abstracts cloud resource silos as Provider VDCs (allocations of physical resources) and Consumer VDCs (organizations). This abstraction gives VMTurbo complete visibility into the full VDC chain: from the resources provided by the underlying hosts and physical datastores, through the resources consumed by a provider VDC, to the resources consumed by VMs hosted on a consumer VDC.
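To make the chain concrete, here is a simplified data model of that hierarchy. This is my own illustration, not VMTurbo’s internal schema; the names and capacities are invented:

```python
# Hosts supply a provider VDC; consumer VDCs (tenants) draw on the provider;
# VMs draw on a consumer VDC. Visibility means walking this chain end to end.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_mhz: int
    mem_mb: int

@dataclass
class ProviderVDC:                      # allocation of physical resources
    hosts: list[Host]
    def capacity_mhz(self) -> int:
        return sum(h.cpu_mhz for h in self.hosts)

@dataclass
class VM:
    name: str
    demand_mhz: int

@dataclass
class ConsumerVDC:                      # an organization / tenant
    name: str
    provider: ProviderVDC
    allocation_mhz: int
    vms: list[VM] = field(default_factory=list)
    def utilization(self) -> float:
        return sum(vm.demand_mhz for vm in self.vms) / self.allocation_mhz

hosts = [Host("esx-01", 20000, 98304), Host("esx-02", 20000, 98304)]
pvdc = ProviderVDC(hosts)
tenant = ConsumerVDC("analytics", pvdc, 8000,
                     [VM("web-1", 1200), VM("app-1", 2400), VM("db-1", 1800)])
print(f"{tenant.name}: {tenant.utilization():.0%} of its slice, "
      f"provider capacity {pvdc.capacity_mhz()} MHz")
# analytics: 68% of its slice, provider capacity 40000 MHz
```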

VMTurbo assures that workloads get the resources they need without over-allocating or over-sizing the resource pool. For example, when VMs in a consumer VDC experience more demand, VMTurbo will recommend, and in automation mode implement, actions to size up the VDC’s CPU, memory, or storage. Conversely, if you stood up that new multi-tenant SaaS application but have not been able to attract any new users, VMTurbo will recommend a size-down action to keep you from over-allocating and wasting resources.
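The flavor of those actions can be sketched with a deliberately simple threshold heuristic. To be clear, VMTurbo’s actual decisions come out of its supply-and-demand economic model, not fixed thresholds; the cutoffs below are placeholders:

```python
# Illustrative only: recommend growing or shrinking a consumer VDC's slice
# based on how much of its allocation current demand consumes.
def recommend_vdc_action(demand_mhz: float, allocation_mhz: float,
                         high: float = 0.80, low: float = 0.30) -> str:
    utilization = demand_mhz / allocation_mhz
    if utilization > high:
        # Tenant is squeezed: grow its slice of the provider VDC.
        return f"size UP (utilization {utilization:.0%})"
    if utilization < low:
        # Capacity sits idle: shrink the slice, return it to the pool.
        return f"size DOWN (utilization {utilization:.0%})"
    return f"no action (utilization {utilization:.0%})"

print(recommend_vdc_action(6800, 8000))  # size UP (utilization 85%)
print(recommend_vdc_action(1200, 8000))  # size DOWN (utilization 15%)
```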

So let’s go back in time to my former employer. How could VMTurbo have helped?

The analytics SaaS product I was managing was a three-tier application with web, app, and database servers. The production, test/dev, and staging environments ran across ~9 hosts. We had dreams of a much larger user base, but in reality our user base was small. As the Ops team put it when I asked whether we were pushing the resource limits after onboarding three new customers: “I would not use the word push to describe this environment.”

We were spending ~$260k all-in per year on infrastructure, including Ops team support. As we looked at adding a new SaaS application to the environment, I was hoping to leverage the same infrastructure. But a lack of flexibility and convoluted cost centers resulted in us going to AWS with a projected spend of $50k per year (this was after we started out in the AWS free tier without telling anybody).

With VMTurbo in the environment, the Ops team could easily have separated resources for each application and consolidated the over-provisioning of our ETL and analytics services running on our Tomcat servers. ETL is resource intensive, but each customer’s job ran only once a day. That would have saved us from having to go to AWS and mitigated the performance issues we hit when those three new customers ran their first ETL at the same time. VMTurbo gives you the ability to manage, take action, and plan for changes, and ultimately to reliably give your developers the resources and agility they need so they don’t have to go behind your back.
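A back-of-the-envelope sketch of why that consolidation matters: peak demand depends on scheduling, not customer count, so staggering the runs flattens exactly the spike those three simultaneous first runs caused. The per-run cost is an invented figure:

```python
# Peak demand under two ETL schedules. One run per customer per day.
ETL_CPU_MHZ = 2000          # assumed cost of one customer's daily ETL run
customers = 3

simultaneous_peak = customers * ETL_CPU_MHZ  # everyone kicks off at once
staggered_peak = ETL_CPU_MHZ                 # separate windows: flat peak

print(f"simultaneous: {simultaneous_peak} MHz, staggered: {staggered_peak} MHz")
# simultaneous: 6000 MHz, staggered: 2000 MHz
```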

[Figure: the Performance, Efficiency, Agility triangle, with Agility highlighted]

This article is about agility. Read more like it in the Performance, Efficiency, Agility series.
