Coffee and Data – More than just an engine to fuel your team

April 14th, 2013


What do Starbucks and an IT Datacenter have in common? Supply and demand.

Think about it. Starbucks supplies resources such as coffee, muffins, soda, salads, and parfaits to a mass of demanding consumers, each with different taste preferences. Similarly, a datacenter supplies finite compute resources such as CPU, memory, IO, network, and storage to a mass of demanding consumers, each with different workload requirements. The consumers in this case just happen to be those annoying end users who make your business run.

Now in Starbucks' case, when a store becomes over-utilized (demand exceeds supply), you get unhappy consumers. Chances are they go somewhere else to get their coffee at a better price. Even if the top cats only messed up the distribution plan at one store location, they have still missed the mark in that region.

Well, what happens when supply doesn't meet demand in your IT Datacenter? The same consequence ensues: your end user is calling you on the telephone. Or perhaps you just suppress those 30 alerts in your inbox, because you're human and you admit to yourself that you can't address 1,000 data points simultaneously.

So in the dynamic world of your Datacenter, which operates much like the physical world of Starbucks, why are we still addressing resource contention with static thresholds: lines in the sand we don't want to cross, waiting for something to break? Maybe we can make those lines dynamic and set them lower on the curve, but even then you are sacrificing efficiency and draining your human resources, who must scramble before the infrastructure reaches critical mass.

[Chart: storage resource contention]

What if we took the same concept of currency that makes our global economy run like a well-oiled machine and applied it to shared resources within an IT Datacenter? Couldn't software do a little more than apply statistics to siloed metrics and hunt for correlations?

The Economic Scheduling Engine

VMTurbo has purpose-built an economic scheduling engine that does just this. By treating the virtual datacenter as a marketplace of buyers and sellers, the software has evolved past treating data as just data. More importantly, it has an end goal, an actual problem to solve through abstraction.

Consider the logic behind this mentality: just like a human being shopping at Starbucks, our VMs consume and need every single resource in real time to deliver quality of service to your end users. They DEMAND a variety of things: memory, CPU, IO, network, and storage, and each VM is entirely unique in its demand based on application, load, and line of business. The inflection point we all see manifest is that if any single one of these pieces is missing, you have an unhappy user.
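
To make the demand side concrete, here is a minimal sketch in Python. It is illustrative only: the names and structures (Workload, the demand dictionary) are hypothetical, not VMTurbo's actual data model.

    # Illustrative sketch only -- not VMTurbo's internal model.
    # Each VM is a "buyer" carrying its own basket of resource demands.
    from dataclasses import dataclass

    @dataclass
    class Workload:
        name: str
        demand: dict  # resource -> amount, e.g. {"mem_gb": 8, "cpu_ghz": 2.4}

    web_vm = Workload("web01", {"mem_gb": 4, "cpu_ghz": 1.2, "iops": 300})
    db_vm = Workload("sql01", {"mem_gb": 16, "cpu_ghz": 3.0, "iops": 2500})
    # Two workloads, two entirely different demand profiles -- exactly like
    # two customers ordering different things at the same counter.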

[Graphic: storage resource demands]

On the other side of the equation, just like Starbucks' store locations, your hosts, datastores, disk arrays, controllers, and network are all SUPPLYING resources up the supply chain to the workloads that need them. And while virtualization has conditioned us to push utilization higher, it has also increased the risk of shared problems across this intricate supply chain. There are simply too many moving parts, and too many transactions changing dynamically by the hour, week, and month, to maintain an alerting system that triggers responses based on resource breaches.

Leverage Efficient Market Principles to Drive Convergence

When you have a scenario as unique as this, economic theory simply makes sense as a way to drive convergence. In Starbucks' case, they would raise prices at stores with higher demand and lower them at stores with low demand. Coffee and muffin prices would vary by region based on consumers' needs and budgets, and each consumer would act on whatever gives the most bang for the buck. In the datacenter, VMTurbo reflects these relationships through a virtual currency. The price of a compute resource is determined as a function of its utilization and of the end users demanding it from providers such as hosts and datastores. Utilization goes up, price goes up; utilization goes down, price goes down. Looking at the whole picture, there are realistically multiple prices across multiple metrics and relationships, such as the following (a pricing sketch follows the list):

  • Applications consuming vRAM and vCPU from a provider VM
  • VMs consuming IO, space, and latency from datastores
  • Datastores consuming IO and space from arrays
  • Hosts providing memory, CPU, and IO to VMs
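
To illustrate the "utilization up, price up" rule, here is one simple pricing curve that behaves the right way: it charges against remaining headroom and spikes near saturation. The post doesn't publish VMTurbo's exact pricing function, so this particular formula is an assumption, not the real thing.

    # Illustrative pricing sketch -- any curve that rises with utilization
    # works; this one spikes as a resource approaches saturation.
    def price(utilization: float, base_cost: float = 1.0) -> float:
        """Price of one unit of a resource at a given utilization in [0, 1)."""
        if utilization >= 1.0:
            return float("inf")  # no headroom left: infinitely expensive
        return base_cost / (1.0 - utilization) ** 2

    print(price(0.50))  # 4.0  -- half-used resource, moderate price
    print(price(0.90))  # ~100 -- nearly saturated, price spikes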

[Graphic: VMTurbo supply chain]

You can begin to see that every transaction across each metric and relationship has its own price point. Through abstraction, VMTurbo captures the price point of every transaction made within an entire datacenter's supply chain. Then, using a common data model focused from the top down, VMTurbo gives virtual machines and applications a brain and a budget to find the best price on resources, assuring the best experience for the end user.
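
Here is a sketch of that shopping decision, continuing the hypothetical names from the earlier sketches: the buyer prices its whole demand basket at each provider and picks the cheapest.

    # Illustrative shopping sketch: a VM picks the provider where its demand
    # basket costs the least. Names are hypothetical, as above.
    def price(utilization, base_cost=1.0):
        return base_cost / (1.0 - utilization) ** 2  # same illustrative curve

    def basket_cost(demand, utilization):
        """Total price of a demand basket at one provider's current utilization."""
        return sum(amount * price(utilization[res]) for res, amount in demand.items())

    def best_provider(demand, providers):
        """Name of the provider offering this basket at the lowest total price."""
        return min(providers, key=lambda p: basket_cost(demand, providers[p]))

    providers = {  # per-resource utilization at each provider
        "host-a": {"mem_gb": 0.85, "cpu_ghz": 0.60, "iops": 0.50},
        "host-b": {"mem_gb": 0.30, "cpu_ghz": 0.35, "iops": 0.40},
    }
    print(best_provider({"mem_gb": 4, "cpu_ghz": 1.2, "iops": 300}, providers))
    # -> host-b: host-a's nearly saturated memory and busier storage make it
    #    the far more expensive store for this basket.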

The Outcome

The end result is that the economic scheduling engine determines all available control points (stop, start, vMotion, Storage vMotion, resize, provision, etc.) and delivers the exact executable decisions needed to make the entire supply chain converge across all objects and metrics in real time. Based on these prices and relationships, the actions change to deliver the best quality of service to the end user at the lowest cost. At scale, this can be applied to thousands of virtual machines, hosts, and storage components to drive an entire datacenter to its most efficient price point without sacrificing quality of service.
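
And here is a toy version of the convergence loop itself, reusing best_provider from the sketch above. It is hypothetical: a real engine would also re-price each provider after every move, and would weigh resizes and provisioning alongside moves.

    # Toy convergence sketch (reuses best_provider from the previous sketch).
    # Each accepted move corresponds to an executable action, e.g. a vMotion.
    def converge(placements, demands, providers):
        """Let every VM keep shopping until no one finds a cheaper provider."""
        actions, changed = [], True
        while changed:
            changed = False
            for vm, current in placements.items():
                target = best_provider(demands[vm], providers)
                if target != current:
                    actions.append(f"vMotion {vm}: {current} -> {target}")
                    placements[vm] = target
                    changed = True
                    # A real engine would re-price both providers here, since
                    # the move itself shifts their utilization (and prices).
        return actions

When the loop goes quiet, every basket is parked at its cheapest available provider, which is exactly the converged state the engine drives toward.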

More importantly, it offers a way to capture the interdependencies of a datacenter that a purely statistical model cannot. If you understand the price point of every object and every relationship across the entire supply chain, then every decision VMTurbo recommends is made with an awareness of its impact and outcome on every other relationship that action could touch. No more dominoes…

VMTurbo looks at the macro picture and every granular detail of its makeup to ensure that every VM can access and consume the resources it needs when it needs them, and that every provider object is healthy and utilized efficiently. And in between, the supply chain hums like a Porsche, without contention. You have just reached IT nirvana, my friend. Now go enjoy a nice latte in your autonomous datacenter.
