So, your organization is thinking about bringing cloud automation into your environment, or has already decided to do so. Perhaps you are trying to leverage vRealize Automation to streamline and standardize deployments across different platforms in your infrastructure. Maybe you are bringing in OpenStack to give end users a self-provisioning portal, or you’re adding a configuration management system like Chef or Puppet to your workflow so that mass reconfiguration is a little less manual.
What do these tools have in common? The most obvious thing is that they all give you the power to outsource parts of your IT management to software. This is great for the IT management process: it means we can spend less time on the repetitive work of provisioning, configuring, and updating our virtual machines, and it leaves less room for human error.
On the other hand, we’re also handing the proverbial keys to the kingdom to robots on one level, and to our end users on the next. This isn’t necessarily a bad thing; in fact, it’s largely why we started down this path. However, the more control we give up over how demand comes into our environment, the more difficult it becomes to keep an eye on everything.
When an end user spins up a new virtual machine, will they choose where their workload is going to live? Will they take into account memory congestion, network congestion, CPU contention, and Ready Queue? Will they understand how their decisions impact the performance of their workload? How about the performance of other workloads on that host?
As far as placement goes, the answer is usually no, and intentionally so. Sparing end users from these questions (or from asking us to spin up their VMs, at which point we have to ask ourselves these questions) is a large part of why we create these portals in the first place. Maybe they will give sizing a passing thought as they provision their machine, but if they are like most end users, they’ll probably follow the “more is more” philosophy. On the off chance that they are even aware of Ready Queue, how likely is it that they will size their VM in a way that maximizes performance for their machine and those around it?
On top of that, the more automation and self-provisioning we bring into our workflow, the more dynamic the environment gets. Ultimately, the buck stops at IT when it comes to performance in the virtual environment, so how do we minimize the number of angry calls we get from end users? We can try to keep monitoring the environment, but the more that the environment attempts to manage itself, the harder it gets to stay on top of every workload and understand how they are all vying for our limited resources.
The challenge is that self-provisioning only exacerbates the problem of managing constraints in real time. Demand from our provisioning system is growing, and as it grows, it becomes ever more unpredictable, making our existing workloads harder to manage.
Moreover, the more control we give up over how workloads are provisioned into our environment, the more difficult it becomes to predict where and when issues will occur in real time. This is compounded by the fact that end users expect ever better and more reliable performance, while management pressures us to provide those services at the lowest possible cost. This means we can’t simply provision more hardware and bury our problems in raw capacity.
So, how do we do more with less, better, and under more difficult circumstances? We can try to proactively limit which resources our end users can access, but that makes us less agile and our new cloud automation solutions that much less useful to the business. We can try to bring more hardware and staff into the environment, but again, that is expensive. We could try to create strict, complex policies to ensure that VMs will have room where they land, but rigid, manual constraints just create future problems.
VMTurbo understands demand. Within an hour of being in your environment, VMTurbo will understand every workload clamoring for resources, as well as the supply you have available to satisfy those demands. Because the demands on your infrastructure are constantly changing, VMTurbo continuously reassesses the health of your environment and delivers recommendations to assure and improve performance.
Demand changes in real time, and with cloud automation solutions in place, real time can be too fast to keep up with, even when the resource provisioning process is automated. That is why VMTurbo closes the loop by taking those actions automatically, if you let it. In this way, you can truly keep ahead of demand, and it should come as no surprise that the only way to keep up with automatically changing demand is intelligent control that can understand and respond to that demand in real time.