You don’t need a degree in rocket science to let software control your software-defined data center and achieve infrastructure automation. That isn’t to say, however, that the aerospace industry hasn’t already adopted software control methods. In an article published by NASA, the concept of Software Controlled Automation is explained as being critically important to the success of a two-week mission testing survivability in space. The software “automatically detects and diagnoses failures” and even “determines the consequences of the failure.” Going one step further, software control “was used to automatically repair” failures that were proactively detected and diagnosed before any human intervention.
Why couldn’t we apply these same concepts to the modern data center? Even in a small environment with 100 VMs spread across 10 hosts accessing 4 storage pools, the number of data points quickly becomes overwhelming to control in real time while assuring the performance of the applications living in the environment. Imagine we are tracking only four metrics: memory, CPU, latency, and IOPS. Across every VM-host-storage combination, that is 100 × 10 × 4 × 4 = 16,000 observable data points that an administrator would need to monitor, and react to, in real time! See where that degree in rocket science might come in handy?
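The back-of-the-envelope arithmetic above can be spelled out in a few lines (the environment sizes are the ones from the example; the metric names are illustrative):

```python
# Rough count of observable data points in the small environment described
# above: every VM-host-storage combination, tracked across four metrics.
vms = 100
hosts = 10
storage_pools = 4
metrics = ["memory", "cpu", "latency", "iops"]

data_points = vms * hosts * storage_pools * len(metrics)
print(data_points)  # → 16000
```

Even before any of those values change, 16,000 numbers is already far more than a human can watch, and every one of them fluctuates continuously as demand shifts.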
With VMTurbo, this admin would have an intelligent software platform controlling the data center as workload demand fluctuates, addressing areas of risk automatically, just like NASA’s tool. Only by automating sizing, placement, and capacity decisions can a solution effectively identify and remediate dynamic resource contention in real time, and even prevent risk from being introduced in the first place. In fact, VMTurbo takes infrastructure automation a step beyond what NASA achieved: rather than automatically repairing failures after the fact, it assures performance by preventing issues from ever occurring.
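To make the demand-driven control idea concrete, here is a deliberately simplified sketch. This is not VMTurbo’s actual algorithm; the threshold, host names, and utilization figures are all hypothetical, and a real placement engine weighs far more dimensions than one utilization number:

```python
# Illustrative sketch of demand-driven placement control (NOT VMTurbo's
# actual algorithm): act on utilization trends before an alarm fires,
# instead of reacting after workloads are already degraded.
THRESHOLD = 0.80  # hypothetical utilization ceiling that triggers action

def rebalance(hosts):
    """Propose moving the hottest VM off any host trending past THRESHOLD.

    Each host is a dict: {"name": str, "util": float, "vms": [vm, ...]},
    where each vm is {"name": str, "demand": float}.
    Returns a list of (vm, source_host, target_host) move actions.
    """
    actions = []
    for host in hosts:
        if host["util"] > THRESHOLD and host["vms"]:
            vm = max(host["vms"], key=lambda v: v["demand"])   # hottest VM
            target = min(hosts, key=lambda h: h["util"])       # coolest host
            if target is not host:
                actions.append((vm["name"], host["name"], target["name"]))
    return actions

# Toy environment: one host running hot, one with headroom.
hosts = [
    {"name": "esx-01", "util": 0.91, "vms": [{"name": "db-1", "demand": 0.4}]},
    {"name": "esx-02", "util": 0.35, "vms": [{"name": "web-1", "demand": 0.1}]},
]
print(rebalance(hosts))  # → [('db-1', 'esx-01', 'esx-02')]
```

The point of the sketch is the control loop itself: software evaluates the whole environment continuously and moves workloads before contention materializes, which is exactly what no administrator can do by hand across thousands of data points.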
You may be asking yourself, “Sure that sounds great, but how could I actually quantify the benefits of infrastructure automation?”
Well, Houston Methodist is one of many VMTurbo customers who asked that same question recently, and saw some incredible results. Like many other companies, they were having difficulty “controlling performance in a complex environment with existing hypervisor monitoring tools,” which led to “inconsistent Quality of Service and disruption of virtualized workloads.” Worse, by throwing hardware and reactive root cause analysis at problems, they found it impossible to use virtual and human resources efficiently.
By applying full automation to an environment of 2,500+ VMs supporting over 4,500 physicians, Houston Methodist is now utilizing their existing hardware far more efficiently. Like NASA, they have also freed up vital human resources to focus on other tasks, letting intelligent software reap the real-time benefits of automated, demand-driven control. For Houston Methodist, eliminating the “break-fix loop” has opened the door to scalability and agility, enabling a team that supports physicians and nurses to focus on the next strategic project that makes their end users’ lives easier.
So, if you had to choose between real-time automated infrastructure control and anxiously waiting for the next alarm, why would you ever choose the latter? Doesn’t it just make sense to let software control software? Go ahead, let VMTurbo do the hard work for you, and see how great control feels.