How to Approach Hyper-V Management

June 30th, 2015

Microsoft’s virtualization platform Hyper-V has evolved significantly over the years, adapting to emerging market trends and changing what we manage.  Despite these advancements, little has changed in how we manage the environment.  Microsoft has done a great job of building a more attractive hypervisor with new bells and whistles, but the mode of operations behind Hyper-V management has stayed essentially the same.

I often find that people switch to Hyper-V, or standardize on the hypervisor, due to monetary constraints.  Frankly, that makes sense given the financial burden of running a virtualized datacenter.  As the amount of data and the number of applications supporting the business grow, IT budgets and headcount stay stagnant or grow at a much slower rate.  Closing this gap is impossible unless we change our approach to Hyper-V management, and more broadly to cloud and virtualization management.

Think about it: if you are moving to or leveraging a Hyper-V environment to cut cost, how are you going to keep scaling that environment without introducing unnecessary capital expenditures?  It boils down to increasing utilization without introducing performance degradation or additional headcount, effectively letting software manage the tradeoffs we make on a daily basis between application performance and infrastructure efficiency.

Increasing utilization and performance simultaneously is difficult because the problem is beyond human scale.  Monitoring systems rely on the user to interpret data, warnings, alerts, recommendations, and so on, and then synthesize a decision or the state the environment must be in.  Consider your Hyper-V environment today and the thousands of metrics you are examining and attempting to control.

Let me give you an example.  Consider an environment with 10 hosts and the four main resources: memory, CPU, network, and IO.  That is 40 independent metrics that change state hundreds of times a day, almost unpredictably.  If you are trying to keep each resource within a certain utilization range for efficiency, how do you stay there, and what happens when a resource leaves that range?
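To make that concrete, here is a minimal sketch in Python of what "keeping 40 metrics in range" actually involves.  The host names, random utilization samples, and the 40–70% target band are all illustrative assumptions, not tied to any particular monitoring tool:

```python
import random

HOSTS = [f"hyperv-{i:02d}" for i in range(1, 11)]   # 10 hosts (hypothetical names)
RESOURCES = ["memory", "cpu", "network", "io"]       # 4 resources per host
DESIRED_RANGE = (0.40, 0.70)                         # target utilization band (illustrative)

def sample_utilization():
    """Simulate one polling interval: 10 hosts x 4 resources = 40 metrics."""
    return {(host, res): random.random() for host in HOSTS for res in RESOURCES}

def out_of_range(metrics, lo=DESIRED_RANGE[0], hi=DESIRED_RANGE[1]):
    """Return every metric that has drifted outside the desired band."""
    return {k: v for k, v in metrics.items() if not lo <= v <= hi}

snapshot = sample_utilization()
violations = out_of_range(snapshot)
print(f"{len(snapshot)} metrics polled, {len(violations)} outside the desired range")
for (host, res), value in sorted(violations.items()):
    print(f"  {host} {res}: {value:.0%}")
```

Even in this toy version, every polling interval produces a different set of violations, and the snapshot alone does not tell you which corrective action is safe to take.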

Hyper-V management UI

If you’re doing a good job monitoring (whether with the UI above or some other tool), you will probably get alerted to the fact that you have already left the range you wanted to stay in.  Now you react and restore the environment to the desired range for that resource.  But remember, there are hundreds of metrics you need to analyze simultaneously to make sure the action you take does not push another resource out of its desired range.  This mode of operations is not realistic as the environment grows and becomes more complex.  Not to mention that every monitoring tool is simply trying to sell you better visualization, smarter alerts, and more data…but what really changes?

VMTurbo provides a different approach to IT operations and Hyper-V management in order to sustain a desired level of utilization across every resource in the environment.  In fact, VMTurbo’s control platform maintains a desired state for every resource without any operator intervention.  It is a different mode of operations, one that focuses on where you need to be instead of monitoring when and where the environment has become abnormal or broken.

The economic scheduling engine that drives VMTurbo pushes utilization of resources higher and reduces the volatility caused by unpredictable workloads, lowering performance risk in the infrastructure.

By letting application demand dictate how the supply of underlying resources is consumed, the platform ensures that every workload has the resources available to satisfy its needs.  Instead of generating decisions as a reflection of fluctuations in utilization and user-defined boundaries, our platform provides decisions when the marketplace becomes destabilized.  When supply and demand begin to fall out of balance, decisions are presented to keep the environment as close to resource equilibrium as possible.  This assures performance and drives higher efficiency.  Think about it: if service entities are always shopping for the cheapest resource, market competition assures that their demand is satisfied (applications perform) without over-allocating resources (utilization is maximized).

Demand-driven decisions to assure application performance
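A rough way to picture that market mechanic is sketched below.  This is an illustrative toy, not VMTurbo’s actual engine: the hosts, workloads, and pricing function are made-up assumptions, chosen only to show how demand "shopping" for the cheapest resource naturally spreads load before any host gets hot:

```python
# Toy market: hosts "sell" capacity, workloads "shop" for the cheapest host.
# Pricing and placement here are simplified assumptions, not VMTurbo's real algorithm.

hosts = {"hostA": {"capacity": 100.0, "used": 20.0},
         "hostB": {"capacity": 100.0, "used": 70.0},
         "hostC": {"capacity": 100.0, "used": 45.0}}

workloads = [("vm1", 15.0), ("vm2", 30.0), ("vm3", 10.0)]   # (name, demand)

def price(host):
    """Price rises steeply as utilization approaches capacity, discouraging hot spots."""
    utilization = host["used"] / host["capacity"]
    return 1.0 / max(1e-6, 1.0 - utilization)

for name, demand in workloads:
    # Each workload shops among hosts that still have room for its demand...
    candidates = [h for h, s in hosts.items() if s["used"] + demand <= s["capacity"]]
    # ...and buys from the cheapest (least utilized) seller.
    cheapest = min(candidates, key=lambda h: price(hosts[h]))
    hosts[cheapest]["used"] += demand
    print(f"{name} (demand {demand:.0f}) placed on {cheapest}")

for h, s in hosts.items():
    print(f"{h}: {s['used'] / s['capacity']:.0%} utilized")
```

Because the price climbs as a host fills up, demand spreads out before any single resource is exhausted, which is the intuition behind keeping the environment near resource equilibrium.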

The idea is to meet application demand with available resource supply in the environment so it can elastically scale as efficiently as possible.  If you don’t understand the relationship between demand and supply, you cannot assure performance, drive efficient usage, or effectively perform Hyper-V management.

If you only look at utilization and supply, how do you know whether demand is satisfied?  Usage is meaningless without understanding demand.  Even if you could define the point at which you see performance problems, you would need to continuously re-evaluate the environment because everything is dynamically changing.  Conversely, if you only see demand, how do you know whether you have the resource availability underneath?  Our system correlates these forces in real time to understand what the equilibrium looks like and how to maintain it.
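As a small illustration of why usage alone is not enough, consider the sketch below.  The numbers are hypothetical: a host’s reported utilization caps at 100% whether its workloads are just barely served or badly starved, so the same utilization figure can hide very different amounts of unmet demand:

```python
def utilization_and_unmet(demand, supply):
    """Utilization caps at 100%, but unmet demand keeps growing past that point."""
    satisfied = min(demand, supply)
    return satisfied / supply, max(0.0, demand - supply)

for demand in (80.0, 100.0, 150.0):                  # hypothetical CPU demand
    util, unmet = utilization_and_unmet(demand, supply=100.0)
    print(f"demand={demand:>5.0f}  utilization={util:.0%}  unmet demand={unmet:.0f}")
```

The last two cases both report 100% utilization, yet one leaves no demand unserved and the other leaves a third of it waiting, which is exactly the information a utilization-only view throws away.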
