Virtual machine density is a key metric for measuring how effectively you are utilizing compute resources in a virtualized environment. For most IT organizations, memory used to be the resource that most constrained VM density as they scaled out their virtual data centers, and many IT professionals believed that VMware DRS and VMware’s memory management capabilities did a good enough job of maximizing resource utilization in their server infrastructure.
More recently (specifically in conversations with attendees at VMworld 2013), I am hearing from many VMware users who have deployed blade servers with much larger memory footprints that CPU scheduling and IO are now the common resource constraints they have to deal with. To work around these issues, users are implementing complex affinity rules to prevent DRS from causing performance problems. The downside is that by hard-wiring their environments this way, they limit the achievable VM density, because workloads no longer have the freedom to move to every host in the cluster. This approach does not scale well: it incurs incremental hardware costs, plus the cost and complexity of creating and maintaining new affinity rules as more workloads are added.
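To see why hard-wired rules cap density, consider a toy placement check. The host names, capacities, and rule format below are purely illustrative (they are not VMware's API or configuration syntax): a VM pinned to a subset of hosts can only be consolidated onto those hosts, even when other hosts in the cluster have plenty of free capacity.

```python
# Toy model: an affinity rule shrinks the set of hosts a VM may run on.
# All names and numbers are hypothetical, for illustration only.

cluster = {"host1": 16, "host2": 16, "host3": 16, "host4": 16}  # free GB RAM
affinity = {"db-vm": {"host1", "host2"}}  # db-vm pinned to two hosts

def eligible_hosts(vm, demand_gb):
    """Hosts that satisfy both capacity and any affinity rule for vm."""
    allowed = affinity.get(vm, set(cluster))
    return sorted(h for h in allowed if cluster[h] >= demand_gb)

# Without a rule, a VM can land on any host with capacity; with one,
# only the pinned hosts are candidates, even if the others sit idle.
print(eligible_hosts("db-vm", 8))   # ['host1', 'host2']
print(eligible_hosts("web-vm", 8))  # all four hosts
```

Every rule like this removes hosts from the feasible set, so as rules accumulate, the scheduler has fewer and fewer consolidation options.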
At VMTurbo, we thought about this some time ago when building the core technology underlying our platform, which we call the Economic Scheduling Engine. Our solution considers a broad set of resource constraints, including CPU scheduling and IO among many others, when deciding where to place workloads and how to size virtual and physical resources to meet workload demand. This simplifies the ongoing administration of an organization's virtual infrastructure while enabling significantly greater VM density and resource utilization.
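The economic idea can be sketched in miniature: price each resource on each host higher as it becomes more utilized, then place a workload where buying its whole bundle of resources is cheapest. The pricing function, hosts, and utilization figures below are a toy model of my own, not VMTurbo's actual engine, but they show how a multi-resource decision differs from looking at memory alone.

```python
# Illustrative sketch of economics-based placement: each resource is
# priced higher as its utilization rises, and a VM goes to the host
# where its total resource "cost" is lowest. Toy model only; not the
# actual Economic Scheduling Engine.

def price(utilization):
    """Price rises steeply as utilization approaches 100%."""
    return 1.0 / (1.0 - min(utilization, 0.99)) ** 2

def placement_cost(host, demand):
    """Cost of a VM's demand bundle (CPU, memory, IO) on a host."""
    return sum(demand[r] * price(host[r]) for r in demand)

hosts = {
    "hostA": {"cpu": 0.80, "mem": 0.40, "io": 0.30},  # CPU-constrained
    "hostB": {"cpu": 0.50, "mem": 0.50, "io": 0.50},  # balanced
}
vm_demand = {"cpu": 0.10, "mem": 0.05, "io": 0.05}

best = min(hosts, key=lambda h: placement_cost(hosts[h], vm_demand))
print(best)  # 'hostB' — its balanced load beats hostA's hot CPU
```

A memory-only view would see little difference between the two hosts, but pricing CPU and IO as well steers the VM away from the CPU-constrained host automatically, with no affinity rule required.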
You can try this for yourself by downloading our free 30-day trial.