The n-Dimensional Problem
Your virtualized infrastructure is an n-dimensional universe of potential resource states. At any given moment, the environment can be in any of an effectively infinite number of these states. But what is the good state, the one in which application performance is assured as efficiently as possible? And how do we find and control that state?
Consider an environment shaped only by the CPU of a single virtualized host. We want this host in a state where performance is assured while the host is used as efficiently as possible; that state lies somewhere within a desired utilization range. Now, what happens when we add one more host? Our environment has become two-dimensional, and the good state is now the intersection of the good CPU states of each host, a smaller region within the larger space of potential states. What happens when we add a third host? A fourth? A fifth? 100 more, in the CPU dimension alone? The combinations of CPU states become n-dimensional, with bad states greatly outnumbering the potential good ones, and our desired state shrinks even further within this chaos. When CPU demand changes, how do we keep the environment in this good state when any dynamic shift in demand can force it out?
Now consider what happens when we add memory. Again, with just two hosts, our good state is the intersection of smaller states. But as we continue to add hosts to our universe, the desired states for memory become a small subset of the potential resource states our environment could occupy at any moment. And it is always changing.
While each dimension by itself is complex, we now need to maintain the desired state across both of them. Just like the intersection of good states for two hosts, we now have an intersection of states across n dimensions: a place where CPU and memory are both converged and controlled across all hosts. When we make a decision on CPU, how do we know the workload won't readjust and push other workloads out of the desired state for memory? And when we make a decision for memory, how do we ensure we don't push CPU into one of the bad states?
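The intersection described above can be sketched as a simple check (our illustration only; the utilization bands, host names, and the `in_desired_state` helper are hypothetical, not any product's actual logic):

```python
# Hypothetical sketch: before acting on one resource dimension, verify
# that every dimension of every host stays inside its desired
# utilization band. The band below is an assumed "good" range.

DESIRED_BAND = (0.30, 0.70)

def in_desired_state(hosts: dict[str, dict[str, float]],
                     band: tuple[float, float] = DESIRED_BAND) -> bool:
    """hosts maps host name -> {resource: utilization in [0, 1]}.

    The environment is in the desired state only if ALL dimensions
    fall inside the band -- the intersection of per-dimension goods.
    """
    lo, hi = band
    return all(lo <= util <= hi
               for resources in hosts.values()
               for util in resources.values())

hosts = {
    "host-a": {"cpu": 0.55, "mem": 0.60},
    "host-b": {"cpu": 0.45, "mem": 0.75},  # memory outside the band
}
print(in_desired_state(hosts))  # False: one dimension breaks the intersection
```

A single out-of-band dimension, host-b's memory here, is enough to push the whole environment out of the desired state, which is why a decision made for CPU alone can silently break memory.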
Now add dimensions for IOPS, network, and swap, where the tradeoffs between them constantly push the environment in conflicting directions. Identifying a desired state that satisfies all of these tradeoffs becomes harder and harder; indeed, hard beyond human scale. The desired state for compute can be achieved only at the intersection of all of these dimensions, where workload demand is satisfied by infrastructure supply. Yet every change in demand risks diverting the environment into the vastly larger set of potential bad states in this n-dimensional universe. And in reality, we don't just have dimensions for compute. Consider what the desired state would look like for a hybrid cloud, with decisions to move workloads on or off premises: a desired state for storage, for network, for workload placement, and for the quality of service each workload must receive. Any change in one impacts all of the others, so we must find the equilibrium that satisfies all of these tradeoffs. Everything added to our environment expands this universe exponentially, making the desired states a smaller and smaller fraction of it.
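The exponential shrinkage can be made concrete with a toy model (a back-of-the-envelope illustration of ours, not how any product computes anything): treat each host-resource pair as one dimension and assume each dimension is independently inside its desired band some fraction p of the time.

```python
# Toy model: if each dimension is independently "good" a fraction p of
# the time, the chance that EVERY dimension is good simultaneously is
# p raised to the number of dimensions. The assumed p = 0.5 is
# arbitrary; the point is the exponential collapse.

def good_state_fraction(hosts: int, resources_per_host: int, p: float = 0.5) -> float:
    """Fraction of global states in which all dimensions are good at once."""
    return p ** (hosts * resources_per_host)

for n in (1, 2, 10, 100):
    # five resource dimensions per host: CPU, memory, IOPS, network, swap
    print(f"{n:>3} hosts: good-state fraction = {good_state_fraction(n, 5):.3e}")
```

With only 100 hosts and five resource dimensions each, the good fraction under these assumptions is p to the 500th power, a number so small that stumbling into the desired state by chance is hopeless; it has to be found and held deliberately.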
But somewhere in this universe, these desired states exist: a place where our environment is converged and all of these tradeoffs are satisfied. With the dynamism and fluctuation inherent in your environment, how can you find that place, and more importantly, how can you keep the environment there?
The answer: Demand-driven control from VMTurbo (Turbonomic). Visit turbonomic.com/control to learn how.