In college, I spent a significant amount of time crammed into a narrow boat on the Potomac River. As part of an eight-man crew, I spent hours each day trying to make it move faster. Now, as I’m sure you can imagine, eight large men pulling on separate oars, in a shell about two feet wide, don’t make for the most stable platform. Introduce wind and wake, and you have a lot of moving parts that need control.
“How does that relate to DevOps automation?” you might ask. Well, let me tell you: increasing agility and speed is difficult enough. Trying to do it in a shell that is rocking back and forth is next to impossible. To enable speed, you first need a platform that won’t restrict your efforts.
Much has been written about how IT organizations should begin their maiden DevOps voyage. In fact, online material dedicated to DevOps preparation far exceeds online material dedicated to DevOps practice. I would argue that this is because to actually arrive at the goal of organizational and technical agility, you must first establish a controlled platform that won’t prohibit your efforts. That’s the hard part, and largely what organizations struggle with when it comes to DevOps.
If agility or speed is the goal, then automation is the toolset, or oar, that will propel IT organizations forward. However, my oar is only as useful as my confidence that the boat won’t slam down to port or starboard as I set it in the water. If it does, I can pull as hard as I want, but we still won’t move. Given the dynamism and moving parts inherent in virtual and cloud environments, not rocking the boat is immensely difficult.
Automating Can Get Rocky
Automation is only as useful as your ability to understand the outcome of the task you are automating. For instance, let’s say you create a script or recipe that automates the deployment and configuration of an n-tier application in a virtual environment. How comfortable would you feel actually executing that script if you didn’t understand the current availability of the compute, network, and storage resources that support it? What about the other types of workloads contending for those same resources?
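To make that concrete, here is a minimal sketch of the kind of pre-flight check such a recipe would need before placing a new tier. All names, resource figures, and the safety margin are illustrative assumptions, not any real product’s API:

```python
# Hypothetical pre-flight check before a deployment recipe places a
# new workload. Figures and the 20% safety margin are illustrative.

def has_headroom(cluster, app_demand, safety_margin=0.2):
    """Return True only if every resource keeps the safety margin free
    after the new workload lands on this cluster."""
    for resource, demand in app_demand.items():
        capacity = cluster["capacity"][resource]
        used = cluster["used"][resource]
        if used + demand > capacity * (1 - safety_margin):
            return False
    return True

# Snapshot of a (made-up) shared cluster and one tier's demand.
cluster = {
    "capacity": {"cpu_ghz": 120.0, "mem_gb": 512.0, "storage_iops": 20000},
    "used":     {"cpu_ghz": 80.0,  "mem_gb": 300.0, "storage_iops": 9000},
}
web_tier = {"cpu_ghz": 10.0, "mem_gb": 64.0, "storage_iops": 2000}

print(has_headroom(cluster, web_tier))  # True for this snapshot
```

The catch, of course, is that this snapshot is stale the moment any other workload in the shared environment moves, which is exactly why the problem outgrows static scripts.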
That’s a problem well beyond human scale, and even trying to come close sounds incredibly time-intensive. So what options do I have? I could further segment the infrastructure, creating a dedicated pool or cluster for my single deployment. But what would that cost in resource efficiency? Perhaps more importantly, what would further logical segmentation of my infrastructure cost in overall agility?
So we have competing goals: in order to automate, we need a controlled platform where we can understand how the infrastructure will receive our changes. However, if we try to find that control through stability and isolation, we miss our end goal of agility. In order to truly embrace DevOps, we want to maximize the sharing of resources, so that we can treat our infrastructure, in a very holistic way, as a single codeable entity. In other words, we need to level the boat without sitting dead in the water.
Control is difficult to find in an environment with a lot of moving parts. To find it, each part must fundamentally be working toward a common goal. On the water, we found a controlled platform through hours of practicing a single technique at a shared rhythm. Your infrastructure is far more complex, but the solution must be just as singular in its approach.
Get Moving with DevOps Automation
VMTurbo approaches the problem of finding that singularity in a complex infrastructure by leveraging a common data model. That single abstraction captures how all the entities across your environment, from applications to physical servers all the way down to the underlying storage array, are interconnected through their resource demand relationships.
Once software attains that understanding of how each of those entities interconnects, the next step is achieving a desired state. Of the infinite states your environment could be in at any time, the desired states are a very small subset in which application performance is actually assured.
The complexities and data involved in your dynamic environment make clear that this problem is beyond human scale, even with all the visibility and insight in the world. So how do you solve it? Rule sets defined by humans are still static. Constrained optimization? Nope. That won’t work. Given the sheer scale of the problem, by the time any algorithm based on constrained optimization found a desired state, the data it used as inputs would no longer be relevant.
So let’s bring it back to the boat: with everyone taking personal responsibility for their actions, we achieve a common goal. The answer in your environment must be intelligent workload management. VMTurbo leverages market-based principles to let each workload act selfishly and determine where it can best access the entire bundle of resources it needs to maximize performance.
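The market-based idea can be sketched in a few lines. This is my own toy model to show the principle, not VMTurbo’s actual engine: hosts price each resource higher as it nears saturation, and each workload “shops” for the cheapest bundle that fits.

```python
# Toy model of market-based workload placement (illustrative only).

def price(used, capacity):
    """A resource's price rises steeply as it approaches saturation."""
    utilization = used / capacity
    return 1.0 / max(1e-9, 1.0 - utilization)

def bundle_cost(host, demand):
    """Total cost of a workload's full resource bundle on a host."""
    return sum(
        demand[r] * price(host["used"][r], host["capacity"][r])
        for r in demand
    )

def best_host(hosts, demand):
    """Each workload acts selfishly: pick the cheapest host that fits."""
    feasible = [
        h for h in hosts
        if all(h["used"][r] + demand[r] <= h["capacity"][r] for r in demand)
    ]
    return min(feasible, key=lambda h: bundle_cost(h, demand), default=None)

hosts = [
    {"name": "host-a", "capacity": {"cpu": 100, "mem": 256},
     "used": {"cpu": 90, "mem": 128}},   # CPU nearly saturated: expensive
    {"name": "host-b", "capacity": {"cpu": 100, "mem": 256},
     "used": {"cpu": 40, "mem": 200}},   # memory tight, CPU cheap
]
vm = {"cpu": 8, "mem": 16}

print(best_host(hosts, vm)["name"])  # host-b: cheaper overall bundle
```

The appeal of this shape is that no central rule set has to be maintained; because prices reflect contention on every resource at once, each workload’s selfish choice nudges the whole environment toward the desired state.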
A controlled platform can only be realized if a single decision engine is acting in a unified manner across your environment. To make that engine real-time, you’ll need demand-driven control, where each workload is constantly seeking the best access to the compute, storage, and network resources it needs. Across your entire virtual estate, VMTurbo answers the question of when, where, and why to run a workload, across all aspects of planning, deployment, and production.
So go ahead and automate the configuration of that n-tier application. Just make sure you’re leveraging VMTurbo (and our REST API) to provide a demand-driven control platform for resource distribution. We’ll level the boat by automating service assurance for every application. You can make it go fast and propel your organization forward.