Is your Docker management approach fit for the Docker revolution?

There’s nothing new under the sun. Or so stated a Linux administrator I spoke with at a recent VMUG about Docker management. Docker, he said, was just an evolutionary step in service delivery, not a revolutionary one. To a certain extent, this is true.

Applications have been logically segmented across shared resources, in one form or another, since mainframes ran data centers, and Linux container technologies based on cgroups and namespaces are almost a decade old. Yet, despite certain evolutionary aspects, Docker has a truly revolutionary story to tell (and several well-known investors ready to help it fend off increased competition).

That story is perfectly encapsulated in the name of the company itself. Global shipping was an enormous challenge, until manufacturers agreed on a single container, and global infrastructure providers standardized on it. This allowed for seamless movement of inventory across the globe without specialization for the goods contained within.


On its own website, it’s the “run anywhere, anytime” functionality that Docker emphasizes:

“Whether on-premise bare metal or data center VMs or public clouds, workload deployment is less constrained by infrastructure technology and is instead driven by business priorities and policies. Furthermore, the Docker Engine’s lightweight runtime enables rapid scale-up and scale-down in response to changes in demand.”

Across Dev, QA, and Prod, both on-prem and in the cloud, workloads can now be moved with no infrastructure dependencies. Sounds fantastic. So what’s the drawback? Management.
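To make that portability concrete, here is a minimal sketch using the Docker SDK for Python (docker-py). The image name, host addresses, and service name are hypothetical; the point is that only the Docker host endpoint changes between environments.

```python
# Minimal sketch: the same image, started the same way, on any host running
# a Docker Engine. Image name and host addresses are hypothetical, and TLS
# configuration for the remote daemons is omitted for brevity.
import docker

HOSTS = {
    "dev":  "unix:///var/run/docker.sock",        # local laptop
    "qa":   "tcp://qa-vm.internal:2376",          # on-prem data center VM
    "prod": "tcp://prod-host.example.com:2376",   # public cloud instance
}

def deploy(env: str, image: str = "my-registry/orders-service:1.4"):
    client = docker.DockerClient(base_url=HOSTS[env])
    # The call is identical regardless of the underlying infrastructure.
    return client.containers.run(image, detach=True, name=f"orders-{env}")

# deploy("dev")  # the same call works unchanged for "qa" or "prod"
```

The call that starts the workload is identical everywhere; the environment is reduced to a connection string.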

The more we segment our systems, the more data is necessary to describe the state of our environment at any given time. Furthermore, the more knobs and levers exposed to administrators in software, the more complex it becomes to try to maintain performance across an entire estate.

A quick Google search for Docker management reveals tools that fall into two broad categories: those designed for viewing and those designed for doing. Tools designed for viewing offer all sorts of dashboards showing resource utilization and allocation. A larger segment of tools, the ones designed for doing, allow for various methods of automation, largely around deployment and configuration management.

Both types of tools have a certain amount of utility. Tools designed for doing, in particular, can leverage Docker to automate tasks that were previously immensely laborious.
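As a rough illustration of the split, here is a hedged sketch using the Docker SDK for Python (docker-py); the image, container names, and environment variable are hypothetical and not drawn from any specific product.

```python
# Illustrative sketch of the two tool categories using docker-py.
import docker

client = docker.from_env()

# "Viewing": collect utilization data and present it for human consumption.
def snapshot():
    for c in client.containers.list():
        stats = c.stats(stream=False)             # one-shot stats sample
        mem = stats["memory_stats"].get("usage", 0)
        print(f"{c.name}: {mem / 1e6:.1f} MB in use")

# "Doing": automate deployment and configuration.
def deploy_web_tier(replicas: int = 3):
    for i in range(replicas):
        client.containers.run(
            "nginx:latest",
            detach=True,
            name=f"web-{i}",
            environment={"TIER": "web"},          # configuration pushed at launch
        )
```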

However, a large gap in these toolsets still exists. As we drastically increase the mobility of workloads, while further segmenting application architecture, simply presenting and trending data for human consumption will not be remotely adequate for maintaining service levels.

With Docker, we are exponentially increasing the number of data points needed to describe an environment, while simultaneously increasing the breadth of decisions that can be made to act upon it. Humans simply cannot digest the real-time data and produce an optimal decision set to assure performance. When you think about it, even beginning down the path of leveraging tools designed for data collection, trending, and alerting is a fruitless effort.
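A back-of-envelope calculation makes the point; all figures below are illustrative assumptions, not measurements.

```python
# Back-of-envelope arithmetic (all figures are illustrative assumptions):
# splitting an estate of VMs into containers multiplies the number of
# entities, and each entity exposes its own set of metrics.
vms                = 200
containers_per_vm  = 10    # assumed decomposition into containerized services
metrics_per_entity = 30    # CPU, memory, I/O, network, etc. (assumed)
samples_per_hour   = 360   # a 10-second collection interval

vm_datapoints        = vms * metrics_per_entity * samples_per_hour
container_datapoints = vms * containers_per_vm * metrics_per_entity * samples_per_hour

print(f"VM-only estate:       {vm_datapoints:,} data points/hour")        # 2,160,000
print(f"Containerized estate: {container_datapoints:,} data points/hour") # 21,600,000
```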

So tools designed for viewing are inadequate. What about the tools designed for doing? As stated before, these tools can increase productivity by automating various types of tasks. However, performance cannot be solved with “policies and business priorities.” Let’s think about this. Best practice can help reduce performance risk, but it can never drive better performance, because a static heuristic is aware of neither dynamic, unpredictable workload behavior nor the resource dependencies of a shared model.

How VMTurbo helps with Docker Management

Docker’s engine enables unprecedented levels of mobility and agility. However, if you can’t guarantee performance leveraging tools designed for viewing and doing, how can you feel comfortable actually putting Docker in your production systems?

Let’s bring it back to the global shipping industry. Businesses don’t drive their most critical decisions based on historical inventory levels. Policies and priorities are important, yes, but in reality it’s demand that guides the movement of goods around the world.

VMTurbo, at its core, delegates decision making to the workload. By leveraging our common data model, control begins with satisfying runtime demand of an application workload, whether that workload is an on-prem VM, an OpenStack instance, or a Docker container.

VMTurbo understands how that demand expresses itself across all the supply-side resources a workload needs in order to perform. From basic compute and storage resources to network dependencies on other workloads, VMTurbo applies basic economic theory of supply and demand, allowing workloads to decide how to manage tradeoffs and best access the entire bundle of resources they uniquely need.
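To illustrate the idea (and only the idea; this is a toy sketch, not VMTurbo’s actual algorithm), imagine each provider pricing its resources as a function of utilization, with the workload shopping for the cheapest bundle that satisfies its demand.

```python
# Toy sketch of demand-driven placement (not VMTurbo's actual algorithm):
# providers "price" their resources by utilization, and a workload picks
# the least-contended supplier. Host names and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    cpu_util: float   # fraction of CPU currently consumed
    mem_util: float   # fraction of memory currently consumed

    def price(self) -> float:
        # Price rises sharply as a resource approaches saturation,
        # discouraging placement on contended hosts.
        return 1 / (1 - self.cpu_util) + 1 / (1 - self.mem_util)

def place(providers):
    return min(providers, key=lambda p: p.price())

hosts = [
    Provider("host-a", cpu_util=0.85, mem_util=0.60),
    Provider("host-b", cpu_util=0.40, mem_util=0.45),
    Provider("host-c", cpu_util=0.70, mem_util=0.90),
]
print(place(hosts).name)  # -> host-b, the least-contended supplier
```

The design choice the sketch highlights is that decisions flow from real-time demand meeting priced supply, not from static thresholds.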

As we embark on this revolution in service delivery, let’s not get bogged down in antiquated tools designed for visibility and resource monitoring. Let’s reject the fallacy that policies alone can guarantee service levels across a dynamic environment. Rather, let’s leverage demand-driven control to actually solve performance.
