No More Tiers: Be Careful What You Think You Are Controlling

February 20th, 2015

You’ve seen me talking recently about the idea of reserving infrastructure, and why it is not a good idea. Along with the practice of buying new hardware and setting it aside for only certain applications, another long-standing practice is found in many data centers: storage tiering.

The concept of tiering is sound, but the important thing to take into account with storage tiering is the use case. One of the most common flaws in implementing data tiers is the way in which we allocate our applications into them. The key phrase there is “we allocate”. In other words, there is a manual intervention in the placement of application workloads, as the sketch below shows.
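To make “we allocate” concrete, here is a minimal sketch in Python of what manual tier placement tends to look like. The application and tier names are invented for illustration; nothing here comes from any specific product or environment:

```python
# A hand-maintained map of application -> storage tier, decided once at
# build time. All application and tier names are hypothetical examples.
TIER_ASSIGNMENTS = {
    "oltp-db":      "tier-1-ssd",   # "the database is hot, give it flash"
    "web-frontend": "tier-2-sas",   # "web servers are medium"
    "log-archive":  "tier-3-sata",  # "logs are cold, park them on SATA"
}

def place_volume(app_name: str) -> str:
    """Return the tier an application's volumes land on -- and stay on,
    until a human remembers to revisit this mapping."""
    return TIER_ASSIGNMENTS[app_name]
```

Nothing in that mapping reacts if the “cold” workload suddenly becomes the hottest one in the shop; the assignment only changes when a person edits it.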

No Straight Lines Here

You read all the whitepapers and “best practices” guides when you build the application server. Distributing your storage across the different performance and protection tiers has become part of the manual checklist when building the environment. So, why is this bad?

The real questions about these architectural decisions boil down to a simple one: does the application behave the way you expect it to? The assumptions you’ve made about how it is meant to act haven’t created a performance enhancement; they have created constraints.

There’s that word again: constraint. It is a very powerful word, and what is even more powerful is the fact that constraints aren’t static. Unless your application is static (spoiler alert: it isn’t), or has linear behavior at all times (spoiler alert #2: it doesn’t), your constraints won’t be static or linear either.
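As a toy illustration (the numbers are invented, not measurements from any real system), here is why a static placement becomes a moving constraint: the tier’s performance ceiling is fixed, but the workload’s demand is not.

```python
# Hypothetical numbers: a fixed ceiling for the chosen tier vs. a
# workload whose demand varies hour by hour.
TIER_IOPS_CEILING = 5_000  # what the assigned tier can deliver

hourly_demand = [1_200, 900, 800, 7_500, 9_000, 2_000]  # IOPS per hour

for hour, demand in enumerate(hourly_demand):
    status = "CONSTRAINED" if demand > TIER_IOPS_CEILING else "ok"
    print(f"hour {hour}: demand={demand:>5} IOPS -> {status}")
```

At hours 3 and 4, the same placement that looked generous at hour 0 is the bottleneck. The application didn’t misbehave; the constraint moved.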

Blurring the lines

With the advent of hyperconverged strategies and converged storage solutions, a lot of this has become fuzzier. The performance is still very dynamic, and so are the constraints.

Modern data centers often make use of multiple tiers of storage. Some also employ storage abstractions at the software layer to provide pooled storage. This is what the industry calls software-defined storage (SDS). It can occur within a single hardware platform or across multiple platforms. We won’t go into all the possibilities or vendor offerings here.
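As a rough sketch of the pooled idea (hypothetical backend names, numbers, and a made-up place function, not any particular vendor’s API), placement under SDS becomes a function of measured state rather than a fixed tier label:

```python
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    free_gb: int
    observed_latency_ms: float  # continuously measured, not assumed

def place(volume_gb: int, pool: list) -> Backend:
    """Pick the lowest-latency backend with room for the volume.
    Because the inputs are live measurements, the same request can
    land somewhere different an hour from now."""
    candidates = [b for b in pool if b.free_gb >= volume_gb]
    return min(candidates, key=lambda b: b.observed_latency_ms)

pool = [
    Backend("flash-a",  free_gb=200,  observed_latency_ms=0.4),
    Backend("hybrid-b", free_gb=800,  observed_latency_ms=2.1),
    Backend("sata-c",   free_gb=4000, observed_latency_ms=9.0),
]
print(place(150, pool).name)  # flash-a today; maybe not tomorrow
```

The point isn’t this particular policy; it’s that the decision is made continuously, by the system, from observed behavior.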

How is it now? Ok, try now! How about now?

Remember that our application and underlying infrastructure are completely dynamic now. Lots of moving parts are in play, and that brings us back to the important point I’ve been making all along.

Ultimately, this leads to what I’ve written on our corporate blog at Turbonomic, but beyond the product, the core of what I’m talking about is the idea of trusting the system over the assumption. Systems are often built on assumptions, but the lesson we have to learn is that the constraints move around.

By thinking about the infrastructure as a system, we can begin to trust the system and the workload to build a relationship. Unless you’re keen on using up some of your 35,000 decisions a day on storage placement, we have to rethink how we treat our infrastructure.

Image source: http://activerain.trulia.com/blogsview/4516690/plastic-covered-furniture–are-you-still-around-
