Application Performance: What’s an Orchestra without a Conductor?

September 15th, 2014

[Image: Homer Simpson conducting – would this orchestra's performance be better without a conductor? Applications: an orchestra without a conductor?]

Applications, as we discussed, don’t really care where they run – provided they receive the resources to satisfy their demand. So why do people manually limit their placement with all sorts of rules and constraints? The more constraints you implement, the less agility your applications have – limiting their mobility and their ability to receive resources in the best possible way.

We considered the risks of constraining resources and concluded that it is better for application demand to control the supply than for manual rules to do so. However, there are other types of constraints that are not tied to resource consumption and may therefore appear unrelated to performance. But is that really the case?

A big family of constraints that have to be implemented manually relates to compliance. We reviewed some of them: anti-affinity rules that keep different types of applications (retail and investment in a large bank) physically separate, licensing compliance (for example, a maximum number of database instances per physical core), security, and other business rules. Since these don’t deal with resources directly, there is a temptation to simply implement them as business rules and compute them when needed.

But first of all, these rules require maintenance, and that cost can be significant. Imagine that you have hundreds of database instances that must stay within licensing compliance. If you use rules, every database instance will be part of its own rule, which will likely define the hosts it may run on. And the same database instance can participate in more than one rule.

For example, it could be part of a retail application suite that has to run separately from an investment suite. So these rules have to be maintained and computed together, and their number will depend on the total number of workload units to be placed and the total number of targets – hosts and datastores, for instance.
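To get a feel for how quickly such rule sets grow, here is a minimal sketch in Python. All the names (`placement_rules`, `db_instances`, the suite names) are hypothetical and purely illustrative – this models the hand-maintained rules described above, not any particular product's API.

```python
from itertools import combinations

def placement_rules(db_instances, licensed_hosts, suites):
    """Build the static rule set an admin would have to maintain by hand."""
    rules = []
    # One licensing-affinity rule per database instance:
    # pin it to the hosts covered by the license.
    for db in db_instances:
        rules.append(("affinity", db, tuple(licensed_hosts)))
    # One anti-affinity rule per instance pair across suites that
    # must stay apart (e.g., retail vs. investment).
    for suite_a, suite_b in combinations(suites, 2):
        for db in suites[suite_a]:
            for other in suites[suite_b]:
                rules.append(("anti-affinity", db, other))
    return rules

dbs = [f"db{i}" for i in range(100)]
suites = {"retail": dbs[:50], "investment": dbs[50:]}
rules = placement_rules(dbs, ["host1", "host2"], suites)

print(len(rules))  # 100 affinity + 50*50 anti-affinity = 2600 rules
print(sum(1 for r in rules if r[1] == "db0"))  # db0 appears in 51 rules
```

Even this toy setup – 100 instances, two suites, two licensed hosts – yields 2,600 rules, and a single instance participates in dozens of them, which is exactly the maintenance burden the paragraph above describes.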

Such a scenario can trigger exponential growth. But beyond the maintenance cost, it has a real impact on performance and efficiency. The less agility you allow when implementing these rules, the less flexibility your applications have to get the resources they need. This may push you toward over-provisioning just to obtain some resource guarantees. You may end up with a very rigid, manual, and static workload distribution across physical entities. And the situation becomes even worse when these entities need to run in a cloud, which separates them from the physical infrastructure even further.

There are many business-rule and orchestration engines that implement these policies. Yet all of them are completely unaware of workload demand and application performance requirements, because they solve a different task. This is yet another example of the modern IT challenge: there is a tool for every task, but there is no single solution that can guarantee performance, efficiency, and agility at the same time. Or is there?

