Optimize Container Platforms

Self-managing Workloads Optimize Dynamic Container Environments

Organizations are adopting containers to bring apps and services to market faster. But the dynamic complexity of these environments challenges infrastructure teams: it is beyond human scale.

Key Features & Benefits

Self-managing workloads optimize container platforms so IT organizations can scale and accelerate cloud native strategies.

Minimal human intervention – no thresholds to set!

Automated rescheduling of pods assures performance

Intelligent cluster scaling ensures elastic infrastructure

Full-stack control unites DevOps and Infrastructure

Turbonomic Supports Kubernetes, Red Hat OpenShift, Cloud Foundry & Mesos

Turbonomic for Amazon EKS, Azure AKS, Google GKE, and Pivotal PKS Webinar Preview


Watch the full webinar here. 

AI-Generated Continuous Placement & Rescheduling

Turbonomic provides continuous workload placement actions at the container and VM level. Whether a Kubernetes Pod, Cloud Foundry container, or VM, placement decisions are based on container demand for memory and CPU and the available supply of VM and host resources, including CPU, memory, network, I/O, ready queue, swapping, and ballooning. The analytics automatically account for affinity/anti-affinity rules, as well as resource quotas.
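The demand-versus-supply matching described above can be sketched as a simple scoring loop. This is an illustrative simplification, not Turbonomic's actual analytics; the node and pod structures and the headroom heuristic are assumptions for the example.

```python
# Minimal placement sketch: pick a node whose free supply covers a pod's
# demand, honoring anti-affinity rules. Illustrative only.

def place_pod(pod, nodes, placements):
    """Return the best node name for `pod`, or None if nothing fits.

    pod: dict with 'name', 'cpu', 'mem', and optional 'anti_affinity'
         (set of pod names it must not share a node with).
    nodes: list of dicts with 'name', 'cpu_free', 'mem_free'.
    placements: dict mapping node name -> set of pod names on that node.
    """
    best, best_headroom = None, -1.0
    for node in nodes:
        if pod["cpu"] > node["cpu_free"] or pod["mem"] > node["mem_free"]:
            continue  # not enough supply for this pod's demand
        if pod.get("anti_affinity", set()) & placements.get(node["name"], set()):
            continue  # anti-affinity rule violated
        # Prefer the node with the most remaining headroom after placement.
        headroom = min(node["cpu_free"] - pod["cpu"], node["mem_free"] - pod["mem"])
        if headroom > best_headroom:
            best, best_headroom = node["name"], headroom
    return best

nodes = [
    {"name": "node-1", "cpu_free": 2.0, "mem_free": 4.0},
    {"name": "node-2", "cpu_free": 4.0, "mem_free": 8.0},
]
placements = {"node-1": {"db"}, "node-2": set()}
pod = {"name": "web", "cpu": 1.0, "mem": 2.0, "anti_affinity": {"db"}}
print(place_pod(pod, nodes, placements))  # anti-affinity with "db" rules out node-1
```

In a real scheduler the score would weigh many more resources (network, I/O, ready queue), but the shape of the decision is the same: filter by constraints, then rank by remaining supply.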


No Noisy Neighbor Contention

Workloads that peak together are automatically redistributed to satisfy their exact resource needs and avoid “noisy neighbor” contention.

Underlying Resources Always Service Demand

Container workloads rely on nodes or cells that can service demand. Turbonomic places nodes or cells on hosts or storage with the right resource capacity to ensure container workloads get what they need when they need it.

No Latency Due to Resource Fragmentation

Resource fragmentation occurs when no single node or cell has enough free CPU and memory to schedule a new container workload, even though the cluster as a whole does. Turbonomic avoids this issue by rescheduling existing container workloads before placing new ones. For example, an existing pod on “Node” could be rescheduled to “Node 1” to free enough CPU and memory for the new pod on “Node.”
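The defragmentation move above can be shown concretely. A minimal sketch, with made-up node sizes and pod demands: the new pod fits nowhere until an existing pod is moved.

```python
# Fragmentation sketch: the cluster has enough total free capacity for a new
# pod, but no single node does until an existing pod is rescheduled.
# Node sizes and pod demands are illustrative assumptions.

def free_capacity(node, pods_on_node):
    """Free CPU/memory on a node after subtracting its current pods."""
    return {
        "cpu": node["cpu"] - sum(p["cpu"] for p in pods_on_node),
        "mem": node["mem"] - sum(p["mem"] for p in pods_on_node),
    }

def fits(pod, free):
    return pod["cpu"] <= free["cpu"] and pod["mem"] <= free["mem"]

nodes = {"node": {"cpu": 4, "mem": 8}, "node-1": {"cpu": 4, "mem": 8}}
running = {
    "node": [{"name": "green", "cpu": 2, "mem": 4}],
    "node-1": [{"name": "blue", "cpu": 2, "mem": 3}],
}
new_pod = {"name": "new", "cpu": 3, "mem": 6}

# Without rescheduling, neither node can take the new pod.
assert not any(fits(new_pod, free_capacity(nodes[n], running[n])) for n in nodes)

# Reschedule the existing pod from "node" to "node-1" first...
green = running["node"].pop(0)
running["node-1"].append(green)

# ...and the new pod now fits on "node".
print(fits(new_pod, free_capacity(nodes["node"], running["node"])))  # True
```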

Better Rightsizing, Better Scaling

When a container workload requires additional CPU or memory, Turbonomic can appropriately scale the workload while accounting for the availability of the underlying resources. In addition to assuring performance, this capability avoids the operational work of setting static thresholds. Rightsizing a container also ensures you are cloning the best configuration possible when horizontally scaling.
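One way to picture rightsizing that stays aware of the underlying supply: derive a recommendation from observed usage, then cap it at what the node can actually provide. The percentile and headroom values below are assumptions for illustration, not Turbonomic's actual model.

```python
# Rightsizing sketch: recommend a CPU request from observed usage plus
# headroom, capped by the node's free capacity. Illustrative only.

def rightsize(usage_samples, headroom=1.2, node_free=float("inf")):
    """Recommend a resource request from a list of usage samples."""
    usage_samples = sorted(usage_samples)
    # Use a high percentile rather than the max, so brief spikes
    # don't inflate the recommendation.
    p95 = usage_samples[int(0.95 * (len(usage_samples) - 1))]
    recommended = p95 * headroom
    # Never recommend more than the underlying node can supply.
    return min(recommended, node_free)

samples = [0.4, 0.5, 0.6, 0.5, 0.9, 0.55, 0.5, 0.45, 0.6, 0.5]  # CPU cores used
print(rightsize(samples, node_free=1.5))
```

Because the recommendation tracks demand rather than a fixed threshold, no one has to pick and maintain a static limit. It also illustrates the horizontal-scaling point: clones inherit this demand-derived size instead of a stale guess.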

Intelligent Cluster Scaling

When container workload demand increases, Turbonomic will automatically scale the underlying node/cell and determine which host and datastore to run it on.
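The scale-out decision reduces to comparing pending demand against free capacity and rounding the shortfall up to whole nodes. A minimal sketch, with an assumed node size; host and datastore selection would then reuse the same placement logic applied to the new node.

```python
# Cluster-scaling sketch: decide how many nodes to add when pending
# workload demand exceeds the cluster's free capacity.
# The node size is an illustrative assumption.

NODE_CPU = 4.0  # CPU capacity of one new node, in cores (assumed)

def nodes_to_add(pending_cpu, free_cpu, node_cpu=NODE_CPU):
    """Number of new nodes needed so all pending demand can be scheduled."""
    shortfall = pending_cpu - free_cpu
    if shortfall <= 0:
        return 0  # existing capacity already covers demand
    # Round up: a partial shortfall still needs a whole node.
    return int(-(-shortfall // node_cpu))

print(nodes_to_add(pending_cpu=10.0, free_cpu=3.0))  # shortfall of 7 -> 2 nodes
```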

See what our Workload Automation for Hybrid Cloud can do for you.

Decisions in under an hour. Payback in less than 3 months.

Download Free Trial