
Spotinst Ocean is a managed infrastructure scaling service for Kubernetes and ECS that adjusts infrastructure capacity and sizing to meet the needs of the containerized applications running on the cluster.

Ocean monitors for pending Kubernetes Pods or ECS Tasks and automatically adjusts the size of the cluster based on the workloads’ constraints and labels.
Ocean keeps the cluster’s resources well utilized and scales down underutilized, expendable nodes to maximize cost savings.
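
To make that first step concrete, here is a minimal sketch (not Ocean’s actual implementation) that uses the official Kubernetes Python client to list Pods stuck in the Pending phase and print the resource requests, node selectors, and labels that a scaler like Ocean takes into account:

```python
# Minimal sketch: find Pending Pods and the scheduling constraints they declare.
# This illustrates the signal Ocean reacts to; it is not Ocean's own code.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

pending = v1.list_pod_for_all_namespaces(field_selector="status.phase=Pending")
for pod in pending.items:
    requests = {}
    for container in pod.spec.containers:
        if container.resources and container.resources.requests:
            requests[container.name] = container.resources.requests
    print(pod.metadata.namespace, pod.metadata.name,
          requests, pod.spec.node_selector, pod.metadata.labels)
```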

The Anatomy of Ocean

Ocean for Kubernetes

Spotinst Ocean’s integration with Kubernetes clusters is composed of two components:

  • Spotinst Controller (SPT-CTL)

A Pod that lives within the Kubernetes cluster and is responsible for collecting metrics and events. The events are pushed over a one-way, secured link to the Spotinst Ocean SaaS for business logic and capacity scale-up/scale-down activities.

 

  • Spotinst Ocean SaaS

The Ocean SaaS layer is responsible for aggregating the metrics from the SPT-CTL and building the cluster topology. Using the aggregated metrics, the SaaS component applies additional business logic, such as Spot / Preemptible instance availability prediction and instance size/type recommendation, to increase performance and optimize costs through workload density and instance pricing models across On-Demand, Reserved (RIs / CUDs), and excess-capacity nodes (Spot Instances / Preemptible VMs).
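
To picture the pricing-model logic, the routine below is a deliberately simplified, hypothetical selection rule: the instance shapes, prices, and scoring are invented for illustration, and the real Ocean SaaS relies on its own market data and prediction models. It only shows the idea of consuming already-paid-for Reserved capacity first and otherwise preferring the cheapest option that fits, typically Spot / Preemptible capacity:

```python
# Hypothetical illustration of lifecycle-aware instance selection. The prices,
# instance shapes, and scoring rule are invented for this example.
CANDIDATES = [
    # (type, vcpu, mem_gib, lifecycle, hourly_price_usd)
    ("m5.large",  2,  8, "spot",      0.035),
    ("m5.large",  2,  8, "on-demand", 0.096),
    ("m5.xlarge", 4, 16, "spot",      0.070),
    ("m5.xlarge", 4, 16, "on-demand", 0.192),
]

def pick_instance(needed_vcpu, needed_mem_gib, unused_reservations=()):
    """Prefer unused RIs/CUDs (already paid for), then the cheapest option
    that fits, which is typically a Spot / Preemptible instance."""
    fits = [c for c in CANDIDATES
            if c[1] >= needed_vcpu and c[2] >= needed_mem_gib]
    for c in fits:
        if c[0] in unused_reservations and c[3] == "on-demand":
            return c  # consume existing Reserved capacity first
    return min(fits, key=lambda c: c[4])  # otherwise cheapest, usually Spot

print(pick_instance(3, 12))               # -> ('m5.xlarge', 4, 16, 'spot', 0.07)
print(pick_instance(2, 8, {"m5.large"}))  # -> reserved m5.large (on-demand lifecycle)
```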

 

Ocean for ECS

Spotinst Ocean’s integration with ECS clusters is carried out via API calls between the Spotinst SaaS and the AWS ECS service.

As Tasks and Services are deployed via the AWS CLI, API, or Console, the Spotinst Ocean platform scans the cluster and identifies tasks that have no place to be scheduled. Once a scale-up is triggered, the Spotinst Auto-Scaler locates the optimal instance type, size, and lifecycle for cost and utilization. As new instances register with the ECS cluster via their User Data scripts, the ECS task scheduler is able to place the pending tasks.
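
As a rough sketch of the “scan the cluster” step, the snippet below uses boto3 to read each registered container instance’s remaining CPU and memory and check whether a given task shape fits anywhere. The cluster name and task shape are placeholders; this illustrates the placement check Ocean automates, not the Auto-Scaler itself:

```python
# Rough sketch: does a pending task shape fit on any registered container instance?
# "my-cluster" and the task shape are placeholders for this illustration.
import boto3

ecs = boto3.client("ecs")
CLUSTER = "my-cluster"          # placeholder cluster name
TASK_CPU, TASK_MEM = 512, 1024  # CPU units / MiB required by a pending task

arns = ecs.list_container_instances(cluster=CLUSTER)["containerInstanceArns"]
instances = (ecs.describe_container_instances(
    cluster=CLUSTER, containerInstances=arns)["containerInstances"] if arns else [])

def remaining(instance, resource_name):
    # remainingResources lists what is left after current task placements
    for r in instance["remainingResources"]:
        if r["name"] == resource_name:
            return r.get("integerValue", 0)
    return 0

placeable = any(remaining(i, "CPU") >= TASK_CPU and
                remaining(i, "MEMORY") >= TASK_MEM for i in instances)
if not placeable:
    print("No room for the task: a scale-up with a suitable instance is needed.")
```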

With appropriate headroom configured, a buffer of spare capacity is maintained, based on the cluster’s most common tasks. The headroom allows incoming tasks to be scheduled immediately, eliminating the wait time until new instances spin up and register with the cluster.
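
One simplified way to picture headroom, assuming it is expressed as a number of spare “units” sized like the cluster’s most common task (the unit count and task shapes below are illustrative, not Ocean’s configuration model):

```python
# Illustrative headroom calculation: keep N spare "units" sized like the
# cluster's most common task, so newly arriving tasks can start immediately.
from collections import Counter

def headroom_target(task_shapes, units=5):
    """task_shapes: list of (cpu_units, mem_mib) for currently running tasks.
    Returns the spare (cpu, mem) the cluster should keep free."""
    most_common, _ = Counter(task_shapes).most_common(1)[0]
    cpu, mem = most_common
    return cpu * units, mem * units

running = [(256, 512), (256, 512), (512, 1024), (256, 512)]
print(headroom_target(running))  # -> (1280, 2560): 5 spare slots of 256 CPU / 512 MiB
```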

At the same time, the Auto-Scaler constantly runs simulations of the cluster, looking to optimize resource allocation via bin-packing algorithms. When it identifies an instance whose tasks can be redistributed across the rest of the cluster, a scale-down is triggered, and the instance is drained and terminated.
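
The scale-down simulation can be pictured as a bin-packing question: can one instance’s tasks be re-placed into the free capacity of the others? The first-fit check below is a toy version of that idea, not the production algorithm:

```python
# Toy bin-packing check for scale-down: an instance is a drain candidate if all
# of its tasks fit (first-fit) into the free capacity left on the other instances.
def can_drain(candidate_tasks, other_free):
    """candidate_tasks: [(cpu, mem), ...] running on the instance under test.
    other_free: [[cpu, mem], ...] free capacity on the remaining instances."""
    free = [list(f) for f in other_free]  # copy so the simulation has no side effects
    for cpu, mem in sorted(candidate_tasks, reverse=True):  # place big tasks first
        for slot in free:
            if slot[0] >= cpu and slot[1] >= mem:
                slot[0] -= cpu
                slot[1] -= mem
                break
        else:
            return False  # at least one task has nowhere to go
    return True

# Two tasks on the candidate node, two other nodes with spare room:
print(can_drain([(512, 1024), (256, 512)], [[1024, 2048], [256, 512]]))  # -> True
```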

The result is an optimally utilized and cost-efficient cluster of container instances.