
Elastigroup works with a designated pod inside your Kubernetes cluster that reports constant updates about the cluster's condition via a one-way link.

Using that information, Elastigroup scales the cluster up or down according to overall node utilization and your pods' needs.

To create this connection between your k8s cluster and Elastigroup, you need to make a small configuration change on both the k8s cluster side and the Elastigroup side, as described below.

 

k8s cluster configuration

To run the Spotinst in-cluster autoscaler, you need to run the controller application spotinst-kubernetes-cluster-controller in your k8s cluster.
This controller requires the following parameters:

  1. spotinst.token – The Spotinst access token (can be generated from the Spotinst console or via the API)
  2. spotinst.account – The Spotinst account ID (learn more about Accounts and Organizations)
  3. spotinst.cluster-identifier – This identifier must be identical to the clusterIdentifier configured on the Elastigroup (i.e., it must match the Elastigroup parameter thirdPartiesIntegration.kubernetes.clusterIdentifier)
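
If you prefer to create this configuration from the command line instead of a YAML file, the same three parameters can be supplied with kubectl create configmap. This is a sketch; substitute your own values for the placeholders before running:

```shell
# Create the controller's ConfigMap in kube-system.
# <API_TOKEN>, <ACCOUNT_ID>, and <CLUSTER_ID> are placeholders for your
# Spotinst token, account ID, and cluster identifier.
kubectl create configmap spotinst-kubernetes-cluster-controller-config \
  --namespace kube-system \
  --from-literal=spotinst.token=<API_TOKEN> \
  --from-literal=spotinst.account=<ACCOUNT_ID> \
  --from-literal=spotinst.cluster-identifier=<CLUSTER_ID>
```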

 

Follow the installation instructions for your preferred method:

Troubleshooting

If the following banner appears at the top of your Elastigroup dashboard, your Spotinst Controller is not reporting a heartbeat to the Elastigroup SaaS platform:

 

To troubleshoot the issue, follow these steps:

  1. Double-check your configMap.yaml and make sure the parameters are set correctly:
    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: spotinst-kubernetes-cluster-controller-config
      namespace: kube-system
    data:
      spotinst.token: <API_TOKEN>
      spotinst.account: <ACCOUNT_ID>
      spotinst.cluster-identifier: <CLUSTER_ID>
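    To compare the intended values against what is actually deployed, you can also print the live ConfigMap (a quick check; assumes kubectl is configured for the cluster):

    ```shell
    # Print the deployed ConfigMap and verify the three spotinst.* keys.
    kubectl get configmap spotinst-kubernetes-cluster-controller-config \
      -n kube-system -o yaml
    ```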
  2. To check whether the controller is running, run the following command in a terminal with kubectl configured:
    kubectl get pods -n kube-system | grep spotinst
  3. Make sure there is only one Spotinst controller pod in the cluster. If there is more than one:
    1. Delete the old deployment via
      kubectl delete -f http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/cluster-controller/spotinst-kubernetes-cluster-controller.yaml
    2. Reinstall the controller (see the top of this page).
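    Before deleting anything, a quick count confirms whether duplicates actually exist (a sketch; assumes kubectl is configured for this cluster):

    ```shell
    # Count Spotinst controller pods in kube-system; the expected count is 1.
    kubectl get pods -n kube-system --no-headers \
      | grep -c spotinst-kubernetes-cluster-controller
    ```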
  4. If the controller pod appears to be running but is not responding:
    1. The controller needs DNS resolution. Make sure the DNS pods are not in a Pending state:
      kubectl get pods -n kube-system
    2. If they are, check the reason via
      kubectl describe pod <dns-pod-name> -n kube-system
    3. Try restarting the controller pod.
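    The DNS check above can be narrowed to show only pods that are actually stuck, rather than scanning the full list by eye (a sketch; --field-selector filters by pod phase on the server side):

    ```shell
    # List only the kube-system pods stuck in Pending; any DNS pods shown
    # here indicate the resolution problem described in step 4.1.
    kubectl get pods -n kube-system --field-selector=status.phase=Pending
    ```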
  5. If the above steps or a reinstallation of the controller do not solve your issue, you can retrieve the Spotinst controller logs with these steps:
    1. List the pods currently running in kube-system and copy the spotinst-controller pod name from the output:
      kubectl get pods -n kube-system
    2. Open a shell inside your Spotinst controller pod (replace the example pod name with yours):
      kubectl exec -ti spotinst-kubernetes-cluster-controller-68b75c4794-bkmm7 -n kube-system -- bash
    3. Change to the log directory: cd /log/spotinst
    4. Print the logs: cat spotinst-kubernetes-controller.log
    5. Contact Spotinst Support via online chat or by email to support@spotinst.com.
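
The log-retrieval steps above can also be collapsed into a single pass without opening an interactive shell. This is a sketch; it assumes the controller pod keeps the default spotinst-kubernetes-cluster-controller name prefix and the log path shown above:

```shell
# Resolve the controller pod name, then read its log file directly.
POD=$(kubectl get pods -n kube-system -o name \
  | grep spotinst-kubernetes-cluster-controller)
kubectl exec -n kube-system "${POD#pod/}" -- \
  cat /log/spotinst/spotinst-kubernetes-controller.log
```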