Set Up Your Environment

  1. Generate a Spotinst API token
  2. Set up an AWS IAM user
  3. Create an S3/GS cluster state store

To get started quickly, we’ve gathered some shell scripts for your convenience; download them here (see full link):


If you are updating existing scripts, remove the PROTOKUBE_IMAGE variable (and unset it in your current shell), then update the NODEUP URL as follows:

export NODEUP_URL="http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/kops/v1.10.0-beta.1/nodeup/linux/amd64/nodeup"


  1. 00-env.sh – Set up environment variables on your local machine (you must edit the required variables).
  2. 01-create.sh – Create the cluster.
  3. 02-validate.sh – Validate that the cluster is up and running (master & nodes).
  4. 03-dashboard.sh – Install the dashboard addon (the dashboard will be available via https://master-ip/ui).
  5. 04-edit-cluster.sh – Edit the cluster spec.
  6. 05-edit-ig.sh – Edit the instance groups spec.
  7. 06-update.sh – Apply the changes to the cloud.
  8. 07-roll.sh – Apply the rolling-update.
  9. 08-create-ig.sh – Create a new instance group.
  10. 09-upgrade.sh – Upgrade the cluster.
  11. 10-delete.sh – Delete the cluster.

Environment Setup

mkdir spotinst-kops && cd spotinst-kops
wget http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/kops/v{VERSION}/scripts/kops-spotinst.tar.gz
tar -xzvf kops-spotinst.tar.gz
cd kops-spotinst
chmod +x *.sh

DNS Setup

Configure your DNS as a sub-hosted zone under a Route53 domain

Example:  amiram.ek8s.com  (reference commands)

ID=$(uuidgen) && aws route53 create-hosted-zone --name amiram.ek8s.com --caller-reference $ID | jq .DelegationSet.NameServers


Using the output, create a file called subdomain.json:

$ cat subdomain.json
{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "amiram.ek8s.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-x.awsdns-16.com" },
          { "Value": "ns-y.awsdns-53.org" },
          { "Value": "ns-z.awsdns-35.co.uk" },
          { "Value": "ns-k.awsdns-39.net" }
        ]
      }
    }
  ]
}

Now, let’s route traffic for the subdomain *.amiram.ek8s.com to those name servers using the following command:

aws route53 change-resource-record-sets \
 --hosted-zone-id PARENT_ZONE_ID \
 --change-batch file://subdomain.json


You can grab PARENT_ZONE_ID using the following command:

aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="ek8s.com.") | .Id'
 Note: This example assumes you have jq installed locally.


Variable Setup

vim 00-env.sh


To configure a highly available cluster with 3 masters, set KOPS_MASTER_COUNT="3" in 00-env.sh.
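For reference, here is a sketch of what 00-env.sh might look like. The variable names match those used by the create command later in this guide; SPOTINST_TOKEN and KOPS_STATE_STORE are assumptions about what the scripts expect, and all values are placeholders you must replace with your own:

```shell
#!/usr/bin/env bash
# 00-env.sh -- sketch of the environment the other scripts source.
# All values are placeholders; SPOTINST_TOKEN and KOPS_STATE_STORE are
# assumed names -- check the downloaded script for the exact ones.

export SPOTINST_TOKEN="<your-spotinst-api-token>"     # from step 1 above
export KOPS_STATE_STORE="s3://<your-state-bucket>"    # the s3/gs store from step 3

export KOPS_CLUSTER_NAME="amiram.ek8s.com"            # must match your Route53 subdomain
export KOPS_CLOUD_PROVIDER="aws"
export KOPS_CLUSTER_ZONES="us-east-1a,us-east-1b,us-east-1c"

export KOPS_MASTER_SIZE="m4.large"
export KOPS_MASTER_COUNT="3"                          # 3 for a highly available control plane
export KOPS_NODE_SIZE="m4.large"
export KOPS_NODE_COUNT="3"
```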


Create the cluster
Important! Before running the kops create cluster command, make sure you have a public key installed on your workstation at ~/.ssh/id_rsa.pub
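A quick pre-flight check for that key (a sketch; the key type and path are the kops defaults):

```shell
# Generate a default RSA key pair only if one does not already exist;
# kops create cluster reads the public half from ~/.ssh/id_rsa.pub.
if [ ! -f "$HOME/.ssh/id_rsa.pub" ]; then
  mkdir -p "$HOME/.ssh"
  ssh-keygen -t rsa -b 4096 -N "" -q -f "$HOME/.ssh/id_rsa"
fi
```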

Behind the scenes, 01-create.sh sources the 00-env.sh file and runs kops create cluster with the following parameters:

. 00-env.sh && kops create cluster \
    --name $KOPS_CLUSTER_NAME \
    --zones $KOPS_CLUSTER_ZONES \
    --cloud $KOPS_CLOUD_PROVIDER \
    --master-size $KOPS_MASTER_SIZE \
    --master-count $KOPS_MASTER_COUNT \
    --node-size $KOPS_NODE_SIZE \
    --logtostderr --v 2 \
    --bastion \
    --topology private \
    --networking calico
 Note: the --topology private, --networking calico, and --bastion flags mean that the cluster is created in private subnets within a VPC, uses the Calico network driver (one of the most common), and exposes a bastion server for management access to the worker nodes.
 Note: If the AWS account has EC2-Classic, the product needs to be configured in the kops create cluster script as Linux/UNIX (Amazon VPC), since it’s not the default. To do so, add the line --spotinst-product "Linux/UNIX (Amazon VPC)" \  to 01-create.sh
Optional – To extract the configuration to a YAML file instead of creating the cluster, run the following command:
. 00-env.sh && kops create cluster \
    --name $KOPS_CLUSTER_NAME \
    --zones $KOPS_CLUSTER_ZONES \
    --cloud $KOPS_CLOUD_PROVIDER \
    --master-size $KOPS_MASTER_SIZE \
    --master-count $KOPS_MASTER_COUNT \
    --node-size $KOPS_NODE_SIZE \
    --node-count $KOPS_NODE_COUNT \
    --dry-run -o yaml > kops-create.yaml


Validate the cluster
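This step is driven by 02-validate.sh; under the hood it boils down to kops validate cluster. A minimal sketch (the retry loop and interval are my additions, since validation can take several minutes after creation):

```shell
# Source the environment and ask kops to confirm the master and nodes
# are up and Ready; retry until validation succeeds.
. 00-env.sh
until kops validate cluster --name "$KOPS_CLUSTER_NAME"; do
  echo "Cluster not ready yet; retrying in 30s..."
  sleep 30
done
```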

The Elastigroup Controller

Elastigroup works with a designated pod inside your Kubernetes cluster that constantly reports the cluster’s condition via a one-way link. Using that information, Elastigroup scales the cluster up or down according to overall node utilization and your pods’ needs. To create this connection, KOPS installs a controller on each Kubernetes cluster it launches. You can read more about the Kubernetes controller here.
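To confirm the controller pod is running, you can look for it among the system pods. This is a rough sketch; the namespace and pod name are assumptions, so adjust the grep to whatever your controller is actually called:

```shell
# List system pods and filter for the Spotinst controller
# (assumes it runs in kube-system with "spotinst" in its name).
kubectl get pods --namespace kube-system | grep -i spotinst
```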

Install the UI dashboard

$ ./03-dashboard.sh
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

$ ./04-get-password.sh
Using cluster from kubectl context: amiram.ek8s.com