Setup Your Environment

  1. Generate a Spotinst API token
  2. Setup an AWS IAM user
  3. Create s3/gs cluster state store
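For step 3, an S3 state store can be created with the AWS CLI. A minimal sketch; the bucket name and region here are placeholders, not values from the guide:

```shell
# Placeholder bucket name/region; pick your own globally unique name.
aws s3api create-bucket --bucket spotinst-kops-state-store --region us-east-1

# kops recommends enabling versioning on the state store bucket.
aws s3api put-bucket-versioning \
  --bucket spotinst-kops-state-store \
  --versioning-configuration Status=Enabled

# Then point kops at it:
export KOPS_STATE_STORE="s3://spotinst-kops-state-store"
```

Requires valid AWS credentials; the same bucket name goes into `KOPS_STATE_STORE` in 00-env.sh later on.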

In order to get started quickly, we’ve gathered some shell scripts for your convenience; download here. (see full link)


If you are updating existing scripts, update the NODEUP and PROTOKUBE URLs as follows:


export NODEUP_URL="http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/kops/v1.9.0-alpha.3/nodeup/linux/amd64/nodeup"

export PROTOKUBE_IMAGE="http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/kops/v1.9.0-alpha.3/protokube/images/protokube.tar.gz"



  1. 00-env.sh – Set up environment variables on your local machine (the required variables need to be modified by you).
  2. 01-create.sh – Create the cluster.
  3. 02-validate.sh – Validate that the cluster is up and running (master & nodes).
  4. 03-dashboard.sh – Install the dashboard addon (the dashboard will be available via https://master-ip/ui).
  5. 04-edit-cluster.sh – Edit the cluster spec.
  6. 05-edit-ig.sh – Edit the instance groups spec.
  7. 06-update.sh – Apply the changes to the cloud.
  8. 07-roll.sh – Apply the rolling-update.
  9. 08-create-ig.sh – Create a new instance group.
  10. 09-upgrade.sh – Upgrade the cluster.
  11. 10-delete.sh – Delete the cluster.
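The scripts are numbered in the order they are meant to run; a typical first run, assuming you execute them from the extracted kops-spotinst directory (adjust if your copies differ):

```shell
./01-create.sh      # create the cluster
./02-validate.sh    # wait until the master & nodes are up
./03-dashboard.sh   # optional: install the dashboard addon
```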

Environment Setup

mkdir spotinst-kops && cd spotinst-kops
wget http://spotinst-public.s3.amazonaws.com/integrations/kubernetes/kops/v1.8.1/scripts/kops-spotinst.tar.gz
tar -xzvf kops-spotinst.tar.gz
cd kops-spotinst
chmod +x *.sh

DNS Setup

Configure your DNS as a sub-hosted zone under a Route53 domain

Example: amiram.ek8s.com (reference commands below)

ID=$(uuidgen) && aws route53 create-hosted-zone --name amiram.ek8s.com --caller-reference $ID | jq .DelegationSet.NameServers


Using the output, create a file called subdomain.json:

$ cat subdomain.json
{
  "Comment": "Create a subdomain NS record in the parent domain",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "amiram.ek8s.com",
        "Type": "NS",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "ns-x.awsdns-16.com" },
          { "Value": "ns-y.awsdns-53.org" },
          { "Value": "ns-z.awsdns-35.co.uk" },
          { "Value": "ns-k.awsdns-39.net" }
        ]
      }
    }
  ]
}

Now, let’s delegate traffic for the subdomain (*.amiram.ek8s.com) to the new zone using the following command:

aws route53 change-resource-record-sets \
 --hosted-zone-id PARENT_ZONE_ID \
 --change-batch file://subdomain.json


You can grab PARENT_ZONE_ID using the following command:

aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="ek8s.com.") | .Id'
Note: This example assumes you have jq installed locally.


Variable Setup

vim 00-env.sh

Configure your environment variables for the cluster creation, including: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, SPOTINST_TOKEN, KOPS_STATE_STORE and KOPS_CLUSTER_NAME.

To configure a highly available control plane with 3 masters, set KOPS_MASTER_COUNT="3".
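A 00-env.sh sketch with placeholder values; the bucket name, domain, and sizes below are assumptions, so substitute your own:

```shell
# Example 00-env.sh values -- all placeholders, adjust to your account.
export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export SPOTINST_TOKEN="<your-spotinst-api-token>"
export KOPS_STATE_STORE="s3://spotinst-kops-state-store"
export KOPS_CLUSTER_NAME="amiram.ek8s.com"
export KOPS_MASTER_COUNT="3"   # optional: 3 masters for high availability
```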



Create the cluster

Important! Before running the kops create cluster command, please make sure that you have a public SSH key on your workstation at ~/.ssh/id_rsa.pub.
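If you don’t have a key pair yet, one can be generated like so (default path; the empty passphrase is shown for brevity only):

```shell
# Create ~/.ssh and a default RSA key pair only if none exists yet;
# kops create cluster reads the public half from ~/.ssh/id_rsa.pub.
mkdir -p ~/.ssh
if [ ! -f ~/.ssh/id_rsa ]; then
  ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
fi
```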

Behind the scenes, 01-create.sh sources the 00-env.sh file and runs kops create cluster with the following parameters:

. 00-env.sh && kops create cluster \
    --name $KOPS_CLUSTER_NAME \
    --zones $KOPS_CLUSTER_ZONES \
    --cloud $KOPS_CLOUD_PROVIDER \
    --master-size $KOPS_MASTER_SIZE \
    --master-count $KOPS_MASTER_COUNT \
    --node-size $KOPS_NODE_SIZE \
    --logtostderr --v 2 \
    --topology private \
    --networking calico \
    --bastion

Note: the --topology private, --networking calico and --bastion flags mean that the cluster will be created using private subnets in a VPC, using the Calico network driver (a common choice), and that a bastion server will be provisioned for management access to the worker nodes.


Optional – In order to extract the config to a YAML file, run the following command:

. 00-env.sh && kops create cluster \
--name $KOPS_CLUSTER_NAME \
--zones $KOPS_CLUSTER_ZONES \
--cloud $KOPS_CLOUD_PROVIDER \
--master-size $KOPS_MASTER_SIZE \
--master-count $KOPS_MASTER_COUNT \
--node-size $KOPS_NODE_SIZE \
--node-count $KOPS_NODE_COUNT \
--dry-run -o yaml > kops-create.yaml
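If you went the dry-run route, the generated spec can be reviewed, edited, and then applied with standard kops commands; a sketch, assuming the variables from 00-env.sh:

```shell
. 00-env.sh

# Register the cluster from the reviewed spec in the state store.
kops create -f kops-create.yaml

# Attach the SSH public key (the dry run does not record it).
kops create secret --name $KOPS_CLUSTER_NAME sshpublickey admin -i ~/.ssh/id_rsa.pub

# Create the cloud resources.
kops update cluster $KOPS_CLUSTER_NAME --yes
```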


Validate the cluster

Run ./02-validate.sh to verify that the master and all nodes are up and running.

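Assuming 02-validate.sh wraps kops validate cluster, the underlying command looks like:

```shell
# Validates against the state store configured in 00-env.sh; exits non-zero
# until the master and all nodes have joined and are Ready.
. 00-env.sh && kops validate cluster --name $KOPS_CLUSTER_NAME
```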

Install the UI dashboard

$ ./03-dashboard.sh
serviceaccount "kubernetes-dashboard" created
clusterrolebinding "kubernetes-dashboard" created
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created

$ ./04-get-password.sh
Using cluster from kubectl context: amiram.ek8s.com