
Finished handling the prerequisites? Managed to install the integration?

Great! Let’s create our clusters.

Here are three methods to get started:

Option 1: Create a New Cluster

  1. Download the KOPS binary from GitHub releases.
  2. Verify that the required scripts from the prerequisites are available. If not, download the updated scripts and extract the archive.
  3. Update all variables in 00-env.sh.
  4. Execute 01-create.sh.
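
A minimal sketch of this flow is shown below, assuming a Linux workstation; the kops release version and the archive name are placeholders and will depend on your environment:

    # Download and install the KOPS binary (the version in the URL is a placeholder).
    wget https://github.com/kubernetes/kops/releases/download/<version>/kops-linux-amd64
    chmod +x kops-linux-amd64 && sudo mv kops-linux-amd64 /usr/local/bin/kops

    # Extract the prerequisite scripts (the archive name is a placeholder).
    tar -xzf <scripts-archive>.tar.gz

    # Update all variables (cluster name, state store, region, ...) in 00-env.sh,
    # then create the cluster.
    vi 00-env.sh
    ./01-create.sh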

 

Option 2: Upgrade an existing cluster managed by Spotinst Elastigroups (created by previous versions of KOPS) to Ocean

  1. Unset the NODEUP_URL environment variable (if you are using 00-env.sh, remove the first line, which loads a hidden file called .internal).
  2. Enable Spotinst support by toggling the feature flag:

    For Elastigroup:

    export KOPS_FEATURE_FLAGS="+Spotinst"

    For Ocean:

    export KOPS_FEATURE_FLAGS="+Spotinst,SpotinstOcean"
  3. Unset the KOPS_CLOUD_PROVIDER environment variable. If you wish to keep using the same 01-create.sh file, remove the --cloud $KOPS_CLOUD_PROVIDER argument from the kops create cluster command.
  4. Edit your cluster configuration files:
    1. cluster.spec
    2. config

    Replace each cloudProvider: spotinst with cloudProvider: aws. This step MUST be done manually by editing the files in S3 (see the sketch after this list).
  5. Replace the KOPS binary.
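
Putting these steps together, a rough sketch of the upgrade might look like the following; the state store bucket and cluster name are placeholders, and the Ocean feature flag is shown (use the Elastigroup variant if that is what you need):

    unset NODEUP_URL
    unset KOPS_CLOUD_PROVIDER
    export KOPS_FEATURE_FLAGS="+Spotinst,SpotinstOcean"

    # Manually edit the configuration files in the S3 state store
    # (bucket and cluster names are placeholders; GNU sed syntax).
    aws s3 cp s3://<state-store>/<cluster-name>/config .
    aws s3 cp s3://<state-store>/<cluster-name>/cluster.spec .
    sed -i 's/cloudProvider: spotinst/cloudProvider: aws/' config cluster.spec
    aws s3 cp config s3://<state-store>/<cluster-name>/config
    aws s3 cp cluster.spec s3://<state-store>/<cluster-name>/cluster.spec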

Option 3: Migrate an existing cluster managed by AWS Auto Scaling Groups (created by any version of KOPS) to Elastigroup/Ocean

  1. To perform the migration with no downtime, migrate the masters one-by-one:
    1. For each master:
      1. Drain the node (kubectl drain <node>)
      2. Scale down the Auto Scaling Group by one instance
      3. Import the Auto Scaling Group into Spotinst. Make sure the Elastigroup’s name equals the Auto Scaling Group name. 
    2. Repeat these steps for the next master node until all master nodes are running in Spotinst and all master Auto Scaling Groups have been scaled down to zero. If all masters were running in the same Auto Scaling Group prior to the import, then in step a.3 increase the capacity of your existing master Elastigroup instead of creating another one.
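
    A rough sketch of a single iteration is shown below, assuming the AWS CLI; the node and instance IDs are placeholders, and the import itself is performed from the Spotinst console or API:

    kubectl drain <master-node> --ignore-daemonsets

    # Scale the masters' Auto Scaling Group down by one instance.
    aws autoscaling terminate-instance-in-auto-scaling-group \
        --instance-id <master-instance-id> \
        --should-decrement-desired-capacity

    # Then import the Auto Scaling Group into Spotinst (console or API),
    # naming the Elastigroup exactly after the Auto Scaling Group.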
  2. In the case of a single Instance Group, migrate the worker nodes all at once to an Elastigroup/Ocean:
    1. Import the Auto Scaling Group into Spotinst.
    2. For each node:
      1. Drain the node (kubectl drain <node>)
    3. Once all pods have migrated to Spotinst nodes, scale down the Auto Scaling Group to zero.
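
    For example, assuming the AWS CLI (the group name is a placeholder):

    # After all pods run on Spotinst nodes, scale the workers' Auto Scaling Group to zero.
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name <workers-asg-name> \
        --min-size 0 --max-size 0 --desired-capacity 0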
  3. In the case of multiple worker node Instance Groups:
    Note: Multiple worker node groups are only supported for Elastigroups in KOPS 1.11 (via multiple Elastigroups on the same cluster). For Ocean support (several Launch Specifications on the same cluster), use KOPS version 1.12.
    1. Import the Auto Scaling Group into a Spotinst Elastigroup.
    2. For each node:
      1. Drain the node (kubectl drain <node>)
    3. Once all pods have migrated to Spotinst nodes, scale down the Auto Scaling Group to zero.
    4. Repeat steps a-c for each ASG.
  4. Enable Spotinst support by toggling the feature flag:

    For Elastigroup:

    export KOPS_FEATURE_FLAGS="+Spotinst"

    For Ocean:

    export KOPS_FEATURE_FLAGS="+Spotinst,SpotinstOcean"
  5. Clean up (optional; only delete the ASGs after validating that the migration is up and running):
    1. Delete all Auto Scaling Groups
    2. Delete all Launch Configurations
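
    For example, assuming the AWS CLI (the group and launch configuration names are placeholders):

    aws autoscaling delete-auto-scaling-group --auto-scaling-group-name <asg-name>
    aws autoscaling delete-launch-configuration --launch-configuration-name <lc-name>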