Quickly Ignite a Kubernetes cluster on CoreOS

In my previous post I explored using libvirt and CoreOS with the main goal of quickly setting up a Kubernetes cluster with k3s. Admittedly, I gave myself a lot of challenges, which led to me deploying only a single k3s node instead of a complete cluster.

Sudo make me a VM

Not having had enough challenges yet, I wanted to play around with Makefiles, since I needed to run the same commands over and over with regularly changing parameters. That’s why I made this abomination of a Makefile. I know I could have written everything in a simple Bash script, but my goal is to learn things, and so I did. I can now spin up and destroy a 3-node Kubernetes cluster using nothing but

make download
PROJECT=kube1 make
PROJECT=kube2 make
PROJECT=kube3 make
# play around with kubernetes
PROJECT=kube1 make delete
PROJECT=kube2 make delete
PROJECT=kube3 make delete

If I need to SSH into one of the instances, I can just run PROJECT=kube2 make ssh.
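For bulk operations, the per-node invocations above are easy to wrap in a small loop (my own sugar, not part of the Makefile):

```shell
#!/bin/sh
# Bring all three nodes up (or tear them down) in one go;
# drop the `echo` to actually run the commands.
for p in kube1 kube2 kube3; do
  echo "PROJECT=$p make"      # or: PROJECT=$p make delete
done
```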

Setting up the nodes

The first node is a bit special, since it needs to initialize the cluster, so there’s a separate Butane config file for it:

variant: fcos
version: 1.6.0
passwd:
  users:
    - name: core
      ssh_authorized_keys: 
        - "ssh-ed25519 XYZ"
      shell: /bin/bash
storage:
  files:
    - path: /etc/systemd/zram-generator.conf
      mode: 0644
      contents:
        inline: |
          # This config file enables a /dev/zram0 device with the default settings
          [zram0]
          zram-size = ram / 2
          compression-algorithm = lzo
    # Set vim as default editor
    # We use `zz-` as prefix to make sure this is processed last in order to
    # override any previously set defaults.
    - path: /etc/profile.d/zz-default-editor.sh
      overwrite: true
      contents:
        inline: |
          export EDITOR=vim
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: ${IGN_HOSTNAME}
    - path: /etc/vconsole.conf
      mode: 0644
      contents:
        inline: KEYMAP=be
    - path: /etc/rancher/k3s/config.yaml
      mode: 0666
      contents:
        inline: |
            write-kubeconfig-mode: "0644"
            node-label:
            - "server=first"
            - "something=amazing"
            cluster-init: true
            token: meh-123
            disable-apiserver: false
            disable-controller-manager: false
            disable-scheduler: false
    - path: /etc/NetworkManager/system-connections/enp1s0.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=enp1s0
          type=ethernet
          interface-name=enp1s0
          [ipv4]
          address1=${IGN_IP}/16,10.0.0.1
          dns=1.1.1.1
          may-fail=false
          method=manual
systemd:
  units:
    # Install tools as a layered package with rpm-ostree
    - name: rpm-ostree-install-tools.service
      enabled: true
      contents: |
        [Unit]
        Description=Layer tools with rpm-ostree
        Wants=network-online.target
        After=network-online.target
        # We run before `zincati.service` to avoid conflicting rpm-ostree
        # transactions.
        Before=zincati.service
        ConditionPathExists=!/var/lib/%N.stamp

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        # `--allow-inactive` ensures that rpm-ostree does not return an error
        # if the package is already installed. This is useful if the package is
        # added to the root image in a future Fedora CoreOS release as it will
        # prevent the service from failing.
        ExecStart=/usr/bin/rpm-ostree install -y --allow-inactive vim qemu-guest-agent htop
        ExecStart=/bin/touch /var/lib/%N.stamp
        ExecStart=/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target

    - name: install-k3s.service
      enabled: true
      contents: |
        [Unit]
        Description=Install kubernetes
        Wants=rpm-ostree-install-tools.service
        After=rpm-ostree-install-tools.service
        Before=zincati.service
        ConditionPathExists=!/var/lib/%N.stamp

        [Service]
        Type=oneshot
        RemainAfterExit=yes

        ExecStart=/bin/sh -c '/usr/bin/curl -sfL https://get.k3s.io | sh -'
        ExecStart=/bin/systemctl enable k3s.service
        ExecStart=/bin/touch /var/lib/%N.stamp
        ExecStart=/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target

    - name: print-k3s.service
      enabled: true
      contents: |
        [Unit]
        Description=Print KubeInfo
        Wants=install-k3s.service
        After=install-k3s.service
        After=k3s.service

        [Service]
        Type=oneshot
        RemainAfterExit=yes

        ExecStart=/bin/sh -c 'kubectl cluster-info | tee /etc/issue.d/33_kubectl.issue'

        [Install]
        WantedBy=multi-user.target
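Before this Butane config can be transpiled to Ignition, the ${IGN_HOSTNAME} and ${IGN_IP} placeholders need to be filled in per node. A minimal sketch of that step (file names and the sed-based substitution are illustrative; the Makefile may do this differently, e.g. with envsubst):

```shell
#!/bin/sh
set -e
# Per-node values (the IP here is the post's first node).
IGN_HOSTNAME=kube1
IGN_IP=10.0.12.12

# Tiny stand-in for the real template, just to demonstrate the substitution.
cat > /tmp/server.bu.tpl <<'EOF'
- path: /etc/hostname
  contents:
    inline: ${IGN_HOSTNAME}
EOF

# Fill in the placeholders.
sed -e "s/\${IGN_HOSTNAME}/${IGN_HOSTNAME}/g" \
    -e "s/\${IGN_IP}/${IGN_IP}/g" \
    /tmp/server.bu.tpl > /tmp/server.bu

cat /tmp/server.bu
# Then transpile to Ignition with the butane CLI:
#   butane --pretty --strict /tmp/server.bu > /tmp/server.ign
```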

Once the first node is set up, we can deploy new agents with the following template:

variant: fcos
version: 1.6.0
passwd:
  users:
    - name: core
      ssh_authorized_keys: 
        - "ssh-ed25519 XYZ"
      shell: /bin/bash
storage:
  files:
    - path: /etc/systemd/zram-generator.conf
      mode: 0644
      contents:
        inline: |
          # This config file enables a /dev/zram0 device with the default settings
          [zram0]
          zram-size = ram / 2
          compression-algorithm = lzo
    # Set vim as default editor
    # We use `zz-` as prefix to make sure this is processed last in order to
    # override any previously set defaults.
    - path: /etc/profile.d/zz-default-editor.sh
      overwrite: true
      contents:
        inline: |
          export EDITOR=vim
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: ${IGN_HOSTNAME}
    - path: /etc/vconsole.conf
      mode: 0644
      contents:
        inline: KEYMAP=be
    - path: /etc/rancher/k3s/config.yaml
      mode: 0666
      contents:
        inline: |
            node-label:
            - "server=agent"
            - "something=amazing"
            token: meh-123
    - path: /etc/NetworkManager/system-connections/enp1s0.nmconnection
      mode: 0600
      contents:
        inline: |
          [connection]
          id=enp1s0
          type=ethernet
          interface-name=enp1s0
          [ipv4]
          address1=${IGN_IP}/16,10.0.0.1
          dns=1.1.1.1
          may-fail=false
          method=manual
systemd:
  units:
    # Install tools as a layered package with rpm-ostree
    - name: rpm-ostree-install-tools.service
      enabled: true
      contents: |
        [Unit]
        Description=Layer tools with rpm-ostree
        Wants=network-online.target
        After=network-online.target
        # We run before `zincati.service` to avoid conflicting rpm-ostree
        # transactions.
        Before=zincati.service
        ConditionPathExists=!/var/lib/%N.stamp

        [Service]
        Type=oneshot
        RemainAfterExit=yes

        ExecStart=/usr/bin/rpm-ostree install -y --allow-inactive vim qemu-guest-agent htop
        ExecStart=/bin/touch /var/lib/%N.stamp
        ExecStart=/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target

    - name: install-k3s.service
      enabled: true
      contents: |
        [Unit]
        Description=Install kubernetes
        Wants=rpm-ostree-install-tools.service
        After=rpm-ostree-install-tools.service
        # We run before `zincati.service` to avoid conflicting rpm-ostree
        # transactions.
        Before=zincati.service
        ConditionPathExists=!/var/lib/%N.stamp

        [Service]
        Type=oneshot
        RemainAfterExit=yes

        ExecStart=/bin/sh -c '/usr/bin/curl -sfL https://get.k3s.io | K3S_URL=https://10.0.12.12:6443 K3S_TOKEN=meh-123 sh -'
        ExecStart=/bin/systemctl enable k3s-agent.service
        ExecStart=/bin/touch /var/lib/%N.stamp
        ExecStart=/bin/systemctl --no-block reboot

        [Install]
        WantedBy=multi-user.target

So after running make a couple of times, we have a working three-node Kubernetes cluster.

Local config

Those who read my previous post probably noticed that I also challenged myself to use Nix, so let’s deviate a bit and set up kubectl locally using nix-shell.

First we’ll need to get our Kubernetes config into ~/.kube/admin.conf. The content of the file can be extracted from the first node with the command k3s kubectl config view --raw:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0t...
    client-key-data: LS0t...

What’s important here is to make sure to change cluster.server from 127.0.0.1 to the IP of the first node (in my case 10.0.12.12).
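That edit is easy to script. A small sketch with sed (the file paths are illustrative, the IP is the first node from the post):

```shell
#!/bin/sh
set -e
# Stand-in for the kubeconfig pulled off the first node; in reality something like:
#   ssh core@10.0.12.12 'k3s kubectl config view --raw' > /tmp/admin.conf
printf 'server: https://127.0.0.1:6443\n' > /tmp/admin.conf

# Point the config at the first node instead of localhost.
sed 's|https://127.0.0.1:6443|https://10.0.12.12:6443|' /tmp/admin.conf \
  > /tmp/admin.patched.conf

cat /tmp/admin.patched.conf   # server: https://10.0.12.12:6443
```

The patched file can then be moved into place as ~/.kube/admin.conf.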

Next we need to configure the nix-shell not only to install kubectl but also to ensure that we use our admin.conf file when running kubectl (and, for fun, we’re also installing k9s).

{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.xz
    pkgs.bash
    pkgs.kubectl
    pkgs.k9s
  ];
  shellHook = ''
    alias ll='ls -alh'
    export KUBECONFIG=~/.kube/admin.conf
  '';
}

With the above config we can use kubectl from our host to manage the Kubernetes cluster in our VMs.

My first “real” application

To have at least something deployed in my cluster, I went for the Guestbook application. One caveat is that exposing the application requires some extra configuration in the form of an Ingress, which will be picked up by Traefik (the ingress controller bundled with k3s).

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: guestbook-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80

After deploying it with kubectl apply -f ingress.yaml, the application was live and reachable from outside the cluster.