Install Kubernetes on Banana Pi

After installing a Kubernetes master node on DietPi running on an Odroid XU4, we want to add new nodes to the cluster.

The idea is to use another single-board computer (SBC) as a new node for our Kubernetes cluster.

The SBC we will use here is a Banana Pi.

Install Kubernetes

Become root:

su -

Download the borg-odroid script:

wget https://github.com/unixorn/blog-scripts/raw/master/arm-k8s/borg-odroid

Set the script as executable:

chmod u+x borg-odroid

Execute the script:

./borg-odroid

Troubleshooting

When installing Kubernetes on the Banana Pi, the script ended with the Docker service failing to start:

Process: 14640 ExecStart=/usr/bin/dockerd --storage-driver=devicemapper -H fd:// (code=exited, status=1/FAILURE)
Main PID: 14640 (code=exited, status=1/FAILURE)

To fix this issue, edit the Docker systemd unit file:

vim /lib/systemd/system/docker.service

In the ExecStart line, change -H fd:// to -H unix://.

Then reload the systemd configuration and restart Docker:

systemctl daemon-reload
systemctl restart docker
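The edit above can also be scripted with sed. The sketch below demonstrates the substitution on a temporary copy of the ExecStart line so it is safe to run anywhere; on the board itself you would point the command at /lib/systemd/system/docker.service and then run the systemctl commands above.

```shell
# Demonstrate the fd:// -> unix:// fix on a temporary copy of the unit
# file; on the real board, point UNIT at /lib/systemd/system/docker.service.
UNIT=$(mktemp)
printf 'ExecStart=/usr/bin/dockerd --storage-driver=devicemapper -H fd://\n' > "$UNIT"

# Replace the socket-activation listener with a plain unix socket.
sed -i 's|-H fd://|-H unix://|' "$UNIT"

cat "$UNIT"   # ExecStart=/usr/bin/dockerd --storage-driver=devicemapper -H unix://
rm -f "$UNIT"
```

If Docker still fails to start after the change, journalctl -u docker shows the full error from dockerd.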

Join the Kubernetes cluster

sudo kubeadm join 10.0.0.18:6443 --token dlpmkf.t6dg7wr2qb0znvvh     --discovery-token-ca-cert-hash sha256:a0fdbe98c6d5c1abd78494dfa41d72f40df441a2d9c747c639eedf15446f5c48

This command may fail if the token has expired; in that case, generate a new token from the master node:

kubeadm token create
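If the --discovery-token-ca-cert-hash value is also lost, it can be recomputed from the cluster CA certificate using the openssl pipeline from the kubeadm documentation (alternatively, kubeadm token create --print-join-command prints a complete, ready-to-use join command). The sketch below generates a throwaway self-signed certificate so the pipeline can be run anywhere; on the master node the input would be /etc/kubernetes/pki/ca.crt.

```shell
# Recompute the value for --discovery-token-ca-cert-hash.
# On the real master the input is /etc/kubernetes/pki/ca.crt; here we
# generate a throwaway self-signed cert so the pipeline runs anywhere.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" -days 1 -subj "/CN=demo" 2>/dev/null

# Extract the public key, convert it to DER, and hash it.
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex \
  | sed 's/^.* //')

echo "sha256:$hash"
rm -rf "$dir"
```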

Check that the node is part of the cluster

root@lnt-cluster-master-node:~# kubectl get pods --all-namespaces -o wide
NAMESPACE              NAME                                              READY   STATUS             RESTARTS   AGE     IP           NODE                      NOMINATED NODE   READINESS GATES
kube-system            coredns-f9fd979d6-lvxxw                           0/1     Running            1          2d21h   10.244.0.5   lnt-cluster-master-node              
kube-system            coredns-f9fd979d6-zrjtl                           0/1     Running            1          2d21h   10.244.0.4   lnt-cluster-master-node              
kube-system            etcd-lnt-cluster-master-node                      1/1     Running            3          2d19h   10.0.0.18    lnt-cluster-master-node              
kube-system            kube-apiserver-lnt-cluster-master-node            1/1     Running            3          2d19h   10.0.0.18    lnt-cluster-master-node              
kube-system            kube-controller-manager-lnt-cluster-master-node   1/1     Running            3          2d19h   10.0.0.18    lnt-cluster-master-node              
kube-system            kube-flannel-ds-22nmx                             1/1     Running            0          13m     10.0.0.4     lioserver                            
kube-system            kube-flannel-ds-4v8dw                             1/1     Running            2          2d21h   10.0.0.18    lnt-cluster-master-node              
kube-system            kube-flannel-ds-z2r6f                             1/1     Running            0          21m     10.0.0.5     bpi-iot-ros-ai                       
kube-system            kube-proxy-fz6ct                                  1/1     Running            0          13m     10.0.0.4     lioserver                            
kube-system            kube-proxy-kb8db                                  1/1     Running            1          2d21h   10.0.0.18    lnt-cluster-master-node              
kube-system            kube-proxy-s65d4                                  1/1     Running            0          21m     10.0.0.5     bpi-iot-ros-ai                       
kube-system            kube-scheduler-lnt-cluster-master-node            1/1     Running            3          2d19h   10.0.0.18    lnt-cluster-master-node              
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-7zjwg        1/1     Running            0          2d19h   10.244.0.6   lnt-cluster-master-node              
kubernetes-dashboard   kubernetes-dashboard-665f4c5ff-tznzx              0/1     CrashLoopBackOff   11         2d19h   10.244.0.7   lnt-cluster-master-node              
