Kubernetes@home for Cloudy

In the Kasperry Pi project, we deployed a Kubernetes cluster consisting of 4 Raspberry Pis in a home environment.

These 4 nodes could represent Cloudy nodes, and once Kubernetes is set up, the services can be managed through the Kubernetes Master. Different Kubernetes dashboards are also available to help with the management of the services.

The main benefit of using a Kubernetes cluster for services shared with a community over Cloudy is the higher resilience of the offered services. The Kubernetes Master (which can also be run in Multi-Master mode to avoid a single point of failure) makes sure that all services are up: if any node of the home cluster goes down, Kubernetes arranges for the failing service to be offered by another node of the cluster.
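
For illustration, a minimal Deployment manifest (the name, namespace and image below are only examples, not taken from the Kasperry Pi setup) shows how Kubernetes keeps a desired number of replicas running and recreates them on another node when one fails:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # example service name
  namespace: cloudy      # assumed namespace
spec:
  replicas: 3            # Kubernetes recreates missing replicas on the remaining nodes
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx     # example image; on Raspberry Pi an ARM-compatible image is needed
        ports:
        - containerPort: 80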

For the home cluster, four Raspberry Pis were used, all of them model 4:
– Master node: Raspberry Pi 4 with 4 GB, booting from an external SSD
– Infra node: Raspberry Pi 4 with 4 GB, booting from an external SSD
– Two worker nodes: Raspberry Pi 4 (we used nodes with 4 GB, but they can have less)

The architecture is shown in the following picture:

SSD disk: From our experience, it is recommended to use an SSD on the master node and the infra node because they each have a database running. If a database or persistent storage is not needed, then the infra node can be omitted.

Infra node: It commonly runs infrastructure-related apps, e.g. a database that uses non-ephemeral storage, or apps that need to attach the container network to the host, such as a load balancer or an NFS server for sharing files with the other nodes.
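
As an illustrative sketch (the node label, deployment name, image and SSD path are assumptions, not taken from this setup), such infrastructure apps can be pinned to the infra node with a node label and a nodeSelector, so that the database always runs next to its persistent storage:

kubectl label node infra-node node-role=infra   # "infra-node" is an example node name

apiVersion: apps/v1
kind: Deployment
metadata:
  name: db                 # example database deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      nodeSelector:
        node-role: infra   # schedule only on the labelled infra node
      containers:
      - name: db
        image: postgres    # example image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: data
        hostPath:
          path: /mnt/ssd/db   # assumed path on the infra node's SSD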

Operating System: We installed Ubuntu on the Raspberry Pis and did not use the default Raspbian OS or the new Raspberry Pi OS. One of the reasons is that Ubuntu is better optimized for running server applications. In addition, the Ubuntu community is more active around Kubernetes and more information is available for it. So we decided to give Ubuntu a try for this deployment.

The Raspberry Pi cluster can be seen in the following picture, along with some additional devices we used for other purposes.

Performance: The system has been running stably at around 13% of CPU capacity most of the time. The cluster has been tested by deploying multiple pods, which contain the Docker containers. The following figure, obtained from the Datadog dashboard, shows the number of pods running in our home cluster over several days.

Special thanks for this work to the Kasperry Pi project by Albert Sabate.

The code is available at

https://github.com/AlbertSabate/kasperry

For building the cluster, please refer to the tutorials at https://kasperry.io/

Development of a distributed and decentralised monitoring system for Guifi.net

Monitoring the infrastructure of Guifi.net is essential for the operation of the network. A centralized monitoring solution, however, is sensitive to failures and does not follow the spirit of a collaborative effort.

Within the LightKone project, we have developed a decentralized monitoring solution for Guifi.net in which the monitoring server software hosted on Cloudy nodes can join the overall network monitoring task and, in a self-organizing way, take over part of that work.

The conceptual system is shown in the next figure. Each device of the network infrastructure of Guifi.net (in the center of the picture) is monitored by several servers.

These monitoring servers coordinate with each other through a distributed database on which server takes care of monitoring which set of routers (network devices). This coordination runs continuously, and the current assignment is always updated in the database, so that if any server fails, another server jumps in and all network devices continue to be monitored. The next figure shows the components of the monitoring servers. A key component named “assign” organizes the monitoring assignment of each server, while the monitoring itself is done by the “ping” and “snmp” components.

The monitoring servers are implemented in the Go language and the code is available in our GitLab repository. We used the AntidoteDB database as the distributed storage service.
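
The following Go fragment is only a rough sketch of the idea behind the “assign” component; the Store interface and all names are hypothetical and stand in for the actual AntidoteDB-backed implementation in the repository:

package assign

import (
    "log"
    "sort"
    "time"
)

// Store stands in for the shared AntidoteDB-backed storage (hypothetical interface).
type Store interface {
    Heartbeat(server string) error                         // refresh this server's liveness entry
    AliveServers() ([]string, error)                       // servers with a recent heartbeat
    Routers() ([]string, error)                            // all network devices to monitor
    SaveAssignment(server string, routers []string) error  // publish this server's share
}

// Run periodically recomputes which routers this server is responsible for and
// writes the assignment back to the store, so that the share of a failed server
// is taken over by the others in the next round.
func Run(s Store, self string, monitor func(router string)) {
    for {
        if err := s.Heartbeat(self); err != nil {
            log.Println("heartbeat:", err)
        }
        servers, errS := s.AliveServers()
        routers, errR := s.Routers()
        if errS == nil && errR == nil && len(servers) > 0 {
            sort.Strings(servers) // same order on every server, so all servers compute the same split
            var mine []string
            for i, r := range routers {
                if servers[i%len(servers)] == self {
                    mine = append(mine, r)
                }
            }
            if err := s.SaveAssignment(self, mine); err != nil {
                log.Println("save assignment:", err)
            }
            for _, r := range mine {
                monitor(r) // handed over to the “ping” and “snmp” components
            }
        }
        time.Sleep(30 * time.Second)
    }
}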

You can learn more about the monitoring system in our technical paper published at IEEE SOCA 2019.

Experimenting with Kubernetes in Cloudy

We aim to run Kubernetes on a set of Raspberry Pis and to expose the Kubernetes operations in the Cloudy Web GUI.

In order to communicate Kubernetes operations to Cloudy, we implemented the serf-publisher component. Within its Service Controller function, serf-publisher communicates both with the Kubernetes API and, through Cloudy’s avahi-ps, with Serf.
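
As a rough sketch of this idea (this is not the actual serf-publisher code; the “cloudy” namespace, the polling approach and the avahi-ps arguments are assumptions), such a controller can list the NodePort services through the Kubernetes API and announce each of them via avahi-ps:

package main

import (
    "context"
    "fmt"
    "log"
    "os/exec"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func main() {
    config, err := rest.InClusterConfig() // assumes it runs as a pod inside the cluster
    if err != nil {
        log.Fatal(err)
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        log.Fatal(err)
    }
    for {
        svcs, err := client.CoreV1().Services("cloudy").List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Println(err)
        } else {
            for _, svc := range svcs.Items {
                for _, p := range svc.Spec.Ports {
                    if p.NodePort == 0 {
                        continue // only NodePort services are reachable from outside
                    }
                    // hand the service over to Cloudy's avahi-ps so that Serf gossips it;
                    // the exact avahi-ps arguments are an assumption in this sketch
                    out, err := exec.Command("avahi-ps", "publish", svc.Name, "kubernetes", fmt.Sprint(p.NodePort)).CombinedOutput()
                    if err != nil {
                        log.Println(string(out), err)
                    }
                }
            }
        }
        time.Sleep(60 * time.Second)
    }
}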

We create a new service with:

kubectl expose deployment/nginx -n cloudy --type=NodePort

Through serf-publisher, the new service gets published in Cloudy’s Web GUI:

The code of serf-publisher can be found in this GitHub repository.

Detailed documentation (in Spanish) about this project can be found here.

Cloudy with IPFS-Cluster over WAN

We interconnected three sites with Cloudy nodes over the Internet.

This setting gives us a heterogeneous network, with some Cloudy nodes at the same location and others connected remotely over the Internet.

IPFS-Cluster was installed on the Cloudy nodes in order to explore the possibility of having a private IPFS network in this environment.

We observed some timeout issues with IPFS-Cluster and needed to increase some network-latency-related parameter values for the Raft consensus protocol in the IPFS-Cluster configuration file.
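
As an illustration, the relevant timeouts are in the “raft” section of IPFS-Cluster’s service.json. The values below are only examples of the kind of increase we applied; the key names and defaults should be checked against the installed IPFS-Cluster version:

"consensus": {
  "raft": {
    "wait_for_leader_timeout": "2m",
    "network_timeout": "30s",
    "heartbeat_timeout": "5s",
    "election_timeout": "5s",
    "commit_timeout": "500ms"
  }
}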

Finally, we managed to successfully connect 7 instances over the heterogeneous network.

Reference (in Spanish):
Leopoldo Álvarez Huerta: Cloud comunitaria y servicios distribuidos sobre IPFS

High-availability services in microcloud@home with Cloudy on several Raspberry Pis

The goal is to have high availability of the services we deploy at home in case any of the Raspberry Pis gets disconnected.
The hardware we used was three Raspberry Pis.

The solution to achieve high availability consists of a set of hardware, software components and open technologies, i.e. Raspberry Pi, Cloudy, Docker, Swarm, Node-RED, Mosquitto, keepalived and Syncthing, which we integrated to make up the local microcloud, an infrastructure that we can run at home.
We use keepalived to provide a floating IP for accessing the cluster services in the event of a node failure.
The configuration of the containers is replicated with the Syncthing software.
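
For illustration, a minimal keepalived.conf on the primary node could look as follows (the interface name, virtual_router_id, priority and floating IP are examples; the other Raspberry Pis use state BACKUP and a lower priority):

vrrp_instance CLOUDY_VIP {
    state MASTER             # BACKUP on the other nodes
    interface eth0           # example network interface
    virtual_router_id 51     # must match on all nodes of the cluster
    priority 150             # the highest available priority holds the floating IP
    advert_int 1
    virtual_ipaddress {
        192.168.1.200/24     # example floating IP used to reach the cluster services
    }
}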

The resulting microcloud offers high availability for any Docker service that we run on the nodes and has the following characteristics:

– It is scalable and flexible, allowing us to add more nodes, internal or external, to adjust the resources of the microcloud to our needs.
– Thanks to Docker Swarm, it allows executing complex distributed systems of pre-configured services from a single configuration file (see the sketch after this list).
– It gives us full control over our services.
– It gives us full control over our data.
– The cost is less than 50 Euros per computing node.
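
As a sketch of such a single configuration file (the stack file below is only an example, with an MQTT broker as the service; names and values are not taken from the project), a Docker Swarm stack declares replicated services that the microcloud keeps running even if a node fails:

version: "3.7"
services:
  broker:
    image: eclipse-mosquitto   # example MQTT broker service
    ports:
      - "1883:1883"
    deploy:
      replicas: 2              # Swarm reschedules replicas if a node goes down
      restart_policy:
        condition: on-failure

It would be deployed with, for example, docker stack deploy -c stack.yml microcloud (file and stack names are examples).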

Reference (in Spanish):
José Elías Rael Gutierrez: Diseño e implementación de una microCloud abierta para IoT