New Serf bootstrap server

The IP address of the Serf bootstrap server in Guifi has changed.

The new IP address is

You can update the bootstrap server address in your Cloudy instance through the Serf configuration menu.

Install Cloudy on several hosts with Ansible

To install Cloudy on a single host, you can run the cloudynitzar script, which installs all the required packages on your Linux machine.

To install on several hosts, an Ansible playbook is a convenient option.
You will need Ansible installed on the device from which you run the deployment.

Download the cloudynitzar repository, edit the file "hosts" and add the IPs of the machines where you want to install Cloudy.
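As an illustration, a hosts inventory could look like the following; the group name cloudy matches the hosts=cloudy variable passed later to ansible-playbook, and the IP addresses are placeholders:

```ini
# hosts -- Ansible inventory (placeholder IPs)
[cloudy]
192.168.1.10
192.168.1.11
192.168.1.12
```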


You may need to add the public keys of the target devices to the known_hosts file on the device where you run Ansible, e.g.:
ssh-keyscan <target-ip> >> ~/.ssh/known_hosts

Run the playbook with: ansible-playbook -i hosts playbook.yml --ask-pass --extra-vars "hosts=cloudy user=your_user_name"
(You may need to add the parameter -K / --ask-become-pass if Ansible fails with a password error.)
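The extra-vars above are consumed by the playbook. The following is an illustrative sketch only, showing how such variables are typically wired up; the real tasks and paths live in the playbook.yml shipped with the cloudynitzar repository, and the installer path below is hypothetical:

```yaml
# Sketch of a playbook consuming the extra-vars (not the repository's actual playbook)
- hosts: "{{ hosts }}"        # filled by --extra-vars "hosts=cloudy ..."
  remote_user: "{{ user }}"   # filled by --extra-vars "... user=your_user_name"
  become: true                # installation needs root; pair with -K / --ask-become-pass
  tasks:
    - name: Run the Cloudy installer script (hypothetical path)
      command: /root/cloudynitzar.sh
```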

Kubernetes@home for Cloudy

In the Kasperry PI project, we deployed a Kubernetes cluster consisting of four Raspberry Pi boards in a home environment.

These four nodes can act as Cloudy nodes, and once Kubernetes is set up, services can be managed through the Kubernetes master. Several Kubernetes dashboards are also available to help with the management of the services.

The benefit of using a Kubernetes cluster for services shared with a community over Cloudy is higher resilience of the offered services. The Kubernetes master (which can also run in multi-master mode to avoid being a single point of failure) makes sure that all services are up: if any node in the home cluster goes down, Kubernetes arranges for the failing service to be offered by another node of the cluster.
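As a sketch of this resilience, a shared service can be declared as a Kubernetes Deployment with more than one replica; if the node running a Pod goes down, the control plane reschedules it on another node. The names and image below are illustrative, not part of the Cloudy setup itself:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudy-service          # illustrative name
spec:
  replicas: 2                   # Kubernetes keeps two Pods running at all times
  selector:
    matchLabels:
      app: cloudy-service
  template:
    metadata:
      labels:
        app: cloudy-service
    spec:
      containers:
        - name: web
          image: nginx          # stands in for any service shared over Cloudy
```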

For the home cluster, four Raspberry Pi boards were used, all of them model 4:
– Master node: Raspberry Pi 4 with 4 GB of RAM, booting from an external SSD
– Infra node: Raspberry Pi 4 with 4 GB of RAM, booting from an external SSD
– Two worker nodes: Raspberry Pi 4 (we used boards with 4 GB, but they can have less)

The architecture is shown in the following picture:

SSD disk: From our experience, an SSD disk is recommended on the master and Infra nodes because they run a database. If a database or persistent storage is not needed, the Infra node can be omitted.

Infra node: It commonly runs infrastructure-related applications, e.g. a database that uses non-ephemeral storage, or applications that need to attach the container network to the host, such as a load balancer or an NFS server for sharing files with other nodes.
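For example, an NFS share exported by the Infra node can be made available to the other nodes as a Kubernetes PersistentVolume. This is a sketch under assumed values: the server address, export path, name and size are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: infra-nfs               # illustrative name
spec:
  capacity:
    storage: 10Gi               # placeholder size
  accessModes:
    - ReadWriteMany             # several nodes can mount the share at once
  nfs:
    server: 192.168.1.20        # placeholder: the Infra node's address
    path: /srv/nfs              # placeholder: the exported directory
```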

Operating system: We installed Ubuntu on the Raspberry Pi boards instead of the default Raspbian OS or the newer Raspberry Pi OS. One reason is that Ubuntu is better optimized for running server applications; in addition, the Ubuntu community is more active around Kubernetes and more information is available. So we decided to give Ubuntu a try for this deployment.

The cluster of the Raspberry Pi can be seen in the following picture, along with some additional devices we used for other purposes.

Performance: The system has been running stably at around 13% of CPU capacity most of the time. The cluster has been tested by deploying multiple Pods, each containing Docker containers. The following figure, obtained from the Datadog dashboard, shows the number of Pods running in our home cluster over several days.

Special thanks for this work to the Kasperry PI project by Albert Sabate.

The code is available at

For building the cluster, please refer to the tutorials at

Development of a distributed and decentralised monitoring system for Guifi.net

Monitoring the infrastructure of Guifi.net is essential for the operation of the network. A centralized monitoring solution, however, is vulnerable to failures and does not follow the spirit of a collaborative effort.

Within the Lightkone project, we have developed a decentralized monitoring solution for Guifi.net, in which monitoring server software hosted on Cloudy nodes can join the overall network monitoring task and, in a self-organizing way, take over part of that work.

The conceptual system is shown in the next figure. Each device of the Guifi.net network infrastructure (in the center of the picture) is monitored by several servers.

These monitoring servers coordinate with each other through a distributed database to decide which server takes care of monitoring each set of routers (network devices). This coordination runs continuously, and the current assignment is always written back to the database, ensuring that if any server fails, another server jumps in so that all network devices continue to be monitored. The next figure shows the components of the monitoring servers. A key component, named "assign", organizes the monitoring assignment of each server, while the monitoring task itself is done by the "ping" and "snmp" components.
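The assignment idea can be sketched with rendezvous (highest-random-weight) hashing: each router goes to the alive server with the highest per-pair score, so when a server fails, only the routers it was monitoring are handed over to the remaining servers. This is an illustrative technique, not necessarily the one the actual "assign" component implements:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// score gives a deterministic per-(router, server) weight.
func score(router, server string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(router + "|" + server))
	return h.Sum32()
}

// assign maps each router to the alive server with the highest score.
// Because scores are per pair, removing a failed server changes only
// the assignments of the routers that server was monitoring.
func assign(routers, servers []string) map[string]string {
	out := make(map[string]string)
	for _, r := range routers {
		best := servers[0]
		for _, s := range servers[1:] {
			if score(r, s) > score(r, best) {
				best = s
			}
		}
		out[r] = best
	}
	return out
}

func main() {
	routers := []string{"router-a", "router-b", "router-c", "router-d"}

	before := assign(routers, []string{"srv1", "srv2", "srv3"})
	after := assign(routers, []string{"srv1", "srv3"}) // srv2 has failed

	// Verify that only routers previously on srv2 were handed over.
	onlyFailedMoved := true
	for _, r := range routers {
		if before[r] != after[r] && before[r] != "srv2" {
			onlyFailedMoved = false
		}
	}
	fmt.Println("only srv2's routers were reassigned:", onlyFailedMoved)
	// Prints: only srv2's routers were reassigned: true
}
```

In the real system, the current assignment would additionally be written to the distributed database (AntidoteDB) so that all servers share the same view of who monitors what.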

The monitoring servers were implemented in the Go language, and the code is available in our Gitlab repository. We used the AntidoteDB database as the distributed storage service.

You can learn more about the monitoring system in our technical paper published at IEEE SOCA 2019.

Experimenting with Kubernetes in Cloudy

We aim to run Kubernetes on a set of Raspberry Pi boards and to expose Kubernetes operations in the Cloudy web GUI.

To communicate Kubernetes operations to Cloudy, we implemented the serf-publisher component. Within the Service Controller function, serf-publisher talks both to the Kubernetes API and, through Cloudy's avahi-ps, to Serf.

We create a new service with

kubectl expose deployment/nginx -n cloudy --type=NodePort
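The same service can also be declared as a manifest. The following is a sketch of the NodePort Service such a command generates, assuming a Deployment named nginx whose Pods carry the label app=nginx in a cloudy namespace (the label and port are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: cloudy
spec:
  type: NodePort                # exposes the service on a port of every node
  selector:
    app: nginx                  # assumes the Deployment's Pods carry this label
  ports:
    - port: 80
      targetPort: 80
```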

Through serf-publisher, the operation is then published in Cloudy's web GUI:

The code of serf-publisher can be found in this GitHub repository.

Detailed documentation (in Spanish) about this project can be found here.