Using Liqo to borrow resources

Scenario

As students, we often want to keep working in the familiar environment of our own PCs while also leveraging the hardware capabilities that the University provides. This is particularly useful when the university offers specific hardware facilities, such as GPUs. On the other hand, system admins want to keep control of their resources, limit the amount each user can consume, and so on.

In this tutorial, we provide a proof-of-concept example of how Liqo can be used to offload desktop applications onto remote clusters. More precisely, we offload an instance of Blender, a graphical application that runs much faster if the hosting computer includes a powerful GPU.

Introduction

This tutorial shows how to install Liqo on two k3s clusters from scratch and then run a real application on the foreign cluster. In particular, we use a desktop application containerized with the KubernetesOnDesktop project, which aims at executing traditional desktop applications in a remote environment, while showing their graphical user interface locally.

Obviously, for the purpose of this demo it is better (but not required) that the remote machine features an NVIDIA graphics card, while the local machine can be a traditional laptop. More specifically, we will execute a Blender pod in the foreign cluster (which is represented in the local cluster as a virtual node named liqo-<...>) and a viewer pod in the local cluster.

Note: from now on, following the Liqo naming, when we talk about the “local cluster” we refer to the one that runs the cloudify script (see below), and when we talk about the “foreign cluster” we refer to the other one.

The KubernetesOnDesktop project

KubernetesOnDesktop (KoD) aims at developing a cloud infrastructure to run desktop applications in a remote Kubernetes cluster. In a nutshell, KoD splits a traditional desktop application into a backend, running the actual application, and a frontend, showing the graphical interface and interacting with the (desktop) user. This enables desktop applications to be executed on a remote machine, while keeping their GUI local.

Technically, KoD leverages a client/server VNC+PulseAudio+SSH infrastructure: the application pod is started on a remote k8s node, and its GUI (through VNC) and audio (through PulseAudio+SSH) are redirected to a second pod scheduled on the node where the cloudify application is running. The communication between the two components leverages several Kubernetes primitives, such as deployments, jobs, services and secrets. For further information, see the KubernetesOnDesktop GitHub page.

So far, KoD supports firefox, libreoffice and blender, the latter with the capability to exploit any NVIDIA GPU (through the NVIDIA CUDA driver) available in the remote node. Moreover, thanks to the extensive use of templates, many more applications can easily be supported.

When executing the cloudify command, the application will create:

  • a secret containing an ssh key that allows the two application components to communicate securely;
  • a deployment containing the application (e.g. blender) and the VNC server, whose pod will be scheduled on the remote cluster;
  • a ClusterIP service that makes the remote pod reachable from the other pods in the cluster through the standard K8s DNS resolution for services (a hedged sketch of such a service follows this list);
  • a pod executing the VNC viewer, started in the local machine (i.e., on the same node where you run cloudify).
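
As an illustration, such a ClusterIP service could look roughly like the following minimal sketch. The names, labels and ports here are assumptions for illustration, not the actual manifest generated by cloudify:

# Hypothetical sketch of the ClusterIP service in front of the application pod.
# With a service like this, the viewer pod can reach the backend at
# blender.kod.svc.cluster.local (names are assumptions).
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: blender            # hypothetical name
  namespace: kod
spec:
  type: ClusterIP
  selector:
    app: blender           # must match the labels of the application pod
  ports:
    - name: vnc
      port: 5900           # default VNC port (assumption)
    - name: ssh
      port: 22             # PulseAudio-over-SSH tunnel (assumption)
EOF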

Installation of the required software

To install all the required software we need to follow these steps:

  1. Install k3s in both clusters;
  2. Install Liqo in both clusters;
  3. Install KubernetesOnDesktop in the local cluster. Note that the foreign cluster simply runs a vanilla Liqo, without any other software.

Install k3s

Assuming you already have two Linux machines (physical or virtual) up and running in the same LAN, you can install k3s using the official script, as documented in the K3s Quick-Start Guide. You only need to run the following command:

curl -sfL https://get.k3s.io | sh -
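
Once the script completes, you can quickly check that the installation succeeded, for instance with:

k3s --version               # prints the installed k3s (and embedded Kubernetes) version
sudo k3s kubectl get nodes  # the local node should eventually reach the Ready state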

A tiny customization of the above default install is required for Liqo to work. When the script ends, you need to modify the /etc/systemd/system/k3s.service file by adding the --kube-apiserver-arg anonymous-auth=true execution parameter, as done by the following command:

sudo sed -i "s#server#server --kube-apiserver-arg anonymous-auth=true#" /etc/systemd/system/k3s.service

After this operation, the ExecStart directive of your k3s.service file should look similar to the excerpt shown below.
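
# Excerpt of /etc/systemd/system/k3s.service after the change (other lines omitted)
ExecStart=/usr/local/bin/k3s \
    server --kube-apiserver-arg anonymous-auth=true
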
Note: if you want to exploit any NVIDIA GPU available in the foreign cluster, additional configuration is required on that machine (typically, installing the NVIDIA drivers and the NVIDIA container runtime, and configuring the k3s containerd to use it).

Now you need to apply the changes by executing the following commands to restart the k3s service:

sudo systemctl daemon-reload
sudo systemctl restart k3s.service

Finally, to facilitate the interaction with k3s, we suggest modifying the default setup so that the kubectl command can interact with the installed cluster without sudo. This can be achieved by copying the k3s.yaml config file into a user folder, changing its owner, and exporting the KUBECONFIG environment variable as follows:

mkdir -p $HOME/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/
sudo chown $USER:$USER $HOME/.kube/k3s.yaml
export KUBECONFIG="$HOME/.kube/k3s.yaml"
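
At this point, kubectl should be able to talk to the cluster without sudo; a quick sanity check:

kubectl get nodes -o wide   # should list your node, without permission errors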

NOTE: you need to export the KUBECONFIG environment variable each time you open a new terminal, by running export KUBECONFIG="$HOME/.kube/k3s.yaml" as above. If you want to make the KUBECONFIG environment variable permanent, you can add it to your shell configuration file by executing the following command:

echo 'export KUBECONFIG="$HOME/.kube/k3s.yaml"' >> $HOME/.bashrc

Before proceeding with the Liqo installation, wait for all pods to be in Running status; for this, you can execute the command kubectl get pod --all-namespaces.
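
Alternatively, if your kubectl version supports field selectors on the wait subcommand, you can block until the pods are ready instead of polling manually; a possible one-liner (pods of completed jobs are excluded, since they never become Ready):

kubectl wait --for=condition=Ready pod --all --all-namespaces \
  --field-selector=status.phase!=Succeeded --timeout=300s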

Install Liqo

To install Liqo, you have to (1) manually export the required environment variables (the default pod and service CIDRs of k3s) and (2) run the installation script provided by the project. This can be done with the following commands:

export POD_CIDR=10.42.0.0/16
export SERVICE_CIDR=10.43.0.0/16
curl https://raw.githubusercontent.com/liqotech/liqo/master/install.sh | bash

For detailed information, see the Liqo Installation Guide; in particular, check that your Liqo instance works properly.

Before proceeding with the installation of KubernetesOnDesktop in one of the two clusters, wait for all the pods in the liqo namespace to be up and running in both clusters. You can check this by executing kubectl get pod -n liqo in both clusters.

Since both (virtual) machines are connected to the same local area network, each Liqo cluster will automatically join the foreign one, thanks to the Liqo discovery and peering features.
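
You can verify from the local cluster that the peering succeeded: the foreign cluster should show up as an additional virtual node. Depending on your Liqo version, something like the following (the second command assumes the ForeignCluster CRD installed by Liqo):

kubectl get nodes             # a virtual node named liqo-<...> should appear
kubectl get foreignclusters   # the Liqo resource describing the discovered peer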

Install KubernetesOnDesktop

Now that both k3s and Liqo are up and running, we can install KubernetesOnDesktop by executing the following command:

curl -L https://raw.githubusercontent.com/netgroup-polito/KubernetesOnDesktop/master/install.sh | sudo bash -s -- --remote
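
The installer makes the cloudify and cloudify-uninstall scripts available in your PATH; you can quickly verify this with:

command -v cloudify
command -v cloudify-uninstall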

Now we are ready to run the KubernetesOnDesktop cloudify script as described in the next section.

Run the KubernetesOnDesktop demo

To run the demo, execute the cloudify script with the following parameters:

cloudify -t 500 -r pod -s blender

See the KubernetesOnDesktop GitHub page for a detailed explanation of the different parameters.

The cloudify application will create the kod namespace (if not present) and will apply the liqo.io/enabled=true label to it, so that the namespace can be extended to the Liqo foreign cluster (see Exploit foreign cluster resources).

In addition, cloudify adds a label to the local node, so that k3s can schedule the pods according to the node affinities specified in the manifests: kubernetes/deployment.yaml requires the application pod to be executed on a remote node, while kubernetes/vncviewer.yaml requires the viewer to be executed on the local node. A sketch of how such an affinity can be expressed follows.
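
The following is a minimal, hypothetical sketch (not the actual KoD manifest) of how a pod can be pinned to the Liqo virtual node through node affinity, keying on the liqo.io/type=virtual-node label that Liqo sets on virtual nodes; depending on the Liqo version, a toleration for the virtual node's taint may also be required:

# Minimal sketch, not the real KoD manifest: require scheduling on a node
# labeled liqo.io/type=virtual-node, i.e., the Liqo virtual node.
cat <<'EOF' | kubectl apply -n kod -f -
apiVersion: v1
kind: Pod
metadata:
  name: affinity-demo          # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: liqo.io/type
                operator: In
                values: ["virtual-node"]
  containers:
    - name: demo
      image: busybox
      command: ["sleep", "3600"]
EOF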

Since the foreign cluster is abstracted by Liqo as a node of the local cluster, the cloudify application never needs to interact directly with the foreign cluster to deploy the application components. Moreover, the communication between the pods requires nothing but traditional Kubernetes services, and the DNS resolution of service names works as usual: even though the pods are spread across two separate clusters, a plain ClusterIP service is enough, the foreign cluster being just a virtual node of the local one.

Check the created resources and where the pods are running

When the GUI appears on the machine running the cloudify script, you can inspect the created resources by running the following commands:

kubectl get deployment -n kod    # This will show you the application deployment (blender in this example)
kubectl get jobs -n kod          # This will show you the vncviewer job
kubectl get secrets -n kod       # This will show you the secret containing the ssh key
kubectl get pod -n kod -o wide   # This will show you the running pods and which node they are scheduled on

The above commands can be executed in both clusters, paying attention to the namespace. In fact, the kod namespace is reflected into the foreign cluster with an added suffix, i.e., kod-<...>. To retrieve that namespace, execute the following command in the foreign cluster:

kubectl get namespaces | grep 'kod-'
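
You can also capture the reflected namespace in a shell variable and reuse it in the commands above, e.g.:

REMOTE_NS=$(kubectl get namespaces -o name | grep 'kod-' | cut -d/ -f2)
kubectl get pod -n "$REMOTE_NS" -o wide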

Now you can execute all the kubectl commands listed above in the foreign cluster as well, replacing the namespace with the one obtained through the previous command. In this case, you should see only the secret, the deployment and the application pod (blender, in this example): the resources related to vncviewer exist only in the local cluster, where the GUI is actually displayed.

Cleanup the KubernetesOnDesktop installation

To clean up the KubernetesOnDesktop installation you need to execute the following command:

sudo KUBECONFIG=$KUBECONFIG cloudify-uninstall

Note: During the uninstall process, you will be asked whether to completely remove the kod namespace. Just type “yes” and then press “Enter” to complete the process.

Teardown k3s and Liqo

To tear down k3s and Liqo, just run the following commands on both nodes:

k3s-uninstall.sh
rm $HOME/.kube/k3s.yaml

Note: if you added the export KUBECONFIG instruction to your .bashrc to make the KUBECONFIG environment variable permanent, as described in the Install k3s section, then also run the following command:

sed -i 's#export KUBECONFIG="$HOME/.kube/k3s.yaml"##' $HOME/.bashrc