Many times, as students, we would like to keep the environment of our own PCs while also leveraging the hardware capabilities that the University provides us. This is particularly useful when the university offers specific hardware facilities, such as GPUs. On the other hand, the system admins want to keep control of the resources, limit the amount each user can consume, and so on.
In this tutorial, we provide a proof-of-concept example of how Liqo can be used to offload desktop applications to remote clusters. More precisely, we offload an instance of Blender, a graphical application that runs much faster if the hosting computer includes a powerful GPU.
This tutorial shows how to install Liqo on two k3s clusters from scratch and then run a real application on the foreign cluster. In particular, we use a desktop application containerized with the KubernetesOnDesktop project, which aims at executing traditional desktop applications in a remote environment, while showing their graphical user interface locally.
Obviously, for the purpose of this demo it is better (but not required) that the remote machine features an NVIDIA graphic card, while the local machine can be a traditional laptop.
To be more specific, we will execute a Blender pod in the foreign cluster (which is represented in the local cluster as a virtual node named liqo-<...>) and a viewer pod in the local cluster.
Note: from now on, following the Liqo naming, when we talk about the “local cluster” we refer to the one that runs the cloudify script (see below), and when we talk about the “foreign cluster” we refer to the other one.
KubernetesOnDesktop (KoD) aims at developing a cloud infrastructure to run desktop applications in a remote Kubernetes cluster. In a nutshell, KoD splits a traditional desktop application into a backend (running the actual application) and a frontend (showing the graphical interface and interacting with the desktop user). This enables desktop applications to be executed on a remote machine, while keeping their GUI local.
Technically, KoD leverages a client/server VNC+PulseAudio+SSH infrastructure that starts the application pod on a remote k8s node and redirects its GUI (through VNC) and its audio (through PulseAudio+SSH) to a second pod scheduled on the node where the cloudify application is running.
The communication between the two components leverages several Kubernetes primitives, such as Secrets, Deployments, Services and Pods, which are detailed below.
For further information see the KubernetesOnDesktop GitHub page.
So far, KoD supports firefox, libreoffice and blender, the latter with the capability to exploit any NVIDIA GPU (through the NVIDIA CUDA driver) available in the remote node. In any case, thanks to the extensive use of templates, many more applications can easily be supported.
When executing the cloudify command, the application will create:
- a secret containing an SSH key that allows the two application components to communicate securely;
- a deployment containing the application (e.g., blender) and the VNC server, whose pod will be scheduled on the remote cluster;
- a service that allows the remote pod to be reached by other pods in the cluster by using the K8s DNS for services;
- a pod executing the VNC viewer, started on the local machine (i.e., on the same node where you run cloudify).
To install all the required software we need to follow these steps:
Assuming you already have two Linux machines (or virtual machines) up and running on the same LAN, you can install k3s by using the official script, as documented in the K3s Quick-Start Guide. So, you only need to run the following command:
curl -sfL https://get.k3s.io | sh -
A tiny customization of the above default install is required for Liqo to work.
When the script ends, you need to modify the /etc/systemd/system/k3s.service file by adding the --kube-apiserver-arg anonymous-auth=true service execution parameter, which can be done with the following command:
sudo sed -i "s#server#server --kube-apiserver-arg anonymous-auth=true#" /etc/systemd/system/k3s.service
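If you prefer to preview the effect of this substitution before touching the real unit file, you can rehearse it on a scratch copy. The fragment below is a simplified stand-in for the actual content of k3s.service, not its real text:

```shell
# Rehearse the substitution on a scratch file first
# (the unit file content below is a simplified stand-in, not the real k3s.service).
mkdir -p /tmp/k3s-demo
cat > /tmp/k3s-demo/k3s.service <<'EOF'
[Service]
ExecStart=/usr/local/bin/k3s \
    server \
EOF

# Same substitution as above, applied to the scratch copy.
sed -i "s#server#server --kube-apiserver-arg anonymous-auth=true#" /tmp/k3s-demo/k3s.service

# Verify that the flag has been injected right after the 'server' subcommand.
grep -- '--kube-apiserver-arg anonymous-auth=true' /tmp/k3s-demo/k3s.service
```

Note that sed replaces only the first occurrence of "server" on each line, which is exactly the k3s subcommand in the ExecStart line.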
Now you need to apply the changes and restart the k3s service by executing the following commands:
sudo systemctl daemon-reload
sudo systemctl restart k3s.service
Finally, to facilitate the interaction with k3s, we suggest modifying the default setup to allow the kubectl command to interact with the installed cluster without root privileges. This can be achieved by copying the k3s.yaml config file into a user folder, changing its owner, and exporting the KUBECONFIG environment variable as follows:
mkdir -p $HOME/.kube
sudo cp /etc/rancher/k3s/k3s.yaml $HOME/.kube/
sudo chown $USER:$USER $HOME/.kube/k3s.yaml
export KUBECONFIG="$HOME/.kube/k3s.yaml"
NOTE: you need to export the KUBECONFIG environment variable each time you open a new terminal by running, as above, export KUBECONFIG="$HOME/.kube/k3s.yaml". If you want to make the KUBECONFIG environment variable permanent, you can add it to your shell configuration file by executing the following command:
echo 'export KUBECONFIG="$HOME/.kube/k3s.yaml"' >> $HOME/.bashrc
Before proceeding with the Liqo installation, wait for all the pods to be in Running status; to check this, you can execute the command kubectl get pod --all-namespaces.
To install Liqo, you have to (1) manually export the required environment variables (the values below are the defaults used by k3s for the pod and service CIDRs) and (2) run the script provided by the project. This can be done with the following commands:
export POD_CIDR=10.42.0.0/16
export SERVICE_CIDR=10.43.0.0/16
curl https://raw.githubusercontent.com/liqotech/liqo/master/install.sh | bash
For detailed information see the Liqo Installation Guide; particularly, check that your Liqo instance works properly.
Before proceeding with the installation of KubernetesOnDesktop on one of the two clusters, wait for all the pods in the liqo namespace to be up and running in both clusters. You can check this by executing kubectl get pod -n liqo in both clusters.
Since both (virtual) machines are connected to the same local area network, each Liqo cluster will automatically join the foreign one thanks to the Liqo Discovery and Peering features.
Then, install KubernetesOnDesktop on the local cluster by running the following command:
sudo curl -L https://raw.githubusercontent.com/netgroup-polito/KubernetesOnDesktop/master/install.sh | sudo bash -s -- --remote
Now we are ready to run the KubernetesOnDesktop cloudify script, as described in the next section.
To run the demo, execute the cloudify command as follows:
cloudify -t 500 -r pod -s blender
The cloudify application will create the kod namespace (if not present) and will apply the Liqo label to it, so that this namespace can be extended to the Liqo foreign cluster (see Exploit foreign cluster resources).
Moreover, cloudify adds a label to the local node to allow k3s to schedule the pods according to the node affinities specified in the deployment files: kubernetes/deployment.yaml requires the application pod to be executed on a remote node, while kubernetes/vncviewer.yaml specifies that the viewer must be executed on the local node.
Since the foreign cluster is abstracted by Liqo as a local cluster node, the cloudify application does not need to directly interact with the foreign cluster to deploy the application components.
Additionally, the communication between pods requires nothing but traditional Kubernetes services, and the DNS resolution of service names works as usual. Even though there are two separate clusters and the pods are spread across them, a ClusterIP service is enough, since the foreign cluster is just a virtual node of the local cluster.
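To make this concrete, here is a minimal sketch of the kind of ClusterIP Service that makes the remote pod reachable. The name, label and port are hypothetical, for illustration only; this is not the exact manifest shipped by KubernetesOnDesktop:

```shell
# Write a sketch of such a Service to a scratch file and display it.
# A plain ClusterIP suffices because the foreign cluster is just a virtual node:
# the viewer pod reaches it through the usual in-cluster DNS name,
# e.g. <service>.<namespace>.svc.cluster.local.
cat > /tmp/kod-service-sketch.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: blender-vnc        # hypothetical name
  namespace: kod
spec:
  type: ClusterIP
  selector:
    app: blender           # hypothetical pod label
  ports:
  - name: vnc
    port: 5900             # standard VNC port
EOF
cat /tmp/kod-service-sketch.yaml
```

No NodePort, LoadBalancer, or cross-cluster ingress is needed: Liqo makes the remote pod a regular endpoint of a regular service.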
When the GUI appears on the machine running the cloudify script, you can inspect the created resources by running the following commands:
kubectl get deployment -n kod   # This will show you the application deployment (blender in this example)
kubectl get jobs -n kod         # This will show you the vncviewer job
kubectl get secrets -n kod      # This will show you the secret containing the ssh key
kubectl get pod -n kod -o wide  # This will show you the running pods and the nodes they are scheduled on
The above commands can be executed in both clusters, paying attention to the namespace name. In fact, the kod namespace is reflected in the foreign cluster with an added suffix, i.e., as kod-<...>. So, to retrieve the reflected namespace name, execute the following in the foreign cluster:
kubectl get namespaces | grep 'kod-'
Now you can execute all the kubectl commands listed above in the foreign cluster as well, by replacing the namespace with the one obtained with the previous command.
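As a convenience, the reflected namespace name can be captured in a shell variable and reused in those commands. The sketch below assumes exactly one reflected namespace, and the sample output (including the kod-abcde suffix) is illustrative; on a real foreign cluster you would pipe the output of kubectl get namespaces --no-headers instead:

```shell
# Capture the reflected namespace name and reuse it (assumes exactly one match).
# The sample output below is illustrative; on the foreign cluster you would use:
#   kubectl get namespaces --no-headers | grep '^kod-'
sample_output='default      Active   10m
kod-abcde    Active   2m
liqo         Active   8m'

KOD_NS=$(printf '%s\n' "$sample_output" | grep '^kod-' | awk '{print $1}')
echo "$KOD_NS"

# Then, for example: kubectl get pod -n "$KOD_NS" -o wide
```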
In this case, you should see that only the deployment and the application pod (blender in this example) are present in this cluster.
This is because the other resources (related to vncviewer) are present only in the local cluster, where the GUI is actually displayed.
To clean up the KubernetesOnDesktop installation you need to execute the following command:
sudo KUBECONFIG=$KUBECONFIG cloudify-uninstall
Note: during the uninstall process, you will be asked whether to completely remove the kod namespace. Just type “yes” and then press “Enter” to complete the process.
To tear down k3s and Liqo, just run the following commands on both nodes:
k3s-uninstall.sh
rm $HOME/.kube/k3s.yaml
Note: if you added the export KUBECONFIG instruction to your bashrc to make the KUBECONFIG environment variable permanent, as described in the Install k3s section, then also run the following command:
sed -i 's#export KUBECONFIG="$HOME/.kube/k3s.yaml"##' $HOME/.bashrc
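A detail worth noting: both the echo used in the Install k3s section and this sed use single quotes, so the bashrc line contains, and the sed pattern matches, a literal $HOME rather than its expansion. You can verify the removal safely on a scratch file:

```shell
# Demonstrate the removal on a scratch file instead of the real $HOME/.bashrc.
# Single quotes keep $HOME literal, matching the line exactly as it was appended.
rc=/tmp/demo-bashrc
echo 'export KUBECONFIG="$HOME/.kube/k3s.yaml"' > "$rc"

# Same sed as above, pointed at the scratch file.
sed -i 's#export KUBECONFIG="$HOME/.kube/k3s.yaml"##' "$rc"

# The export is gone (sed leaves an empty line behind); prints 0 matches.
grep -c 'KUBECONFIG' "$rc" || true
```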