Peer to a foreign cluster

Once Liqo is installed in your cluster, you can start establishing new peerings. Specifically, you can rely on three different methods to peer with other clusters:

  1. LAN Discovery: automatic discovery of neighboring clusters available in the same LAN. This is similar to the automatic discovery of Wi-Fi hotspots, and it is particularly suitable when your cluster is composed of a single node (e.g., in combination with K3s).
  2. DNS Discovery: automatic discovery of the clusters associated with a specific DNS domain (e.g., by querying specific DNS entries). This is similar to the discovery of voice-over-IP SIP servers, and it is mostly oriented to large organizations that wish to adopt Liqo in production.
  3. Manual Configuration: manual addition of specific clusters to the list of known ones. This method is particularly appropriate for testing purposes, as well as for trying Liqo outside your LAN, without requiring any DNS configuration.

LAN Discovery

Liqo is able to automatically discover any available clusters running on the same LAN, as well as to make your cluster discoverable by others.

The currently available and established peerings can be easily monitored through the dashboard. Just log in, and the home view will present you the list of “Available Peers” (i.e. all discovered foreign clusters). By default, Liqo tries to establish peerings with on-LAN remote clusters as soon as they are discovered. Hence, you may already observe a remote cluster presented inside the “Connected Peers” section of the dashboard.

Using kubectl, you can also manually obtain the list of discovered foreign clusters:
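Assuming the standard Liqo CRDs are installed, discovered clusters are exposed as ForeignCluster resources; a query along these lines should list them (the resource name is an assumption and may differ in your Liqo release):

```shell
# List the foreign clusters discovered by Liqo
# (the "foreignclusters" resource name is assumed from the Liqo discovery API).
kubectl get foreignclusters
```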

DNS Discovery

Enabling the DNS discovery procedure requires two orthogonal actions:

  1. Register your own cluster into your DNS server to make it discoverable by others (the required parameters are presented in the section below).
  2. Connect to a foreign cluster, specifying the remote domain.

Register the home cluster

To allow other clusters to peer with your cluster(s), you need to register a set of DNS records specifying the cluster(s) available in your domain, together with the parameters required to establish the connection.

First, you have to retrieve the Cluster ID (i.e. a unique identifier generated by Liqo for identification purposes during the peering process) of your cluster:

kubectl get configmap -n liqo cluster-id -o jsonpath='{.data.cluster-id}'

Now, it is possible to configure the records necessary to enable the DNS discovery process. In the following, we present a bind9-like configuration for a hypothetical domain, shown here as example.com. It exposes one Liqo-enabled cluster named liqo-cluster, with the API server accessible at apiserver.example.com. The record layout below follows the DNS-SD convention (`_liqo._tcp.<domain>`); remember to adapt the configuration according to your setup.

_liqo._tcp.example.com.                 PTR   liqo-cluster._liqo._tcp.example.com.
liqo-cluster._liqo._tcp.example.com.    SRV   0 0 6443 apiserver.example.com.
liqo-cluster._liqo._tcp.example.com.    TXT   "id=${YOUR_CLUSTER_ID}"
                                        TXT   "namespace=liqo"
apiserver.example.com.                  A     <API-server-IP>

Connect to a remote cluster

In order to leverage the DNS discovery to peer to a remote cluster, it is necessary to specify the remote domain. This operation can be easily performed through the graphical dashboard: click on the “+” icon located near Available Peers and then select “Add domain”. Here, you need to configure the following parameters:

  1. Domain: the domain where the cluster you want to peer with is located.
  2. Name: a mnemonic name to identify the domain.
  3. Join: to specify whether to automatically trigger the peering procedure.
Using kubectl, it is also possible to perform the same configuration.
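As a sketch, the same configuration could be expressed as a SearchDomain resource; note that the apiVersion, kind, and field names below are assumptions based on the Liqo discovery API and may differ across Liqo releases:

```yaml
# Hypothetical SearchDomain manifest: apiVersion, kind, and field names
# are assumptions and may not match your Liqo release exactly.
apiVersion: discovery.liqo.io/v1alpha1
kind: SearchDomain
metadata:
  name: remote-domain        # mnemonic name identifying the domain
spec:
  domain: example.com        # domain where the cluster to peer with is located
  autojoin: true             # automatically trigger the peering procedure
```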

Manual Configuration

If the cluster you want to peer with is not present in your LAN, and you do not want to configure the DNS discovery, it is possible to set up a manual peering via the graphical dashboard.

First, you have to collect some information about the remote cluster. In particular:

  1. The API Server address: the address of the remote cluster. It should be accessible without NAT from the Home cluster.

    To this end, you need to export the right KUBECONFIG variable pointing to the remote cluster and then extract the address of the Kubernetes API server:

    kubectl config view -o jsonpath='{.clusters[*].cluster.server}'

    Note: if you have multiple clusters listed in your kubeconfig, the last command will return the API server address for each of them.
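If your kubeconfig lists multiple clusters, you can narrow the output to a single one with a jsonpath filter; the cluster name remote-cluster below is a placeholder:

```shell
# Print only the API server address of the cluster named "remote-cluster"
# ("remote-cluster" is a placeholder: replace it with a name shown by
# `kubectl config get-clusters`).
kubectl config view -o jsonpath='{.clusters[?(@.name=="remote-cluster")].cluster.server}'
```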

  2. The Cluster ID: the UUID that identifies the remote cluster. It can be obtained through the dashboard instance on the remote cluster, from the Home Page near the title. The Cluster ID may also be retrieved via kubectl:

kubectl get configmap -n liqo cluster-id -o jsonpath='{.data.cluster-id}'

Now, through the dashboard of the home cluster, you can configure the new peering. Click on the “+” icon located near Available Peers and then select “Add Remote Peer”. Here, you need to configure the parameters obtained during the previous step. Additionally, you also have to set the following values:

  1. Name: Name of the cluster (i.e. a mnemonic name, which can also be identical to the clusterID)
  2. Discovery Type: Manual
  3. AllowedUntrustedCA: True (i.e., peer with the remote cluster even if its connection is not fully secure, since its certificate is not signed by a trusted certification authority)
  4. Join: True (i.e. automatically trigger the peering procedure)

Note: In order to establish a bi-directional peering, the entire procedure needs to be repeated in the opposite direction as well.

Using kubectl, it is also possible to perform the same configuration.
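For reference, a manual peering could be expressed as a ForeignCluster resource along the following lines; the apiVersion and field names are assumptions based on the Liqo discovery API and may differ in your release:

```yaml
# Hypothetical ForeignCluster manifest: apiVersion and field names are
# assumptions and may not match your Liqo release exactly.
apiVersion: discovery.liqo.io/v1alpha1
kind: ForeignCluster
metadata:
  name: remote-cluster                # mnemonic name (may match the cluster ID)
spec:
  clusterIdentity:
    clusterID: <REMOTE_CLUSTER_ID>    # value retrieved from the remote cluster
  apiUrl: https://<REMOTE_API_SERVER>:6443  # remote API server address
  discoveryType: Manual
  allowUntrustedCA: true              # accept a certificate not signed by a trusted CA
  join: true                          # automatically trigger the peering procedure
```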

Peering checking

Presence of the virtual-node

If the peering has been correctly performed, you should see a virtual node (named liqo-*) in addition to your physical nodes:

kubectl get nodes

NAME                                      STATUS   ROLES
master-node                               Ready    master
worker-node-1                             Ready    <none>
worker-node-2                             Ready    <none>
liqo-9a596a4b-591c-4ac6-8fd6-80258b4b3bf9 Ready    agent    <-- This is the virtual node

Verify that the resulting infrastructure works correctly

You are now ready to verify that the resulting infrastructure works correctly, as presented in the next step.