Selective Offloading

Imagine you have the following requirement:

"you want deployments spanning a multi-cluster architecture, using only local resources and those offered by the provider-3 managed clusters".

Liqo can help you manage such a scenario by simply creating a Liqo namespace.

Create the Liqo namespace

As described in the dedicated namespace offloading section, a Liqo namespace is composed of:

  • a local Namespace.
  • a NamespaceOffloading resource containing the desired configuration.

  1. You can start by creating a local namespace called, for example, "liqo-test":

kubectl create namespace liqo-test

  2. Now, the Liqo part: you can create the associated NamespaceOffloading resource inside the namespace to enable the offloading:
cat << "EOF" | kubectl apply -f -
apiVersion: offloading.liqo.io/v1alpha1
kind: NamespaceOffloading
metadata:
  name: offloading
  namespace: liqo-test
spec:
  clusterSelector:
    nodeSelectorTerms:
    - matchExpressions:
      - key: liqo.io/provider
        operator: In
        values:
        - provider-3
EOF

You do not have to specify the PodOffloadingStrategy and NamespaceMappingStrategy fields at resource creation. The default values, enforced automatically by the Liqo controllers, match the previous requirements: "pods can be deployed both locally and remotely, and the remote namespace name is the default one".
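For reference, those defaults are equivalent to setting the two fields explicitly in the spec. The sketch below shows the default enum values defined by the NamespaceOffloading API:

```yaml
spec:
  # Default: pods can be scheduled both on local nodes and on remote (virtual) nodes.
  podOffloadingStrategy: LocalAndRemote
  # Default: the remote namespace name is "<local-namespace-name>-<home-cluster-ID>".
  namespaceMappingStrategy: DefaultName
```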

The clusterSelector allows you to choose the target clusters for your namespace offloading. In this case, the chosen filter selects the clusters whose provider is "provider-3". After the resource creation, your deployment topology is ready to be used.

Check the remote namespace presence

You can check whether the topology just built is compliant with the requirements specified in the NamespaceOffloading object. There should be a remote namespace only inside cluster-3:

kubectl get namespaces 

The namespace name should be "liqo-test-yourHomeClusterID", due to the NamespaceMappingStrategy default value:

NAME                                              STATUS   
liqo-test-b5de574d-a0a6-4a2a-8bc8-ac8c726862c5    Active   

You can export this name as an environment variable:

REMOTE_NAMESPACE=$(kubectl get namespace | grep "liqo-test" | cut -d " " -f1)
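To illustrate what that pipeline does, here is a self-contained sketch that runs the same grep/cut extraction over simulated `kubectl get namespace` output (the cluster ID is the example one from above; the other rows are hypothetical):

```shell
# Simulated `kubectl get namespace` output from cluster-3 (example data only).
output='NAME                                              STATUS   AGE
default                                           Active   10d
liqo-test-b5de574d-a0a6-4a2a-8bc8-ac8c726862c5    Active   1m'

# Same pipeline as above: keep the matching line, take the first column.
REMOTE_NAMESPACE=$(printf '%s\n' "$output" | grep "liqo-test" | cut -d " " -f1)
echo "$REMOTE_NAMESPACE"
```

The `cut -d " " -f1` step takes everything up to the first space, i.e. the NAME column.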

Cluster-2, instead, should not have any remote namespace with that name:

kubectl get namespaces $REMOTE_NAMESPACE
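The same lookup performed against cluster-2 finds nothing. A minimal sketch with simulated (hypothetical) namespace output shows the expected outcome:

```shell
# Simulated `kubectl get namespace` output from cluster-2: no liqo-test entry.
cluster2_output='NAME              STATUS   AGE
default           Active   10d
kube-system       Active   10d'

# grep -q exits non-zero when no line matches, so the else branch is taken.
if printf '%s\n' "$cluster2_output" | grep -q "liqo-test"; then
  result="remote namespace found"
else
  result="no remote namespace"
fi
echo "$result"
```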

Analyze the NamespaceOffloading status

The offloading process was successful, so the NamespaceOffloading resource should have the OffloadingPhase equal to Ready and a vector of RemoteNamespaceConditions for each remote cluster. In this case, there are two single-condition vectors:

  • Vector with the condition OffloadingRequired set to False, for the cluster without the remote namespaces (cluster-2).
  • Vector with the condition Ready set to True, for the cluster with the remote namespace (cluster-3).
kubectl get namespaceoffloadings offloading -n liqo-test -o yaml
...
status:
  offloadingPhase: Ready
  remoteNamespaceName: liqo-test-b5de574d-a0a6-4a2a-8bc8-ac8c726862c5
  remoteNamespacesConditions:
    b38f5c32-a877-4f82-8bde-2fd0c5c8f862:     <========== 1st vector
    - lastTransitionTime: "2021-07-31T10:00:53Z"
      message: You have not selected this cluster through ClusterSelector fields
      reason: ClusterNotSelected
      status: "False"
      type: OffloadingRequired                <========== OffloadingRequired condition
    b07938e3-d241-460c-a77b-e286c0f733c7:     <========== 2nd vector
    - lastTransitionTime: "2021-07-31T10:01:03Z"
      message: Namespace correctly offloaded on this cluster
      reason: RemoteNamespaceCreated
      status: "True"
      type: Ready                             <========== Ready condition

All the possible values of the RemoteNamespaceConditions are described in the section about the NamespaceOffloading status.
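If you want to script a check on the phase, a minimal sketch is to parse a saved copy of the status with awk, as below (on a live cluster you could instead query it directly with `kubectl get namespaceoffloadings offloading -n liqo-test -o jsonpath='{.status.offloadingPhase}'`):

```shell
# Example status snippet saved from the NamespaceOffloading resource.
status_yaml='status:
  offloadingPhase: Ready
  remoteNamespaceName: liqo-test-b5de574d-a0a6-4a2a-8bc8-ac8c726862c5'

# Extract the value following the "offloadingPhase:" key.
phase=$(printf '%s\n' "$status_yaml" | awk '/offloadingPhase:/ {print $2}')
echo "$phase"
```

A script can then branch on `"$phase" = "Ready"` before deploying workloads.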

Now that you have checked the topology, you can try to deploy a simple application inside it. During the deployment, you can test the PodOffloadingStrategy enforcement and how violations of the offloading constraints are managed.