Customizing the Kubernetes scheduling logic

The default Kubernetes scheduler is not Liqo-aware; hence, you may need some additional configuration to force the scheduler to start a pod on a given cluster.

Default scheduling behavior

By default, the Kubernetes scheduler selects the node with the highest amount of free resources. Given that the virtual node summarizes all the resources shared by a given foreign cluster (no matter how many remote physical nodes are involved), it is very likely that the virtual node will be perceived as bigger than any physical node available locally. Hence, very likely, new pods will be scheduled on that node.

However, in general, you cannot know in advance which node (either local or in the foreign cluster) will be selected: it simply depends on the amount of available resources.

To schedule a pod on a given cluster, you have to follow one of the options below.

Scheduling a pod in a remote cluster using the ‘liqo.io/enabled’ label

First, you need to configure a Kubernetes namespace that spans across foreign clusters, which can be achieved by setting the liqo.io/enabled label, as follows (the example refers to namespace liqo-demo):

# Create a new namespace named 'liqo-demo'
kubectl create namespace liqo-demo
# Associate the 'liqo.io/enabled' label to the above namespace
kubectl label namespace liqo-demo liqo.io/enabled=true
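
Equivalently, the namespace and its label can be created declaratively; a minimal sketch, mirroring the commands above:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: liqo-demo
  labels:
    # Enable Liqo for this namespace (same effect as 'kubectl label')
    liqo.io/enabled: "true"
```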

Second, you need to start a pod whose specification includes the nodeSelector field set to virtual-node, as follows:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: liqo-demo
spec:
  containers:
  - name: nginx
    image: nginxdemos/hello
    imagePullPolicy: IfNotPresent
    ports:
      - containerPort: 80
        name: web
  nodeSelector:
    type: virtual-node
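
The same nodeSelector can also be set in the pod template of a Deployment, so that all its replicas target the virtual node; a minimal sketch (names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: liqo-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: liqo-demo
  template:
    metadata:
      labels:
        app: liqo-demo
    spec:
      containers:
      - name: nginx
        image: nginxdemos/hello
      # Force every replica onto the Liqo virtual node
      nodeSelector:
        type: virtual-node
```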

Scheduling a pod in a remote cluster using the ‘taint’ mechanism

Liqo taints each virtual node so that pods are scheduled there only if they explicitly tolerate the taint. Hence, to allow a pod to run in the remote cluster:

  • add a toleration for the virtual-node taint to the pod specification:
    tolerations:
    - key: virtual-node.liqo.io/not-allowed
      operator: Equal
      value: "true"
      effect: NoExecute
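
Putting it together, a pod tolerating the virtual-node taint may look like the following sketch (the taint key shown is the one used by Liqo virtual nodes; double-check it against your Liqo version with kubectl describe node):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-tolerating
  namespace: liqo-demo
spec:
  containers:
  - name: nginx
    image: nginxdemos/hello
  # Tolerate the taint that Liqo applies to virtual nodes,
  # making this pod eligible for remote scheduling
  tolerations:
  - key: virtual-node.liqo.io/not-allowed
    operator: Equal
    value: "true"
    effect: NoExecute
```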