The default Kubernetes scheduler is not Liqo-aware, hence some explicit configuration may be required to force the scheduler to start a pod on a given cluster.
By default, the Kubernetes scheduler selects the node with the highest amount of free resources. Given that the virtual node summarizes all the resources shared by a given foreign cluster (no matter how many remote physical nodes are involved), it is very likely that this node will be perceived as bigger than any physical node available locally. Hence, new pods will likely be scheduled on the virtual node.
However, in general, you cannot know in advance which node (either local or in the foreign cluster) will be selected: it simply depends on the amount of available resources.
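You can inspect this behavior yourself by comparing the allocatable resources of the local nodes and the virtual node (the column names below are illustrative labels, not fields defined by Liqo):

```shell
# List all nodes with their allocatable CPU and memory: the Liqo virtual
# node advertises the aggregate resources shared by the foreign cluster
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
```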
To schedule a pod on a given cluster, you have to follow one of the options below.
First, you need to configure a Kubernetes namespace that also spans across foreign clusters, which can be achieved by setting the `liqo.io/enabled=true` label, as follows (the example refers to namespace `liqo-demo`):
```sh
# Create a new namespace named 'liqo-demo'
kubectl create namespace liqo-demo
# Associate the 'liqo.io/enabled' label to the above namespace
kubectl label namespace liqo-demo liqo.io/enabled=true
```
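Optionally, you can verify that the label has been applied correctly before proceeding:

```shell
# Show the namespace together with its labels; 'liqo.io/enabled=true'
# should appear in the LABELS column
kubectl get namespace liqo-demo --show-labels
```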
Second, you need to start a pod whose specification includes a `nodeSelector` set to `type: virtual-node`, as follows:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: liqo-demo
spec:
  containers:
  - name: nginx
    image: nginxdemos/hello
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
      name: web
  nodeSelector:
    type: virtual-node
```
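After applying the manifest, you can check where the pod actually landed:

```shell
# The NODE column shows where the pod was scheduled; with the node
# selector above, it should be the Liqo virtual node
kubectl get pod nginx -n liqo-demo -o wide
```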
Note that each Liqo virtual node is tainted to prevent pods from being scheduled on it unintentionally:

```yaml
taints:
- effect: NoExecute
  key: virtual-node.liqo.io/not-allowed
  value: "true"
```
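A pod can run on a virtual node only if it tolerates the above taint. For pods created in a Liqo-enabled namespace, this toleration is normally added automatically by Liqo; as a sketch, the manually written equivalent in the pod `spec` would look like this:

```yaml
# Toleration matching the virtual node taint (normally injected by Liqo)
tolerations:
- key: virtual-node.liqo.io/not-allowed
  operator: Equal
  value: "true"
  effect: NoExecute
```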