8 Pod: Pod Scheduling — Assigning Pods to Nodes (Part 5)

Delete the pod and remove the node label
[root@k8scloude1 pod]# kubectl get pod --show-labels
NAME   READY   STATUS    RESTARTS   AGE   LABELS
pod1   1/1     Running   0          32m   run=pod1
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
[root@k8scloude1 pod]# kubectl get pod --show-labels
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename-
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
No resources found
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude
No resources found

Note: if two machines carry the same label, the scheduler scores both of them, and the pod runs on whichever node gets the higher score.
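As a hypothetical illustration of that note (not part of the original walkthrough, and assuming a second worker node named k8scloude3 exists alongside k8scloude2):

# give both workers the same label value
kubectl label nodes k8scloude2 k8snodename=worker
kubectl label nodes k8scloude3 k8snodename=worker

# create a pod whose nodeSelector matches both nodes; filtering keeps
# both candidates, so the scoring phase decides the final placement
kubectl run pod1 --image=nginx -n pod \
  --overrides='{"apiVersion":"v1","spec":{"nodeSelector":{"k8snodename":"worker"}}}'

# the NODE column shows which node scored higher
kubectl get pod pod1 -n pod -o wide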
Label the master node of the k8s cluster
[root@k8scloude1 pod]# kubectl label nodes k8scloude1 k8snodename=k8scloude1
node/k8scloude1 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude1
NAME         STATUS   ROLES                  AGE    VERSION
k8scloude1   Ready    control-plane,master   7d2h   v1.21.0

Create a pod whose nodeSelector is k8snodename: k8scloude1, pinning it to nodes labeled k8snodename=k8scloude1.
[root@k8scloude1 pod]# vim schedulepod5.yaml
[root@k8scloude1 pod]# cat schedulepod5.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeSelector:
    k8snodename: k8scloude1
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod5.yaml
pod/pod1 created

Because k8scloude1 carries a taint, the pod cannot run on it and stays in the Pending state.
[root@k8scloude1 pod]# kubectl get pod
NAME   READY   STATUS    RESTARTS   AGE
pod1   0/1     Pending   0          9s
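To confirm why the pod is Pending, you can inspect the taints on k8scloude1. A minimal check; the taint shown in the comment is the default that kubeadm applies to control-plane nodes and is an assumption about this cluster:

kubectl describe node k8scloude1 | grep Taints
# expected output on a kubeadm master (assumption):
# Taints:  node-role.kubernetes.io/master:NoSchedule

A nodeSelector only narrows the set of candidate nodes; it does not bypass taints, so the pod would additionally need a matching toleration to run on the master.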
Delete the pod and remove the node label

[root@k8scloude1 pod]# kubectl delete pod pod1
pod "pod1" deleted
[root@k8scloude1 pod]# kubectl get pod
No resources found in pod namespace.
[root@k8scloude1 pod]# kubectl label nodes k8scloude1 k8snodename-
node/k8scloude1 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude1
No resources found

3.5 Scheduling Pods with Affinity and Anti-affinity

nodeSelector is the simplest way to constrain Pods to nodes with specific labels. Affinity and anti-affinity expand the types of constraints you can define. Some of the benefits of affinity and anti-affinity:

  • The affinity/anti-affinity language is more expressive. nodeSelector only selects nodes that have all of the specified labels. Affinity/anti-affinity gives you more control over the selection logic.
  • You can indicate that a rule is a "soft" requirement or preference, so that the scheduler still schedules the Pod even if it cannot find a matching node.
  • You can constrain a Pod using labels of other Pods running on the node (or in another topology domain), instead of just the node's own labels. This lets you define rules about which Pods can be placed together.
The affinity feature consists of two kinds of affinity:
  • Node affinity works like the nodeSelector field but is more expressive and lets you specify soft rules.
  • Inter-pod affinity/anti-affinity lets you constrain Pods against the labels of other Pods (see the sketch after this list).
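To make the second kind concrete, here is a minimal, hypothetical sketch (not from the original article): it asks the scheduler to place a cache pod on the same node as any pod labeled app=web, using the built-in kubernetes.io/hostname topology key. The pod names and the app=web label are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: cache
  namespace: pod
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - web
        # co-locate on the same node; a zone-level key would instead
        # group pods by failure domain
        topologyKey: kubernetes.io/hostname
  containers:
  - name: cache
    image: redis
    imagePullPolicy: IfNotPresent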
Node affinity is conceptually similar to nodeSelector: it lets you constrain which nodes your Pod can be scheduled on based on node labels. There are two types of node affinity:
  • requiredDuringSchedulingIgnoredDuringExecution: the scheduler can't schedule the Pod unless the rule is met. This works like nodeSelector, but with a more expressive syntax.
  • preferredDuringSchedulingIgnoredDuringExecution: the scheduler tries to find a node that meets the rule. If a matching node is not available, the scheduler still schedules the Pod.
In these types, IgnoredDuringExecution means that if the node labels change after Kubernetes has scheduled the Pod, the Pod keeps running.
You can set node affinity using the .spec.affinity.nodeAffinity field in your Pod spec.
View the explanation of the nodeAffinity field
[root@k8scloude1 pod]# kubectl explain pods.spec.affinity.nodeAffinity
KIND:     Pod
VERSION:  v1

RESOURCE: nodeAffinity <Object>

DESCRIPTION:
     Describes node affinity scheduling rules for the pod.

     Node affinity is a group of node affinity scheduling rules.

FIELDS:
   # soft rule (preference)
   preferredDuringSchedulingIgnoredDuringExecution   <[]Object>
     The scheduler will prefer to schedule pods to nodes that satisfy the
     affinity expressions specified by this field, but it may choose a node
     that violates one or more of the expressions. The node that is most
     preferred is the one with the greatest sum of weights, i.e. for each
     node that meets all of the scheduling requirements (resource request,
     requiredDuringScheduling affinity expressions, etc.), compute a sum by
     iterating through the elements of this field and adding "weight" to the
     sum if the node matches the corresponding matchExpressions; the node(s)
     with the highest sum are the most preferred.

   # hard rule (requirement)
   requiredDuringSchedulingIgnoredDuringExecution   <Object>
     If the affinity requirements specified by this field are not met at
     scheduling time, the pod will not be scheduled onto the node. If the
     affinity requirements specified by this field cease to be met at some
     point during pod execution (e.g. due to an update), the system may or
     may not try to eventually evict the pod from its node.
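Putting the two fields together, here is a minimal sketch of a Pod that combines a hard and a soft node-affinity rule. It is not from the original walkthrough: it reuses the k8snodename label from the earlier examples, assumes a node named k8scloude3 exists, and the weight of 50 is an arbitrary illustration (the valid range is 1-100):

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  namespace: pod
spec:
  affinity:
    nodeAffinity:
      # hard rule: only nodes whose k8snodename label matches one of
      # these values are eligible at all
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: k8snodename
            operator: In
            values:
            - k8scloude2
            - k8scloude3
      # soft rule: among eligible nodes, prefer k8scloude2; if it cannot
      # host the pod, the scheduler still places it elsewhere
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 50
        preference:
          matchExpressions:
          - key: k8snodename
            operator: In
            values:
            - k8scloude2
  containers:
  - name: pod1
    image: nginx
    imagePullPolicy: IfNotPresent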
