[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          13s   10.244.112.159   k8scloude2   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
Create a pod. The preferredDuringSchedulingIgnoredDuringExecution rule expresses: the node should preferably have a label with key xx whose value is greater than 600.
[root@k8scloude1 pod]# vim preferredDuringSchedule1.yaml
[root@k8scloude1 pod]# cat preferredDuringSchedule1.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 2
        preference:
          matchExpressions:
          - key: xx
            operator: Gt
            values:
            - "600"
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f preferredDuringSchedule1.yaml
pod/pod1 created
Because preferredDuringSchedulingIgnoredDuringExecution is a soft policy, the pod is still created successfully even though neither k8scloude2 nor k8scloude3 satisfies xx > 600.
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          7s    10.244.251.213   k8scloude3   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
3.5.3 Node Affinity Weight

You can set a weight field, with a value between 1 and 100, on each instance of the preferredDuringSchedulingIgnoredDuringExecution affinity type. When the scheduler finds nodes that satisfy the Pod's other scheduling requirements, it iterates through all the preferred rules each node satisfies and sums the weight values of the matching expressions. This sum is then added to the node's scores from the other priority functions. When the scheduler makes the scheduling decision for the Pod, the node with the highest total score has the highest priority.
Label the nodes:
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 yy=59
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl label nodes k8scloude3 yy=72
node/k8scloude3 labeled
Create a pod. This time preferredDuringSchedulingIgnoredDuringExecution specifies two soft policies with different weights: weight: 2 and weight: 10.
[root@k8scloude1 pod]# vim preferredDuringSchedule2.yaml
[root@k8scloude1 pod]# cat preferredDuringSchedule2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 2
        preference:
          matchExpressions:
          - key: xx
            operator: Gt
            values:
            - "60"
      - weight: 10
        preference:
          matchExpressions:
          - key: yy
            operator: Gt
            values:
            - "60"
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f preferredDuringSchedule2.yaml
pod/pod1 created
There are two candidate nodes, but the yy > 60 rule carries the larger weight, and only k8scloude3 (yy=72) satisfies it, so the pod runs on k8scloude3.
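The weighted-preference scoring described above can be sketched as follows. This is a purely illustrative model, not the real kube-scheduler code, using the labels and rules from this example:

```python
# Illustrative model of preferredDuringSchedulingIgnoredDuringExecution scoring:
# sum the weights of every preferred rule a node's labels satisfy.

def matches_gt(labels, key, threshold):
    """A 'key Gt threshold' expression matches only if the node has the
    label and its integer value is strictly greater than the threshold."""
    return key in labels and int(labels[key]) > threshold

def preference_score(labels, rules):
    """Add up the weight of each rule the node satisfies."""
    return sum(r["weight"] for r in rules
               if matches_gt(labels, r["key"], r["threshold"]))

# The two soft rules from preferredDuringSchedule2.yaml
rules = [{"key": "xx", "weight": 2, "threshold": 60},
         {"key": "yy", "weight": 10, "threshold": 60}]

# Node labels applied with `kubectl label` above (neither node has xx)
nodes = {"k8scloude2": {"yy": "59"},   # yy=59 fails yy > 60
         "k8scloude3": {"yy": "72"}}   # yy=72 passes, earning weight 10

scores = {name: preference_score(labels, rules)
          for name, labels in nodes.items()}
print(scores)                        # {'k8scloude2': 0, 'k8scloude3': 10}
print(max(scores, key=scores.get))   # k8scloude3
```

k8scloude3 ends up with the higher preference score, matching the scheduling result shown in the transcript below. (In the real scheduler this sum is only one input added to the other priority-function scores.)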
[root@k8scloude1 pod]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          10s   10.244.251.214   k8scloude3   <none>           <none>
[root@k8scloude1 pod]# kubectl delete pod pod1 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "pod1" force deleted
3.6 Pod Topology Spread Constraints

You can use topology spread constraints to control how Pods are spread across failure domains in your cluster, such as regions, zones, nodes, and other user-defined topology domains. This helps improve performance, achieve high availability, and raise resource utilization.
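As an illustration (this manifest is not part of the lab above), a Pod can declare a topology spread constraint via spec.topologySpreadConstraints. The sketch below asks the scheduler to keep pods carrying the run=pod1 label evenly spread across nodes, allowing an imbalance of at most one pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  topologySpreadConstraints:
  - maxSkew: 1                             # allowed difference in pod count between domains
    topologyKey: kubernetes.io/hostname    # each node is its own topology domain
    whenUnsatisfiable: DoNotSchedule       # hard constraint; use ScheduleAnyway for a soft one
    labelSelector:
      matchLabels:
        run: pod1                          # which pods count toward the skew
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
```

With whenUnsatisfiable: DoNotSchedule the constraint behaves like a hard rule (similar to requiredDuringScheduling affinity), while ScheduleAnyway makes it a soft preference, analogous to the preferred node affinity shown earlier.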