Remove the label from the k8scloude2 node:
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename-
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE    VERSION   LABELS
k8scloude1   Ready    control-plane,master   7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux
k8scloude3   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux
List the nodes that carry the label k8snodename=k8scloude2:
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude2
#List the nodes that carry the label k8snodename=k8scloude2
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
NAME         STATUS   ROLES    AGE    VERSION
k8scloude2   Ready    <none>   7d1h   v1.21.0
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename-
node/k8scloude2 labeled
Apply a label to all nodes:
[root@k8scloude1 pod]# kubectl label nodes --all k8snodename=cloude
node/k8scloude1 labeled
node/k8scloude2 labeled
node/k8scloude3 labeled
List the nodes that carry the label k8snodename=cloude:
#List the nodes that carry the label k8snodename=cloude
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=cloude
NAME         STATUS   ROLES                  AGE    VERSION
k8scloude1   Ready    control-plane,master   7d1h   v1.21.0
k8scloude2   Ready    <none>                 7d1h   v1.21.0
k8scloude3   Ready    <none>                 7d1h   v1.21.0
#Remove the label
[root@k8scloude1 pod]# kubectl label nodes --all k8snodename-
node/k8scloude1 labeled
node/k8scloude2 labeled
node/k8scloude3 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=cloude
No resources found
The --overwrite flag: overwriting an existing label
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude2
node/k8scloude2 labeled
#Trying to overwrite the label
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude
error: 'k8snodename' already has a value (k8scloude2), and --overwrite is false
#Overwrite the label with --overwrite
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude --overwrite
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
No resources found
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude
NAME         STATUS   ROLES    AGE    VERSION
k8scloude2   Ready    <none>   7d1h   v1.21.0
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename-
node/k8scloude2 labeled
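Before overwriting, it can help to confirm what value the node currently carries. A minimal sketch (not part of the original session) using the same --show-labels output seen above:

# show only the k8scloude2 node together with all of its labels
kubectl get node k8scloude2 --show-labels
# or pull just the label map from the node object
kubectl get node k8scloude2 -o jsonpath='{.metadata.labels}'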
Tip: if you do not want control-plane to appear in the ROLES column for k8scloude1, you can achieve this by removing the corresponding label with kubectl label nodes k8scloude1 node-role.kubernetes.io/control-plane-
[root@k8scloude1 pod]# kubectl get nodes --show-labels
NAME         STATUS   ROLES                  AGE    VERSION   LABELS
k8scloude1   Ready    control-plane,master   7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude1,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8scloude2   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude2,kubernetes.io/os=linux
k8scloude3   Ready    <none>                 7d1h   v1.21.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8scloude3,kubernetes.io/os=linux
[root@k8scloude1 pod]# kubectl label nodes k8scloude1 node-role.kubernetes.io/control-plane-
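The ROLES column is derived from the node-role.kubernetes.io/* labels, which carry an empty value. If the label was removed and you later want control-plane to show up again, re-adding it looks roughly like the sketch below (not part of the original session):

# re-add the role label with an empty value so ROLES shows control-plane again
kubectl label nodes k8scloude1 node-role.kubernetes.io/control-plane=""
# confirm the ROLES column
kubectl get nodes k8scloude1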
3.4.3 Using labels to control which node a pod runs on

Label the k8scloude2 node with k8snodename=k8scloude2:
[root@k8scloude1 pod]# kubectl label nodes k8scloude2 k8snodename=k8scloude2
node/k8scloude2 labeled
[root@k8scloude1 pod]# kubectl get nodes -l k8snodename=k8scloude2
NAME         STATUS   ROLES    AGE    VERSION
k8scloude2   Ready    <none>   7d1h   v1.21.0
[root@k8scloude1 pod]# kubectl get pods
No resources found in pod namespace.
Create a pod whose nodeSelector is k8snodename: k8scloude2, which tells the scheduler to place the pod on a node labeled k8snodename=k8scloude2:
[root@k8scloude1 pod]# vim schedulepod4.yaml
[root@k8scloude1 pod]# cat schedulepod4.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
  namespace: pod
spec:
  nodeSelector:
    k8snodename: k8scloude2
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: pod1
    resources: {}
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
      hostPort: 80
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 pod]# kubectl apply -f schedulepod4.yaml
pod/pod1 created
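A manifest skeleton like the one above can also be produced with a client-side dry run and then edited to add the nodeSelector block; this is a sketch of that workflow, not part of the original session:

# generate a pod manifest skeleton without creating anything
kubectl run pod1 --image=nginx --port=80 -n pod --dry-run=client -o yaml > schedulepod4.yaml
# then edit schedulepod4.yaml and add under spec:
#   nodeSelector:
#     k8snodename: k8scloude2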
You can see that the pod is running on the k8scloude2 node:
[root@k8scloude1 pod]# kubectl get pod -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          21s   10.244.112.158   k8scloude2   <none>           <none>
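Note that nodeSelector is a hard constraint: if no node carries a matching label, the scheduler leaves the pod in Pending rather than placing it elsewhere. A quick way to inspect the constraint and the scheduling events (a sketch, not from the original session):

# show the nodeSelector recorded in the pod spec
kubectl get pod pod1 -n pod -o jsonpath='{.spec.nodeSelector}'
# if the pod were stuck in Pending, the Events section would show the reason
kubectl describe pod pod1 -n pod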