Cordoning Nodes, Draining Nodes, and Deleting Nodes (Part 3)

Draining a node: drain = cordon + evict
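Conceptually, that equation means drain first cordons the node and then evicts every non-DaemonSet pod on it. A rough manual equivalent is sketched below (a sketch only: the real drain goes through the Eviction API, which honors PodDisruptionBudgets, rather than deleting pods outright; the placeholder names are illustrative):

kubectl cordon k8scloude2
# list the pods still running on the node
kubectl get pods --all-namespaces --field-selector spec.nodeName=k8scloude2
# drain then evicts each non-DaemonSet pod, roughly equivalent to:
kubectl delete pod <pod-name> -n <namespace>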
Drain the k8scloude2 node: --delete-emptydir-data allows pods using emptyDir (local) storage to be evicted, deleting that data, and --ignore-daemonsets skips DaemonSet-managed pods.
[root@k8scloude1 deploy]# kubectl drain k8scloude2
node/k8scloude2 cordoned
error: unable to drain node "k8scloude2", aborting command...

There are pending nodes to be drained:
 k8scloude2
cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-bcfb98c76-k5dmj
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-nsbfs, kube-system/kube-proxy-lpj8z

[root@k8scloude1 deploy]# kubectl get node
NAME         STATUS                     ROLES                  AGE   VERSION
k8scloude1   Ready                      control-plane,master   8d    v1.21.0
k8scloude2   Ready,SchedulingDisabled   <none>                 8d    v1.21.0
k8scloude3   Ready                      <none>                 8d    v1.21.0

[root@k8scloude1 deploy]# kubectl drain k8scloude2 --ignore-daemonsets
node/k8scloude2 already cordoned
error: unable to drain node "k8scloude2", aborting command...

There are pending nodes to be drained:
 k8scloude2
error: cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-bcfb98c76-k5dmj

[root@k8scloude1 deploy]# kubectl drain k8scloude2 --ignore-daemonsets --force --delete-emptydir-data
node/k8scloude2 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-nsbfs, kube-system/kube-proxy-lpj8z
evicting pod pod/nginx-6cf858f6cf-sf2w6
evicting pod pod/nginx-6cf858f6cf-5rrk4
evicting pod kube-system/metrics-server-bcfb98c76-k5dmj
evicting pod pod/nginx-6cf858f6cf-58wnd
evicting pod pod/nginx-6cf858f6cf-mb2ft
evicting pod pod/nginx-6cf858f6cf-89wj9
evicting pod pod/nginx-6cf858f6cf-nq6zv
pod/nginx-6cf858f6cf-5rrk4 evicted
pod/nginx-6cf858f6cf-mb2ft evicted
pod/nginx-6cf858f6cf-sf2w6 evicted
pod/nginx-6cf858f6cf-58wnd evicted
pod/nginx-6cf858f6cf-nq6zv evicted
pod/nginx-6cf858f6cf-89wj9 evicted
pod/metrics-server-bcfb98c76-k5dmj evicted
node/k8scloude2 evicted

Checking the pods shows that after k8scloude2 is drained, they are all scheduled onto k8scloude3.
Draining a node essentially deletes the pods on that node: once k8scloude2 is drained, the pods that were running on it are deleted.
The Deployment is a controller that watches the pod replica count: when the pods on k8scloude2 are evicted, the count falls below 10, so it creates new pods on schedulable nodes to restore the desired replica count.
A bare pod, by contrast, cannot regenerate: once deleted, it is really gone. If k8scloude3 were drained, the pod pod1 would be deleted and no replacement pod1 would appear on any other schedulable node.
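You can tell the two cases apart by inspecting a pod's ownerReferences: Deployment-managed pods point at a ReplicaSet, while a bare pod such as pod1 has none, so no controller recreates it. A quick check (the nginx pod name is taken from the listing below):

# prints "ReplicaSet": this pod is controller-managed and will be recreated
kubectl get pod nginx-6cf858f6cf-7gh4z -o jsonpath='{.metadata.ownerReferences[0].kind}'
# prints nothing: pod1 is a bare pod and is gone for good once deleted
kubectl get pod pod1 -o jsonpath='{.metadata.ownerReferences}'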
[root@k8scloude1 deploy]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP               NODE         NOMINATED NODE   READINESS GATES
nginx-6cf858f6cf-7gh4z   1/1     Running   0          84s     10.244.251.240   k8scloude3   <none>           <none>
nginx-6cf858f6cf-7lmfd   1/1     Running   0          85s     10.244.251.238   k8scloude3   <none>           <none>
nginx-6cf858f6cf-86wxr   1/1     Running   0          6m14s   10.244.251.237   k8scloude3   <none>           <none>
nginx-6cf858f6cf-9bn2b   1/1     Running   0          85s     10.244.251.243   k8scloude3   <none>           <none>
nginx-6cf858f6cf-9njrj   1/1     Running   0          6m14s   10.244.251.236   k8scloude3   <none>           <none>
nginx-6cf858f6cf-bqk2w   1/1     Running   0          84s     10.244.251.241   k8scloude3   <none>           <none>
nginx-6cf858f6cf-hchtb   1/1     Running   0          6m14s   10.244.251.234   k8scloude3   <none>           <none>
nginx-6cf858f6cf-hjddp   1/1     Running   0          84s     10.244.251.244   k8scloude3   <none>           <none>
nginx-6cf858f6cf-pl7ww   1/1     Running   0          6m14s   10.244.251.235   k8scloude3   <none>           <none>
nginx-6cf858f6cf-sgxfg   1/1     Running   0          84s     10.244.251.242   k8scloude3   <none>           <none>
pod1                     1/1     Running   0          41m     10.244.251.216   k8scloude3   <none>           <none>

Check the node status:
[root@k8scloude1 deploy]# kubectl get nodes
NAME         STATUS                     ROLES                  AGE   VERSION
k8scloude1   Ready                      control-plane,master   8d    v1.21.0
k8scloude2   Ready,SchedulingDisabled   <none>                 8d    v1.21.0
k8scloude3   Ready                      <none>                 8d    v1.21.0

4.3 Uncordoning a node

To undo a drain on a node, simply uncordon it; there is no undrain operation.
[root@k8scloude1 deploy]# kubectl undrain k8scloude2
Error: unknown command "undrain" for "kubectl"

Did you mean this?
	drain

Run 'kubectl --help' for usage.

Uncordon the k8scloude2 node to restore scheduling:
[root@k8scloude1 deploy]# kubectl uncordon k8scloude2
node/k8scloude2 uncordoned
[root@k8scloude1 deploy]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8scloude1   Ready    control-plane,master   8d    v1.21.0
k8scloude2   Ready    <none>                 8d    v1.21.0
k8scloude3   Ready    <none>                 8d    v1.21.0

Scale the Deployment down to 0 replicas and back up to 10, then observe the pod distribution:
[root@k8scloude1 deploy]# kubectl scale deploy nginx --replicas=0
deployment.apps/nginx scaled
[root@k8scloude1 deploy]# kubectl get pods -o wide
NAME   READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
pod1   1/1     Running   0          52m   10.244.251.216   k8scloude3   <none>           <none>
[root@k8scloude1 deploy]# kubectl scale deploy nginx --replicas=10
deployment.apps/nginx scaled

Pods can be scheduled onto the k8scloude2 node again:
[root@k8scloude1 deploy]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
nginx-6cf858f6cf-4sqj8   1/1     Running   0          6s    10.244.112.172   k8scloude2   <none>           <none>
nginx-6cf858f6cf-cjqxv   1/1     Running   0          6s    10.244.112.176   k8scloude2   <none>           <none>
nginx-6cf858f6cf-fk69r   1/1     Running   0          6s    10.244.112.175   k8scloude2   <none>           <none>
nginx-6cf858f6cf-ghznd   1/1     Running   0          6s    10.244.112.173   k8scloude2   <none>           <none>
nginx-6cf858f6cf-hnxzs   1/1     Running   0          6s    10.244.251.246   k8scloude3   <none>           <none>
nginx-6cf858f6cf-hshnm   1/1     Running   0          6s    10.244.112.171   k8scloude2   <none>           <none>
nginx-6cf858f6cf-jb5sh   1/1     Running   0          6s    10.244.112.170   k8scloude2   <none>           <none>
nginx-6cf858f6cf-l9xlm   1/1     Running   0          6s    10.244.112.174   k8scloude2   <none>           <none>
nginx-6cf858f6cf-pgjlb   1/1     Running   0          6s    10.244.251.247   k8scloude3   <none>           <none>
nginx-6cf858f6cf-rlnh6   1/1     Running   0          6s    10.244.251.245   k8scloude3   <none>           <none>
pod1                     1/1     Running   0          52m   10.244.251.216   k8scloude3   <none>           <none>
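To summarize the distribution at a glance, you can count pods per node with a small one-liner (a sketch assuming the default kubectl output, where NODE is the seventh column of kubectl get pods -o wide):

# count pods per node
kubectl get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c

For the listing above this reports 7 pods on k8scloude2 and 4 on k8scloude3.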
