Cordon a node, drain (evict) a node, delete a node

Table of contents

  • 1. System environment
  • 2. Introduction
  • 3. Cordoning a node
    • 3.1 Overview of cordoning
    • 3.2 Cordon a node
    • 3.3 Uncordon a node
  • 4. Draining a node
    • 4.1 Overview of draining
    • 4.2 Drain a node
    • 4.3 Uncordon a node
  • 5. Deleting a node
    • 5.1 Overview of deleting
    • 5.2 Delete a node
1. System environment

| OS version | Docker version | Kubernetes (k8s) cluster version | CPU architecture |
| --- | --- | --- | --- |
| CentOS Linux release 7.4.1708 (Core) | Docker version 20.10.12 | v1.21.9 | x86_64 |

Kubernetes cluster architecture: k8scloude1 is the master node; k8scloude2 and k8scloude3 are worker nodes.
| Server | OS version | CPU architecture | Processes | Role |
| --- | --- | --- | --- | --- |
| k8scloude1/192.168.110.130 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kubelet, kube-proxy, coredns, calico | k8s master node |
| k8scloude2/192.168.110.129 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node |
| k8scloude3/192.168.110.128 | CentOS Linux release 7.4.1708 (Core) | x86_64 | docker, kubelet, kube-proxy, calico | k8s worker node |

2. Introduction

This article covers cordoning a node, draining (evicting) a node, and deleting a node. These operations are needed when performing maintenance on Kubernetes cluster nodes, such as kernel upgrades or hardware maintenance.
Cordoning, draining, and deleting nodes all assume you already have a working Kubernetes cluster. For installing and deploying a Kubernetes (k8s) cluster, see the blog post 《Centos7 安装部署Kubernetes(k8s)集群》: https://www.cnblogs.com/renshengdezheli/p/16686769.html
3. Cordoning a node

3.1 Overview of cordoning

Cordoning a node stops scheduling on it: the node's status changes to SchedulingDisabled. Pods created afterwards will not be scheduled onto that node, while pods already running on it are unaffected and continue to serve traffic normally.
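Under the hood, `kubectl cordon` simply marks the Node object unschedulable. A minimal sketch of the relevant fields on a cordoned node (field names are from the Kubernetes Node API; the node name matches this cluster) looks like:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: k8scloude2
spec:
  # kubectl cordon sets this flag; the scheduler then skips the node
  unschedulable: true
  taints:
  # added automatically for an unschedulable node; keeps new pods off
  # the node without touching pods that are already running
  - key: node.kubernetes.io/unschedulable
    effect: NoSchedule
```

This is why cordoning only affects scheduling decisions: existing pods have no reason to be evicted, since only the NoSchedule taint is applied.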
3.2 Cordon a node

Create a directory to hold the YAML files:

```
[root@k8scloude1 ~]# mkdir deploy
[root@k8scloude1 ~]# cd deploy/
```

Generate the Deployment manifest with --dry-run:
```
[root@k8scloude1 deploy]# kubectl create deploy nginx --image=nginx --dry-run=client -o yaml >nginx.yaml
[root@k8scloude1 deploy]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}
```

Edit the Deployment manifest; replicas: 5 sets the replica count to 5, so the Deployment will create five pods:
```
[root@k8scloude1 deploy]# vim nginx.yaml
# Changes made:
#   replicas: 5                      -- replica count set to 5
#   terminationGracePeriodSeconds: 0 -- termination grace period set to 0
#   imagePullPolicy: IfNotPresent    -- do not pull if the image already exists locally
[root@k8scloude1 deploy]# cat nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources: {}
status: {}
```

Create the Deployment, and create a standalone pod from a pod YAML file:
```
[root@k8scloude1 deploy]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: pod1
  name: pod1
spec:
  terminationGracePeriodSeconds: 0
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: n1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
[root@k8scloude1 deploy]# kubectl apply -f pod.yaml
pod/pod1 created
[root@k8scloude1 deploy]# kubectl apply -f nginx.yaml
deployment.apps/nginx created
```

List the pods: the Deployment created five pods (nginx-6cf858f6cf-XXXXXXX), plus the standalone pod1.
```
[root@k8scloude1 deploy]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
nginx-6cf858f6cf-fwhmh   1/1     Running   0          52s   10.244.251.217   k8scloude3   <none>           <none>
nginx-6cf858f6cf-hr6bn   1/1     Running   0          52s   10.244.251.218   k8scloude3   <none>           <none>
nginx-6cf858f6cf-j2ccs   1/1     Running   0          52s   10.244.112.161   k8scloude2   <none>           <none>
nginx-6cf858f6cf-l7n4w   1/1     Running   0          52s   10.244.112.162   k8scloude2   <none>           <none>
nginx-6cf858f6cf-t6qxq   1/1     Running   0          52s   10.244.112.163   k8scloude2   <none>           <none>
pod1                     1/1     Running   0          60s   10.244.251.216   k8scloude3   <none>           <none>
```

Suppose k8scloude2 is due for maintenance and testing, and we do not want new pods assigned to it. Once a node is cordoned, no new pods will be scheduled to that node.
Cordon the k8scloude2 node; it changes to the SchedulingDisabled state:
```
[root@k8scloude1 deploy]# kubectl cordon k8scloude2
node/k8scloude2 cordoned
[root@k8scloude1 deploy]# kubectl get nodes
NAME         STATUS                     ROLES                  AGE     VERSION
k8scloude1   Ready                      control-plane,master   8d      v1.21.0
k8scloude2   Ready,SchedulingDisabled   <none>                 7d23h   v1.21.0
k8scloude3   Ready                      <none>                 7d23h   v1.21.0
```
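To double-check that cordoning only blocks new scheduling, one quick experiment (a sketch assuming the cluster and nginx Deployment above; it is not part of the original walkthrough) is to scale the Deployment up and see where the extra replicas land:

```
# Pods already on k8scloude2 keep running after the cordon:
[root@k8scloude1 deploy]# kubectl get pods -o wide --field-selector spec.nodeName=k8scloude2

# Scale the Deployment from 5 to 8 replicas; the 3 new pods can only be
# scheduled onto nodes that are still schedulable (here, k8scloude3):
[root@k8scloude1 deploy]# kubectl scale deployment nginx --replicas=8

# The pod count on k8scloude2 should be unchanged:
[root@k8scloude1 deploy]# kubectl get pods -o wide --field-selector spec.nodeName=k8scloude2
```

The field selector filters pods by the node they are bound to, which makes it easy to confirm that the cordoned node received none of the new replicas.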
