1 Kubernetes (k8s) cluster upgrade process

Kubernetes uses the kubeadm tool to manage upgrades of the cluster components. At the node level, upgrading a Kubernetes (k8s) cluster can be broken down into the following steps:

1) Check that the current environment and configuration meet the upgrade requirements.
2) Upgrade the master node; if there are multiple masters, upgrade them one at a time.
3) Upgrade the worker nodes.
4) Upgrade the network plugin.

At the software level, upgrading a Kubernetes (k8s) cluster can be broken down into the following steps:

1) Upgrade kubeadm.
2) Drain the node.
3) Upgrade the individual components.
4) Undo the drain (uncordon the node).
5) Upgrade kubelet and kubectl.

Note: a Kubernetes (k8s) cluster cannot be upgraded across minor versions. For example, a 1.19.x cluster can be upgraded to 1.20.y, but a 1.19.x cluster cannot be upgraded directly to 1.21.y. You can only move from one minor version to the next, or upgrade the patch version within the same minor version. In other words, minor versions must not be skipped: you can go from 1.y to 1.y+1, but not from 1.y to 1.y+2.

2 Upgrading the master node

2.1 Upgrading kubeadm

The Kubernetes (k8s) cluster is currently at version v1.21.0.

$ kubectl get nodes
NAME           STATUS   ROLES                  AGE   VERSION
ops-master-1   Ready    control-plane,master   33m   v1.21.0
ops-worker-1   Ready    <none>                 30m   v1.21.0
ops-worker-2   Ready    <none>                 30m   v1.21.0
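Besides the kubelet versions shown by kubectl get nodes, the versions of kubeadm itself and of the API server also determine the allowed upgrade path. A minimal pre-upgrade check, assuming kubeadm and kubectl are already on the PATH of the master node:

$ kubeadm version -o short     # version of the currently installed kubeadm binary
$ kubectl version --short      # client (kubectl) and server (kube-apiserver) versions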
Check which kubeadm versions are available:

$ yum list --showduplicates kubeadm --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks, releasever-adapter, update-motd
Loading mirror speeds from cached hostfile
Installed Packages
kubeadm.x86_64    1.21.0-0     k8s
Available Packages
kubeadm.x86_64    1.6.0-0      k8s
kubeadm.x86_64    1.6.1-0      k8s
...
kubeadm.x86_64    1.21.8-0     k8s
kubeadm.x86_64    1.21.9-0     k8s
kubeadm.x86_64    1.21.10-0    k8s
...
kubeadm.x86_64    1.28.1-0     k8s
kubeadm.x86_64    1.28.2-0     k8s
(listing abridged: the repository carries every release from 1.6.0-0 up to 1.28.2-0)
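Because the full listing is long, it can be filtered down to the minor version you are targeting, for example (same repository configuration assumed):

$ yum list --showduplicates kubeadm --disableexcludes=kubernetes | grep '1\.21\.'    # only the 1.21.x packages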
Upgrade kubeadm to version 1.21.9-0:

$ yum install -y kubeadm-1.21.9-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks, releasever-adapter, update-motd
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.0-0 will be updated
---> Package kubeadm.x86_64 0:1.21.9-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

 Package      Arch        Version       Repository    Size
Updating:
 kubeadm      x86_64      1.21.9-0      k8s           9.1 M

Transaction Summary
Upgrade  1 Package

Total download size: 9.1 M
Downloading packages:
No Presto metadata available for k8s
f41c806d2113e9b88efd9f70e3a07da3cd0597f5361f7842058e06b9601ff7fc-kubeadm-1.21.9-0.x86_64.rpm   | 9.1 MB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubeadm-1.21.9-0.x86_64   1/2
  Cleanup    : kubeadm-1.21.0-0.x86_64   2/2
  Verifying  : kubeadm-1.21.9-0.x86_64   1/2
  Verifying  : kubeadm-1.21.0-0.x86_64   2/2

Updated:
  kubeadm.x86_64 0:1.21.9-0

Complete!
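A quick sanity check that the package upgrade took effect (the binary should now report the new version):

$ kubeadm version -o short     # expected to print v1.21.9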
kubeadm upgrade plan validates the upgrade plan; the COMPONENT, CURRENT and TARGET columns tell us which version each component can be upgraded to from its current version.

$ kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -o yaml
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.9
I1210 20:34:17.564937    5980 version.go:254] remote version is much newer: v1.28.4; falling back to: stable-1.21
[upgrade/versions] Target version: v1.21.14
[upgrade/versions] Latest version in the v1.21 series: v1.21.14

Components that must be upgraded manually after you have upgraded the control plane with kubeadm upgrade apply:
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.21.0   v1.21.14

Upgrade to the latest version in the v1.21 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.21.0    v1.21.14
kube-controller-manager   v1.21.0    v1.21.14
kube-scheduler            v1.21.0    v1.21.14
kube-proxy                v1.21.0    v1.21.14
CoreDNS                   v1.8.0     v1.8.0
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.21.14

Note: Before you can perform this upgrade, you have to update kubeadm to v1.21.14.

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

2.2 Upgrading the control-plane components

The previous step upgraded kubeadm; next, the individual components (kube-apiserver, kube-controller-manager, and so on) are upgraded. kubeadm upgrade apply v1.21.9 upgrades the components to version 1.21.9; if you do not want to upgrade the etcd component, add the corresponding option: kubeadm upgrade apply v1.21.9 --etcd-upgrade=false.

The node can be drained in advance, or drained later. Drain the node, marking it unschedulable and evicting its workloads, to prepare it for the upgrade: kubectl drain <node name> --ignore-daemonsets.

$ kubectl drain ops-master-1 --ignore-daemonsets
node/k8scloude1 cordoned
error: unable to drain node "k8scloude1", aborting command...

There are pending nodes to be drained:
 k8scloude1
error: cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-bcfb98c76-j4gs8, kubernetes-dashboard/dashboard-metrics-scraper-7f458d9467-9knf9

Because some pods have local data, the --delete-emptydir-data option has to be added.

$ kubectl drain ops-master-1 --ignore-daemonsets --delete-emptydir-data
node/ops-master-1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-whfb8, kube-system/kube-proxy-srhgz
node/ops-master-1 drained

Now upgrade the components; --etcd-upgrade=false means the etcd database is not upgraded.
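Before running the apply, it can be worth confirming what is still left on the drained node; only DaemonSet-managed pods and static pods should remain (an optional check, not part of the original procedure):

$ kubectl get pods -A -o wide --field-selector spec.nodeName=ops-master-1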
$ kubeadm upgrade apply v1.21.9 --etcd-upgrade=false
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -o yaml
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to v1.21.9
[upgrade/versions] Cluster version: v1.21.0
[upgrade/versions] kubeadm version: v1.21.9
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using kubeadm config images pull
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version v1.21.9...
Static pod: kube-apiserver-ops-master-1 hash: 94a1393eb20a9652cea984f5907ed147
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
[upgrade/staticpods] Writing new Static Pod manifests to /etc/kubernetes/tmp/kubeadm-upgraded-manifests053748490
[upgrade/staticpods] Preparing for kube-apiserver upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-apiserver.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-10-20-38-23/kube-apiserver.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-ops-master-1 hash: 94a1393eb20a9652cea984f5907ed147
Static pod: kube-apiserver-ops-master-1 hash: 94a1393eb20a9652cea984f5907ed147
Static pod: kube-apiserver-ops-master-1 hash: 26e768b6e5881ba632db749b44ed30c5
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component kube-apiserver upgraded successfully!
[upgrade/staticpods] Preparing for kube-controller-manager upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-controller-manager.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-10-20-38-23/kube-controller-manager.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-ops-master-1 hash: dfcbd862b8c83e8fbd17b289c6d2d47b
(the same hash line repeats while kubeadm waits for the kubelet to restart the component)
Static pod: kube-controller-manager-ops-master-1 hash: 0074b909bb20d0a0ab9caa7f07b66191
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component kube-controller-manager upgraded successfully!
[upgrade/staticpods] Preparing for kube-scheduler upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to /etc/kubernetes/manifests/kube-scheduler.yaml and backed up old manifest to /etc/kubernetes/tmp/kubeadm-backup-manifests-2023-12-10-20-38-23/kube-scheduler.yaml
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-ops-master-1 hash: f77a726558e477a8bc7504218c9d2ccd
(the same hash line repeats while kubeadm waits for the kubelet to restart the component)
Static pod: kube-scheduler-ops-master-1 hash: 07614ccea2a58fd20ecb645d2901f0c9
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component kube-scheduler upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane to Nodes with label node-role.kubernetes.io/master (deprecated)
[upgrade/postupgrade] Applying label node.kubernetes.io/exclude-from-external-load-balancers to control plane Nodes
[upload-config] Storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
[kubelet] Creating a ConfigMap kubelet-config-1.21 in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to v1.21.9. Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
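At this point the control-plane static pods on ops-master-1 should already be running the 1.21.9 images, which can be confirmed from the pod image tags (an optional check, not part of the original procedure):

$ kubectl -n kube-system get pod kube-apiserver-ops-master-1 -o jsonpath='{.spec.containers[0].image}'
$ kubectl -n kube-system get pods -o wide | grep ops-master-1     # overview of all control-plane pods on the node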
The ops-master-1 node is currently in an unschedulable state.

$ kubectl get node
NAME           STATUS                     ROLES                  AGE   VERSION
ops-master-1   Ready,SchedulingDisabled   control-plane,master   49m   v1.21.0
ops-worker-1   Ready                      <none>                 46m   v1.21.0
ops-worker-2   Ready                      <none>                 46m   v1.21.0

Uncordon the ops-master-1 node: mark it schedulable again to bring it back online.

$ kubectl uncordon ops-master-1
node/ops-master-1 uncordoned

The ops-master-1 node is now in the Ready state.

$ kubectl get node
NAME           STATUS   ROLES                  AGE   VERSION
ops-master-1   Ready    control-plane,master   53m   v1.21.0
ops-worker-1   Ready    <none>                 50m   v1.21.0
ops-worker-2   Ready    <none>                 50m   v1.21.0

2.3 Upgrading kubelet and kubectl

Upgrade kubelet and kubectl to version 1.21.9.

$ yum install -y kubelet-1.21.9-0 kubectl-1.21.9-0 --disableexcludes=kubernetes

Reload the systemd configuration and restart the kubelet.

$ systemctl daemon-reload; systemctl restart kubelet

The ops-master-1 node now reports version v1.21.9, and the master node of the k8s cluster has been upgraded successfully. If there are multiple masters, the steps are the same, except that the additional master nodes do not run kubeadm upgrade apply v1.21.9; on those nodes, replace kubeadm upgrade apply v1.21.9 with kubeadm upgrade node.

$ kubectl get node
NAME           STATUS   ROLES                  AGE    VERSION
ops-master-1   Ready    control-plane,master   143m   v1.21.9
ops-worker-1   Ready    <none>                 140m   v1.21.0
ops-worker-2   Ready    <none>                 140m   v1.21.0
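For reference, on an additional control-plane node the whole sequence condenses to the following sketch; ops-master-2 is a hypothetical node name, the drain and uncordon are run from a machine with kubectl access, and the remaining commands run on the node itself:

$ kubectl drain ops-master-2 --ignore-daemonsets --delete-emptydir-data
$ yum install -y kubeadm-1.21.9-0 --disableexcludes=kubernetes
$ kubeadm upgrade node
$ yum install -y kubelet-1.21.9-0 kubectl-1.21.9-0 --disableexcludes=kubernetes
$ systemctl daemon-reload; systemctl restart kubelet
$ kubectl uncordon ops-master-2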
3 Upgrading the worker nodes

3.1 Upgrading kubeadm

$ yum install -y kubeadm-1.21.9-0 --disableexcludes=kubernetes

Prepare the node for the upgrade by marking it unschedulable and draining it. If there is local data on the node, it is recommended to add the --delete-emptydir-data option.

$ kubectl drain ops-worker-1 --ignore-daemonsets
node/ops-worker-1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-g6ntb, kube-system/kube-proxy-nf2rm
evicting pod kube-system/coredns-59d64cd4d4-qf9rx
pod/coredns-59d64cd4d4-qf9rx evicted
node/ops-worker-1 evicted

$ kubectl get node
NAME           STATUS                     ROLES                  AGE    VERSION
ops-master-1   Ready                      control-plane,master   144m   v1.21.9
ops-worker-1   Ready,SchedulingDisabled   <none>                 141m   v1.21.0
ops-worker-2   Ready                      <none>                 141m   v1.21.0

On worker nodes, the kubeadm upgrade node command upgrades the local kubelet configuration.

$ kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with kubectl -n kube-system get cm kubeadm-config -o yaml
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

Mark the ops-worker-1 node schedulable again to bring it back online.

$ kubectl uncordon ops-worker-1
node/ops-worker-1 uncordoned

$ kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
ops-master-1   Ready    control-plane,master   145m   v1.21.9
ops-worker-1   Ready    <none>                 142m   v1.21.0
ops-worker-2   Ready    <none>                 142m   v1.21.0

3.2 Upgrading kubelet and kubectl

Upgrade kubelet and kubectl to version 1.21.9.

$ yum install -y kubelet-1.21.9-0 kubectl-1.21.9-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks, releasever-adapter, update-motd
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubectl.x86_64 0:1.21.0-0 will be updated
---> Package kubectl.x86_64 0:1.21.9-0 will be an update
---> Package kubelet.x86_64 0:1.21.0-0 will be updated
---> Package kubelet.x86_64 0:1.21.9-0 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

 Package      Arch        Version       Repository    Size
Updating:
 kubectl      x86_64      1.21.9-0      k8s           9.6 M
 kubelet      x86_64      1.21.9-0      k8s            20 M

Transaction Summary
Upgrade  2 Packages

Total download size: 30 M
Downloading packages:
No Presto metadata available for k8s
(1/2): f53d5be18ac04fa2eebe0f27a984fbc1197a31f1ed4e92c3762f0f584fcd502c-kubectl-1.21.9-0.x86_64.rpm   | 9.6 MB  00:00:00
(2/2): 6e68c2e2eb926e163f53a7d64000334c6cae982841fffee350f5003793a63a9c-kubelet-1.21.9-0.x86_64.rpm   |  20 MB  00:00:01
--------------------------------------------------------------------------------------------------------------------
Total                                                                                        28 MB/s |  30 MB  00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Updating   : kubelet-1.21.9-0.x86_64   1/4
  Updating   : kubectl-1.21.9-0.x86_64   2/4
  Cleanup    : kubectl-1.21.0-0.x86_64   3/4
  Cleanup    : kubelet-1.21.0-0.x86_64   4/4
  Verifying  : kubectl-1.21.9-0.x86_64   1/4
  Verifying  : kubelet-1.21.9-0.x86_64   2/4
  Verifying  : kubelet-1.21.0-0.x86_64   3/4
  Verifying  : kubectl-1.21.0-0.x86_64   4/4

Updated:
  kubectl.x86_64 0:1.21.9-0     kubelet.x86_64 0:1.21.9-0

Complete!

Reload the systemd configuration and restart the kubelet.

$ systemctl daemon-reload; systemctl restart kubelet

$ kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
ops-master-1   Ready    control-plane,master   145m   v1.21.9
ops-worker-1   Ready    <none>                 142m   v1.21.9
ops-worker-2   Ready    <none>                 142m   v1.21.0

The upgrade steps for the ops-worker-2 node are exactly the same as for ops-worker-1. Once ops-worker-2 has been upgraded, the upgrade of the whole Kubernetes (k8s) cluster is finished and every node is at v1.21.9.

$ kubectl get nodes
NAME           STATUS   ROLES                  AGE    VERSION
ops-master-1   Ready    control-plane,master   147m   v1.21.9
ops-worker-1   Ready    <none>                 144m   v1.21.9
ops-worker-2   Ready    <none>                 144m   v1.21.9
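When there are many worker nodes, the same per-node sequence can be scripted. A rough sketch, assuming passwordless SSH from the machine running kubectl to each worker (the loop itself is not part of the original procedure):

for node in ops-worker-1 ops-worker-2; do
  # evict workloads and mark the node unschedulable
  kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
  # upgrade kubeadm, the node configuration, kubelet and kubectl on the worker
  ssh "$node" "yum install -y kubeadm-1.21.9-0 --disableexcludes=kubernetes && \
               kubeadm upgrade node && \
               yum install -y kubelet-1.21.9-0 kubectl-1.21.9-0 --disableexcludes=kubernetes && \
               systemctl daemon-reload && systemctl restart kubelet"
  # bring the node back online
  kubectl uncordon "$node"
done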