Deploying a Kubernetes v1.29.13 Cluster on Ubuntu 22.04

Introduction
Kubernetes (K8s for short) is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. Supported container runtimes include Docker and containerd. By providing cluster management, service discovery, load balancing, autoscaling, and related features, it greatly simplifies operating containerized applications.
The main Kubernetes components:

Master node (control plane):
- API Server: the entry point to the Kubernetes control plane; all API requests pass through it, and it coordinates operations across the cluster.
- Scheduler: watches cluster resource usage and places Pods onto suitable nodes.
- Controller Manager: runs the cluster's control loops, keeping every resource in its desired state.
- etcd: a distributed key-value store holding all cluster data and state.

Node (worker node):
- Kubelet: the agent running on every node; it manages container lifecycles and keeps the containers' actual state consistent with what the API Server describes.
- Kube Proxy: handles service discovery and load balancing, routing cluster network traffic to the correct Pods.
- Container Runtime: usually Docker or containerd; starts and manages containers.

Other core objects:
- Pod: the smallest schedulable unit in Kubernetes; one or more containers that share network and storage.
- Deployment: manages a replica set of Pods, handling creation, deletion, and rolling updates.
- Service: a stable access point for a group of Pods, usually reached via a DNS name.
- Namespace: partitions cluster resources into separate virtual clusters, supporting multi-tenancy.
Overview of the Kubernetes Workflow
- Deploying a containerized application: in Kubernetes you first define a Pod containing the application's containers, specifying the image, volumes, network policy, and so on.
- Scheduling and management: the Scheduler picks a suitable node for the Pod; the Kubelet on that node starts the containers and reports their status to the API Server.
- Service discovery and load balancing: when multiple Pods run across different nodes, a Kubernetes Service gives them a single access point and balances traffic across them.
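The workflow above can be made concrete with a minimal manifest. The sketch below only generates the file; the names `web-demo` and the `/tmp` path are illustrative placeholders, and on a real cluster the file would then be submitted with `kubectl apply`.

```shell
# Generate a minimal Pod-plus-Service manifest illustrating the workflow:
# the Pod defines the containerized app, the Service gives it a stable entry point.
cat > /tmp/web-demo.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-demo
  labels:
    app: web-demo
spec:
  containers:
  - name: web-demo
    image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web-demo
spec:
  selector:
    app: web-demo
  ports:
  - port: 80
    targetPort: 80
EOF
# On a running cluster this would be submitted with:
#   kubectl apply -f /tmp/web-demo.yaml
```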
The Kubernetes v1.29.13 Release
Kubernetes v1.29.13 is a relatively recent patch release focused on stability, performance, and security. Each release fixes known issues and adds enhancements; clusters upgraded to this version benefit from better performance and a more stable runtime environment.

Installing a Kubernetes v1.29.13 Cluster on Ubuntu 22.04
The deployment breaks down into the following steps:

Prepare the environment
- Operating system: Ubuntu 22.04 is a long-term support (LTS) release suitable for production.
- Hardware: at least two servers, one as the Master node and at least one as a Worker node.
- Network: make sure the cluster nodes can reach each other; configure the firewall and port forwarding if necessary.

Install the required Kubernetes components
- Install a container runtime: Docker (this guide uses Docker + cri-dockerd) or containerd.
- Install the kubeadm, kubelet, and kubectl components.

Initialize the Master node
- Run kubeadm init to bring up the Kubernetes control plane.
- Configure kubectl so the control node can manage the cluster.

Join the Worker nodes
- On each Worker node, run the kubeadm join command to join the initialized Master.

Configure a network plugin
- Kubernetes needs a network plugin for Pod-to-Pod communication; common choices are Calico, Flannel (recommended here), and Weave. Install and configure one.

Verify the cluster
- Run kubectl get nodes -owide and confirm that the Master and all Worker nodes are Ready.

Deploy an application
- Create Deployment, Service, and other resources to deploy a containerized application.

1 Environment Preparation
# Prerequisites for deploying a Kubernetes cluster with kubeadm:
# - Linux hosts supported by Kubernetes (e.g. Debian, RedHat, and their derivatives)
# - At least 2 GB of RAM and 2 CPUs per host
# - Full network connectivity between all hosts (nodes may sit on different networks)
# - Unique hostname, MAC address, and product_uuid per host; hostnames must resolve
# - The ports Kubernetes uses opened in the firewall (or iptables disabled outright)
# - Swap disabled on every host
# - Clocks synchronized across all hosts
# Prepare a proxy to reach registry.k8s.io, or pull the images by the alternative method shown during deployment
# Important: kubeadm supports not only cluster deployment but also upgrade, teardown, and certificate renewal. The SSL certificates kubeadm currently generates for the nodes are valid for one year and must be renewed before they expire.

1.1 Cluster IP Plan
1. OS: Ubuntu 22.04 LTS
2. Docker 27.0, CGroup Driver: systemd
3. Kubernetes v1.29.13
CRI: cri-dockerd, CNI: CoreOS Flannel
4. Network layout
Node network: 11.0.1.0/16; Pod network: 10.244.0.0/16; Service network: 10.96.0.0/12
OS version     IP           Host entries
Ubuntu 22.04   11.0.1.100   k8s-master01.dinginx.org k8s-master01 kubeapi.dinginx.org kubeapi
Ubuntu 22.04   11.0.1.101   k8s-node01.dinginx.org k8s-node01
Ubuntu 22.04   11.0.1.102   k8s-node02.dinginx.org k8s-node02
Ubuntu 22.04   11.0.1.103   k8s-node03.dinginx.org k8s-node03
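The host entries in the table above can be generated with a short loop instead of being typed by hand. This sketch writes to a temp file rather than /etc/hosts directly; note that in the real file the master entry additionally carries the kubeapi.dinginx.org aliases.

```shell
# Generate hosts-file lines from the IP plan (output to a temp file for review;
# on the real nodes the result would be appended to /etc/hosts).
HOSTS_FILE=/tmp/k8s-hosts
: > "$HOSTS_FILE"
while read -r ip name; do
    echo "$ip ${name}.dinginx.org ${name}" >> "$HOSTS_FILE"
done << 'EOF'
11.0.1.100 k8s-master01
11.0.1.101 k8s-node01
11.0.1.102 k8s-node02
11.0.1.103 k8s-node03
EOF
cat "$HOSTS_FILE"
```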
1.2 Initialization Steps (run on every node)
1.2.1 Hostname and IP Address Resolution
cat >> /etc/hosts << EOF
11.0.1.100 k8s-master01.dinginx.org k8s-master01 kubeapi.dinginx.org k8sapi.dinginx.org kubeapi
11.0.1.101 k8s-node01.dinginx.org k8s-node01
11.0.1.102 k8s-node02.dinginx.org k8s-node02
11.0.1.103 k8s-node03.dinginx.org k8s-node03
EOF

1.2.2 Disable swap
sed -i '/swap/s/^/# /' /etc/fstab && swapoff -a

# Verify
root@ubuntu2204:~# free -h
               total        used        free      shared  buff/cache   available
Mem: 1.9Gi 320Mi 431Mi 1.0Mi 1.2Gi 1.4Gi
Swap:             0B          0B          0B

1.2.3 Disable the Firewall
apt install ufw
ufw disable
ufw status

1.2.4 Time Synchronization
# Change the time zone
timedatectl set-timezone Asia/Shanghai

# Install the Chrony package
apt install chrony -y
## Rewrite the configuration file
cat << EOF | tee /etc/chrony/chrony.conf
server ntp.aliyun.com iburst
stratumweight 0
driftfile /var/lib/chrony/drift
rtcsync
makestep 10 3
bindcmdaddress 127.0.0.1
bindcmdaddress ::1
keyfile /etc/chrony.keys
commandkey 1
generatecommandkey
logchange 0.5
logdir /var/log/chrony
EOF

systemctl restart chrony.service

1.2.5 Configure Kernel Forwarding and Bridge Filtering
# Add the bridge-filtering and kernel-forwarding configuration
# Load the required kernel modules first (the bridge sysctls only exist once br_netfilter is loaded)
cat << EOF | tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

modprobe overlay && modprobe br_netfilter

cat >> /etc/sysctl.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
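Once the configuration has been applied with `sysctl --system`, the forwarding switch can be sanity-checked. A minimal sketch (the temp file path is illustrative) reading /proc directly, which works even where the sysctl binary is absent:

```shell
# Confirm the kernel's IP forwarding switch is visible; expect 0 or 1,
# and 1 after `sysctl --system` has applied the configuration above.
cat /proc/sys/net/ipv4/ip_forward > /tmp/ip_forward.val
echo "net.ipv4.ip_forward = $(cat /tmp/ip_forward.val)"
```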
## Apply the configuration
sysctl --system

# Check that the modules are loaded
lsmod | grep -E 'overlay|br_netfilter'

1.3 Install the Required Kubernetes Components
1.3.1 Install Docker or containerd as the Container Runtime
Option 1: Docker + cri-dockerd
# Install Docker and its dependencies
apt-get update
apt-get install ca-certificates curl gnupg
install -m 0755 -d /etc/apt/keyrings

## Use the Aliyun mirror
curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://mirrors.aliyun.com/docker-ce/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

apt update && apt install -y docker-ce

# Install cri-dockerd
curl -LO https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.15/cri-dockerd_0.3.15.3-0.ubuntu-jammy_amd64.deb
dpkg -i cri-dockerd_0.3.15.3-0.ubuntu-jammy_amd64.deb
systemctl enable --now cri-docker

Option 2: containerd
# Download the package
wget https://github.com/containerd/containerd/releases/download/v1.7.24/containerd-1.7.24-linux-amd64.tar.gz
tar Cxzvf /usr/local containerd-1.7.24-linux-amd64.tar.gz

# Create the configuration directory
mkdir /etc/containerd/

# Generate the default configuration
containerd config default > /etc/containerd/config.toml

# Adjust the generated config (around line 67): pause:3.9 sandbox image and systemd cgroup driver
sed -i 's/registry.k8s.io\/pause:3.8/registry.k8s.io\/pause:3.9/' /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Start the containerd service
systemctl enable --now containerd.service
[root@ubuntu2204 ~]# ls /var/run/containerd/ -l
total 0
srw-rw---- 1 root root 0 Dec 9 11:05 containerd.sock         # absent when the service is not running
srw-rw---- 1 root root 0 Dec 9 11:05 containerd.sock.ttrpc
drwx--x--x 2 root root 40 Dec 9 11:05 io.containerd.runtime.v1.linux
drwx--x--x 2 root root 40 Dec 9 11:05 io.containerd.runtime.v2.task

[root@ubuntu2204 ~]# containerd --version
containerd containerd.io 1.7.23 57f17b0a6295a39009d861b89e3b3b87b005ca27

## The nerdctl tool can run containers on containerd directly
wget https://github.com/containerd/nerdctl/releases/download/v2.0.2/nerdctl-2.0.2-linux-amd64.tar.gz
tar zxvf nerdctl-2.0.2-linux-amd64.tar.gz -C /usr/local/
cp /usr/local/bin/nerdctl /usr/bin/
# Verify
[root@ubuntu2204 ~]# nerdctl --version
nerdctl version 2.0.2
[root@ubuntu2204 ~]# nerdctl --help
nerdctl is a command line interface for containerd

Config file ($NERDCTL_TOML): /etc/nerdctl/nerdctl.toml

Usage: nerdctl [flags]

Management commands:
......
1.3.2 Install kubeadm, kubelet, and kubectl
# Install kubelet, kubectl, and kubeadm
## With the Tsinghua mirror configured, `apt-cache madison kubeadm` lists the available versions
apt update && apt-get install -y apt-transport-https
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.tuna.tsinghua.edu.cn/kubernetes/core:/stable:/v1.29/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
apt update && apt install -y kubelet kubeadm kubectl

# Pin the versions to prevent unintended upgrades
apt-mark hold kubelet kubeadm kubectl
# To unpin later: apt-mark unhold kubelet kubeadm kubectl

# Adjust the cri-docker service unit
sed -i.bak '/^ExecStart/c ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d' /lib/systemd/system/cri-docker.service

systemctl daemon-reload && systemctl restart cri-docker.service

# From v1.30 on, the kubelet configuration lives in /etc/default/kubelet
cat << EOF | tee /etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock"
EOF

# For v1.29 and earlier
mkdir -pv /etc/sysconfig/ && echo 'KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock"' | tee /etc/sysconfig/kubelet

# Enable kubelet at boot
systemctl enable kubelet

2 Master Node Initialization
2.1 Prepare the Control-Plane Images
kubeadm config images list
kubeadm config images list --image-repository=registry.aliyuncs.com/google_containers
docker image pull registry.aliyuncs.com/google_containers/pause:3.9
docker image tag registry.aliyuncs.com/google_containers/pause:3.9 registry.k8s.io/pause:3.9

kubeadm config images pull \
  --image-repository=registry.aliyuncs.com/google_containers \
  --cri-socket unix:///var/run/cri-dockerd.sock

2.2 Initialize the Master Node
# Initialize the master node
root@k8s-master01:~# kubeadm init \
  --control-plane-endpoint=kubeapi.dinginx.org \
  --kubernetes-version=v1.29.13 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --token-ttl=0 \
  --cri-socket unix:///var/run/cri-dockerd.sock \
  --image-repository=registry.aliyuncs.com/google_containers    # use the Aliyun image mirror
......
# Output like the following means the node initialized successfully
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

# Hint no. 1: the kubeconfig file cluster administrators use to authenticate to the cluster
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join kubeapi.dinginx.org:6443 --token lk5whx.00xug3dwizebomql \
        --discovery-token-ca-cert-hash sha256:03a18aaca2f33be62f131f7cc111af1c541489519d86254f0d053777f54eb6bf \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join kubeapi.dinginx.org:6443 --token lk5whx.00xug3dwizebomql \
        --discovery-token-ca-cert-hash sha256:03a18aaca2f33be62f131f7cc111af1c541489519d86254f0d053777f54eb6bf

# Copy the administrator kubeconfig into the target user's home directory (here: root)
[root@master01 ~]# mkdir .kube
[root@master01 ~]# cp /etc/kubernetes/admin.conf .kube/config

## Alternatively, copy it to a worker node to give that node admin access
[root@k8s-node01 ~]# mkdir -p ~/.kube
[root@k8s-master01 ~]# scp /etc/kubernetes/admin.conf 11.0.1.11:~/.kube/config

# Output like the following confirms the configuration works
root@master01:~# kubectl config view --raw
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJRnM5c0xMaitZV0F3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBeE16QXhNVFF4TVRWYUZ3MHpOVEF4TWpneE1UUTJNVFZhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUM2aVpEa00xbWFjd3lvcVpEdlVXVndReUhBbkQzd2M1dTF4VjRpNEMyVDFQQ1E2cXZuaDZCZUFCOWoKVXdFV2R0dnFpMllBYnEyMnExZ0t3NXMvSml1dGpHZFNiUkdseVYwY3JGVHNqV2lyVzdacWhTbzdMdHV5SXBDRwpnVU0xNEVUanFkMDIwNlZaajRWaU9PQjYwd3VlOEZXLzR4eXlvMzBudURsT0RCLzQyWjVVclhsdDhKU242WURSClhua2d5ZnZqR1BMS3JaMWp6d0kzWDJRblVHaURqUGdaVWduQkJveFp5Q1QvZTR5SlFGY2xHV2NZK2pEQ2NMMlkKN3hZVDl2ckN6SE54eVlpanQ0SWtSRkd5R0FxSXFacWNCaTcwTHdrNWdpQnNwblBLUllDeGxlN0xjNHdjbVpKcgpFbFJJU08wT3lMUXFleWtzc1hVQkJPTGhVMTVMQWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRUF3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSL3BJUTVrSWdjaEdzNE9QR2JWdFNzU2w4TmtqQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQnExU2VFdlRrQQpjNmQ2ZVFUejd1dVc2Y0lhT0pFdy9XcWk5NDF5eit5L1N3YS93dGJ5QVgwc3pxMklPc0JlcTFiL05BQTV3TnR0Cmc1dlV1ajJGa1FlbGRVWnVWYXBFV0VDYjAyMjFZdHY3NS9zTlRMcTJudlpLN0orb3BOZG5qNHFjZXlTKzlmaXgKUEVMa3AyUnU1ZUZveFlhblZwck12UHlyRG5wbktRWk5mT2lxZDcrcXlreWRVNHR2ZFh4Z0NiSnpoWXJzbE1PYQoycTIvWVpIQ1d0UHdaZklhRTR3bnJvVlJUVzJ1MWJKZEh4QzRjMkRhSDljZkhMWllRZ3E4cmk5aUxKNlZNYUNWCkEzd3BVNFhYazVDVTVwdm0waVpkbEkvdFNKSDdUaWFCYXRMZTB0L1cwbjV6R0w1b0FZUFp4TmlKNi9YTktaZ3gKREFEdFBYMjJFeU42Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://kubeapi.dinginx.org:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLVENDQWhHZ0F3SUJBZ0lJQ2RLK21nM2ZYcjB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBeE16QXhNVFF4TVRWYUZ3MHlOakF4TXpBeE1UUTJNVFphTUR3eApIekFkQmdOVkJBb1RGbXQxWW1WaFpHMDZZMngxYzNSbGNpMWhaRzFwYm5NeEdUQVhCZ05WQkFNVEVHdDFZbVZ5CmJtVjBaWE10WVdSdGFXNHdnZ0VpTUEwR0NTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFDdEtEMFUKa3NsWUpBR1o4bFE5UFdsZTc4UU84NVgxbU5vRUo3RURub3pFa05iOXNCbkl1WnJ5NVpYUXpMNndJbkMwVUFaYgpDS2RvL2h6SU5GLy9ySHhmQmhFVkNnN044WExmZ0FXbjMvM09PejlXRnlnOWViamdmNmpUVTBydHRHYkJCdS9xCjBiL2RpNnorV0ZtamhQWTBYZnpCMXFuMThkYjRRRmdoRFlzaTZiTk4ycEtuc3RLNGtYVFI2K05RK1Z4RGo4TTgKRVJLWjVDOGZMRmJuSExnRkxBdXY2YTZEQy9YZ0lHdlBUWGd1UitzWmpmVEQxWkQwdUUrTDZuOTBOcGM2U1FacAo4THR6M1g1LzJnRStrclpjMEtYcTl5VlBodVJIY1dxVHQ1clNRZXhTWit4RCt0Z3pzR3FFNm5rQnhVYlAzY3VpCld6WkQ1cURmWTRTUlpkM1hBZ01CQUFHalZqQlVNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUsKQmdnckJnRUZCUWNEQWpBTUJnTlZIUk1CQWY4RUFqQUFNQjhHQTFVZEl3UVlNQmFBRkgra2hEbVFpQnlFYXpnNAo4WnRXMUt4S1h3MlNNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUJBUUNPM2RFZVpCK25OOVVGSnBWTUU1TVc1dlJyCklJdjlzODR3NWdTT0dUTlVTaVRlSFFUYVByTHppT2luRXE1RzNRMXZKNTh3TFNwdHNlWVo5R2xHK1dpM2Q5RzEKSFJoejcwRW9OSVFqWjEwMWNyMnpkMUFwVkR3aUlMczg5VG5WTzJyOGtFcS9GZ2hYYmtuaXhhZGJ0OERFVUxSbQpUaTlsTFQ5aDMva1pWQnFoVUJFcndjTmxWbkJrL1dIQ0lENjFGZHBVNFdEY0pQZzQxeCszdWkrTS9QVGJKV0c1CmZIOEtCdTdSWS9yMFFxbXVCREw5M1h4enRQOFJpaTgycUdTZ0tobXVnc0dPeEpXV2hSU0M1VzV5elBNcTh2eEYKQ2czSlZaU09tVGVZQkI1TW5RLzdjaDZuWjZvZEh1ZnZzcWEwQXZVeW1vUXo3Y3lKbURBNXo1VHoxMGhhCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBclNnOUZKTEpXQ1FCbWZKVVBUMXBYdS9FRHZPVjlaamFCQ2V4QTU2TXhKRFcvYkFaCnlMbWE4dVdWME15K3NDSnd0RkFHV3dpbmFQNGN5RFJmLzZ4OFh3WVJGUW9PemZGeTM0QUZwOS85empzL1ZoY28KUFhtNDRIK28wMU5LN2JSbXdRYnY2dEcvM1l1cy9saFpvNFQyTkYzOHdkYXA5ZkhXK0VCWUlRMkxJdW16VGRxUwpwN0xTdUpGMDBldmpVUGxjUTQvRFBCRVNtZVF2SHl4VzV4eTRCU3dMcittdWd3djE0Q0JyejAxNExrZnJHWTMwCnc5V1E5TGhQaStwL2REYVhPa2tHYWZDN2M5MStmOW9CUHBLMlhOQ2w2dmNsVDRia1IzRnFrN2VhMGtIc1VtZnMKUS9yWU03QnFoT3A1QWNWR3o5M0xvbHMyUSthZzMyT0VrV1hkMXdJREFRQUJBb0lCQUhSbTBHaThKRTMwSm45dQp2K0pMSGtLTHU2aXhadVdxMHlSbjZqOGNubFNsYVdFd3VLU080UExZRTFaQnpRNXFtSWtlSXFlZnNhcUs2SjVOClZ4dHd3RXJBc1VzTGI5aFJyMzgvZUkzWnJheXRkMjVRTXVUZ3ByK0VFZUc5NUdqWEZSdzlwWnFkVmZXQXA5SnoKWWc3aW12K3BEdmpmYlhIQUdWclpKbVZSelc2eHdHZDNqMUZtUjRFK0FmaEQ4TUJEMUZpYW5QbnBoSmNhUzlEdApHWnVUMUE3Z3g2R21kUlJ6RVZObGgvN01DdGpWUEM0WlhTSGtaL1lIeENnNlZHaXdQb1p5ZFhSaWxPS0VPZXNCCkJSNjZkaHNWRGtUVXJwbllXdTR1ZUVpVDIwcnNuQWk0RnVwemhOV2Q0STdXbFlLYnBMYzdJQXpSSy9iUDY2SmIKSXVXT0lRRUNnWUVBeXBjSjNyMHNjNmluYktXV2lTRDdnNXBRVEtIZ3lrQU0yV2FYQnYwWktGaFlmVlJzbTZWYwpqbCtNS1BOZ0Z2bUp3RVlLbThrWFNpR3VRSitJc0xqNkNIcFM1TkFpTzBtKzQ5UlQ3Z3ZXQzgzUXZ1S0NHb2p3CmFIVUxHSUZNN1k3cStMMkVMNDJybk1tVHNPaE1lQm9mZlJVako0Qkh0bFNDbFZSdDRnT0JTK2NDZ1lFQTJzN0IKZ2tOZnlFOVdBRHZySUE0ZmVjSEFrZDBMYlgxbzNkQnM1cDVsUnd3OEVPQnpVYWJzYUVGRmZtSUVTYk8rTDBVQQpHUmFnU0VCVWhFRTZkQVRqNmE5RVpEanQwelMyUTErTVRhM3RscFZ4WGg2MC9MN1VwNUx3RVlJTmZ1eGRSNWtNCnBhL0ppcXJDUmZrVld1VTVQeGN3OC9OTnlmemxUeXJKV1hsTElKRUNnWUVBam1LYldFWWk4T01QVU9nTXBqSmQKTTRDSWdXT2dwdVZmWW9pZEN4ZGwwQnBQano1LzJ1RGM3Vit3RmJQL3pBWDBVQU9xUHhXVlhjL1FOYkVxay9KZQpJUWxBSzNzeEkvUlB5cmFYaW80ZEVVekZNdlBsdHhxWnhRREdIS0g3M0ZiL1JIV0dheG1xRE5jTUMyRXBKWnhPCmwwMzgyQzFydVhVcUZpUXgycERXRmU4Q2dZQlNiUENZa2FqVFhJK1pKdms3NmhZUlY5dWpGeEhtL1FjMEIvLysKSUUwbXBvVTZGbE5hdnZidmp5Y09wUDNZaGMxdklSOFlWRjJzMmc3OGcxWHh2cVhjc2htaFo1Q3ZyM3U1aFpUawo3bEJDR2FuTE91WVRFQkFyMEQ1L1dlNmJrWTdTR2VXWnhNNjNYZnd4UDlPN21mNG10aVhLd0N6ZG1sY0hxNmFOCi9oTS9BUUtCZ1FDdkhYT3JvU1ROSmVGQlptalBBRmdUQXZUQW8zOW1yY0QxaUNDY1FHYyt2djNIL2NDK1B2bGUKZDI3OWE3L05wVDgxazhFL3FwS0VoN1NiZXdXaC9PcDViTWUvSHFtdXpIVWttRWo4c1Zibk5QSk1UcXB2dUhDRgo3VEx2TzdTcWRsa3N4THI1SlcwZzZtT3ZsK3J0L0RJVktyWk9iY1dwRE9lWnUwejRKZ25Eb0E9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo
3 Worker Node Configuration and Joining
3.1 Join the Worker Nodes
# The join command from the kubeadm init output (the token and hash differ on every initialization, so be sure to save them)
kubeadm join kubeapi.dinginx.org:6443 --token lk5whx.00xug3dwizebomql \
        --discovery-token-ca-cert-hash sha256:03a18aaca2f33be62f131f7cc111af1c541489519d86254f0d053777f54eb6bf \
        --cri-socket unix:///run/cri-dockerd.sock

3.2 Verify the Cluster State
# All nodes Ready, and the Pods in every namespace Running
root@k8s-node01:~# kubectl get nodes -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master01.dinginx.org   Ready    control-plane   2d21h   v1.29.13   11.0.1.10   <none>        Ubuntu 22.04.2 LTS   5.15.0-131-generic   docker://27.5.1
k8s-node01.dinginx.org     Ready    <none>          2d21h   v1.29.13   11.0.1.11   <none>        Ubuntu 22.04.2 LTS   5.15.0-131-generic   docker://27.5.1
k8s-node02.dinginx.org     Ready    <none>          2d21h   v1.29.13   11.0.1.12   <none>        Ubuntu 22.04.2 LTS   5.15.0-131-generic   docker://27.5.1
k8s-node03.dinginx.org     Ready    <none>          2d21h   v1.29.13   11.0.1.13   <none>        Ubuntu 22.04.2 LTS   5.15.0-131-generic   docker://27.5.1

4 Deploy a Test Web Application
# Create the resource manifests
root@k8s-master01:~# kubectl create namespace web01 --dry-run=client -oyaml > web01.yaml
root@k8s-master01:~# echo --- >> web01.yaml
root@k8s-master01:~# kubectl run -n web01 --image nginx web01 --dry-run=client -oyaml >> web01.yaml
root@k8s-master01:~# echo --- >> web01.yaml
root@k8s-master01:~# kubectl create service nodeport web01 --tcp=80:80 --dry-run=client -oyaml >> web01.yaml

# The resulting manifest
root@k8s-master01:~# cat web01.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: web01
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web01
  name: web01
  namespace: web01
spec:
  containers:
  - image: nginx
    name: web01
    imagePullPolicy: IfNotPresent
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web01
  name: web01
  namespace: web01
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web01
  type: NodePort

root@k8s-master01:~# kubectl apply -f web01.yaml
root@k8s-master01:~# kubectl -n web01 get pod,svc
NAME READY STATUS RESTARTS AGE
pod/web01   1/1     Running   0          15m

NAME            TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/web01   NodePort   10.110.14.158   <none>        80:32717/TCP   15m

# Replace the default page content
root@k8s-master01:~# kubectl exec -i -n web01 pods/web01 -- bash -c 'cat > /usr/share/nginx/html/index.html' << EOF
<h1>welcome to dinginx's website!</h1>
EOF

# Check the Service
root@k8s-node01:~# kubectl -n web01 get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web01   NodePort   10.110.14.158   <none>        80:32717/TCP   43h
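A NodePort service is reachable from outside the cluster at any node IP on the randomly assigned port (32717 above). The sketch below parses that port out of saved `kubectl get svc` output; the sample heredoc and the `/tmp` paths are illustrative stand-ins for `kubectl -n web01 get svc web01 > /tmp/svc.out` on a live cluster.

```shell
# Extract the assigned NodePort from saved service output.
cat > /tmp/svc.out << 'EOF'
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
web01   NodePort   10.110.14.158   <none>        80:32717/TCP   43h
EOF
# PORT(S) is field 5, formatted as <port>:<nodeport>/<proto>; take the middle piece.
NODEPORT=$(awk '$1 == "web01" { split($5, p, "[:/]"); print p[2] }' /tmp/svc.out)
echo "$NODEPORT" > /tmp/nodeport
echo "NodePort: $NODEPORT"
# On a live cluster the page would then be fetched with, e.g.:
#   curl http://11.0.1.100:$NODEPORT/
```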