Before the National Day holiday, I finished updating the Cloud Computing CLOUD part-1 content for you all. Now here is the part-2 content, covering adding and using K8S components and building a K8S cluster. The main focus, above all, is writing resource files.
(*^▽^*) Environment preparation:
Host list

Hostname    IP address      Minimum spec
harbor      192.168.1.30    2 CPU, 4 GB RAM
master      192.168.1.50    2 CPU, 4 GB RAM
node-0001   192.168.1.51    2 CPU, 4 GB RAM
node-0002   192.168.1.52    2 CPU, 4 GB RAM
node-0003   192.168.1.53    2 CPU, 4 GB RAM
node-0004   192.168.1.54    2 CPU, 4 GB RAM
node-0005   192.168.1.55    2 CPU, 4 GB RAM

# You don't actually need that many worker nodes; on your own machine, running 3 node VMs is plenty. ^_^

cloud 01
Part 1: Understanding the K8S architecture
First, there is 1 client machine that can access and test every node.
# For convenience, reuse the jump host ecs-proxy from Cloud part 1: it makes testing easy and can also sync software up to the other machines. Pick one VM as the master control node, then build several Node machines. This was covered in stage 2 of the cloud course (the ansible section); go back and review it if you've forgotten.
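The host inventory mentioned here can be sketched as an ansible inventory file. This is only an illustrative example: the group names and the /tmp path are assumptions, not taken from the course material:

```shell
# Hypothetical ansible inventory matching the host list above;
# the group names ([harbor], [master], [nodes]) are assumptions.
cat > /tmp/inventory.ini <<'EOF'
[harbor]
192.168.1.30

[master]
192.168.1.50

[nodes]
192.168.1.[51:55]

[all:vars]
ansible_ssh_user=root
EOF
# Count the groups we just defined.
grep -c '^\[' /tmp/inventory.ini
```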
# Set up the host inventory and the ansible.cfg file, so you can run automation playbooks and test the nodes. For the private image registry, use the harbor registry built at the end of Cloud part 1.
# It makes uploading and pulling images easy, and lets you group images so you know which project each image belongs to and what it is for. Otherwise it's like Windows with every file dumped onto the C: drive regardless of purpose: far too messy.

Part 2: Building the K8S cluster
Install the control node
1. Configure the software repository
[root@ecs-proxy s4]# rsync -av docker/ /var/localrepo/docker/
[root@ecs-proxy s4]# rsync -av kubernetes/packages/ /var/localrepo/k8s/
[root@ecs-proxy s4]# createrepo --update /var/localrepo/

2. System environment configuration
# Disable firewall and swap
[root@master ~]# sed '/swap/d' -i /etc/fstab
[root@master ~]# swapoff -a
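The sed call above deletes every fstab line that mentions swap. A safe way to see what it does is to run it on a throwaway copy (the sample fstab content below is made up for illustration):

```shell
# Build a fake fstab and delete its swap line, exactly as done on the real file.
printf '/dev/sda1 / xfs defaults 0 0\n/dev/sda2 swap swap defaults 0 0\n' > /tmp/fstab.demo
sed '/swap/d' -i /tmp/fstab.demo
cat /tmp/fstab.demo
# prints: /dev/sda1 / xfs defaults 0 0
```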
[root@master ~]# dnf remove -y firewalld-*

3. Install packages
[root@master ~]# vim /etc/hosts
192.168.1.30 harbor
192.168.1.50 master
192.168.1.51 node-0001
192.168.1.52 node-0002
192.168.1.53 node-0003
192.168.1.54 node-0004
192.168.1.55 node-0005
[root@master ~]# dnf install -y kubeadm kubelet kubectl containerd.io ipvsadm ipset iproute-tc
[root@master ~]# containerd config default > /etc/containerd/config.toml
[root@master ~]# vim /etc/containerd/config.toml
61:  sandbox_image = "harbor:443/k8s/pause:3.9"
125: SystemdCgroup = true
Insert at line 154:
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://harbor:443"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor:443"]
      endpoint = ["https://harbor:443"]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor:443".tls]
      insecure_skip_verify = true
[root@master ~]# systemctl enable --now kubelet containerd

4. Configure kernel parameters
# Load kernel modules
[root@master ~]# vim /etc/modules-load.d/containerd.conf
overlay
br_netfilter
xt_conntrack
[root@master ~]# systemctl start systemd-modules-load.service

# Set kernel parameters
[root@master ~]# vim /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.netfilter.nf_conntrack_max = 1000000
[root@master ~]# sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

5. Import the k8s images
Copy this stage's kubernetes/init directory to master: rsync -av kubernetes/init 192.168.1.50:/root/

5.1 Install and deploy docker
[root@master ~]# dnf install -y docker-ce
[root@master ~]# vim /etc/docker/daemon.json
{
    "registry-mirrors": ["https://harbor:443"],
    "insecure-registries": ["harbor:443"]
}
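daemon.json must be strict JSON or docker will refuse to start. A quick sanity check is to parse it with any JSON parser; here python3's json module is used (the /tmp path is only for the demonstration):

```shell
# Write the same registry settings to a temp file and confirm they parse as JSON.
cat > /tmp/daemon.json <<'EOF'
{
    "registry-mirrors": ["https://harbor:443"],
    "insecure-registries": ["harbor:443"]
}
EOF
python3 -c 'import json; d = json.load(open("/tmp/daemon.json")); print(d["insecure-registries"][0])'
# prints: harbor:443
```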
[root@master ~]# systemctl enable --now docker
[root@master ~]# docker info

5.2 Upload the images to the harbor registry
[root@master ~]# docker login harbor:443
Username: <your login user>
Password: <your login password>
Login Succeeded
[root@master ~]# docker load -i init/v1.29.2.tar.xz
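The push loop below builds the new image name with the bash expansion `${i##*/}`, which strips everything up to the last `/` so only the bare image name is kept. A quick illustration with a made-up image name:

```shell
# Hypothetical image name and tag, used only to show what ${i##*/} does.
i="registry.k8s.io/kube-apiserver"
t="v1.29.2"
echo "${i##*/}"                      # prints: kube-apiserver
echo "harbor:443/k8s/${i##*/}:${t}"  # prints: harbor:443/k8s/kube-apiserver:v1.29.2
```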
[root@master ~]# docker images | while read i t _;do
    [[ "${t}" == "TAG" ]] && continue           # skip the header row
    [[ "${i}" =~ ^harbor:443/.+ ]] && continue  # skip images already in harbor
    docker tag  ${i}:${t} harbor:443/k8s/${i##*/}:${t}
    docker push harbor:443/k8s/${i##*/}:${t}
    docker rmi  ${i}:${t} harbor:443/k8s/${i##*/}:${t}
done
# Pushing and pulling images was covered in Cloud part 1; here the pushes are done in a loop, with pattern matching to pick out the relevant k8s images.

6. Set up Tab completion
[root@master ~]# source <(kubeadm completion bash | tee /etc/bash_completion.d/kubeadm)
[root@master ~]# source <(kubectl completion bash | tee /etc/bash_completion.d/kubectl)
# Feed the completion scripts for these components back into the shell environment so commands can be Tab-completed.

7. master installation
# From here on, work on the master node. Whichever machine you pick as master, give it a clear hostname so it is easy to tell apart.

# Test the system environment
[root@master ~]# kubeadm init --config=init/init.yaml --dry-run 2>error.log
[root@master ~]# cat error.log

# Initialize the control-plane node
[root@master ~]# rm -rf error.log /etc/kubernetes/tmp
[root@master ~]# kubeadm init --config=init/init.yaml | tee init/init.log

# Grant admin access
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Verify the installation
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master   NotReady   control-plane   19s   v1.29.2

Install the network plugin
Copy this stage's kubernetes/plugins directory to master: rsync -av kubernetes/plugins 192.168.1.50:/root/
# calico is the network plugin for k8s.

Upload the images
[root@master ~]# cd plugins/calico
[root@master calico]# docker load -i calico.tar.xz
[root@master calico]# docker images | while read i t _;do
    [[ "${t}" == "TAG" ]] && continue
    [[ "${i}" =~ ^harbor:443/.+ ]] && continue
    docker tag  ${i}:${t} harbor:443/plugins/${i##*/}:${t}
    docker push harbor:443/plugins/${i##*/}:${t}
    docker rmi  ${i}:${t} harbor:443/plugins/${i##*/}:${t}
done

Install the calico plugin
[root@master calico]# sed -ri 's,^(\s*image: )(.*/)?(.+),\1harbor:443/plugins/\3,' calico.yaml
4642:  image: docker.io/calico/cni:v3.26.4
4670:  image: docker.io/calico/cni:v3.26.4
4713:  image: docker.io/calico/node:v3.26.4
4739:  image: docker.io/calico/node:v3.26.4
4956:  image: docker.io/calico/kube-controllers:v3.26.4
[root@master calico]# kubectl apply -f calico.yaml
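The sed expression above can be tried on a single sample line to see exactly what rewrite it performs. Everything below is a stand-in, not the real calico.yaml:

```shell
# Apply the same image-rewriting regex to one hypothetical manifest line.
# The greedy (.*/)? group eats up to the last '/', so only the final
# name:tag component is kept and re-prefixed with the harbor project.
line='  image: docker.io/calico/cni:v3.26.4'
echo "$line" | sed -r 's,^(\s*image: )(.*/)?(.+),\1harbor:443/plugins/\3,'
# prints:   image: harbor:443/plugins/cni:v3.26.4
```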
[root@master calico]# kubectl get nodes
NAME     STATUS   ROLES           AGE   VERSION
master   Ready    control-plane   23m   v1.29.2

Install the worker nodes
1. Get the join credentials
# View tokens
[root@master ~]# kubeadm token list
TOKEN                     TTL   EXPIRES
abcdef.0123456789abcdef   23h   2022-04-12T14:04:34Z
# Delete a token
[root@master ~]# kubeadm token delete abcdef.0123456789abcdef
bootstrap token "abcdef" deleted
# Create a token
[root@master ~]# kubeadm token create --ttl=0 --print-join-command
kubeadm join 192.168.1.50:6443 --token fhf6gk.bhhvsofvd672yd41 --discovery-token-ca-cert-hash sha256:ea07de5929dab8701c1bddc347155fe51c3fb6efd2ce8a4177f6dc03d5793467
# Get the hash value [1. it is printed when the token is created  2. it can be computed with openssl]
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt |openssl rsa -pubin -outform der |openssl dgst -sha256 -hex
# Look at the existing token, delete it, and create a new one. Everyone's credentials are different: do not copy these, use the ones your own cluster generates. Your own, and only your own. [○Д´ ○]
# The worker nodes can also be configured automatically (ask an AI if you're unsure), using vars in an ansible playbook.
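The hash is just the sha256 digest of the CA certificate's DER-encoded public key. The pipeline can be rehearsed on a throwaway self-signed certificate, generated here purely for the demo (it is not your cluster's real CA, so the digest will differ):

```shell
# Generate a disposable CA-style key and cert, then compute the discovery
# hash with the same openssl pipeline as above.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
        -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
       | openssl rsa -pubin -outform der 2>/dev/null \
       | openssl dgst -sha256 -hex | grep -Eo '[0-9a-f]{64}')
echo "sha256:${hash}"
```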
The package installation, system environment configuration, and kernel parameters can all be turned into a playbook yourself. What follows is the manual installation; the steps are identical on every node machine.
You can also use your terminal's synchronized-input windows to edit the configuration on all nodes at once.

2. node installation

2. System environment configuration
# Disable firewall and swap
[root@node ~]# sed '/swap/d' -i /etc/fstab
[root@node ~]# swapoff -a
[root@node ~]# dnf remove -y firewalld-*

3. Install packages
[root@node ~]# vim /etc/hosts
192.168.1.30 harbor
192.168.1.50 master
192.168.1.51 node-0001
192.168.1.52 node-0002
192.168.1.53 node-0003
192.168.1.54 node-0004
192.168.1.55 node-0005
[root@node ~]# dnf install -y kubeadm kubelet kubectl containerd.io ipvsadm ipset iproute-tc
[root@node ~]# containerd config default > /etc/containerd/config.toml
[root@node ~]# vim /etc/containerd/config.toml
61:  sandbox_image = "harbor:443/k8s/pause:3.9"
125: SystemdCgroup = true
Insert at line 154:
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://harbor:443"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."harbor:443"]
      endpoint = ["https://harbor:443"]
    [plugins."io.containerd.grpc.v1.cri".registry.configs."harbor:443".tls]
      insecure_skip_verify = true
[root@node ~]# systemctl enable --now kubelet containerd

4. Configure kernel parameters
# Load kernel modules
[root@node ~]# vim /etc/modules-load.d/containerd.conf
overlay
br_netfilter
xt_conntrack
[root@node ~]# systemctl start systemd-modules-load.service

# Set kernel parameters
[root@node ~]# vim /etc/sysctl.d/99-kubernetes-cri.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.netfilter.nf_conntrack_max = 1000000
[root@node ~]# sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

# Once installed and configured, join the cluster using the master's token and CA certificate hash.
[root@node ~]# kubeadm join 192.168.1.50:6443 --token <your token> --discovery-token-ca-cert-hash sha256:<your CA cert hash>

#------------------------ Verify on the master node ---------------------------
[root@master ~]# kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
master      Ready    control-plane   76m   v1.29.2
node-0001   Ready    <none>          61s   v1.29.2
# Check the cluster's working state after each node is configured.

Check cluster status
# Verify node status
[root@master ~]# kubectl get nodes
NAME        STATUS   ROLES           AGE   VERSION
master      Ready    control-plane   99m   v1.29.2
node-0001   Ready    <none>          23m   v1.29.2
node-0002   Ready    <none>          57s   v1.29.2
node-0003   Ready    <none>          57s   v1.29.2
node-0004   Ready    <none>          57s   v1.29.2
node-0005   Ready    <none>          57s   v1.29.2

# Verify container status
[root@master ~]# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-fc945b5f7-p4xnj 1/1 Running 0 77m
calico-node-6s8k2 1/1 Running 0 59s
calico-node-bxwdd 1/1 Running 0 59s
calico-node-d5g6x 1/1 Running 0 77m
calico-node-lfwdh 1/1 Running 0 59s
calico-node-qnhxr 1/1 Running 0 59s
calico-node-sjngw 1/1 Running 0 24m
coredns-844c6bb88b-89lzt 1/1 Running 0 59m
coredns-844c6bb88b-qpbvk 1/1 Running 0 59m
etcd-master 1/1 Running 0 70m
kube-apiserver-master 1/1 Running 0 70m
kube-controller-manager-master 1/1 Running 0 70m
kube-proxy-5xjzw 1/1 Running 0 59s
kube-proxy-9mbh5 1/1 Running 0 59s
kube-proxy-g2pmp 1/1 Running 0 99m
kube-proxy-l7lpk 1/1 Running 0 24m
kube-proxy-m6wfj 1/1 Running 0 59s
kube-proxy-vqtt8 1/1 Running 0 59s
kube-scheduler-master                     1/1     Running   0          70m
# Check whether each node and component is Ready / Running. A Pod is like a lunch box: one Pod can run multiple containers and services.

cloud 02
# Last session covered installing and configuring the K8S components and building the cluster. This session covers how to use the component commands in detail. Abstractly, k8s is like a mech: it is assembled from control components and compute components. Now let's get familiar with the concrete operations. O(∩_∩)O

Part 1: Getting familiar with kubectl management commands
k8s cluster management
Information query commands

Subcommand      Description
help            Show help for commands and subcommands
cluster-info    Show the cluster's configuration information
api-resources   List all resource objects on this server
api-versions    List the versions of all resource objects on this server
config          Manage the authentication information on the current node

###### help
# View help for a command
[root@master ~]# kubectl help version
Print the client and server version information for the current context.

Examples:
  # Print the client and server versions for the current context
  kubectl version
... ...

###### cluster-info
# View cluster status information
[root@master ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.50:6443
CoreDNS is running at https://192.168.1.50:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
... ...

######## api-resources
# View resource object types
[root@master ~]# kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
endpoints ep v1 true Endpoints
events ev v1 true Event
... ...

######## api-versions
# View resource object versions
[root@master ~]# kubectl api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
... ...
###### config
# View the user and certificate used for the current authentication
[root@master ~]# kubectl config get-contexts
CURRENT   NAME                          CLUSTER      AUTHINFO
*         kubernetes-admin@kubernetes   kubernetes   kubernetes-admin

######## View the full configuration with view
[root@master ~]# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.1.50:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED

######## Cluster management authorization ########
[root@harbor ~]# vim /etc/hosts
192.168.1.30 harbor
192.168.1.50 master
192.168.1.51 node-0001
192.168.1.52 node-0002
192.168.1.53 node-0003
192.168.1.54 node-0004
192.168.1.55 node-0005
[root@harbor ~]# dnf install -y kubectl
[root@harbor ~]# mkdir -p $HOME/.kube
[root@harbor ~]# rsync -av master:/etc/kubernetes/admin.conf $HOME/.kube/config
[root@harbor ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@harbor ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 24h v1.29.2
node-0001   Ready    <none>          22h   v1.29.2
node-0002   Ready    <none>          22h   v1.29.2
node-0003   Ready    <none>          22h   v1.29.2
node-0004   Ready    <none>          22h   v1.29.2
node-0005   Ready    <none>          22h   v1.29.2

Part 2: Getting familiar with Pod creation and phase states
Pod management
Creating a Pod
Upload the image to the harbor registry: rsync -av public/myos.tar.xz 192.168.1.50:/root/
# Import the image
[root@master ~]# docker load -i myos.tar.xz
# Push the image to the library project
[root@master ~]# docker images | while read i t _;do
    [[ "${t}" == "TAG" ]] && continue
    [[ "${i}" =~ ^harbor:443/.+ ]] && continue
    docker tag  ${i}:${t} harbor:443/library/${i##*/}:${t}
    docker push harbor:443/library/${i##*/}:${t}
    docker rmi  ${i}:${t} harbor:443/library/${i##*/}:${t}
done

Creating the Pod
# Create a Pod resource object
[root@master ~]# kubectl run myweb --image=myos:httpd
pod/myweb created
# Query the resource object
[root@master ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE
myweb   1/1     Running   0          3s    10.244.1.3   node-0001
# Verify access
[root@master ~]# curl http://10.244.1.3
Welcome to The Apache.

Diagram: how a Pod is created

Pod management commands (1)

Subcommand   Description                            Notes
run          Create a Pod resource object           Created means running; there is no stopped state
get          View a resource object's status        Common option: -o output format
create       Create a resource object               Cannot create a Pod
describe     Query a resource object's attributes
logs         View a container's error output        Common option: -c container name

###### get
# View Pod resource objects
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
myweb   1/1     Running   0          10m
# View only the names of the resource objects
[root@master ~]# kubectl get pods -o name
pod/myweb
# View which node the resource object runs on
[root@master ~]# kubectl get pods -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE
myweb   1/1     Running   0          10m   10.244.1.3   node-0001
# View the full details of a resource object (YAML/JSON format)
[root@master ~]# kubectl get pod myweb -o yaml
apiVersion: v1
kind: Pod
metadata:
  name: myweb
... ...
# View namespaces
[root@master ~]# kubectl get namespaces
NAME STATUS AGE
default Active 39h
kube-node-lease Active 39h
kube-public Active 39h
kube-system       Active   39h
# View Pod information in a namespace
[root@master ~]# kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
etcd-master 1/1 Running 0 39h
kube-apiserver-master 1/1 Running 0 39h
kube-controller-manager-master 1/1 Running 0 39h
kube-scheduler-master 1/1 Running 0 39h
... ...

###### create
# Create a namespace resource object
[root@master ~]# kubectl create namespace work
namespace/work created
# View namespaces
[root@master ~]# kubectl get namespaces
NAME STATUS AGE
default Active 39h
kube-node-lease Active 39h
kube-public Active 39h
kube-system Active 39h
work              Active   11s

###### run
# Create a Pod in the work namespace
[root@master ~]# kubectl -n work run myhttp --image=myos:httpd
pod/myhttp created
# Query the resource object
[root@master ~]# kubectl -n work get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP           NODE
myhttp   1/1     Running   0          3s    10.244.2.2   node-0002
# Verify access
[root@master ~]# curl http://10.244.2.2
Welcome to The Apache.

###### describe
# View a resource object's configuration information
[root@master ~]# kubectl -n work describe pod myhttp
Name: myhttp
Namespace: work
Priority: 0
Service Account: default
Node: node-0002/192.168.1.52
... ...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  7s    default-scheduler  Successfully assigned work/myhttp to node-0002
  Normal  Pulling    6s    kubelet            Pulling image "myos:httpd"
  Normal  Pulled     2s    kubelet            Successfully pulled image "myos:httpd" in 4.495s (4.495s including waiting)
  Normal  Created    2s    kubelet            Created container myhttp
  Normal  Started    2s    kubelet            Started container myhttp

# View the work namespace's configuration information
[root@master ~]# kubectl describe namespaces work
Name:         work
Labels:       kubernetes.io/metadata.name=work
Annotations:  <none>
Status:       Active

No resource quota.
No LimitRange resource.

###### logs
# View container logs
[root@master ~]# kubectl -n work logs myhttp
[root@master ~]#
[root@master ~]# kubectl -n default logs myweb
2022/11/12 18:28:54 [error] 7#0: *2 open() ... ... failed (2: No such file or directory), ... ...

Pod management commands (2)

Subcommand   Description                                            Notes
exec         Run a command inside a container                       Option: -c container name
cp           Copy files or directories between container and host   Option: -c container name
delete       Delete resource objects                                Option: -l label

###### exec
# Run a command inside the container
[root@master ~]# kubectl exec -it myweb -- ls
index.html  info.php
[root@master ~]# kubectl exec -it myweb -- bash
[root@myweb html]# ifconfig eth0
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.244.1.3  netmask 255.255.255.0  broadcast 10.244.2.255
        ether 3a:32:78:59:ed:25  txqueuelen 0  (Ethernet)
... ...

###### cp
# Transfer files or directories to and from the container
[root@master ~]# kubectl cp myweb:/etc/yum.repos.d /root/aaa
tar: Removing leading / from member names
[root@master ~]# tree /root/aaa
/root/aaa
├── local.repo
├── Rocky-AppStream.repo
├── Rocky-BaseOS.repo
└── Rocky-Extras.repo

0 directories, 4 files
[root@master ~]# kubectl -n work cp /etc/passwd myhttp:/root/mima
[root@master ~]# kubectl -n work exec -it myhttp -- ls /root/
mima

###### delete
# Delete a resource object
[root@master ~]# kubectl delete pods myweb
pod "myweb" deleted
# Delete all Pod objects in the work namespace
[root@master ~]# kubectl -n work delete pods --all
pod "myhttp" deleted
# Delete the namespace
[root@master ~]# kubectl delete namespaces work
namespace "work" deleted

Part 3: Getting familiar with resource manifest files
# As you may have noticed, the resource manifest format was briefly covered at the end of Cloud part 1. Abstractly, a resource manifest file is like a meal kit: everything is written down in advance, and each resource control component is one of the ingredients or seasonings the dish is made from.

The resource monitoring component
Configure the authorization token
[root@master ~]# echo 'serverTLSBootstrap: true' >> /var/lib/kubelet/config.yaml
[root@master ~]# systemctl restart kubelet
[root@master ~]# kubectl get certificatesigningrequests
NAME        AGE   SIGNERNAME                      REQUESTOR            REQUESTEDDURATION   CONDITION
csr-2hg42   14s   kubernetes.io/kubelet-serving   system:node:master   <none>              Pending
[root@master ~]# kubectl certificate approve csr-2hg42
certificatesigningrequest.certificates.k8s.io/csr-2hg42 approved
[root@master ~]# kubectl get certificatesigningrequests
NAME        AGE   SIGNERNAME                      REQUESTOR            REQUESTEDDURATION   CONDITION
csr-2hg42   28s   kubernetes.io/kubelet-serving   system:node:master   <none>              Approved,Issued

Install the metrics plugin
# This is the second plugin we use.
# Upload the images to the private registry
[root@master metrics]# docker load -i metrics-server.tar.xz
[root@master metrics]# docker images | while read i t _;do
    [[ "${t}" == "TAG" ]] && continue
    [[ "${i}" =~ ^harbor:443/.+ ]] && continue
    docker tag  ${i}:${t} harbor:443/plugins/${i##*/}:${t}
    docker push harbor:443/plugins/${i##*/}:${t}
    docker rmi  ${i}:${t} harbor:443/plugins/${i##*/}:${t}
done

# Create the service from its resource object file
[root@master metrics]# sed -ri 's,^(\s*image: )(.*/)?(.+),\1harbor:443/plugins/\3,' components.yaml
140:  image: registry.k8s.io/metrics-server/metrics-server:v0.6.4
[root@master metrics]# kubectl apply -f components.yaml
# Verify the plugin's Pod status
[root@master metrics]# kubectl -n kube-system get pods
NAME                             READY   STATUS    RESTARTS   AGE
metrics-server-ddb449849-c6lkc   1/1     Running   0          64s

Certificate signing
# View node resource metrics
[root@master metrics]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master 99m 4% 1005Mi 27%
node-0001   <unknown>    <unknown>   <unknown>       <unknown>
node-0002   <unknown>    <unknown>   <unknown>       <unknown>
node-0003   <unknown>    <unknown>   <unknown>       <unknown>
node-0004   <unknown>    <unknown>   <unknown>       <unknown>
node-0005   <unknown>    <unknown>   <unknown>       <unknown>
#--------------- Configure the certificate on all compute nodes -----------------
[root@node ~]# echo 'serverTLSBootstrap: true' >> /var/lib/kubelet/config.yaml
[root@node ~]# systemctl restart kubelet
#--------------- Sign the certificates on master -------------------
[root@master ~]# kubectl certificate approve $(kubectl get csr -o name)
certificatesigningrequest.certificates.k8s.io/csr-t8799 approved
certificatesigningrequest.certificates.k8s.io/csr-69qhz approved
... ...
[root@master ~]# kubectl get certificatesigningrequests
NAME        AGE   SIGNERNAME                      REQUESTOR   CONDITION
csr-2hg42   14m   kubernetes.io/kubelet-serving   master      Approved,Issued
csr-9gu29   28s   kubernetes.io/kubelet-serving   node-0001   Approved,Issued
... ...

Resource metrics monitoring
# Metrics collection lags a little; wait about 15s, then check
[root@master ~]# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master 83m 4% 1789Mi 50%
node-0001 34m 1% 747Mi 20%
node-0002 30m 1% 894Mi 24%
node-0003 39m 1% 930Mi 25%
node-0004 45m 2% 896Mi 24%
node-0005   40m          2%     1079Mi          29%
# Confused earlier? Now it should make sense: this is similar to the top command, except the plugin shows the resource usage of every node. 罒ω罒

Resource manifest files
# If you can't remember how to write one, copy the format below from memory three times to refresh it.
[root@master ~]# vim myweb.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: myweb
spec:
  containers:
  - name: nginx
    image: myos:nginx
status: {}

Management commands

Subcommand   Description                                 Notes
create       Create the resources defined in a file      Supports both imperative use and manifest files
apply        Create or update the resources in a file    Manifest files only (declarative)
delete       Delete the resources defined in a file      Supports both imperative use and manifest files
replace      Change / replace a resource object          Force a rebuild with --force

###### create
# Create the resource object
[root@master ~]# kubectl create -f myweb.yaml
pod/myweb created
# It cannot update; running it again reports an error
[root@master ~]# kubectl create -f myweb.yaml
Error from server (AlreadyExists): error when creating "myweb.yaml": pods "myweb" already exists

###### delete
# Delete using the resource manifest file
[root@master ~]# kubectl delete -f myweb.yaml
pod "myweb" deleted
[root@master ~]# kubectl get pods
No resources found in default namespace.

###### apply
# Create the resource object
[root@master ~]# kubectl apply -f myweb.yaml
pod/myweb created
# Update the resource object
[root@master ~]# kubectl apply -f myweb.yaml
pod/myweb configured
# Force-rebuild the resource object
[root@master ~]# kubectl replace --force -f myweb.yaml
pod "myweb" deleted
pod/myweb created
# Delete the resource object
[root@master ~]# kubectl delete -f myweb.yaml
pod "myweb" deleted

# Going further
# Equivalent to kubectl apply -f myweb.yaml
[root@master ~]# cat myweb.yaml |kubectl apply -f -

cloud 03
Part 1: Getting familiar with the Pod template and the help manual
# First thing before studying: write out the simplest, most basic resource manifest format from memory 2-3 times.

Resource manifest files
YAML format:
---
kind: Pod
apiVersion: v1
metadata:
  name: myweb
spec:
  containers:
  - name: nginx
    image: myos:nginx
status: {}
# Now that you know the YAML format, here is how to write the same thing in the easy-to-read JSON format.
JSON format:
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "myweb"
  },
  "spec": {
    "containers": [
      {"name": "nginx", "image": "myos:nginx"}
    ]
  },
  "status": {}
}

Templates and help information
# Get a resource object template
[root@master ~]# kubectl create namespace work --dry-run=client -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: work
spec: {}
status: {}

# Query help information
[root@master ~]# kubectl explain Pod.metadata
KIND:    Pod
VERSION: v1

FIELD: metadata <ObjectMeta>
... ...
  namespace     <string>
    Namespace defines the space within which each name must be unique. An empty
    namespace is equivalent to the default namespace, but "default" is the
    canonical representation. Not all objects are required to be scoped to a
    namespace - the value of this field for those objects will be empty.

[root@master ~]# kubectl explain Pod.metadata.namespace
KIND:    Pod
VERSION: v1

FIELD: namespace <string>

DESCRIPTION:
    Namespace defines the space within which each name must be unique. An empty
    namespace is equivalent to the default namespace, but "default" is the
    canonical representation. Not all objects are required to be scoped to a
    namespace - the value of this field for those objects will be empty.
    Must be a DNS_LABEL. Cannot be updated. More info:
    https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces

# status: {} can be left out ┗|O′|┛ Roar~~ wherever it shows up later, you can simply omit it.

Configure the namespace
[root@master ~]# vim myweb.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: myweb
  namespace: default
spec:
  containers:
  - name: nginx
    image: myos:nginx
status: {}

Managing resource objects
# Create several resource manifest files
[root@master ~]# mkdir app
[root@master ~]# sed 's,myweb,web1,' myweb.yaml > app/web1.yaml
[root@master ~]# sed 's,myweb,web2,' myweb.yaml > app/web2.yaml
[root@master ~]# sed 's,myweb,web3,' myweb.yaml > app/web3.yaml
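Each sed call above rewrites every occurrence of the Pod name and writes the result to a new file. The effect can be checked on a stand-in manifest (the /tmp files below exist only for this demonstration):

```shell
# Create a minimal stand-in manifest, then rename it the same way.
printf 'kind: Pod\nmetadata:\n  name: myweb\n' > /tmp/myweb.demo.yaml
sed 's,myweb,web1,' /tmp/myweb.demo.yaml > /tmp/web1.demo.yaml
grep 'name:' /tmp/web1.demo.yaml
# prints:   name: web1
```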
[root@master ~]# tree app/
app/
├── web1.yaml
├── web2.yaml
└── web3.yaml

# Create the applications
[root@master ~]# kubectl apply -f app/web1.yaml -f app/web2.yaml
pod/web1 created
pod/web2 created
# Apply every resource manifest file in a directory
[root@master ~]# kubectl apply -f app/
pod/web1 unchanged
pod/web2 unchanged
pod/web3 created
# Delete all the resource objects defined in a directory
[root@master ~]# kubectl delete -f app/
pod "web1" deleted
pod "web2" deleted
pod "web3" deleted
# Merge manifest files and manage them together
[root@master ~]# cat app/* > app.yaml
[root@master ~]# kubectl apply -f app.yaml
pod/web1 created
pod/web2 created
pod/web3 created
# Like the Calabash Brothers: pull up the root of the vine and all the gourds hanging on it come away with it.

Part 2: Multi-container Pods and embedded scripts
Multi-container Pods
[root@master ~]# vim mynginx.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: mynginx
  namespace: default
spec:
  containers:
  - name: nginx
    image: myos:nginx
  - name: php
    image: myos:php-fpm
[root@master ~]# kubectl apply -f mynginx.yaml
pod/mynginx created
[root@master ~]# kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
mynginx   2/2     Running   0          3s
# Like a double-yolk egg.

Managing multi-container Pods
Commands affected by multiple containers: [logs, exec, cp]
# View logs
[root@master ~]# kubectl logs mynginx -c nginx
[root@master ~]#
[root@master ~]# kubectl logs mynginx -c php
[06-Mar-2024 12:56:18] NOTICE: [pool www] 'user' directive is ignored when FPM is not running as root
[06-Mar-2024 12:56:18] NOTICE: [pool www] 'group' directive is ignored when FPM is not running as root
# Run commands
[root@master ~]# kubectl exec -it mynginx -c nginx -- pstree -p
nginx(1)-+-nginx(7)
         `-nginx(8)
[root@master ~]# kubectl exec -it mynginx -c php -- pstree -p
php-fpm(1)
# Copy files
[root@master ~]# kubectl cp mynginx:/etc/php-fpm.conf /root/php.conf -c nginx
tar: Removing leading / from member names
tar: /etc/php-fpm.conf: Cannot stat: No such file or directory
tar: Exiting with failure status due to previous errors
[root@master ~]# kubectl cp mynginx:/etc/php-fpm.conf /root/php.conf -c php
tar: Removing leading / from member names

Case 3: Troubleshooting. Find the error and understand the format.
[root@master ~]# vim web2.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: web2
  namespace: default
spec:
  containers:
  - name: nginx
    image: myos:nginx
  - name: apache
    image: myos:httpd
status: {}
[root@master ~]# kubectl apply -f web2.yaml
pod/web2 created
[root@master ~]# kubectl get pods web2
NAME   READY   STATUS   RESTARTS     AGE
web2   1/2     Error    1 (4s ago)   8s

Custom tasks
[root@master ~]# vim mycmd.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: mycmd
spec:
  containers:
  - name: linux
    image: myos:httpd
    command: ["sleep"]    # set a custom command
    args:                 # set the command's arguments
    - "30"
[root@master ~]# kubectl apply -f mycmd.yaml
pod/mycmd created
[root@master ~]# kubectl get pods -w
NAME    READY   STATUS      RESTARTS     AGE
mycmd   1/1     Running     0            4s
mycmd   0/1     Completed   0            31s
mycmd   1/1     Running     1 (2s ago)   32s

Container protection policy
[root@master ~]# vim mycmd.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: mycmd
spec:
  restartPolicy: OnFailure    # set the restart policy
  containers:
  - name: linux
    image: myos:httpd
    command: ["sleep"]
    args:
    - "30"
[root@master ~]# kubectl replace --force -f mycmd.yaml
pod "mycmd" deleted
pod/mycmd replaced
[root@master ~]# kubectl get pods -w
NAME    READY   STATUS      RESTARTS   AGE
mycmd   1/1     Running     0          4s
mycmd   0/1     Completed   0          31s

Grace period policy
[root@master ~]# vim mycmd.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: mycmd
spec:
  terminationGracePeriodSeconds: 0    # set the grace period
  restartPolicy: OnFailure
  containers:
  - name: linux
    image: myos:httpd
    command: ["sleep"]
    args:
    - "30"
[root@master ~]# kubectl delete pods mycmd
pod "mycmd" deleted
[root@master ~]# kubectl apply -f mycmd.yaml
pod/mycmd created
[root@master ~]# kubectl delete pods mycmd
pod "mycmd" deleted

Pod task scripts
[root@master ~]# vim mycmd.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: mycmd
spec:
  terminationGracePeriodSeconds: 0
  restartPolicy: OnFailure
  containers:
  - name: linux
    image: myos:8.5
    command: ["sh"]
    args:
    - -c
    - |
      for i in {0..9};do
        echo hello world.
        sleep 3.3
      done
      exit 0
[root@master ~]# kubectl replace --force -f mycmd.yaml
pod "mycmd" deleted
pod/mycmd replaced
[root@master ~]# kubectl logs mycmd
hello world.
hello world.
hello world.

Case 4: A custom Pod script
########## Answer ##########
---
kind: Pod
apiVersion: v1
metadata:
  name: mymem
spec:
  restartPolicy: OnFailure
  containers:
  - name: linux
    image: myos:8.5
    command: ["sh"]
    args:
    - -c
    - |
      while sleep 5;do
        use=$(free -m |awk '$1=="Mem:"{print $3}')
        if (( ${use} < 1000 ));then
          echo -e "\x1b[32mINFO:\x1b[39m running normally"
        else
          echo -e "\x1b[31mWARN:\x1b[39m high memory usage"
        fi
      done

Maximum lifetime
[root@master ~]# vim mycmd.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: mycmd
spec:
  terminationGracePeriodSeconds: 0
  activeDeadlineSeconds: 60    # the maximum time the Pod may run
  restartPolicy: OnFailure
  containers:
  - name: linux
    image: myos:8.5
    command: ["sh"]
    args:
    - -c
    - |
      for i in {0..9};do
        echo hello world.
        sleep 33
      done
      exit 0
[root@master ~]# kubectl replace --force -f mycmd.yaml
pod "mycmd" deleted
pod/mycmd replaced
[root@master ~]# kubectl get pods -w
mycmd 1/1 Running 0 1s
mycmd 1/1 Running 0 60s
mycmd   0/1     Error     0          62s
# Think of it as a time bomb: when the time is up, it is destroyed.

Part 3: Pod scheduling policies
Scheduling by node name
[root@master ~]# vim myhttp.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: myhttp
spec:
  nodeName: node-0001    # schedule onto the node with this name
  containers:
  - name: apache
    image: myos:httpd
[root@master ~]# kubectl apply -f myhttp.yaml
pod/myhttp created
[root@master ~]# kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP           NODE
myhttp   1/1     Running   0          3s    10.244.1.6   node-0001

Label management
# View labels
[root@master ~]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
myhttp   1/1     Running   0          2m34s   <none>
# Add a label
[root@master ~]# kubectl label pod myhttp app=apache
pod/myhttp labeled
[root@master ~]# kubectl get pods --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
myhttp   1/1     Running   0          14m   app=apache
# Remove a label
[root@master ~]# kubectl label pod myhttp app-
pod/myhttp unlabeled
[root@master ~]# kubectl get pods --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
myhttp   1/1     Running   0          14m   <none>
# Set labels in the resource manifest file
[root@master ~]# vim myhttp.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: myhttp
  labels:
    app: apache
spec:
  containers:
  - name: apache
    image: myos:httpd
[root@master ~]# kubectl replace --force -f myhttp.yaml
pod "myhttp" deleted
pod/myhttp replaced
[root@master ~]# kubectl get pods --show-labels
NAME     READY   STATUS    RESTARTS   AGE   LABELS
myhttp   1/1     Running   0          7s    app=apache
# Filter resource objects by label
[root@master ~]# kubectl get pods -l app=apache
NAME     READY   STATUS    RESTARTS   AGE
myhttp   1/1     Running   0          6m44s
[root@master ~]# kubectl get nodes -l kubernetes.io/hostname=master
NAME     STATUS   ROLES           AGE    VERSION
master   Ready    control-plane   5d6h   v1.29.2
[root@master ~]# kubectl get namespaces -l kubernetes.io/metadata.name=default
NAME      STATUS   AGE
default   Active   5d6h

Scheduling by label
# View the labels on the node objects
[root@master ~]# kubectl get nodes --show-labels
NAME STATUS ROLES VERSION LABELS
master      Ready    control-plane   v1.29.2   kubernetes.io/hostname=master
node-0001   Ready    <none>          v1.29.2   kubernetes.io/hostname=node-0001
node-0002   Ready    <none>          v1.29.2   kubernetes.io/hostname=node-0002
node-0003   Ready    <none>          v1.29.2   kubernetes.io/hostname=node-0003
node-0004   Ready    <none>          v1.29.2   kubernetes.io/hostname=node-0004
node-0005   Ready    <none>          v1.29.2   kubernetes.io/hostname=node-0005
# Use a node label to schedule the Pod
[root@master ~]# vim myhttp.yaml
---
kind: Pod
apiVersion: v1
metadata:
  name: myhttp
  labels:
    app: apache
spec:
  nodeSelector:
    kubernetes.io/hostname: node-0002
  containers:
  - name: apache
    image: myos:httpd
[root@master ~]# kubectl replace --force -f myhttp.yaml
pod "myhttp" deleted
pod/myhttp replaced
[root@master ~]# kubectl get pods -o wide
NAME     READY   STATUS    RESTARTS   AGE   IP            NODE
myhttp   1/1     Running   0          1s    10.244.2.11   node-0002

That wraps up Cloud part 2, sessions 01-03. There are a lot of management commands involved; type them and practice them over and over so they become muscle memory and turn into your own knowledge. (*^▽^*)