Building a high-performance, integrated web server platform on Kubernetes

Contents

Project description
Project plan diagram
Project environment: k8s, docker, CentOS 7.9, nginx, Prometheus, Grafana, Flask, Ansible, Jenkins, etc.
1. Plan and design the overall cluster architecture (a single-master k8s cluster: one master, two workers) and deploy the dashboard to watch cluster resources
2. Deploy Ansible to automate routine operations, and deploy a firewall server and a bastion host to harden the whole cluster
3. Deploy the bastion host and the firewall
4. Deploy an NFS server to provide storage for the whole web cluster, so that all web pods can access it through PV/PVC volume mounts
5. Build a simple image with Go, start nginx, and use HPA to scale horizontally when CPU usage reaches 60% (minimum 10, maximum 40 pods)
6. Build the CI/CD environment: install GitLab, Jenkins and Harbor to handle code release, image builds, data backup and other pipeline work
7. Deploy Prometheus + Grafana to monitor CPU, memory, network bandwidth, disk I/O and other metrics of every server, including the k8s cluster nodes
8. Use Ingress to load-balance the web services by domain name
9. Use liveness, readiness and startup probes (httpGet and exec) to monitor the web pods and restart them as soon as a problem appears, improving pod reliability
10. Stress-test the web services in the k8s cluster with ab

Project description:
Simulate an enterprise k8s test environment: deploy web, MySQL, NFS, Harbor, Prometheus, GitLab, Jenkins and other applications to build a highly available, high-performance web system, monitor the whole k8s cluster, and run a complete CI/CD pipeline.

Project plan diagram:

Project environment:
k8s, docker, CentOS 7.9, nginx, Prometheus, Grafana, Flask, Ansible, Jenkins, etc.

Steps

1. Plan and design the overall cluster architecture (a single-master k8s cluster: one master, two workers) and deploy the dashboard to watch cluster resources

IP address plan:

master + Jenkins                 192.168.0.20
worker1                          192.168.0.21
worker2                          192.168.0.22
ansible                          192.168.0.30
firewall                         192.168.0.31
bastion host (JumpServer proxy)  192.168.0.32
prometheus                       192.168.0.33
harbor                           192.168.0.34
gitlab                           192.168.0.35
NFS server                       192.168.0.36

# Change the hostname to master
hostnamectl set-hostname master
su    ## switch user so the new hostname takes effect

Disable selinux and firewalld

# Stop the firewall and disable selinux
[root@ansible ~]# systemctl stop firewalld
[root@ansible ~]# systemctl disable firewalld
[root@ansible ~]# getenforce
Disabled

[root@ansible ~]# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled

## do the same on all the other machines

# IP address configuration
[root@ansible ~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.0.30
GATEWAY=192.168.0.2
DNS1=8.8.8.8
DNS2=114.114.114.114
## assign the other machines their planned IP addresses accordingly

2. Deploy Ansible to automate routine operations, and deploy a firewall server and a bastion host to harden the whole cluster

# Set up a passwordless SSH channel between the ansible host and the kubernetes cluster
# Just press Enter at every prompt to accept the defaults
[root@ansible ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:BT7myvQ1r1QoEJgurdR4MZxdCulsFbyC3S4j/08xT5E root@ansible
The key's randomart image is:
+---[RSA 2048]----+
|    ..Booo       |
|     O. . .      |
|    X o. E       |
|   o o o         |
|  . o. S .       |
|   o oo.o B      |
|  o oo o o .     |
|   . . . .       |
|    .... .       |
+----[SHA256]-----+

## Copy ansible's id_rsa.pub to the machines in the cluster, starting with the master
[root@ansible ~]# ssh-copy-id master
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'master (192.168.0.20)' can't be established.
ECDSA key fingerprint is SHA256:xactOuiFsm9merQVjdeiV4iZwI4rXUnviFYTXL2h8fc.
ECDSA key fingerprint is MD5:69:58:6b:ab:c4:8c:27:e2:b2:7c:31:bb:63:20:81:61.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@master's password:
Permission denied, please try again.
root@master's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'master'"
and check to make sure that only the key(s) you wanted were added.

[root@ansible .ssh]# ls
id_rsa  id_rsa.pub  known_hosts

# The name resolution configured earlier
[root@ansible .ssh]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.20 master
192.168.0.21 worker1
192.168.0.22 worker2
192.168.0.30 ansible
## the ansible host's /etc/hosts has more entries than the others, since it manages more nodes

## Test the logins
[root@ansible ~]# ssh worker1
Last login: Wed Apr  3 11:11:49 2024 from ansible
[root@worker1 ~]#
# likewise:
[root@ansible ~]# ssh worker2
[root@ansible ~]# ssh master

# Install ansible
[root@ansible .ssh]# yum install epel-release -y
[root@ansible .ssh]# yum install ansible -y

# Write the host inventory
[master]
192.168.0.20
[workers]
192.168.0.21
192.168.0.22
[nfs]
192.168.0.36
[gitlab]
192.168.0.35
[harbor]
192.168.0.34
[prometheus]
192.168.0.33

3. Deploy the bastion host and the firewall

Deploy the bastion host: JumpServer installs quickly in two steps.
Prepare a 64-bit Linux host with at least 2 cores and 4 GB RAM and internet access, then run the following one-liner as root to install JumpServer:

curl -sSL https://resource.fit2cloud.com/jumpserver/jumpserver/releases/latest/download/quick_start.sh | bash

## This message (translated) means the installation finished:

Installation complete.
1. Start it with the following commands, then visit the web UI:
cd /opt/jumpserver-installer-v3.10.7
./jmsctl.sh start

2. Other management commands:
./jmsctl.sh stop
./jmsctl.sh restart
./jmsctl.sh backup
./jmsctl.sh upgrade
For more commands, see ./jmsctl.sh --help

3. Web access:
http://192.168.0.32:80
Default user: admin    Default password: admin

4. SSH/SFTP access:
ssh -p2222 admin@192.168.0.32
sftp -P2222 admin@192.168.0.32

## Seeing this output means the initial JumpServer deployment is done.
## Deploy the firewall

# Firewall NIC layout: the WAN port is ens36, the LAN port is ens33
[root@firewalld ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e7:7d:f3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.31/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee7:7df3/64 scope link
       valid_lft forever preferred_lft forever
3: ens36: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:e7:7d:fd brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.5/24 brd 192.168.1.255 scope global noprefixroute dynamic ens36
       valid_lft 5059sec preferred_lft 5059sec
    inet6 fe80::347c:1701:c765:777b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

# The internal servers point their default gateway at the firewall's LAN address,
# so the firewall acts as the LAN gateway
[root@nfs ~]# cat /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=none
NAME=ens33
DEVICE=ens33
ONBOOT=yes
GATEWAY=192.168.0.31
IPADDR=192.168.0.36
DNS1=8.8.8.8

# Check the routing table
[root@nfs ~]# ip route
default via 192.168.1.5 dev ens33 proto static metric 100
192.168.0.0/24 dev ens33 proto kernel scope link src 192.168.0.36 metric 100
192.168.1.5 dev ens33 proto static scope link metric 100

# Write a script to implement the iptables rules (see the sketch below)
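The script itself does not appear in the original text, so here is a minimal sketch of what such a firewall script could look like, assuming the job is SNAT for the 192.168.0.0/24 LAN going out of the WAN port. The interface names and subnets come from the ip a output above; everything else is an assumption.

#!/bin/bash
# snat.sh: minimal firewall sketch for the WAN=ens36 / LAN=ens33 layout above
# flush any old rules
iptables -F
iptables -t nat -F
# enable IP forwarding so the box can route between LAN and WAN
echo 1 > /proc/sys/net/ipv4/ip_forward
# SNAT: masquerade LAN traffic (192.168.0.0/24) leaving through the WAN port ens36
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o ens36 -j MASQUERADE
# let replies back in, and let the LAN out
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 192.168.0.0/24 -j ACCEPT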
4. Deploy an NFS server to provide storage for the whole web cluster, so that all web pods can access it through PV/PVC volume mounts

## Install the NFS packages on the NFS server and on every node of the k8s cluster
[root@nfs ~]# yum install nfs-utils -y
[root@worker1 ~]# yum install nfs-utils -y
[root@worker2 ~]# yum install nfs-utils -y
[root@master ~]# yum install nfs-utils -y

# Configure the shared directory
[root@nfs ~]# vim /etc/exports
[root@nfs ~]# cat /etc/exports
/web/data 192.168.0.0/24(rw,no_root_squash,sync)    ## no_root_squash: remote root is treated as root and can read and write

# Export the shared directory
[root@nfs data]# exportfs -rv
exporting 192.168.0.0/24:/web/data

# Create the shared directory and a test page
[root@nfs /]# cd web/
[root@nfs web]# ls
data
[root@nfs web]# cd data
[root@nfs data]# ls
index.html
[root@nfs data]# cat index.html    ## the web page served from the share
welcome to sanchuang !!! \n
welcome to sanchuang !!!
0000000000000000000000
welcome to sanchuang !!!
welcome to sanchuang !!!
welcome to sanchuang !!!
666666666666666666 !!!
777777777777777777 !!!

## Restart the service
[root@nfs data]# service nfs restart

# Enable NFS at boot
[root@nfs web]# systemctl restart nfs
[root@nfs web]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.

# Mount from the k8s cluster: test on any node that the exported directory can be mounted
[root@worker1 ~]# mkdir /worker1_nfs
[root@worker1 ~]# mount 192.168.0.36:/web /worker1_nfs
[root@worker1 ~]# df -Th|grep nfs
192.168.0.36:/web nfs4  50G  3.8G  47G  8% /worker1_nfs
## master
192.168.0.36:/web nfs4  54G  4.1G  50G  8% /master_nfs
## worker2
[root@worker2 ~]# df -Th|grep nfs
192.168.0.36:/web nfs4  50G  3.8G  47G  8% /worker2_nfs

## Create a pv-pvc directory to hold the PV/PVC manifests for the whole system
[root@master ~]# cd /pv-pvc/
[root@master pv-pvc]# ls
nfs-pvc.yaml  nfs-pv.yaml
[root@master pv-pvc]# kubectl apply -f nfs-pv.yaml
persistentvolume/pv-web created
[root@master pv-pvc]# kubectl apply -f nfs-pvc.yaml
persistentvolumeclaim/pvc-web created
[root@master pv-pvc]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-web
  labels:
    type: pv-web
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs      # storage class name the PV belongs to
  nfs:
    path: /web               # directory exported by NFS
    server: 192.168.0.36     # IP address of the NFS server
    readOnly: false          # access mode
[root@master pv-pvc]# cat nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-web
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs      # bind to an nfs-class PV

# Result
[root@master pv-pvc]# kubectl get pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-web   Bound    pv-web   10Gi       RWX            nfs            2m44s

## Create pods that use the PVC
[root@master pv-pvc]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: pvc-web
      containers:
        - name: sc-pv-container-nfs
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: http-server
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: sc-pv-storage-nfs

# Start the pods
[root@master pv-pvc]# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@master pv-pvc]# kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP               NODE      NOMINATED NODE   READINESS GATES
nginx-deployment-d4c8d4d89-2xh6w   1/1     Running   0          12s   10.224.235.134   worker1   <none>           <none>
nginx-deployment-d4c8d4d89-c64c4   1/1     Running   0          12s   10.224.189.71    worker2   <none>           <none>
nginx-deployment-d4c8d4d89-fhvfd   1/1     Running   0          12s   10.224.189.72    worker2   <none>           <none>

## Test a pod: it serves the page from the NFS share, so the mount works
[root@master pv-pvc]# curl 10.224.235.134
welcome to sanchuang !!! \n
welcome to sanchuang !!!
0000000000000000000000
welcome to sanchuang !!!
welcome to sanchuang !!!
welcome to sanchuang !!!
666666666666666666 !!!
777777777777777777 !!!
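Since all three replicas mount the same NFS export, the shared storage can also be verified end to end by writing through one pod and reading through another. The pod names below are the ones from the kubectl get pod output above; the marker text is arbitrary.

# append a marker through the first pod's mounted volume
kubectl exec nginx-deployment-d4c8d4d89-2xh6w -- \
  sh -c 'echo "written-from-pod-1" >> /usr/share/nginx/html/index.html'
# read it back through a different replica on another node
kubectl exec nginx-deployment-d4c8d4d89-c64c4 -- \
  tail -n 1 /usr/share/nginx/html/index.html
# and confirm it landed on the NFS server itself (run on 192.168.0.36):
# tail -n 1 /web/index.html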
5. Build a simple image with Go, start nginx, and use HPA to scale horizontally when CPU usage reaches 60% (minimum 10, maximum 40 pods)

# Build a simple image with Go, push it to the local harbor registry, and let the other nodes pull it to run the web service
[root@harbor harbor]# mkdir go
[root@harbor harbor]# cd go
[root@harbor go]# pwd
/harbor/go
[root@harbor go]# ls
apiserver.tar.gz
[root@harbor go]#

Install the Go environment:
[root@harbor yum.repos.d]# yum install epel-release -y
[root@harbor yum.repos.d]# yum install golang -y

[root@harbor go]# vim server.go
package main

// server.go is the main program

import (
    "net/http"

    "github.com/gin-gonic/gin" // gin: a web framework for Go
)

// entry point
func main() {
    // create a web server
    r := gin.Default()
    // return {"message": "hello,sanchuanger 2024 nice"} on GET /
    r.GET("/", func(c *gin.Context) {
        // 200 and the response body
        c.JSON(http.StatusOK, gin.H{
            "message": "hello,sanchuanger 2024 nice",
        })
    })
    // run the web server
    r.Run()
}

[root@harbor go]# cat Dockerfile
FROM centos:7
WORKDIR /go
COPY . /go
RUN ls /go && pwd
ENTRYPOINT ["/go/scweb"]

# Upload apiserver.tar.gz (an important k8s component image) alongside the source
[root@harbor go]# ls
apiserver.tar.gz  server.go
[root@harbor go]# vim server.go
[root@harbor go]# go env -w GOPROXY=https://goproxy.cn,direct
[root@harbor go]# go mod init web
go: creating new go.mod: module web
go: to add module requirements and sums:
        go mod tidy
[root@harbor go]# go mod tidy
go: finding module for package github.com/gin-gonic/gin
go: downloading github.com/gin-gonic/gin v1.9.1
go: found github.com/gin-gonic/gin in github.com/gin-gonic/gin v1.9.1
go: downloading github.com/gin-contrib/sse v0.1.0
go: downloading github.com/mattn/go-isatty v0.0.19
go: downloading golang.org/x/net v0.10.0
go: downloading github.com/stretchr/testify v1.8.3
go: downloading google.golang.org/protobuf v1.30.0
go: downloading github.com/go-playground/validator/v10 v10.14.0
go: downloading github.com/pelletier/go-toml/v2 v2.0.8
go: downloading github.com/ugorji/go/codec v1.2.11
go: downloading gopkg.in/yaml.v3 v3.0.1
go: downloading github.com/bytedance/sonic v1.9.1
go: downloading github.com/goccy/go-json v0.10.2
go: downloading github.com/json-iterator/go v1.1.12
go: downloading golang.org/x/sys v0.8.0
go: downloading github.com/davecgh/go-spew v1.1.1
go: downloading github.com/pmezard/go-difflib v1.0.0
go: downloading github.com/gabriel-vasile/mimetype v1.4.2
go: downloading github.com/go-playground/universal-translator v0.18.1
go: downloading github.com/leodido/go-urn v1.2.4
go: downloading golang.org/x/crypto v0.9.0
go: downloading golang.org/x/text v0.9.0
go: downloading github.com/go-playground/locales v0.14.1
go: downloading github.com/modern-go/reflect2 v1.0.2
go: downloading github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd
go: downloading github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311
go: downloading golang.org/x/arch v0.3.0
go: downloading github.com/twitchyliquid64/golang-asm v0.15.1
go: downloading github.com/klauspost/cpuid/v2 v2.2.4
go: downloading github.com/go-playground/assert/v2 v2.2.0
go: downloading github.com/google/go-cmp v0.5.5
go: downloading gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405
go: downloading golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543
[root@harbor go]# go run server.go
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /    --> main.main.func1 (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] [WARNING] Environment variable PORT is undefined. Using port :8080 by default
[GIN-debug] Listening and serving HTTP on :8080

# go run listens on 8080 by default; this step only checks that server.go runs correctly
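With the server still running, a quick functional check from another shell confirms the handler works; the expected JSON follows directly from the handler registered in server.go above.

# query the gin server started above
curl http://127.0.0.1:8080/
# expected response body:
# {"message":"hello,sanchuanger 2024 nice"}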
# Compile server.go into a standalone binary
[root@harbor go]# go build -o k8s-web .
[root@harbor go]# ls
apiserver.tar.gz  go.mod  go.sum  k8s-web  server.go

## Run the binary to confirm the service starts
[root@harbor go]# ./k8s-web
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
 - using env:   export GIN_MODE=release
 - using code:  gin.SetMode(gin.ReleaseMode)

[GIN-debug] GET    /    --> main.main.func1 (3 handlers)
[GIN-debug] [WARNING] You trusted all proxies, this is NOT safe. We recommend you to set a value.
Please check https://pkg.go.dev/github.com/gin-gonic/gin#readme-don-t-trust-all-proxies for details.
[GIN-debug] [WARNING] Environment variable PORT is undefined. Using port :8080 by default
[GIN-debug] Listening and serving HTTP on :8080
[GIN] 2024/04/04 - 12:38:39 | 200 |     120.148µs |     192.168.0.1 | GET      "/"

# Next: build the image, tag it, log in to the harbor registry, push the image, and pull it on the other nodes
[root@harbor go]# cat Dockerfile
FROM centos:7
WORKDIR /harbor/go
COPY . /harbor/go
RUN ls /harbor/go && pwd
ENTRYPOINT ["/harbor/go/k8s-web"]    # the binary is copied into /harbor/go, so use the full path

[root@harbor go]# docker pull centos:7
7: Pulling from library/centos
2d473b07cdd5: Pull complete
Digest: sha256:9d4bcbbb213dfd745b58be38b13b996ebb5ac315fe75711bd618426a630e0987
Status: Downloaded newer image for centos:7
docker.io/library/centos:7
[root@harbor go]# vim Dockerfile
[root@harbor go]# docker build -t scmyweb:1.1 .
[+] Building 2.5s (9/9) FINISHED                                docker:default
 => [internal] load build definition from Dockerfile                      0.0s
 => => transferring dockerfile: 147B                                      0.0s
 => [internal] load metadata for docker.io/library/centos:7               0.0s
 => [internal] load .dockerignore                                         0.0s
 => => transferring context: 2B                                           0.0s
 => [1/4] FROM docker.io/library/centos:7                                 0.0s
 => [internal] load build context                                         0.1s
 => => transferring context: 295B                                         0.0s
 => [2/4] WORKDIR /harbor/go                                              0.4s
 => [3/4] COPY . /harbor/go                                               0.4s
 => [4/4] RUN ls /harbor/go && pwd                                        1.4s
 => exporting to image                                                    0.1s
 => => exporting layers                                                   0.1s
 => => writing image sha256:fed4a30515b10e9f15c6dd7ba092b553658d3c7a33466bf38a20762bde68  0.0s
 => => naming to docker.io/library/scmyweb:1.1                            0.0s
[root@harbor go]# docker tag scmyweb:1.1 192.168.0.34:5001/k8s-web/web:v1
[root@harbor go]# docker image ls | grep web
192.168.0.34:5001/k8s-web/web   v1   fed4a30515b1   3 minutes ago   221MB

## Push the image to the harbor registry, then pull it on worker1 and worker2
[root@harbor go]# docker push 192.168.0.34:5001/k8s-web/web:v1
[root@worker1 ~]# docker pull 192.168.0.34:5001/k8s-web/web:v1
[root@worker2 ~]# docker pull 192.168.0.34:5001/k8s-web/web:v1
# check
[root@worker2 ~]# docker images|grep web
192.168.0.34:5001/k8s-web/web   v1   fed4a

# Horizontal scaling
# HorizontalPodAutoscaler (HPA) automatically updates a workload resource (such as a
# Deployment) to scale it to match demand. See:
# https://kubernetes.io/zh-cn/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/

# 1. Install metrics-server
# download the components.yaml manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# replace the image and add two arguments:
        image: registry.aliyuncs.com/google_containers/metrics-server:v0.6.0
        imagePullPolicy: IfNotPresent
        args:
        # add the following two lines
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalDNS,InternalIP,ExternalDNS,ExternalIP,Hostname

[root@master metrics]# docker load -i metrics-server-v0.6.3.tar
d0157aa0c95a: Loading layer  327.7kB/327.7kB
6fbdf253bbc2: Loading layer  51.2kB/51.2kB
1b19a5d8d2dc: Loading layer  3.185MB/3.185MB
ff5700ec5418: Loading layer  10.24kB/10.24kB
d52f02c6501c: Loading layer  10.24kB/10.24kB
e624a5370eca: Loading layer  10.24kB/10.24kB
1a73b54f556b: Loading layer  10.24kB/10.24kB
d2d7ec0f6756: Loading layer  10.24kB/10.24kB
4cb10dd2545b: Loading layer  225.3kB/225.3kB
ebc813d4c836: Loading layer  66.45MB/66.45MB
Loaded image: registry.k8s.io/metrics-server/metrics-server:v0.6.3
[root@master metrics]# vim components.yaml
[root@master mysql]# kubectl top nodes
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master    343m         17%    1677Mi          45%
worker1   176m         8%     1456Mi          39%
worker2   184m         9%     1335Mi          36%

# Deploy the service with HPA enabled
## Create the nginx service with horizontal scaling: at least 3 and at most 20 pods,
## scaling out once CPU utilization exceeds 70%
[root@master nginx]# kubectl apply -f web-hpa.yaml
deployment.apps/ab-nginx created
service/ab-nginx-svc created
horizontalpodautoscaler.autoscaling/ab-nginx created
[root@master nginx]# cat web-hpa.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ab-nginx
spec:
  selector:
    matchLabels:
      run: ab-nginx
  template:
    metadata:
      labels:
        run: ab-nginx
    spec:
      #nodeName: node-2    # node pinning disabled
      containers:
      - name: ab-nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 50m
---
apiVersion: v1
kind: Service
metadata:
  name: ab-nginx-svc
  labels:
    run: ab-nginx-svc
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 31000
  selector:
    run: ab-nginx
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: ab-nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ab-nginx
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70

[root@master nginx]# kubectl get deploy
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
ab-nginx   3/3     3            3           2m10s
[root@master nginx]# kubectl get hpa
NAME       REFERENCE             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
ab-nginx   Deployment/ab-nginx   0%/70%    3         20        3          2m28s

## Access works
[root@master nginx]# curl 192.168.0.20:31000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
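Before moving on, the scaling behaviour can be sanity-checked with the load-generator pattern from the Kubernetes HPA walkthrough linked above; the service and HPA names (ab-nginx-svc, ab-nginx) come from web-hpa.yaml, the sleep interval is arbitrary.

# generate continuous load against the service from inside the cluster
kubectl run -i --tty load-generator --rm --image=busybox --restart=Never \
  -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://ab-nginx-svc; done"
# in another terminal, watch the replica count climb toward maxReplicas and fall back
kubectl get hpa ab-nginx --watch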
# Start a MySQL pod to provide database support for the web services

1. Write the yaml, containing both the deployment and the service
[root@master ~]# mkdir /mysql
[root@master ~]# cd /mysql/
[root@master mysql]# vim mysql.yaml
[root@master mysql]# docker pull mysql:latest
latest: Pulling from library/mysql
72a69066d2fe: Pull complete
93619dbc5b36: Pull complete
99da31dd6142: Pull complete
626033c43d70: Pull complete
37d5d7efb64e: Pull complete
ac563158d721: Pull complete
d2ba16033dad: Pull complete
688ba7d5c01a: Pull complete
00e060b6d11d: Pull complete
1c04857f594f: Pull complete
4d7cfa90e6ea: Pull complete
e0431212d27d: Pull complete
Digest: sha256:e9027fe4d91c0153429607251656806cc784e914937271037f7738bd5b8e7709
Status: Downloaded newer image for mysql:latest
docker.io/library/mysql:latest
[root@master mysql]# cat mysql.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mysql
  name: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:latest
        name: mysql
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"     # the mysql root password
        ports:
        - containerPort: 3306
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: svc-mysql
  name: svc-mysql
spec:
  selector:
    app: mysql
  type: NodePort
  ports:
  - port: 3306        # service port inside the cluster
    protocol: TCP
    targetPort: 3306  # pod port
    nodePort: 30007   # port exposed on the host

2. Deploy
[root@master mysql]# kubectl apply -f mysql.yaml
deployment.apps/mysql created
service/svc-mysql created
[root@master mysql]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          23h
php-apache   ClusterIP   10.96.134.145   <none>        80/TCP           21h
svc-mysql    NodePort    10.109.190.20   <none>        3306:30007/TCP   9s
[root@master mysql]# kubectl get pod
NAME                                READY   STATUS              RESTARTS      AGE
mysql-597ff9595d-tzqzl              0/1     ContainerCreating   0             27s
nginx-deployment-794d8c5666-dsxkq   1/1     Running             1 (15m ago)   22h
nginx-deployment-794d8c5666-fsctm   1/1     Running             1 (15m ago)   22h
nginx-deployment-794d8c5666-spkzs   1/1     Running             1 (15m ago)   22h
php-apache-7b9f758896-2q44p         1/1     Running             1 (15m ago)   21h

[root@master mysql]# kubectl exec -it mysql-597ff9595d-tzqzl -- bash
root@mysql-597ff9595d-tzqzl:/# mysql -uroot -p123456    # enter mysql inside the container
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.27 MySQL Community Server - GPL

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>
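Web pods should reach this database through the service's DNS name rather than a pod IP. A quick way to verify that path is a throwaway client pod; this is a sketch, with the service name, port and credentials taken from mysql.yaml above.

# connect to the svc-mysql service from a temporary client pod
kubectl run mysql-client --rm -it --image=mysql:latest --restart=Never \
  -- mysql -h svc-mysql.default.svc.cluster.local -P 3306 -uroot -p123456
# from outside the cluster the NodePort works too:
# mysql -h 192.168.0.20 -P 30007 -uroot -p123456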
6. Build the CI/CD environment: install GitLab, Jenkins and Harbor to handle code release, image builds, data backup and other pipeline work

# Configure the gitlab server
[root@localhost ~]# hostnamectl set-hostname gitlab
[root@localhost ~]# su
[root@gitlab ~]#

# Deployment steps
# 1. Install the required dependencies
yum install -y curl policycoreutils-python openssh-server perl

# 2. Configure the JiHu GitLab package mirror
[root@gitlab ~]# curl -fsSL https://packages.gitlab.cn/repository/raw/scripts/setup.sh | /bin/bash
==> Detected OS centos
==> Add yum repo file to /etc/yum.repos.d/gitlab-jh.repo
[gitlab-jh]
name=JiHu GitLab
baseurl=https://packages.gitlab.cn/repository/el/$releasever/
gpgcheck=0
gpgkey=https://packages.gitlab.cn/repository/raw/gpg/public.gpg.key
priority=1
enabled=1
==> Generate yum cache for gitlab-jh
==> Successfully added gitlab-jh repo. To install JiHu GitLab, run "sudo yum/dnf install gitlab-jh".

[root@gitlab ~]# yum install gitlab-jh -y
Thank you for installing JiHu GitLab!
GitLab was unable to detect a valid hostname for your instance.
Please configure a URL for your JiHu GitLab instance by setting external_url
configuration in /etc/gitlab/gitlab.rb file.
Then, you can start your JiHu GitLab instance by running the following command:
  sudo gitlab-ctl reconfigure

For a comprehensive list of configuration options please see the Omnibus GitLab readme
https://jihulab.com/gitlab-cn/omnibus-gitlab/-/blob/main-jh/README.md

Help us improve the installation experience, let us know how we did with a 1 minute survey:
https://wj.qq.com/s2/10068464/dc66

[root@gitlab ~]# vim /etc/gitlab/gitlab.rb
external_url 'http://myweb.first.com'

[root@gitlab ~]# gitlab-ctl reconfigure
Notes:
Default admin account has been configured with following details:
Username: root
Password: You didn't opt-in to print initial root password to STDOUT.
Password stored to /etc/gitlab/initial_root_password. This file will be cleaned up in first reconfigure run after 24 hours.

NOTE: Because these credentials might be present in your log files in plain text, it is highly recommended to reset the password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
gitlab Reconfigured!

# Look up the initial password
[root@gitlab ~]# cat /etc/gitlab/initial_root_password
# WARNING: This value is valid only in the following conditions
#          1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
#          2. Password hasn't been changed manually, either via UI or via command line.
#
#          If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.

Password: mzYlWEzJG6nzbExL6L25J7jhbup0Ye8QFldcD/rXNqg

# NOTE: This file will be automatically deleted in the first reconfigure run after 24 hours.

# After logging in, the UI language can be switched to Chinese under the user's profile/preferences,
# and the password can be changed there as well.

[root@gitlab ~]# gitlab-rake gitlab:env:info

System information
System:
Proxy:           no
Current User:    git
Using RVM:       no
Ruby Version:    3.0.6p216
Gem Version:     3.4.13
Bundler Version: 2.4.13
Rake Version:    13.0.6
Redis Version:   6.2.11
Sidekiq Version: 6.5.7
Go Version:      unknown

GitLab information
Version:        16.0.4-jh
Revision:       c2ed99db36f
Directory:      /opt/gitlab/embedded/service/gitlab-rails
DB Adapter:     PostgreSQL
DB Version:     13.11
URL:            http://myweb.first.com
HTTP Clone URL: http://myweb.first.com/some-group/some-project.git
SSH Clone URL:  git@myweb.first.com:some-group/some-project.git
Elasticsearch:  no
Geo:            no
Using LDAP:     no
Using Omniauth: yes
Omniauth Providers:

GitLab Shell
Version:           14.20.0
Repository storages:
- default:         unix:/var/opt/gitlab/gitaly/gitaly.socket
GitLab Shell path: /opt/gitlab/embedded/service/gitlab-shell
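Since the pipeline is also supposed to cover data backup, the Omnibus GitLab backup command can be scheduled on this server. gitlab-backup is part of the Omnibus package installed above; the cron schedule is an arbitrary example.

# one-off backup (repositories, database, uploads) to /var/opt/gitlab/backups
gitlab-backup create
# schedule a nightly backup at 02:00 via cron (CRON=1 suppresses progress output)
echo '0 2 * * * root /usr/bin/gitlab-backup create CRON=1' >> /etc/crontab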
Deploy Jenkins

# Jenkins is deployed into the k8s cluster
# 1. Install git
[root@master jenkins]# yum install git -y

# 2. Fetch the manifests
[root@master jenkins]# git clone https://github.com/scriptcamp/kubernetes-jenkins
Cloning into 'kubernetes-jenkins'...
remote: Enumerating objects: 16, done.
remote: Counting objects: 100% (7/7), done.
remote: Compressing objects: 100% (7/7), done.
remote: Total 16 (delta 1), reused 0 (delta 0), pack-reused 9
Unpacking objects: 100% (16/16), done.
[root@k8smaster jenkins]# ls
kubernetes-jenkins
[root@master jenkins]# cd kubernetes-jenkins/
[root@master kubernetes-jenkins]# ls
deployment.yaml  namespace.yaml  README.md  serviceAccount.yaml  service.yaml  volume.yaml

# 3. Create the namespace
[root@master kubernetes-jenkins]# cat namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: devops-tools
[root@master kubernetes-jenkins]# kubectl apply -f namespace.yaml
namespace/devops-tools created

[root@master kubernetes-jenkins]# kubectl get ns
NAME              STATUS   AGE
default           Active   22h
devops-tools      Active   19s
ingress-nginx     Active   139m
kube-node-lease   Active   22h
kube-public       Active   22h
kube-system       Active   22h

# 4. Create the service account and the cluster role binding
[root@k8smaster kubernetes-jenkins]# cat serviceAccount.yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-admin
rules:
  - apiGroups: [""]
    resources: ["*"]
    verbs: ["*"]

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins-admin
  namespace: devops-tools

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: jenkins-admin
subjects:
- kind: ServiceAccount
  name: jenkins-admin

[root@k8smaster kubernetes-jenkins]# kubectl apply -f serviceAccount.yaml
clusterrole.rbac.authorization.k8s.io/jenkins-admin created
serviceaccount/jenkins-admin created
clusterrolebinding.rbac.authorization.k8s.io/jenkins-admin created

# 5. Create the volume that stores the Jenkins data
[root@k8smaster kubernetes-jenkins]# cat volume.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-pv-volume
  labels:
    type: local
spec:
  storageClassName: local-storage
  claimRef:
    name: jenkins-pv-claim
    namespace: devops-tools
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  local:
    path: /mnt
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8snode1        # change this to the name of a node in your k8s cluster

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-pv-claim
  namespace: devops-tools
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

[root@k8smaster kubernetes-jenkins]# kubectl apply -f volume.yaml
storageclass.storage.k8s.io/local-storage created
persistentvolume/jenkins-pv-volume created
persistentvolumeclaim/jenkins-pv-claim created

[root@k8smaster kubernetes-jenkins]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Bound    devops-tools/jenkins-pv-claim   local-storage            33s
pv-web              10Gi       RWX            Retain           Bound    default/pvc-web                 nfs                      21h

[root@k8smaster kubernetes-jenkins]# kubectl describe pv jenkins-pv-volume
Name:              jenkins-pv-volume
Labels:            type=local
Annotations:       <none>
Finalizers:        [kubernetes.io/pv-protection]
StorageClass:      local-storage
Status:            Bound
Claim:             devops-tools/jenkins-pv-claim
Reclaim Policy:    Retain
Access Modes:      RWO
VolumeMode:        Filesystem
Capacity:          10Gi
Node Affinity:
  Required Terms:
    Term 0:  kubernetes.io/hostname in [k8snode1]
Message:
Source:
    Type:  LocalVolume (a persistent volume backed by local storage on a node)
    Path:  /mnt
Events:    <none>
# 6. Deploy Jenkins
[root@k8smaster kubernetes-jenkins]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
  namespace: devops-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins-server
  template:
    metadata:
      labels:
        app: jenkins-server
    spec:
      securityContext:
        fsGroup: 1000
        runAsUser: 1000
      serviceAccountName: jenkins-admin
      containers:
        - name: jenkins
          image: jenkins/jenkins:lts
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              memory: 2Gi
              cpu: 1000m
            requests:
              memory: 500Mi
              cpu: 500m
          ports:
            - name: httpport
              containerPort: 8080
            - name: jnlpport
              containerPort: 50000
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 90
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 5
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          volumeMounts:
            - name: jenkins-data
              mountPath: /var/jenkins_home
      volumes:
        - name: jenkins-data
          persistentVolumeClaim:
            claimName: jenkins-pv-claim

[root@k8smaster kubernetes-jenkins]# kubectl apply -f deployment.yaml
deployment.apps/jenkins created

[root@k8smaster kubernetes-jenkins]# kubectl get deploy -n devops-tools
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
jenkins   1/1     1            1           5m36s

[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-bg66q   1/1     Running   0          19s

# 7. Create the service that publishes the Jenkins pod
[root@k8smaster kubernetes-jenkins]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: jenkins-service
  namespace: devops-tools
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '8080'
spec:
  selector:
    app: jenkins-server
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      nodePort: 32000

[root@k8smaster kubernetes-jenkins]# kubectl apply -f service.yaml
service/jenkins-service created

[root@k8smaster kubernetes-jenkins]# kubectl get svc -n devops-tools
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
jenkins-service   NodePort   10.104.76.252   <none>        8080:32000/TCP   24s

# 8. From a Windows machine, browse to the host IP and node port:
http://192.168.0.20:32000

# 9. Get the initial login password from inside the pod
[root@master kubernetes-jenkins]# kubectl exec -it jenkins-b96f7764f-znvfj -n devops-tools -- bash
jenkins@jenkins-b96f7764f-znvfj:/$ cat /var/jenkins_home/secrets/initialAdminPassword
bbb283b8dc35449bbdb3d6824f12446c

# then change the password
[root@k8smaster kubernetes-jenkins]# kubectl get pod -n devops-tools
NAME                       READY   STATUS    RESTARTS   AGE
jenkins-7fdc8dd5fd-5nn7m   1/1     Running   0          91s

If the Jenkins unlock page appears in the browser, the installation succeeded.

# Next, deploy harbor
[root@harbor ~]# yum install -y yum-utils
[root@harbor ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@harbor ~]# yum install docker-ce-20.10.6 -y
[root@harbor ~]# systemctl start docker
[root@harbor ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

Check the docker and docker compose versions:
[root@harbor ~]# docker version
Client: Docker Engine - Community
 Version:           24.0.2
 API version:       1.41 (downgraded from 1.43)
 Go version:        go1.20.4
 Git commit:        cb74dfc
 Built:             Thu May 25 21:55:21 2023
 OS/Arch:           linux/amd64
 Context:           default
[root@harbor ~]# docker compose version
Docker Compose version v2.25.0

## Install harbor
[root@harbor harbor]# vim harbor.yml.tmpl
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.0.34

# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 123

# https related config
#https:
  # https port for harbor, default is 443
  # port: 1234
  # The path of cert and key files for nginx
  #certificate: /your/certificate/path
  #private_key: /your/private/key/path

## Note: the whole https block must be commented out, otherwise the install fails.

# Configure start at boot
[root@harbor harbor]# vim /etc/rc.local
[root@harbor harbor]# cat /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
/usr/local/sbin/docker-compose -f /root/harbor/harbor/docker-compose.yml up -d

# Set the permissions
[root@harbor harbor]# chmod +x /etc/rc.local /etc/rc.d/rc.local

Register the harbor registry on the k8s cluster.
On the master:
[root@master ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ruk1gp3w.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.0.34:5001"]
}
Then restart docker:
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker

On worker1:
[root@worker1 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ruk1gp3w.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.0.34:5001"]
}
Then restart docker:
[root@worker1 ~]# systemctl daemon-reload
[root@worker1 ~]# systemctl restart docker

On worker2:
[root@worker2 ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://ruk1gp3w.mirror.aliyuncs.com"],
  "insecure-registries": ["192.168.0.34:5001"]
}
Then restart docker:
[root@worker2 ~]# systemctl daemon-reload
[root@worker2 ~]# systemctl restart docker
Quick test that the harbor registry is usable:
[root@master ~]# docker login 192.168.0.34:5001
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning.
See https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
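After a successful login, a full round trip confirms that both pushes and pulls work. The registry address and the k8s-web project come from the earlier tagging step; the busybox image choice is arbitrary.

# tag any local image into the harbor project and push it
docker pull busybox:latest
docker tag busybox:latest 192.168.0.34:5001/k8s-web/busybox:test
docker push 192.168.0.34:5001/k8s-web/busybox:test
# remove the local copy and pull it back (repeat on another node to verify cluster-wide access)
docker rmi 192.168.0.34:5001/k8s-web/busybox:test
docker pull 192.168.0.34:5001/k8s-web/busybox:test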
7. Deploy prometheus + grafana to monitor CPU, memory, network bandwidth, disk I/O and other metrics of every server, including the k8s cluster nodes

prometheus does the monitoring, grafana draws the graphs.
Monitored targets: master, worker1, worker2, the NFS server, the GitLab server, the Harbor server, and the ansible control host.
Download the software prometheus needs in advance.

# Preparation
[root@prometheus ~]# mkdir /prom
[root@prometheus ~]# cd /prom
[root@prometheus prom]# ls
grafana-enterprise-9.1.2-1.x86_64.rpm  prometheus-2.43.0.linux-amd64.tar.gz
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@prometheus prom]# tar xf prometheus-2.43.0.linux-amd64.tar.gz
[root@prometheus prom]# ls
grafana-enterprise-9.1.2-1.x86_64.rpm        prometheus-2.43.0.linux-amd64
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz  prometheus-2.43.0.linux-amd64.tar.gz
[root@prometheus prom]# mv prometheus-2.43.0.linux-amd64 prometheus
[root@prometheus prom]# ls
grafana-enterprise-9.1.2-1.x86_64.rpm        prometheus
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz  prometheus-2.43.0.linux-amd64.tar.gz

Add the prometheus directory to PATH, both for this session and permanently:
[root@prometheus prom]# PATH=/prom/prometheus:$PATH
[root@prometheus prom]# echo 'PATH=/prom/prometheus:$PATH' >> /etc/profile
[root@prometheus prom]# which prometheus
/prom/prometheus/prometheus

Run prometheus as a systemd service, which makes day-to-day maintenance much easier:
[root@prometheus prom]# vim /usr/lib/systemd/system/prometheus.service
[Unit]
Description=prometheus
[Service]
ExecStart=/prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target

Reload systemd so it picks up the new unit file:
[root@prometheus prom]# systemctl daemon-reload

Start the prometheus service:
[root@prometheus prom]# systemctl start prometheus
[root@prometheus prom]# systemctl restart prometheus
[root@prometheus prom]# ps aux|grep prome
root   2166  1.1  3.7 798956 37588 ?  Ssl  13:53  0:00 /prom/prometheus/prometheus --config.file=/prom/prometheus/prometheus.yml
root   2175  0.0  0.0 112824   976 pts/0  S+  13:53  0:00 grep --color=auto prome

# Enable at boot
[root@prometheus prom]# systemctl enable prometheus
Created symlink from /etc/systemd/system/multi-user.target.wants/prometheus.service to /usr/lib/systemd/system/prometheus.service.
[root@prometheus prom]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:37:86:3b brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.33/24 brd 192.168.0.255 scope global noprefixroute ens33
       valid_lft forever preferred_lft forever

# Add the scrape targets to prometheus.yml (under scrape_configs)
  - job_name: "prometheus"
    static_configs:
      - targets: ["192.168.0.33:9090"]
  - job_name: "master"
    static_configs:
      - targets: ["192.168.0.20:9090"]
  - job_name: "worker1"
    static_configs:
      - targets: ["192.168.0.21:9090"]
  - job_name: "worker2"
    static_configs:
      - targets: ["192.168.0.22:9090"]
  - job_name: "ansible"
    static_configs:
      - targets: ["192.168.0.30:9090"]
  - job_name: "gitlab"
    static_configs:
      - targets: ["192.168.0.35:9090"]
  - job_name: "harbor"
    static_configs:
      - targets: ["192.168.0.34:9090"]
  - job_name: "nfs"
    static_configs:
      - targets: ["192.168.0.36:9090"]
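The scrape config above expects node_exporter answering on port 9090 on every target, and the file listings below show an install_node_exporter.sh being shipped around for that. The script's content never appears in the original, so here is a minimal sketch of what it might contain; the install path is an assumption, and note that node_exporter's usual default port is 9100, so the listen address is set explicitly to match the targets above.

#!/bin/bash
# install_node_exporter.sh: unpack node_exporter and run it in the background (sketch)
tar xf node_exporter-1.4.0-rc.0.linux-amd64.tar.gz -C /usr/local/
mv /usr/local/node_exporter-1.4.0-rc.0.linux-amd64 /usr/local/node_exporter
# listen on the port that the prometheus.yml above scrapes
nohup /usr/local/node_exporter/node_exporter \
  --web.listen-address=0.0.0.0:9090 &>/var/log/node_exporter.log &
# start it at boot as well
echo '/usr/local/node_exporter/node_exporter --web.listen-address=0.0.0.0:9090 &' >> /etc/rc.local
chmod +x /etc/rc.local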
Install the exporters: copy node_exporter to every monitored server, either with xftp or with ansible:
[root@prometheus prom]# scp ./node_exporter-1.4.0-rc.0.linux-amd64.tar.gz 192.168.0.30:/root
The authenticity of host '192.168.0.30 (192.168.0.30)' can't be established.
ECDSA key fingerprint is SHA256:xactOuiFsm9merQVjdeiV4iZwI4rXUnviFYTXL2h8fc.
ECDSA key fingerprint is MD5:69:58:6b:ab:c4:8c:27:e2:b2:7c:31:bb:63:20:81:61.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.0.30' (ECDSA) to the list of known hosts.
root@192.168.0.30's password:
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz   100% 9507KB  40.0MB/s   00:00
[root@ansible ~]# ls
anaconda-ks.cfg  node_exporter-1.4.0-rc.0.linux-amd64.tar.gz

# Check that the node_exporter process is running (unrelated kube-controller-manager
# and calico processes trimmed from the ps output)
[root@master ~]# ps -aux|grep node
root  121582  0.1  0.4 717696 16676 ?  Ssl  14:20  0:00 /usr/local/node_exporter/node_exporter --web.listen-address=0.0.0.0:9090

## Browse to port 9090 on each host to confirm the exporter answers.
The whole cluster is now being monitored.

# Install grafana to draw dashboards from the collected metrics
## it only needs to go on the host that runs prometheus
[root@prometheus prom]# ls
grafana-enterprise-9.1.2-1.x86_64.rpm  install_node_exporter.sh  node_exporter-1.4.0-rc.0.linux-amd64.tar.gz  prometheus  prometheus-2.43.0.linux-amd64.tar.gz
[root@prometheus prom]# yum install grafana-enterprise-9.1.2-1.x86_64.rpm -y
[root@prometheus prom]# systemctl start grafana-server
[root@prometheus prom]# systemctl enable grafana-server
Created symlink from /etc/systemd/system/multi-user.target.wants/grafana-server.service to /usr/lib/systemd/system/grafana-server.service.
[root@prometheus prom]# ps aux|grep grafana
grafana  1410  8.9  7.1 1137704 71040 ?  Ssl  15:12  0:01 /usr/sbin/grafana-server --config=/etc/grafana/grafana.ini --pidfile=/var/run/grafana/grafana-server.pid --packaging=rpm cfg:default.paths.logs=/var/log/grafana cfg:default.paths.data=/var/lib/grafana cfg:default.paths.plugins=/var/lib/grafana/plugins cfg:default.paths.provisioning=/etc/grafana/provisioning
root     1437  0.0  0.0  112824   976 pts/0  S+  15:13  0:00 grep --color=auto grafana
# Installed successfully; grafana listens on port 3000. Log in from a browser:
http://192.168.0.33:3000
Default username: admin, password: admin
# I changed the password to 123456

# Add prometheus as the data source, then import a dashboard.
# This completes the performance monitoring of the whole cluster.
8. Use ingress to load-balance the web services by domain name

A bit of background: when monitoring containers and cluster resources in Kubernetes, two tools usually come up, cAdvisor (Container Advisor) and Metrics Server. Each has its own characteristics and use cases:

cAdvisor (Container Advisor):
Characteristics: cAdvisor is the official Kubernetes tool for container resource usage and performance analysis. It monitors a container's CPU, memory, network and disk metrics. It runs on every node and collects container statistics from the cgroups and namespaces of the Docker containers. The data can be read through cAdvisor's API or directly through its web UI.
Use cases: basic per-node container monitoring and performance analysis. It works well for containers on a single node, but cluster-wide monitoring across nodes needs other tools alongside it.

Metrics Server:
Characteristics: Metrics Server is the official API server for aggregating and serving resource metrics. It provides node-level and cluster-level metrics such as CPU utilization and memory usage, collects them from nodes and containers, and exposes them as part of the Kubernetes API, which is what kubectl top queries. It is also the foundation for the Kubernetes Dashboard and the Horizontal Pod Autoscaler.
Use cases: cluster-level resource metrics (CPU, memory and so on for the whole cluster) and any feature that consumes them, such as the Dashboard and the HPA.

In short, cAdvisor suits per-node container monitoring and analysis, while Metrics Server suits cluster-level metric aggregation and API access; in practice, choose by need or combine the two.

Deployment

Step 1: install the ingress controller

1. Copy the images to all the node servers
# Gather the required files on the ansible host first:
[root@ansible ~]# ls
hpa-example.tar                               ## HPA demo image
ingress-controller-deploy.yaml                # ingress controller deployment
ingress-nginx-controllerv1.1.0.tar.gz         # ingress-nginx-controller image
install_node_exporter.sh
kube-webhook-certgen-v1.1.0.tar.gz            # kube-webhook-certgen image
nfs-pvc.yaml
nfs-pv.yaml
nginx-deployment-nginx-svc-2.yaml
node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
sc-ingress-url.yaml                           # URL-based load balancing
sc-ingress.yaml
sc-nginx-svc-1.yaml                           # creates service 1 and its pods
sc-nginx-svc-3.yaml                           # creates service 3 and its pods
sc-nginx-svc-4.yaml                           # creates service 4 and its pods

# The kube-webhook-certgen image generates the certificates used by webhooks in the
# Kubernetes cluster, ensuring secure, authenticated communication for the webhook service.

[root@ansible ~]# ansible nodes -m copy -a "src=./ingress-nginx-controllerv1.1.0.tar.gz dest=/root/"
192.168.0.22 | CHANGED => {
    "ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"},
    "changed": true,
    "checksum": "090f67aad7867a282c2901cc7859bc16856034ee",
    "dest": "/root/ingress-nginx-controllerv1.1.0.tar.gz",
    "gid": 0, "group": "root",
    "md5sum": "5777d038007f563180e59a02f537b155",
    "mode": "0644", "owner": "root",
    "size": 288980480,
    "src": "/root/.ansible/tmp/ansible-tmp-1712220848.65-1426-256601085523400/source",
    "state": "file", "uid": 0
}
## output like this means the copy succeeded
[root@worker2 ~]# ls
anaconda-ks.cfg  ingress-nginx-controllerv1.1.0.tar.gz  install_node_exporter.sh  node_exporter-1.4.0-rc.0.linux-amd64.tar.gz

Load the images on all the worker servers:
[root@worker1 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
e2eb06d8af82: Loading layer  5.865MB/5.865MB
ab1476f3fdd9: Loading layer  120.9MB/120.9MB
ad20729656ef: Loading layer  4.096kB/4.096kB
0d5022138006: Loading layer  38.09MB/38.09MB
8f757e3fe5e4: Loading layer  21.42MB/21.42MB
a933df9f49bb: Loading layer  3.411MB/3.411MB
7ce1915c5c10: Loading layer  309.8kB/309.8kB
986ee27cd832: Loading layer  6.141kB/6.141kB
b94180ef4d62: Loading layer  38.37kB/38.37kB
d36a04670af2: Loading layer  2.754kB/2.754kB
2fc9eef73951: Loading layer  4.096kB/4.096kB
1442cff66b8e: Loading layer  51.67kB/51.67kB
1da3c77c05ac: Loading layer  3.584kB/3.584kB
Loaded image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.1.0
[root@worker1 ~]# ls
anaconda-ks.cfg  ingress-nginx-controllerv1.1.0.tar.gz  install_node_exporter.sh  node_exporter-1.4.0-rc.0.linux-amd64.tar.gz
[root@worker1 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
c0d270ab7e0d: Loading layer  3.697MB/3.697MB
ce7a3c1169b6: Loading layer  45.38MB/45.38MB
Loaded image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
[root@worker2 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@worker2 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz

# With the images in place, apply the controller manifest on the master and check:
[root@master ingress]# kubectl apply -f ingress-controller-deploy.yaml
[root@master ingress]# kubectl get ns
NAME                   STATUS   AGE
default                Active   42h
devops-tools           Active   21h
ingress-nginx          Active   18m
kube-node-lease        Active   42h
kube-public            Active   42h
kube-system            Active   42h
kubernetes-dashboard   Active   41h
[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.101.22.116   <none>        80:32140/TCP,443:30268/TCP   18m
ingress-nginx-controller-admission   ClusterIP   10.106.82.248   <none>        443/TCP                      18m
[root@master ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-lvbmf        0/1     Completed   0          18m
ingress-nginx-admission-patch-h24bx         0/1     Completed   1          18m
ingress-nginx-controller-7cd558c647-ft9gx   1/1     Running     0          18m
ingress-nginx-controller-7cd558c647-t2pmg   1/1     Running     0          18m

Step 2: create the pods and expose them as services

## Start the nginx pods: two deployments, so DNS round-robin between them can be demonstrated
[root@master ingress]# kubectl apply -f sc-nginx-svc-3.yaml
deployment.apps/sc-nginx-deploy-3 unchanged
service/sc-nginx-svc-3 unchanged
[root@master ingress]# kubectl apply -f sc-nginx-svc-4.yaml
deployment.apps/sc-nginx-deploy-4 unchanged
service/sc-nginx-svc-4 unchanged

[root@master ingress]# kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP          43h
sc-nginx-svc-3   ClusterIP   10.102.96.68     <none>        80/TCP           19m
sc-nginx-svc-4   ClusterIP   10.100.36.98     <none>        80/TCP           19m
svc-mysql        NodePort    10.110.192.240   <none>        3306:30007/TCP   5h51m

Check the service details; the Endpoints line must list healthy pod IPs and ports:
[root@master ingress]# kubectl describe svc sc-nginx-svc
Name:              sc-nginx-svc-3
Namespace:         default
Labels:            app=sc-nginx-svc-3
Annotations:       <none>
Selector:          app=sc-nginx-feng-3
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.102.96.68
IPs:               10.102.96.68
Port:              name-of-service-port  80/TCP
TargetPort:        80/TCP
Endpoints:         10.224.189.95:80,10.224.189.96:80,10.224.235.150:80
Session Affinity:  None
Events:            <none>

Name:              sc-nginx-svc-4
Namespace:         default
Labels:            app=sc-nginx-svc-4
Annotations:       <none>
Selector:          app=sc-nginx-feng-4
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.100.36.98
IPs:               10.100.36.98
Port:              name-of-service-port  80/TCP
TargetPort:        80/TCP
Endpoints:         10.224.189.97:80,10.224.189.98:80,10.224.235.151:80
Session Affinity:  None
Events:            <none>

Test a pod IP directly from inside the cluster:
[root@master ingress]# curl 10.224.189.95:80    ## an internal pod IP
wang6666666
## the other endpoints (10.224.189.96:80, 10.224.235.150:80) answer the same way
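The next step applies sc-ingress.yaml, but that file's content never appears in the original text. Based on the hosts later shown by kubectl get ingress (www.feng.com and www.wang.com) and the two services just created, a plausible reconstruction is sketched below; the host-to-service mapping in particular is an assumption.

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sc-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: www.feng.com          # assumed to front sc-nginx-svc-3
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-3
            port:
              number: 80
  - host: www.wang.com          # assumed to front sc-nginx-svc-4
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-4
            port:
              number: 80
EOF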
Step 3: enable ingress, tying the ingress controller to the services

[root@master ingress]# kubectl apply -f sc-ingress.yaml
ingress.networking.k8s.io/sc-ingress created
# after a few minutes the ADDRESS column fills in with the node IPs
[root@master ingress]# kubectl get ingress
NAME         CLASS   HOSTS                       ADDRESS   PORTS   AGE
sc-ingress   nginx   www.feng.com,www.wang.com             80      8s
[root@master ingress]# cat sc-ingress-url.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  ingressClassName: nginx
  rules:
  - host: www.wang.com          # the domain name
    http:
      paths:
      - path: /wang1            # path inside the pods
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-3
            port:
              number: 80
      - path: /wang2
        pathType: Prefix
        backend:
          service:
            name: sc-nginx-svc-4
            port:
              number: 80
[root@master ingress]# kubectl apply -f sc-ingress-url.yaml

## Create the index.html files and the wang1/wang2 directories inside the pods first;
## this has to be done in both the service-3 and the service-4 pods.
[root@master ingress]# kubectl exec -it sc-nginx-deploy-4-7d4b5c487f-8l7wr -- bash
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/# cd /usr/share/nginx/html/
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html# ls
50x.html  index.html  wang2
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html# cat index.html
wang11111111
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html# cp index.html ./wang2/
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html# cd wang2/
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html/wang2# ls
index.html
root@sc-nginx-deploy-4-7d4b5c487f-8l7wr:/usr/share/nginx/html/wang2# exit
exit
[root@master ingress]# kubectl exec -it sc-nginx-deploy-3-5c4b975ffc-d8hwk -- bash
root@sc-nginx-deploy-3-5c4b975ffc-d8hwk:/# cd /usr/share/nginx/html/
root@sc-nginx-deploy-3-5c4b975ffc-d8hwk:/usr/share/nginx/html# ls
50x.html  index.html  wang1
root@sc-nginx-deploy-3-5c4b975ffc-d8hwk:/usr/share/nginx/html# cp index.html ./wang1/
root@sc-nginx-deploy-3-5c4b975ffc-d8hwk:/usr/share/nginx/html# cat ./wang1/index.html
wang6666666
root@sc-nginx-deploy-3-5c4b975ffc-d8hwk:/usr/share/nginx/html# exit
exit

Step 4: check that the nginx.conf inside the ingress controller contains the rules for the ingress

[root@master ingress]# kubectl get pod -n ingress-nginx
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-lvbmf        0/1     Completed   0          29m
ingress-nginx-admission-patch-h24bx         0/1     Completed   1          29m
ingress-nginx-controller-7cd558c647-ft9gx   1/1     Running     0          29m
ingress-nginx-controller-7cd558c647-t2pmg   1/1     Running     0          29m

Get the NodePort exposed by the ingress controller's service; hitting a node on that port verifies that the controller load-balances:
[root@k8smaster 4-4]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.99.160.10   <none>        80:30092/TCP,443:30263/TCP   37m
ingress-nginx-controller-admission   ClusterIP   10.99.138.23   <none>        443/TCP                      37m

Access the service from another host or a Windows machine using the domain names. Because the load balancing is configured per domain, the browser must use the domain name, not an IP address. The ingress controller performs layer-7 (HTTP) load balancing.
[root@nfs ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.21 www.wang.com
192.168.0.22 www.wang.com
192.168.0.20 master
[root@nfs ~]# curl www.wang.com/wang1/index.html
wang6666666
[root@nfs ~]# curl www.wang.com/wang2/index.html
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
[root@nfs ~]# curl www.wang.com/wang2/index.html
wang11111111
## DNS round-robins between the two node IPs, so retry a few times until both paths answer

# Manage the storage with PV and PVC
Step 5: start the second service and its pods, backed by PV/PVC on NFS; the NFS server, PV and PVC must be prepared in advance
[root@k8smaster 4-4]# ls
ingress-controller-deploy.yaml         nfs-pvc.yaml   sc-ingress.yaml
ingress-nginx-controllerv1.1.0.tar.gz  nfs-pv.yaml    sc-nginx-svc-1.yaml
kube-webhook-certgen-v1.1.0.tar.gz     nginx-deployment-nginx-svc-2.yaml

[root@master ingress]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sc-nginx-pv
  labels:
    type: sc-nginx-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /web             # directory exported by NFS
    server: 192.168.0.36   # IP address of the NFS server
    readOnly: false

[root@k8smaster 4-4]# kubectl apply -f nfs-pv.yaml
persistentvolume/sc-nginx-pv configured
[root@master ingress]# cat nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs    # bind to an nfs-class PV

[root@master ingress]# kubectl apply -f nfs-pvc.yaml
persistentvolumeclaim/sc-nginx-pvc created
[root@master ingress]# kubectl get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS    REASON   AGE
jenkins-pv-volume   10Gi       RWO            Retain           Bound    devops-tools/jenkins-pv-claim   local-storage            22h
pv-web              10Gi       RWX            Retain           Bound    default/pvc-web                 nfs                      24h
sc-nginx-pv         10Gi       RWX            Retain           Bound    default/sc-nginx-pvc            nfs                      76s

[root@master ingress]# cat nginx-deployment-nginx-svc-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: sc-nginx-feng-2
  template:
    metadata:
      labels:
        app: sc-nginx-feng-2
    spec:
      volumes:
        - name: sc-pv-storage-nfs
          persistentVolumeClaim:
            claimName: sc-nginx-pvc
      containers:
        - name: sc-pv-container-nfs
          image: nginx
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
              name: http-server
          volumeMounts:
            - mountPath: /usr/share/nginx/html
              name: sc-pv-storage-nfs
---
apiVersion: v1
kind: Service
metadata:
  name: sc-nginx-svc-2
  labels:
    app: sc-nginx-svc-2
spec:
  selector:
    app: sc-nginx-feng-2
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80

[root@k8smaster 4-4]# kubectl apply -f nginx-deployment-nginx-svc-2.yaml
deployment.apps/nginx-deployment created
service/sc-nginx-svc-2 created

[root@master ingress]# kubectl get svc
NAME             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes       ClusterIP   10.96.0.1        <none>        443/TCP          42h
sc-nginx-svc     ClusterIP   10.108.143.45    <none>        80/TCP           20m
sc-nginx-svc-2   ClusterIP   10.109.241.58    <none>        80/TCP           16s
svc-mysql        NodePort    10.110.192.240   <none>        3306:30007/TCP   4h45m

[root@master ingress]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.101.22.116   <none>        80:32140/TCP,443:30268/TCP   44m
ingress-nginx-controller-admission   ClusterIP   10.106.82.248   <none>        443/TCP                      44m
[root@master ingress]# kubectl get ingress
NAME         CLASS   HOSTS                       ADDRESS                     PORTS   AGE
sc-ingress   nginx   www.feng.com,www.wang.com   192.168.0.21,192.168.0.22   80      16m

Accessing the hosts either on the NodePort the controller exposes or directly on port 80 works.
## The access succeeds:
[root@ansible ~]# curl www.wang.com
welcome to sanchuang !!! \n
welcome to sanchuang !!!
0000000000000000000000
welcome to sanchuang !!!
welcome to sanchuang !!!
welcome to sanchuang !!!
666666666666666666 !!!
777777777777777777 !!!

9. Use liveness, readiness and startup probes, with the httpGet and exec methods, to monitor the web pods; a pod that runs into trouble is restarted immediately, which makes the business pods more reliable.

[root@master ingress]# vim my-web.yaml
[root@master ingress]# cat my-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
        resources:
          limits:
            cpu: 300m
          requests:
            cpu: 100m
        livenessProbe:
          exec:
            command:
            - ls
            - /
          initialDelaySeconds: 5
          periodSeconds: 5
        readinessProbe:
          exec:
            command:
            - ls
            - /
          initialDelaySeconds: 5
          periodSeconds: 5
        startupProbe:
          httpGet:
            path: /
            port: 8000
          failureThreshold: 30
          periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: myweb-svc
  name: myweb-svc
spec:
  selector:
    app: myweb
  type: NodePort
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 8000
    nodePort: 30001

[root@master ingress]# kubectl describe pod myweb-b69f9bc6-ht2vw
Name:         myweb-b69f9bc6-ht2vw
Namespace:    default
Priority:     0
Node:         worker2/192.168.0.22
Start Time:   Thu, 04 Apr 2024 20:06:43 +0800
Labels:       app=myweb
              pod-template-hash=b69f9bc6
Annotations:  cni.projectcalico.org/containerID: 8c2aed8a822bab4162d7d8cce6933cf058ecddb3d33ae8afa3eee7daa8a563be
              cni.projectcalico.org/podIP: 10.224.189.110/32
              cni.projectcalico.org/podIPs: 10.224.189.110/32
Status:       Running
IP:           10.224.189.110
IPs:
  IP:           10.224.189.110
Controlled By:  ReplicaSet/myweb-b69f9bc6
Containers:
  myweb:
    Container ID:   docker://64d91f5ae0c61770e2dc91ee6cfc46f029a7af25f2119ea9ea047407ae072969
    Image:          nginx:latest
    Image ID:       docker-pullable://nginx@sha256:0d17b565c37bcbd895e9d92315a05c1c3c9a29f762b011a10c54a66cd53c9b31
    Port:           8000/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 04 Apr 2024 20:06:44 +0800
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:  300m
    Requests:
      cpu:        100m
    Liveness:     exec [ls /] delay=5s timeout=1s period=5s #success=1 #failure=3
    Readiness:    exec [ls /] delay=5s timeout=1s period=5s #success=1 #failure=3
    Startup:      http-get http://:8000/ delay=0s timeout=1s period=10s #success=1 #failure=30
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bhvf6 (ro)
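One detail worth flagging in the describe output above: Ready stays False. The stock nginx image listens on port 80, while both containerPort and the httpGet startup probe point at 8000, so the startup probe can never succeed, and until it does the readiness probe is never even run. A hypothetical corrected fragment of the container spec (a sketch assuming the default nginx listener; not the manifest actually used above) would probe the real port:

# Fragment of the container spec with the probes aimed at port 80,
# the port the stock nginx image actually serves on.
        ports:
        - containerPort: 80
        startupProbe:
          httpGet:
            path: /
            port: 80           # was 8000, which nginx never listens on
          failureThreshold: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 5
        livenessProbe:
          httpGet:
            path: /
            port: 80
          periodSeconds: 5

The Service's targetPort would need the same change from 8000 to 80; otherwise the NodePort 30001 route would still point at a dead port.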
10. Use the ab tool to load-test the web services in the k8s cluster

Install the httpd-tools package to get ab:
[root@nfs-server ~]# yum install httpd-tools -y

Simulate some traffic and watch the HPA react:
[root@nfs-server ~]# ab -n 1000 -c50 http://192.168.220.100:31000/index.html
[root@master hpa]# kubectl get hpa --watch

Increase the concurrency and the total number of requests:
[root@gitlab ~]# ab -n 5000 -c100 http://192.168.0.21:80/index.html
This is ApacheBench, Version 2.3 <$Revision: 1430300 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking 192.168.0.21 (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests

Server Software:
Server Hostname:        192.168.0.21
Server Port:            80

Document Path:          /index.html
Document Length:        146 bytes

Concurrency Level:      100
Time taken for tests:   2.204 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Non-2xx responses:      5000
Total transferred:      1370000 bytes
HTML transferred:       730000 bytes
Requests per second:    2268.42 [#/sec] (mean)
Time per request:       44.084 [ms] (mean)
Time per request:       0.441 [ms] (mean, across all concurrent requests)
Transfer rate:          606.98 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    3   4.1      1      22
Processing:     1   40  30.8     38     160
Waiting:        0   39  30.7     36     160
Total:          1   43  30.9     41     162

Percentage of the requests served within a certain time (ms)
  50%     41
  66%     54
  75%     63
  80%     69
  90%     83
  95%    100
  98%    115
  99%    129
 100%    162 (longest request)

Note that all 5000 responses were non-2xx: the target path evidently returned an error page, so these numbers measure the raw throughput of the stack rather than a successful page load, and the test URL is worth double-checking.

## Ways to monitor during the test:
1. kubectl top pod                      ## local, top-style view
2. http://192.168.0.33:3000/            ## Grafana
3. http://192.168.0.33:9090/targets     ## Prometheus

Project takeaways:
1. Gained a much deeper understanding of the individual features of k8s.
2. Got to know the supporting services (Prometheus, NFS and so on) in depth.
3. Improved my own fault-handling skills.
4. Built a working understanding of load balancing, high availability and automatic scaling.
5. Developed a better sense of the relationship between development and operations.