Contents

I. Environment Setup
  1. Prepare the environment
  2. Install the master node
  3. Install the node components on k8s-master
  4. Install and configure the k8s-node1 node
  5. Install the k8s-node2 node
  6. Configure the flannel network on all nodes
  7. Configure Docker to load firewall rules that allow forwarding
II. Common k8s Resource Management
  1. Create a pod
  2. Pod management

I. Environment Setup
1. Prepare the environment

Recommended OS: CentOS 7.4 or 7.6.

Hostname     IP address       Role     Components
k8s-master   192.168.2.116    Master   etcd, apiserver, controller-manager, scheduler, kube-proxy, docker, registry
k8s-node1    192.168.2.117    Node     kubelet, kube-proxy, docker
k8s-node2    192.168.2.118    Node     kubelet, kube-proxy, docker
Set the master's hostname and configure the hosts file:

[root@centos01 ~]# hostnamectl set-hostname k8s-master
[root@centos01 ~]# bash
[root@k8s-master ~]# vim /etc/hosts
192.168.2.116 k8s-master
192.168.2.117 k8s-node1
192.168.2.118 k8s-node2

Set node 1's hostname and copy the hosts file from the master:

[root@centos02 ~]# hostnamectl set-hostname k8s-node1
[root@centos02 ~]# bash
[root@k8s-node1 ~]# scp 192.168.2.116:/etc/hosts /etc/

Set node 2's hostname and copy the hosts file from the master:

[root@centos03 ~]# hostnamectl set-hostname k8s-node2
[root@centos03 ~]# bash
[root@k8s-node2 ~]# scp 192.168.2.116:/etc/hosts /etc/
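The three host entries must be identical on every machine. As a small convenience sketch (not part of the original steps), the block can be printed from one place and appended on each node, so a typo cannot creep in during hand-copying:

```shell
# Sketch, not from the original guide: print the hosts block used above so it
# can be appended identically on every machine (e.g. `... >> /etc/hosts`).
hosts_block='192.168.2.116 k8s-master
192.168.2.117 k8s-node1
192.168.2.118 k8s-node2'

printf '%s\n' "$hosts_block"
```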
2. Install the master node

(1) Install and configure etcd

[root@k8s-master ~]# yum -y install etcd
[root@k8s-master ~]# cp /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
[root@k8s-master ~]# vim /etc/etcd/etcd.conf
6 ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
21 ETCD_ADVERTISE_CLIENT_URLS="http://192.168.2.116:2379"
[root@k8s-master ~]# systemctl start etcd
[root@k8s-master ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

(2) Install the Kubernetes master packages

[root@k8s-master ~]# yum install kubernetes-master.x86_64 -y
(3) Configure the apiserver

[root@k8s-master ~]# vim /etc/kubernetes/apiserver
8 KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"    // bind address
12 KUBE_API_PORT="--port=8080"    // listen port
16 KUBELET_PORT="--kubelet-port=10250"    // kubelet port
19 KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.2.116:2379"    // etcd endpoint
24 KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
(4) Configure the controller-manager and scheduler

[root@k8s-master ~]# vim /etc/kubernetes/config
22 KUBE_MASTER="--master=http://192.168.2.116:8080"

(5) Start the k8s services

[root@k8s-master ~]# systemctl start kube-apiserver.service
[root@k8s-master ~]# systemctl start kube-controller-manager.service
[root@k8s-master ~]# systemctl start kube-scheduler.service
[root@k8s-master ~]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master ~]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@k8s-master ~]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

(6) Check that all components are healthy
[root@k8s-master ~]# kubectl get componentstatus
NAME                 STATUS    MESSAGE              ERROR
etcd-0               Healthy   {"health": "true"}
controller-manager   Healthy   ok
scheduler            Healthy   ok

3. Install the node components on k8s-master
(1) Install the node packages

[root@k8s-master ~]# yum install kubernetes-node.x86_64 -y

(2) Configure the kubelet

[root@k8s-master ~]# vim /etc/kubernetes/kubelet
5 KUBELET_ADDRESS="--address=192.168.2.116"    // listen address
11 KUBELET_HOSTNAME="--hostname-override=k8s-master"    // node name
14 KUBELET_API_SERVER="--api-servers=http://192.168.2.116:8080"    // apiserver endpoint
(3) Start the kubelet (this also starts the docker service)

[root@k8s-master ~]# systemctl start kubelet
[root@k8s-master ~]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

(4) Start kube-proxy

[root@k8s-master ~]# systemctl start kube-proxy
[root@k8s-master ~]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

(5) Check the node

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-master   Ready     51s

4. Install and configure the k8s-node1 node
(1) Install the node packages

[root@k8s-node1 ~]# yum install kubernetes-node.x86_64

(2) Point node1 at k8s-master

[root@k8s-node1 ~]# vim /etc/kubernetes/config
22 KUBE_MASTER="--master=http://192.168.2.116:8080"
(3) Configure the kubelet

[root@k8s-node1 ~]# vim /etc/kubernetes/kubelet
5 KUBELET_ADDRESS="--address=192.168.2.117"
11 KUBELET_HOSTNAME="--hostname-override=k8s-node1"
15 KUBELET_API_SERVER="--api-servers=http://192.168.2.116:8080"
(4) Start the services

[root@k8s-node1 yum.repos.d]# systemctl start kubelet
[root@k8s-node1 yum.repos.d]# systemctl start kube-proxy
[root@k8s-node1 yum.repos.d]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node1 yum.repos.d]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

(5) Check the node status from the master

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-master   Ready     27m
k8s-node1    Ready     12m

5. Install the k8s-node2 node
(1) Install the node packages

[root@k8s-node2 ~]# yum install kubernetes-node.x86_64

(2) Point node2 at k8s-master

[root@k8s-node2 ~]# vim /etc/kubernetes/config
22 KUBE_MASTER="--master=http://192.168.2.116:8080"
(3) Configure the kubelet

[root@k8s-node2 ~]# vim /etc/kubernetes/kubelet
5 KUBELET_ADDRESS="--address=192.168.2.118"
11 KUBELET_HOSTNAME="--hostname-override=k8s-node2"
15 KUBELET_API_SERVER="--api-servers=http://192.168.2.116:8080"
(4) Start the services

[root@k8s-node2 yum.repos.d]# systemctl start kubelet
[root@k8s-node2 yum.repos.d]# systemctl start kube-proxy
[root@k8s-node2 yum.repos.d]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node2 yum.repos.d]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

(5) Check the node status from the master

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-master   Ready     31m
k8s-node1    Ready     16m
k8s-node2    Ready     33s

6. Configure the flannel network on all nodes
(1) Install flannel on the k8s-master node

[root@k8s-master ~]# yum install flannel -y
[root@k8s-master ~]# vim /etc/sysconfig/flanneld
4 FLANNEL_ETCD_ENDPOINTS="http://192.168.2.116:2379"
[root@k8s-master ~]# etcdctl set /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'    // define the overlay network
{ "Network": "172.16.0.0/16" }
[root@k8s-master ~]# systemctl start flanneld
[root@k8s-master ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-master ~]# ifconfig
flannel0: flags=4305<UP,POINTOPOINT,RUNNING,NOARP,MULTICAST>  mtu 1472
        inet 172.16.35.0  netmask 255.255.0.0  destination 172.16.35.0
        inet6 fe80::db5b:30ce:83c4:c67b  prefixlen 64  scopeid 0x20<link>
        unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 500  (UNSPEC)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 3  bytes 144 (144.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@k8s-master ~]# systemctl restart docker
[root@k8s-master ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-master ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.16.35.1  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 02:42:02:36:e4:15  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
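The two interfaces are related: flanneld leases a per-host /24 out of the /16 stored in etcd, records it in /run/flannel/subnet.env, and docker0 is expected to use that subnet's gateway address (172.16.35.1 here). A sketch of the relationship, with the file's contents inlined so it runs anywhere; on a real node, read /run/flannel/subnet.env itself:

```shell
# Simulated contents of /run/flannel/subnet.env (values from the output above).
subnet_env='FLANNEL_NETWORK=172.16.0.0/16
FLANNEL_SUBNET=172.16.35.1/24
FLANNEL_MTU=1472'

# The docker0 bridge should carry the FLANNEL_SUBNET address.
bridge_cidr=$(printf '%s\n' "$subnet_env" | sed -n 's/^FLANNEL_SUBNET=//p')
bridge_ip=${bridge_cidr%/*}
echo "docker0 should be ${bridge_ip} (CIDR ${bridge_cidr})"
```

If docker0 does not match FLANNEL_SUBNET after restarting docker, flanneld likely started after docker; restarting docker (as done above) picks up the leased subnet.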
(2) Configure the flannel network on node1

[root@k8s-node1 ~]# yum install flannel -y
[root@k8s-node1 ~]# vim /etc/sysconfig/flanneld
4 FLANNEL_ETCD_ENDPOINTS="http://192.168.2.116:2379"
[root@k8s-node1 ~]# systemctl start flanneld
[root@k8s-node1 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node1 ~]# systemctl restart docker
[root@k8s-node1 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.

(3) Configure the flannel network on node2

[root@k8s-node2 ~]# yum install flannel -y
[root@k8s-node2 ~]# vim /etc/sysconfig/flanneld
4 FLANNEL_ETCD_ENDPOINTS="http://192.168.2.116:2379"
[root@k8s-node2 ~]# systemctl start flanneld
[root@k8s-node2 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s-node2 ~]# systemctl restart docker
[root@k8s-node2 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS    AGE
k8s-master   Ready     47m
k8s-node1    Ready     32m
k8s-node2    Ready     16m

(4) Test cross-host container communication
[root@k8s-master ~]# docker run -it busybox    // pull and run the image
Unable to find image busybox:latest locally
Trying to pull repository docker.io/library/busybox ...
latest: Pulling from docker.io/library/busybox
3f4d90098f5b: Pull complete
Digest: sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79
Status: Downloaded newer image for docker.io/busybox:latest
/ #
/ # ping 172.16.71.1    // test connectivity to a container network on another host
PING 172.16.71.1 (172.16.71.1): 56 data bytes
64 bytes from 172.16.71.1: seq=0 ttl=61 time=1.730 ms
64 bytes from 172.16.71.1: seq=1 ttl=61 time=0.443 ms
64 bytes from 172.16.71.1: seq=2 ttl=61 time=0.867 ms
^C
--- 172.16.71.1 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.443/1.013/1.730 ms
/ # ping 172.16.10.1    // test connectivity to a container network on another host
PING 172.16.10.1 (172.16.10.1): 56 data bytes
64 bytes from 172.16.10.1: seq=0 ttl=61 time=1.424 ms
64 bytes from 172.16.10.1: seq=1 ttl=61 time=0.485 ms
^C
--- 172.16.10.1 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.485/0.954/1.424 ms

7. Configure Docker to load firewall rules that allow forwarding
(1) Configure the k8s-master node

[root@k8s-master ~]# vim /usr/lib/systemd/system/docker.service
18 ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT    # add this line
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart docker

(2) Configure the k8s-node1 node

[root@k8s-node1 ~]# vim /usr/lib/systemd/system/docker.service
18 ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT    # add this line
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker

(3) Configure the k8s-node2 node

[root@k8s-node2 ~]# vim /usr/lib/systemd/system/docker.service
18 ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT    # add this line
[root@k8s-node2 ~]# systemctl daemon-reload
[root@k8s-node2 ~]# systemctl restart docker
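Editing the unit file shipped under /usr/lib can be overwritten by a package upgrade. A hedged alternative (not used in the steps above) is a systemd drop-in carrying the same ExecStartPost line. The sketch below writes it to a temporary directory so it is safe to run anywhere; on a real node the target would be /etc/systemd/system/docker.service.d/:

```shell
# Write the same ExecStartPost rule as a systemd drop-in. A temp dir stands in
# for /etc/systemd/system/docker.service.d/ so the sketch is safe to run.
dropin_dir=$(mktemp -d)
cat > "$dropin_dir/10-forward.conf" <<'EOF'
[Service]
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
EOF
cat "$dropin_dir/10-forward.conf"
# On a real node, follow with: systemctl daemon-reload && systemctl restart docker
```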
II. Common k8s Resource Management

1. Create a pod

(1) Create the YAML file

[root@k8s-master ~]# mkdir k8s
[root@k8s-master ~]# vim ./k8s/nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
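Because a new pod sits in ContainerCreating while images download, `kubectl get pod` has to be re-run by hand. A small hypothetical helper (not part of the original guide) that polls a command until it succeeds:

```shell
# wait_for <retries> <delay-seconds> <command...>: re-run the command until it
# succeeds or the retry budget is exhausted; the exit status reports the result.
wait_for() {
    retries=$1; delay=$2; shift 2
    i=0
    while [ "$i" -lt "$retries" ]; do
        if "$@"; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}

# Against a live cluster (assumes kubectl is configured on this host):
#   wait_for 30 10 sh -c 'kubectl get pod nginx | grep -q Running'
```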
(2) Create the container

If pulling the pod-infrastructure image fails because /etc/rhsm/ca/redhat-uep.pem is missing, there are two fixes.

Method 1: install the rhsm packages via yum:

[root@k8s-master ~]# yum install *rhsm*

Method 2 (the one used here):

[root@k8s-master ~]# wget http://mirror.centos.org/centos/7/os/x86_64/Packages/python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm
[root@k8s-master ~]# rpm2cpio python-rhsm-certificates-1.19.10-1.el7_4.x86_64.rpm | cpio -iv --to-stdout ./etc/rhsm/ca/redhat-uep.pem | tee /etc/rhsm/ca/redhat-uep.pem

The two commands above produce the /etc/rhsm/ca/redhat-uep.pem file. Then pull the pause image and create the pod:

[root@k8s-master ~]# docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@k8s-master ~]# kubectl create -f ./k8s/nginx.yaml
(3) Check the creation status of all pods

[root@k8s-master ~]# kubectl get pod
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          5m

(4) Check a specific pod
[root@k8s-master ~]# kubectl get pod nginx
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          6m

(5) View detailed pod information

[root@k8s-master ~]# kubectl describe pod nginx
Name: nginx
Namespace: default
Node: k8s-node2/192.168.2.118
Start Time:  Fri, 11 Aug 2023 15:34:10 +0800
Labels:      app=web
Status:      Pending
IP:
Controllers: <none>
Containers:
  nginx:
    Container ID:
    Image:          nginx:1.13
    Image ID:
    Port:           80/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Volume Mounts:  <none>
    Environment Variables: <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
No volumes.
QoS Class:   BestEffort
Tolerations: <none>
Events:
  FirstSeen  LastSeen  Count  From                 SubObjectPath  Type     Reason      Message
  ---------  --------  -----  ----                 -------------  ----     ------      -------
  7m         7m        1      {default-scheduler }                Normal   Scheduled   Successfully assigned nginx to k8s-node2
  7m         1m        6      {kubelet k8s-node2}                 Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ErrImagePull: "image pull failed for registry.access.redhat.com/rhel7/pod-infrastructure:latest, this may be because there are no credentials on this request. details: (open /etc/docker/certs.d/registry.access.redhat.com/redhat-ca.crt: no such file or directory)"
  6m         9s        25     {kubelet k8s-node2}                 Warning  FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "POD" with ImagePullBackOff: "Back-off pulling image \"registry.access.redhat.com/rhel7/pod-infrastructure:latest\""
(6) Verify the running pod

[root@k8s-master ~]# kubectl get pod nginx -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          8m        <none>    k8s-node2

2. Pod management
(1) Delete a pod

[root@k8s-master ~]# kubectl delete pod nginx
pod "nginx" deleted

(2) Confirm the deleted pod can no longer be found

[root@k8s-master ~]# kubectl get pod nginx -o wide
Error from server (NotFound): pods "nginx" not found

(3) Recreate the pod

[root@k8s-master ~]# kubectl create -f ./k8s/nginx.yaml
pod "nginx" created

(4) Check where the recreated pod was scheduled; the image download there is slow, so it is not yet running

[root@k8s-master ~]# kubectl get pod nginx -o wide
NAME      READY     STATUS              RESTARTS   AGE       IP        NODE
nginx     0/1       ContainerCreating   0          8s        <none>    k8s-node1
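When placement matters, for example to keep a pod off a node with slow image pulls, the pod spec can pin scheduling with a nodeSelector. A sketch of the manifest above with one added, assuming the label has first been applied with `kubectl label node k8s-node2 disk=ssd` (the `disk=ssd` label is made up for illustration):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  nodeSelector:      # hypothetical label; apply it to the target node first
    disk: ssd
  containers:
  - name: nginx
    image: nginx:1.13
    ports:
    - containerPort: 80
```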