Contents
1. Environment preparation
1.1 Disable swap (a Kubernetes requirement)
1.2 Disable iptables (set on all three machines)
1.3 Change the hostnames (set on all three machines)
1.4 Name resolution and passwordless SSH login
1.5 Adjust the Docker configuration file (all machines)
2. Install k8s
2.1 Download the yum repository
2.2 Create a cache so the rpm packages downloaded later are saved for the other machines
2.3 Enable the iptables bridge settings (all three nodes)
2.4 Enable IP forwarding (all three nodes)
2.5 Back on the master node
2.6 Initialize the cluster and download the images
2.7 Join the other nodes to the cluster
3. Resetting the k8s network environment

Kubernetes is an open-source system for managing containerized applications across multiple hosts in a cloud platform. Its goal is to make deploying containerized applications simple and efficient, and it provides mechanisms for application deployment, scheduling, updating, and maintenance.
Kubernetes runs containerized applications on clusters of physical or virtual machines and provides a "container-centric infrastructure" that satisfies the common requirements of running applications in production.
Kubernetes Chinese official site: https://kubernetes.io/zh/   Kubernetes Chinese community: https://www.kubernetes.org.cn/
1. Environment preparation
My server environment:
Every host must have Docker installed, the firewall disabled (Kubernetes normally runs inside a LAN), SELinux disabled (so file access is not blocked), and the clocks kept in sync.
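These prerequisites can be applied on each node with a short script. A sketch only, assuming a CentOS 7 host with the firewalld and chronyd services (adjust the service names to your distribution):

```shell
# Run on every node (CentOS 7 assumed)
systemctl disable --now firewalld                                        # stop the firewall now and at boot
setenforce 0                                                             # SELinux permissive for this session...
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config    # ...and after a reboot
systemctl enable --now chronyd                                           # keep the clocks in sync
```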
1.1 Disable swap (a Kubernetes requirement)
Note: swap must be disabled on all nodes, otherwise a node cannot join the cluster.
# Check whether swap is enabled
[root@m72 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:            62G        647M         54G        1.0G        7.9G         60G
Swap:           15G          0B         15G
To disable swap, run:
# Turn off swap
[root@m72 ~]# swapoff -a
# Edit the swap entry in the fstab
[root@m72 ~]# vim /etc/fstab
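Instead of editing the file by hand, the swap entry can be commented out with sed. A self-contained sketch below works on a sample copy; on a real node run the same sed line against /etc/fstab itself:

```shell
# Demo file standing in for /etc/fstab (device names are made up)
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/centos-root /        xfs   defaults 0 0
/dev/mapper/centos-swap swap     swap  defaults 0 0
EOF
# Prefix any active swap line with '#'
sed -ri 's/^[^#].*\bswap\b.*/#&/' /tmp/fstab.demo
grep swap /tmp/fstab.demo   # the swap entry is now commented out
```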
Comment out the swap line in that file.
1.2 Disable iptables (set on all three machines)
Flush the iptables rules and restart the Docker service:
[root@m72 ~]# iptables -F
[root@m72 ~]# systemctl daemon-reload
[root@m72 ~]# systemctl restart docker
1.3 Change the hostnames (set on all three machines)
Set each machine to its own hostname; here the master is m72 and the two worker nodes are s73 and s74.
[root@m72 ~]# hostnamectl set-hostname m72
[root@s73 ~]# hostnamectl set-hostname s73
[root@s74 ~]# hostnamectl set-hostname s74
1.4 Name resolution and passwordless SSH login
Edit the hosts file on the master:
[root@m72 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.220.15.72 m72
10.220.15.73 s73
10.220.15.74 s74
Copy the hosts file to the other two nodes:
[root@m72 ~]# scp /etc/hosts root@10.220.15.73:/etc/hosts
[root@m72 ~]# scp /etc/hosts root@10.220.15.74:/etc/hosts
Generate an SSH key pair on the master:
[root@m72 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:ZEdN2IJIihMOhSrVavfgY81EDm6mJw1clOIg7hLwzpg root@m72
The key's randomart image is:
+---[RSA 2048]----+
|.ooo.o.. ... |
|ooo.o ..o o |
|*. o .. |
|.B.* oo . |
|oB O * S |
|E. * |
|. . |
| |
| |
+----[SHA256]-----+
Copy the key to the other two nodes:
[root@m72 ~]# ssh-copy-id s73
[root@m72 ~]# ssh-copy-id s74
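Key generation and distribution can also be done in one non-interactive pass; a sketch, using the hostnames from the hosts file above (ssh-copy-id will still prompt once for each node's password):

```shell
# Generate a key without prompts, unless one already exists
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Push the public key to both worker nodes
for h in s73 s74; do
  ssh-copy-id "root@$h"
done
```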
1.5 Adjust the Docker configuration file (all machines)
# Kubernetes officially recommends that Docker use systemd as the cgroup driver, otherwise kubelet will not start
vi /etc/docker/daemon.json
The key change is adding the "exec-opts": ["native.cgroupdriver=systemd"] option; the rest depends on your own setup:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["https://b9307g5p.mirror.aliyuncs.com"]
}
Restart for the change to take effect:
systemctl daemon-reload
systemctl restart docker
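After the restart it is worth confirming that Docker actually picked up the systemd cgroup driver (a quick check; requires a running Docker daemon):

```shell
# Should print: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i 'cgroup driver'
```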
2. Install k8s
We install k8s with kubeadm, the automated deployment tool developed by the Kubernetes project, which makes the installation much faster.
2.1 Download the yum repository
We use the Aliyun mirror:
https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
Create a new repo file for kubernetes:
vim /etc/yum.repos.d/kubernetes.repo
Add the following and save:
[kubernetes]
name=kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
2.2 Create a cache so the rpm packages downloaded later are saved for the other machines
[root@m72 ~]# yum makecache
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.bfsu.edu.cn
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base | 3.6 kB 00:00:00
docker-ce-stable | 3.5 kB 00:00:00
extras | 2.9 kB 00:00:00
kubernetes | 1.4 kB 00:00:00
updates | 2.9 kB 00:00:00
......
kubernetes 797/797
kubernetes 797/797
Metadata Cache Created
Copy kubernetes.repo to the other two machines:
[root@m72 /]# scp /etc/yum.repos.d/kubernetes.repo s73:/etc/yum.repos.d/
[root@m72 /]# scp /etc/yum.repos.d/kubernetes.repo s74:/etc/yum.repos.d/
2.3 Enable the iptables bridge settings (all three nodes)
Create a custom file k8s.conf:
[root@m72 /]# vim /etc/sysctl.d/k8s.conf
# Add the following
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
# Apply the settings
[root@m72 /]# sysctl -p /etc/sysctl.d/k8s.conf
Adjust the other two machines the same way.
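Note that these two bridge sysctls only exist once the br_netfilter kernel module is loaded, so load it first and make the load persistent across reboots; a sketch (the modules-load.d file name is arbitrary):

```shell
modprobe br_netfilter                              # load the module now
echo br_netfilter > /etc/modules-load.d/k8s.conf   # load it on every boot
sysctl -p /etc/sysctl.d/k8s.conf                   # re-apply the bridge settings
```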
2.4 Enable IP forwarding (all three nodes)
[root@m72 /]# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# Apply the setting
[root@m72 /]# sysctl -p
Adjust the other two machines the same way.
2.5 Back on the master node
[root@m72 /]# vim /etc/yum.conf
Set keepcache=1 to enable the package cache.
Download the rpm packages:
[root@m72 /]# yum -y install kubelet-1.22.4 kubeadm-1.22.4 kubectl-1.22.4
After the download finishes, check that the rpm packages were cached:
[root@m72 /]# cd /var/cache/yum/x86_64/7/kubernetes/packages/
[root@m72 packages]# ll
total 67204
-rw-r--r--. 1 root root 7401938 Mar 18 06:26 4d300a7655f56307d35f127d99dc192b6aa4997f322234e754f16aaa60fd8906-cri-tools-1.23.0-0.x86_64.rpm
-rw-r--r--. 1 root root 9290806 Jan 4 2021 aa386b8f2cac67415283227ccb01dc043d718aec142e32e1a2ba6dbd5173317b-kubeadm-1.22.4-0.x86_64.rpm
-rw-r--r--. 1 root root 19487362 Jan 4 2021 db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm
-rw-r--r--. 1 root root 9920226 Jan 4 2021 f27b0d7e1770ae83c9fce9ab30a5a7eba4453727cdc53ee96dc4542c8577a464-kubectl-1.22.4-0.x86_64.rpm
-rw-r--r--. 1 root root 22704558 Jan 4 2021 f5edc025972c2d092ac41b05877c89b50cedaa7177978d9e5e49b5a2979dbc85-kubelet-1.22.4-0.x86_64.rpm
Enable kubelet at boot:
[root@m72 packages]# systemctl enable kubelet.service
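Since keepcache=1 saved the rpm files, the worker nodes can be given the exact same pinned versions even if they cannot reach the mirror. A sketch, reusing the cache path above (hostnames as configured earlier):

```shell
PKGDIR=/var/cache/yum/x86_64/7/kubernetes/packages
for h in s73 s74; do
  scp "$PKGDIR"/*.rpm "root@$h:/root/"
  ssh "root@$h" 'yum -y localinstall /root/*.rpm && systemctl enable kubelet'
done
```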
2.6 Initialize the cluster and download the images
Because of network restrictions in mainland China we cannot pull the images directly from Google's registry, so we upload the image archive to the cluster and import it instead.
Link: Baidu Netdisk   Extraction code: kuxd
Write a script to import them automatically:
vim images-import.sh
Add the following:
#!/bin/bash
# By default the archive extracts to /root/kubeadm-basic.images
tar -zxvf /root/kubeadm-basic.images.tar.gz
ls /root/kubeadm-basic.images > /tmp/image-list.txt
cd /root/kubeadm-basic.images
for i in $(cat /tmp/image-list.txt)
do
    docker load -i $i
done
rm -rf /tmp/image-list.txt
Run the script, then check whether the images were imported:
[root@m72 ~]# sh images-import.sh
# Check the result
[root@m72 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
k8s.gcr.io/kube-apiserver v1.22.4 68c3eb07bfc3 2 years ago 207MB
k8s.gcr.io/kube-controller-manager v1.22.4 d75082f1d121 2 years ago 159MB
k8s.gcr.io/kube-proxy v1.22.4 89a062da739d 2 years ago 82.4MB
k8s.gcr.io/kube-scheduler v1.22.4 b0b3c4c404da 2 years ago 81.1MB
k8s.gcr.io/coredns 1.3.1 eb516548c180 3 years ago 40.3MB
k8s.gcr.io/etcd 3.3.10 2c4adeb21b4f 3 years ago 258MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 4 years ago 742kB
Once all the Docker images are imported, initialize the cluster:
kubeadm init --kubernetes-version=v1.22.4 --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
# --kubernetes-version: the current Kubernetes version (check with kubelet --version)
# --pod-network-cidr: the pod network range (the Kubernetes default network)
# --ignore-preflight-errors=Swap: ignore the swap preflight error
The output looks like the following; note down the kubeadm join line printed at the end, as the other nodes need it to join:
[init] Using Kubernetes version: v1.22.4
......
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.220.15.72:6443 --token cralff.5gzyqjx8jirtta2c \
    --discovery-token-ca-cert-hash sha256:d72a9456b0ccb8e2bbdf58ae735410249ec9d6dfb641aa9e38e60e336de6d5bc
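The join token expires after 24 hours by default. If it is lost or has expired by the time a node joins, a fresh join command can be printed on the master:

```shell
# Creates a new token and prints the complete kubeadm join command
kubeadm token create --print-join-command
```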
Following the hint above, create the directory and set ownership:
[root@m72 ~]# mkdir -p $HOME/.kube
[root@m72 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@m72 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the nodes:
[root@m72 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m72 NotReady master 9m54s v1.22.4
The node shows NotReady because the flannel add-on is still missing; without a pod network the pods cannot communicate.
Download the kube-flannel.yml file; if the download is blocked, create the file locally instead.
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Its content is as follows:
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    k8s-app: flannel
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: flannel
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    k8s-app: flannel
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: docker.io/flannel/flannel-cni-plugin:v1.2.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        image: docker.io/flannel/flannel:v0.22.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: docker.io/flannel/flannel:v0.22.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: 100m
            memory: 50Mi
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
Apply it and check the nodes again:
[root@m72 ~]# kubectl apply -f kube-flannel.yml
[root@m72 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m72 Ready master 38m v1.22.4
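Besides the node status, the flannel pods themselves can be checked directly (namespace name as in the manifest above):

```shell
kubectl get pods -n kube-flannel -o wide   # one kube-flannel-ds pod per node, all Running
kubectl get pods -n kube-system            # coredns should also move to Running now
```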
2.7 Join the other nodes to the cluster
Run on each of the other nodes:
yum -y install kubelet-1.22.4 kubeadm-1.22.4 kubectl-1.22.4
Enable kubelet at boot:
systemctl enable kubelet.service
Create a directory for the images:
mkdir images
Copy the image files from the master node:
[root@m72 ~]# cd kubeadm-basic.images
[root@m72 kubeadm-basic.images]# scp * s73:/root/images/
[root@m72 kubeadm-basic.images]# scp * s74:/root/images/
Back on the s73 worker node, load the images into Docker:
docker load -i xxx.tar
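Loading every archive one by one is tedious; on the worker a loop handles the whole directory (the file names in /root/images are assumed to be the copied image archives):

```shell
for f in /root/images/*; do
  docker load -i "$f"
done
docker images   # confirm the k8s.gcr.io images are now present
```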
Finally, on s73, run the join command printed earlier:
kubeadm join 10.220.15.72:6443 --token cralff.5gzyqjx8jirtta2c \
    --discovery-token-ca-cert-hash sha256:d72a9456b0ccb8e2bbdf58ae735410249ec9d6dfb641aa9e38e60e336de6d5bc
After it reports that the node has joined, check from the master node:
[root@m72 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
m72 Ready master 3h7m v1.22.4
s73 Ready <none> 8m32s v1.22.4
Node s73 has joined successfully; join s74 the same way, and the cluster deployment is complete.
3. Resetting the k8s network environment
In practice the cluster's network environment may change, and the deployment then has to be adjusted.
First update the hosts file on every node, then adjust the k8s configuration.
Run the reset on every node:
kubeadm reset
On the master, delete the old config:
rm -rf $HOME/.kube
Re-run the initialization; the main change is pointing the apiserver at the new IP address:
kubeadm init \
--kubernetes-version=v1.22.4 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=${newIp}
Copy the kube config file:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
On the worker nodes, re-run the join step using the kubeadm join line generated by kubeadm init.
Check the node information:
kubectl get nodes
If a worker node shows NotReady, edit its kubelet configuration:
vi /var/lib/kubelet/kubeadm-flags.env
Remove this flag:
--network-plugin=cni
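The flag can also be stripped non-interactively with sed. A self-contained sketch on a sample copy (the file contents here are illustrative; on the node, run the sed line against /var/lib/kubelet/kubeadm-flags.env itself):

```shell
# Demo file standing in for /var/lib/kubelet/kubeadm-flags.env
echo 'KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.5"' > /tmp/kubeadm-flags.env
# Drop the flag (and any trailing spaces after it)
sed -i 's/--network-plugin=cni *//' /tmp/kubeadm-flags.env
cat /tmp/kubeadm-flags.env
```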
Restart the service:
systemctl daemon-reload
systemctl restart kubelet