I followed resources found online and worked through the problems that came up along the way; after testing, the cluster installs correctly.
1. My Environment Setup
I used VMware Workstation Pro 17 to create three CentOS 7 virtual machines, each with 2 CPUs, 20 GB of disk, and 2 GB of RAM (the master node gets 3 GB). All of them have internet access.
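Before continuing, it can be worth a quick check that each VM matches this plan. The commands below are just a sanity check (not part of the original steps), assuming the three IPs used later in this article (192.168.209.132/133/134):
# Run on each VM to confirm CPU, memory, disk, and outbound connectivity
nproc                                   # expect 2
free -h | grep Mem                      # expect roughly 2G (3G on the master)
df -h /                                 # expect a ~20G root disk
ping -c 2 mirrors.aliyun.com            # confirm internet access for the yum mirrors used below
ip addr | grep 'inet 192.168.209'       # confirm the node's IP (132, 133, or 134)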
2. Install Docker on All Nodes
# Check whether docker is already installed
rpm -qa | grep docker
# Remove any old docker version
sudo yum remove docker*
# Install yum utilities
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Configure the docker yum repository
sudo yum-config-manager \
--add-repo \
http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Rebuild the yum cache
sudo yum makecache
# List the available docker versions
yum list docker-ce --showduplicates | sort -r
# Install a specific docker version
sudo yum install -y docker-ce-19.03.9-3.el7 docker-ce-cli-19.03.9-3.el7 containerd.io
# Enable docker at boot and start it now
systemctl enable docker --now
# Create the docker configuration directory
sudo mkdir -p /etc/docker
# Configure a docker registry mirror (image acceleration)
sudo tee /etc/docker/daemon.json <<-EOF
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {"max-size": "100m"},
  "storage-driver": "overlay2"
}
EOF
# Reload the systemd configuration
sudo systemctl daemon-reload
# Restart docker
sudo systemctl restart docker
## Check the docker version to confirm the installation succeeded
[root@localhost ~]# docker version
Client: Docker Engine - Community
 Version:           19.03.9
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        9d988398e7
 Built:             Fri May 15 00:25:27 2020
 OS/Arch:           linux/amd64
 Experimental:      false
PS: the mirror http://hub-mirror.c.163.com works really well; image pulls are very fast.
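Beyond docker version, it may also help to confirm that docker actually picked up the settings from /etc/docker/daemon.json. A quick check, not in the original article:
# Confirm the cgroup driver and registry mirror took effect
docker info | grep -i 'cgroup driver'       # expect: Cgroup Driver: systemd
docker info | grep -A1 'Registry Mirrors'   # expect: http://hub-mirror.c.163.com/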
3. Install Kubernetes
3.1 Set a hostname on every machine; it must not be localhost
In my cluster, 192.168.209.132 is master, 192.168.209.133 is node1, and 192.168.209.134 is node2.
hostnamectl set-hostname master    # run on 192.168.209.132
hostnamectl set-hostname node1     # run on 192.168.209.133
hostnamectl set-hostname node2     # run on 192.168.209.134

3.2 On all machines: disable the swap partition (if free still reports non-zero swap, it is not disabled), set SELinux to permissive, and allow iptables to see bridged traffic (see the Kubernetes docs).
## Disable the swap partition
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
## Set SELinux to permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
## Allow iptables to see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
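A quick way to verify that these prerequisites actually took effect on each machine (a sanity check, not part of the original steps):
# Verify swap is off, SELinux is permissive, and bridged traffic is visible to iptables
free -m | grep -i swap                      # the Swap line should show 0 used / 0 total
getenforce                                  # expect: Permissive
lsmod | grep br_netfilter                   # the br_netfilter module should be listed
sysctl net.bridge.bridge-nf-call-iptables   # expect: net.bridge.bridge-nf-call-iptables = 1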
3.3 Disable the firewall on all machines
systemctl stop firewalld.service
systemctl disable firewalld.service
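To confirm the firewall is stopped and will stay off after a reboot (again, just a sanity check):
# firewalld should be inactive now and disabled at boot
systemctl is-active firewalld    # expect: inactive
systemctl is-enabled firewalld   # expect: disabled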
4. Install kubelet, kubeadm, and kubectl
4.1 On all machines, configure the Kubernetes yum repository, then install and start kubelet.
# Configure the Kubernetes yum repository
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm, and kubectl
sudo yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9
# Enable and start kubelet
sudo systemctl enable --now kubelet
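To confirm the pinned versions were installed and kubelet is enabled (a quick check, not from the original article; kubelet itself will keep restarting until kubeadm init runs):
# Verify the installed versions and kubelet's enablement state
kubeadm version -o short            # expect: v1.20.9
kubectl version --client --short    # expect: Client Version: v1.20.9
systemctl is-enabled kubelet        # expect: enabled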
# On all machines, map the master hostname to its IP
echo "192.168.209.132 master" >> /etc/hosts

4.2 Initialize the master (control-plane) node
Here 192.168.209.132 is the master. --apiserver-advertise-address is the master's IP, --control-plane-endpoint is the master's hostname, --image-repository points at the image registry, --kubernetes-version pins the Kubernetes version, --service-cidr sets the Service network, and --pod-network-cidr sets the Pod network. Note: the subnet given to --pod-network-cidr must not be the same subnet the master itself is on.
kubeadm init \
  --apiserver-advertise-address=192.168.209.132 \
  --control-plane-endpoint=master \
  --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
  --kubernetes-version v1.20.9 \
  --service-cidr=10.96.0.0/16 \
  --pod-network-cidr=192.169.0.0/16

Once initialization finishes, record the following output; it will be needed later.
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join master:6443 --token qvwpva.w2tzw5bgwvswloho \
    --discovery-token-ca-cert-hash sha256:31e38d3227593fa4e5de5fb7e6a868cf927a0936c221d20efbe638daf8827ecd \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

# This part is used later, when the worker nodes join the master to form the cluster
kubeadm join master:6443 --token qvwpva.w2tzw5bgwvswloho \
    --discovery-token-ca-cert-hash sha256:31e38d3227593fa4e5de5fb7e6a868cf927a0936c221d20efbe638daf8827ecd

4.3 Set up kubectl
For now, run the following only on the master node.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile
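At this point kubectl should respond on the master. The node will usually report NotReady until the network plugin from the next section is installed; this extra check is not in the original article:
# kubectl works now, but the master stays NotReady until a pod network (calico, below) is deployed
kubectl get nodes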
4.4 Install the network plugin
4.4.1 Change the kube-proxy mode to ipvs
## Change the kube-proxy mode to ipvs
[root@master ~]# kubectl edit configmap kube-proxy -n kube-system
    ipvs:
      ......
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"    # set to ipvs; if left empty, kube-proxy defaults to iptables
## Restart all kube-proxy pods
[root@master ~]# kubectl get pod -A | grep kube-proxy | awk '{system("kubectl delete pod "$2" -n kube-system")}'
pod "kube-proxy-689h8" deleted
## Check how the control-plane pods on the master are running
[root@master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-5897cd56c4-56mmj 0/1 Pending 0 8m38s
kube-system coredns-5897cd56c4-mgfmh 0/1 Pending 0 8m38s
kube-system etcd-master 1/1 Running 0 8m51s
kube-system kube-apiserver-master 1/1 Running 0 8m51s
kube-system kube-controller-manager-master 1/1 Running 0 8m51s
kube-system kube-proxy-l6946 1/1 Running 0 8m38s
kube-system kube-scheduler-master 1/1 Running 0 8m51s

## Check whether kube-proxy is running in ipvs mode; the log shows it has switched to ipvs
[root@master ~]# kubectl logs kube-proxy-l6946 -n kube-system
......
I0623 09:03:38.347008       1 server_others.go:258] Using ipvs Proxier.
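Another way to confirm ipvs is in use is to list the virtual servers kube-proxy created. This assumes the ipvsadm tool, which the original steps do not install:
# Optional: install ipvsadm and list the ipvs virtual servers created by kube-proxy
yum install -y ipvsadm
ipvsadm -Ln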
4.4.2 Install the network plugin
4.4.2.1 Download calico.yaml
[root@master ~]# curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -o calico.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  198k  100  198k    0     0   229k      0 --:--:-- --:--:-- --:--:--  229k
[root@master ~]#

4.4.2.2 Check which images calico.yaml needs and pull them first
# Log in to docker first
docker login
# Check which images are needed
grep image calico.yaml
image: docker.io/calico/cni:v3.20.6
image: docker.io/calico/cni:v3.20.6
image: docker.io/calico/pod2daemon-flexvol:v3.20.6
image: docker.io/calico/node:v3.20.6
image: docker.io/calico/kube-controllers:v3.20.6
# Pull all of them locally. If this is very slow, switch the docker registry mirror
# (I already switched it earlier in this article, and downloads are much faster).
docker pull docker.io/calico/cni:v3.20.6
docker pull docker.io/calico/pod2daemon-flexvol:v3.20.6
docker pull docker.io/calico/node:v3.20.6
docker pull docker.io/calico/kube-controllers:v3.20.6
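The calico/node and calico/cni images must be present on every node, since calico runs as a DaemonSet. If the workers pull slowly, one option (not covered in the original article; node1 is used as an example hostname) is to export the images from the master and load them on each worker:
# On the master: export the calico images to a tarball
docker save calico/cni:v3.20.6 calico/pod2daemon-flexvol:v3.20.6 \
            calico/node:v3.20.6 calico/kube-controllers:v3.20.6 -o calico-v3.20.6.tar
# Copy the tarball to a worker and load it there
scp calico-v3.20.6.tar root@node1:/root/
ssh root@node1 'docker load -i /root/calico-v3.20.6.tar'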
4.4.2.3 Deploy
[root@master ~]# kubectl apply -f calico.yaml    # to uninstall, run: kubectl delete -f calico.yaml
configmap/calico-config unchanged
......
[root@master ~]#
# Check that the deployment succeeded
[root@master ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6d9cdcd744-49vrr 1/1 Running 0 12m
kube-system calico-node-8gpm2 1/1 Running 0 13m
kube-system coredns-5897cd56c4-56mmj 1/1 Running 0 28m
kube-system coredns-5897cd56c4-mgfmh 1/1 Running 0 28m
kube-system etcd-master 1/1 Running 0 28m
kube-system kube-apiserver-master 1/1 Running 0 28m
kube-system kube-controller-manager-master 1/1 Running 0 28m
kube-system kube-proxy-l6946 1/1 Running 0 119s
kube-system kube-scheduler-master 1/1 Running 0 28m

5. Join the two worker nodes to the master (run this on each worker)
5.1 Remember the output from initializing the master in step 4.2? The command for a worker to join the master is below; switch to the non-master machines and run it there.
kubeadm join master:6443 --token qvwpva.w2tzw5bgwvswloho \
    --discovery-token-ca-cert-hash sha256:31e38d3227593fa4e5de5fb7e6a868cf927a0936c221d20efbe638daf8827ecd
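If the token from the init output has expired by the time the workers join (kubeadm tokens are valid for 24 hours by default), a fresh join command can be generated on the master. This is a standard kubeadm command rather than a step from the original article:
# Run on the master to print a new, valid join command for the worker nodes
kubeadm token create --print-join-command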
5.2 Create a copy of the master's /etc/kubernetes/admin.conf on each worker node as well, then save it
vi /etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile
source ~/.bash_profile

6. On each node, kubectl get nodes should now show all nodes as Ready
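For example, run this on any node where KUBECONFIG is set:
# All three nodes should report Ready once calico is running and the workers have joined
kubectl get nodes -o wide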
7. Install the Ingress Controller
7.1 Download the yaml file
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml -O ./ingress-nginx.yaml

7.2 Modify the yaml configuration
$ grep -n5 nodeSelector ingress-nginx.yaml
  replicas: 2                  # set the replica count; in host mode the replicas will not be scheduled onto the same node
  spec:
    hostNetwork: true          # add this to run in host network mode
    terminationGracePeriodSeconds: 300
    serviceAccountName: nginx-ingress-serviceaccount
    nodeSelector:
      ingress: "true"          # change this selector to decide which machines the ingress controller is deployed on
    containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
        args:

7.3 Add a label to the nodes that should run the ingress controller
kubectl label node master ingress=true
kubectl label node node1 ingress=true
kubectl label node node2 ingress=true

7.4 Create the ingress-controller
kubectl apply -f ingress-nginx.yaml
PS: if you see the error below, I removed the related taint and labels and re-applied, which then succeeded:
Warning  FailedScheduling  38s (x7 over 6m4s)  default-scheduler  0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't have free ports for the requested pod ports.
[root@master k8s-app]# kubectl get nodes --show-labels -l ingress=true
NAME STATUS ROLES AGE VERSION LABELS
master   Ready    control-plane,master   94m   v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=
node1    Ready    <none>                 89m   v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=node1,kubernetes.io/os=linux
node2    Ready    <none>                 89m   v1.20.9   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=node2,kubernetes.io/os=linux
[root@master k8s-app]# kubectl taint nodes --all node-role.kubernetes.io/master-
node/master untainted
taint node-role.kubernetes.io/master not found
taint node-role.kubernetes.io/master not found
# Remove the node-role labels
[root@master k8s-app]# kubectl label nodes master node-role.kubernetes.io/control-plane-
node/master labeled
[root@master k8s-app]# kubectl label nodes master node-role.kubernetes.io/master-

8. Deployment succeeded
[root@master istio]# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.9", GitCommit:"7a576bc3935a6b555e33346fd73ad77c925e9e4a", GitTreeState:"clean", BuildDate:"2021-07-15T21:01:38Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.9", GitCommit:"7a576bc3935a6b555e33346fd73ad77c925e9e4a", GitTreeState:"clean", BuildDate:"2021-07-15T20:56:38Z", GoVersion:"go1.15.14", Compiler:"gc", Platform:"linux/amd64"}

[root@master istio]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master   Ready    <none>   25h   v1.20.9
node1    Ready    <none>   25h   v1.20.9
node2    Ready    <none>   25h   v1.20.9

We now have a working Kubernetes cluster.
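As an optional smoke test (not part of the original article; the deployment name nginx-test is just an example), you can run a throwaway nginx pod and expose it to confirm that scheduling and networking work end to end:
# Create a test deployment, expose it via a NodePort service, and check the result
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
kubectl get pods,svc -o wide
# Clean up when done
kubectl delete service,deployment nginx-test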