Table of Contents

I. Environment
II. Steps
  1. Install the cfssl tools
  2. Deploy the etcd cluster
  3. Install Docker on the node machines
  4. Install the flannel component
  Deploy the master components
  Deploy the node components
  Deploy the kube-proxy component
III. Testing

I. Environment
| Role   | Server address  | Components |
|--------|-----------------|------------|
| master | 192.168.174.140 | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| node   | 192.168.174.151 | kube-proxy, flannel, kubelet, docker, etcd |
| node   | 192.168.174.190 | kube-proxy, flannel, kubelet, docker, etcd |
II. Steps
1. Install the cfssl tools
Install them on just one of the machines; they are used to generate and sign the certificates for all the components.
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
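Optionally, confirm the tools are on the PATH before continuing:

cfssl version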
2. Deploy the etcd cluster

Create three files: ca-config.json, ca-csr.json, and server-csr.json.
# cat ca-config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "www": {
                "expiry": "87600h",
                "usages": ["signing", "key encipherment", "server auth", "client auth"]
            }
        }
    }
}
# cat ca-csr.json
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [{
        "C": "CN",
        "L": "Beijing",
        "ST": "Beijing"
    }]
}
# cat server-csr.json
{
    "CN": "etcd",
    "hosts": [
        "192.168.174.140",
        "192.168.174.151",
        "192.168.174.190"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [{
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing"
    }]
}
Generate the certificate files:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
# ls *pem
ca-key.pem ca.pem server-key.pem server.pem
Download etcd. Binary package download address: https://github.com/coreos/etcd/releases/tag/v3.2.12
mkdir /opt/etcd/{bin,cfg,ssl} -p
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
Create the etcd configuration file.
Apart from the node name and the server's own IP, the configuration file is identical on every node; perform the same steps on the other nodes.
# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.174.140:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.174.140:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.174.140:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.174.140:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.174.140:2380,etcd02=https://192.168.174.151:2380,etcd03=https://192.168.174.190:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

The options are explained below:
ETCD_NAME: node name
ETCD_DATA_DIR: data directory
ETCD_LISTEN_PEER_URLS: peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS: client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
ETCD_ADVERTISE_CLIENT_URLS: advertised client address
ETCD_INITIAL_CLUSTER: cluster member addresses
ETCD_INITIAL_CLUSTER_TOKEN: cluster token
ETCD_INITIAL_CLUSTER_STATE: the node's state when joining: "new" for a new cluster, "existing" to join an existing cluster

Configure the etcd systemd unit file (on CentOS this typically lives at /usr/lib/systemd/system/etcd.service):
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Move the certificates generated earlier into the ssl directory:
cp ca*pem server*pem /opt/etcd/ssl
Perform all of the etcd deployment steps above on all 3 nodes.
Start the etcd cluster.
# systemctl start etcd
# systemctl enable etcd
Check the health of the cluster.
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints=https://192.168.174.140:2379,https://192.168.174.151:2379,https://192.168.174.190:2379 \
cluster-health
If every member is reported as healthy, the cluster is working. At this point the etcd cluster has been installed successfully.
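As an extra sanity check, you can write a test key through one member and read it back through another (the key /testkey is just an example):

/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints=https://192.168.174.140:2379 \
set /testkey ok
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints=https://192.168.174.151:2379 \
get /testkey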
3. Install Docker on the node machines
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
# yum install docker-ce -y
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
# systemctl start docker
# systemctl enable docker
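Optionally confirm the daemon is up and the registry mirror took effect before moving on:

# docker info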
4. Install the flannel component
Flannel stores its own subnet information in etcd, so make sure etcd is reachable, then write the predefined subnet:
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints=https://192.168.174.140:2379,https://192.168.174.151:2379,https://192.168.174.190:2379 \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
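You can read the key back to confirm it was stored:

/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints=https://192.168.174.140:2379 \
get /coreos.com/network/config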
Run the following on all of the node machines:
# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
# cp flanneld mk-docker-opts.sh /usr/bin
# cp flanneld mk-docker-opts.sh /opt/kubernetes/bin/
The flannel configuration file:
# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.174.140:2379,https://192.168.174.151:2379,https://192.168.174.190:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Configure the flanneld systemd unit file:
# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target

Configure the Docker unit file so that docker0 is placed on the subnet flannel allocated:
# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Restart the services:
# systemctl daemon-reload
# systemctl start flanneld
# systemctl enable flanneld
# systemctl restart docker
Make sure docker0 and flannel.1 are on the same subnet. To test connectivity between nodes, access the docker0 IP of another Node from the current node and confirm they can communicate.
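For example, check the interfaces and then ping across nodes (replace the target with the other node's docker0 address, whatever flannel assigned there):

# ip addr show flannel.1
# ip addr show docker0
# ping -c 3 <the other node's docker0 IP>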
Deploy the master components
# cat ca-config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": ["signing", "key encipherment", "server auth", "client auth"]
            }
        }
    }
}
# cat ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [{
        "C": "CN",
        "L": "Beijing",
        "ST": "Beijing",
        "O": "k8s",
        "OU": "System"
    }]
}
Generate the CA certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Create the apiserver certificate request file:
# cat server-csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "192.168.174.140",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [{
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing",
        "O": "k8s",
        "OU": "System"
    }]
}
Generate the apiserver certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
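To confirm the hosts above actually ended up in the certificate's SANs, you can inspect it with the cfssl-certinfo tool installed earlier:

cfssl-certinfo -cert server.pem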
Copy the certificates to the /opt/kubernetes/ssl/ directory (create it first if it does not exist yet):
mkdir -p /opt/kubernetes/ssl
mv *pem /opt/kubernetes/ssl/
Download the kubernetes binaries: https://github.com/kubernetes/kubernetes/releases
# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
Create the token file:
# cat /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Column 1 is a random string (you can generate your own); column 2 is the user name; column 3 is the UID; column 4 is the user group.
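One way to generate your own random token string, if you prefer not to reuse the one above:

head -c 16 /dev/urandom | od -An -t x | tr -d ' '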
The api-server configuration file (/opt/kubernetes/cfg/kube-apiserver):
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.174.140:2379,https://192.168.174.151:2379,https://192.168.174.190:2379 \
--bind-address=192.168.174.140 \
--secure-port=6443 \
--advertise-address=192.168.174.140 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Parameter reference:
--logtostderr: enable logging to stderr
--v: log level
--etcd-servers: etcd cluster addresses
--bind-address: listen address
--secure-port: https secure port
--advertise-address: address advertised to the cluster
--allow-privileged: allow privileged containers
--service-cluster-ip-range: virtual IP range for Services
--enable-admission-plugins: admission control plugins
--authorization-mode: authorization modes; enables RBAC authorization and Node self-management
--enable-bootstrap-token-auth: enables the TLS bootstrap feature (covered later)
--token-auth-file: token file
--service-node-port-range: default port range for NodePort Services

Configure the apiserver systemd unit file:
# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the api-server service:
# systemctl daemon-reload
# systemctl enable kube-apiserver
# systemctl restart kube-apiserver
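To verify the apiserver came up, check that it is listening on the secure port (6443) and the local insecure port (8080):

# ss -lntp | grep kube-apiserver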
Create the scheduler configuration file:
# cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
Parameter reference:
--master: connect to the local apiserver
--leader-elect: elect a leader automatically when several instances of this component run (HA)

The scheduler systemd unit file:
# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the scheduler service:
# systemctl daemon-reload
# systemctl enable kube-scheduler
# systemctl restart kube-scheduler
Create the kube-controller-manager configuration file:
# cat /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
Create the kube-controller-manager unit file:
# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the kube-controller-manager service:
# systemctl daemon-reload
# systemctl enable kube-controller-manager
# systemctl restart kube-controller-manager
Check whether the master components are healthy.
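One way to check, run on the master:

# /opt/kubernetes/bin/kubectl get cs

The controller-manager, the scheduler, and each etcd member should all report Healthy.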
Deploy the node components
Bind the hel user to the system cluster role.
Note: if the nodes are managed separately, create a different user for each.
kubectl create clusterrolebinding hel \
  --clusterrole=system:node-bootstrapper \
  --user=hel
Create the kubeconfig file:
export KUBE_APISERVER="https://192.168.174.140:6443"
TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc

# Set the cluster parameters (run in the directory that holds the k8s CA certificate)
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=hel.kubeconfig

# Set the client authentication parameters
kubectl config set-credentials hel \
  --token=${TOKEN} \
  --kubeconfig=hel.kubeconfig

# Set the context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=hel \
  --kubeconfig=hel.kubeconfig

kubectl config use-context default --kubeconfig=hel.kubeconfig

Create the kube-proxy certificate:
# cat kube-proxy-csr.json
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [{
        "C": "CN",
        "L": "BeiJing",
        "ST": "BeiJing",
        "O": "k8s",
        "OU": "System"
    }]
}
Generate the kube-proxy certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
Create the kube-proxy kubeconfig file:
kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

Copy the two files hel.kubeconfig and kube-proxy.kubeconfig to the /opt/kubernetes/cfg directory on the Node machines.
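You can also print a generated kubeconfig back to sanity-check it before copying:

kubectl config view --kubeconfig=kube-proxy.kubeconfig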
On the node machines:
Create the kubelet configuration file:
# cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.174.151 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/hel.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--cluster-dns=10.0.0.2 \
--cluster-domain=cluster.local. \
--fail-swap-on=false \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Parameter reference:
--hostname-override: the host name shown in the cluster
--kubeconfig: location of the kubeconfig file (generated automatically)
--bootstrap-kubeconfig: the hel.kubeconfig file generated earlier
--cert-dir: where the issued certificates are stored
--cluster-dns: cluster DNS IP (set it now; covered later)
--cluster-domain: DNS domain
--fail-swap-on=false: do not refuse to start when swap is enabled
--pod-infra-container-image: image that manages the Pod network

Create the kubelet unit file:
# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target

Start the service:
# systemctl daemon-reload
# systemctl enable kubelet
# systemctl restart kubelet
Approve the Node on the Master so it can join the cluster
After starting, the node has not joined the cluster yet; it must be approved manually. On the Master, list the Nodes requesting certificate signing:
# kubectl get csr
# kubectl certificate approve XXXXID
# kubectl get node
Deploy the kube-proxy component
Create the kube-proxy configuration file:
# cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.174.151 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
Create the kube-proxy unit file:
# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target

Start the service:
# systemctl daemon-reload
# systemctl enable kube-proxy
# systemctl restart kube-proxy
OK, at this point the binary installation of k8s is complete.
Test that everything runs correctly.
Check the status of the cluster nodes; if every node shows Ready, the cluster was created successfully.
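For example, on the master:

# /opt/kubernetes/bin/kubectl get nodes

Both nodes (192.168.174.151 and 192.168.174.190) should be listed with STATUS Ready.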
III. Testing
Run an nginx service to test that a pod can be created and accessed normally.
kubectl run nginx-deploy --image=nginx --port=80 --replicas=1
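To verify the pod is reachable from outside, one option is to expose the deployment as a NodePort service and curl it; the port in the last command is whatever NodePort the service gets assigned from the 30000-50000 range configured earlier:

kubectl get pods -o wide
kubectl expose deployment nginx-deploy --port=80 --type=NodePort
kubectl get svc nginx-deploy
curl http://192.168.174.151:<assigned NodePort>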