k8s 1.32: environment initialization · install containerd · install runc · install the CNI plugins · deploy the k8s cluster · install crictl · deploy the cluster with kubeadm · join nodes to the cluster · deploy the Calico network · configure the Dashboard

This lab builds a Kubernetes 1.32 cluster on Ubuntu 24.04 virtual machines created with VMware. The architecture is one control-plane node plus one worker; the container runtime is containerd and the network plugin is Calico.
Note: the software packages used in this lab are assumed to be uploaded to the root home directory. All images and packages involved can be fetched from the shared net-disk folder.
Shared folder: k8s-1.32.1 — link: https://pan.baidu.com/s/1qlPUNOMu1OWZlNRN7M7RSw?pwd=6kf8  extraction code: 6kf8
Node        IP
master      192.168.200.160
worker01    192.168.200.161
Official release notes: the version deployed here is the latest 1.32, which introduces several major features and enhancements that improve cluster performance, manageability and reliability. Highlights of this release:
- Dynamic Resource Allocation (DRA) enhancements: DRA is a cluster-level API for requesting and sharing resources between Pods and containers. In 1.32 it received several improvements, particularly for AI/ML workloads that depend on specialized hardware such as GPUs, making resource allocation more efficient and flexible.
- Graceful shutdown for Windows nodes: previously Kubernetes only supported graceful shutdown on Linux nodes. In 1.32 the kubelet extends this to Windows nodes, ensuring Pods follow the correct lifecycle events and terminate gracefully on shutdown, so workloads migrate smoothly without interruption.
- New status endpoints on core components: kube-scheduler and kube-controller-manager now expose /statusz and /flagz HTTP endpoints, making it easier to collect detailed health and configuration information and to troubleshoot the cluster.
- Asynchronous preemption in the scheduler: high-priority Pods can acquire the resources they need by evicting lower-priority Pods in parallel, minimizing scheduling latency for the other Pods in the cluster.
- Automatic deletion of PersistentVolumeClaims (PVCs) created by StatefulSets: this feature graduates to stable in 1.32, simplifying storage management for stateful workloads and reducing the risk of orphaned resources.
- Improved OpenTelemetry trace export from the kubelet: the kubelet's trace generation and export were overhauled to make monitoring, detecting and resolving kubelet-related issues easier.
- Other enhancements and deprecations: anonymous authorization can now be allowed for specific configured endpoints, improving security and flexibility; recovery from failed volume expansion lets a failed expansion be retried with a smaller size, reducing the risk of data loss or corruption; the flowcontrol.apiserver.k8s.io/v1beta3 API for FlowSchema and PriorityLevelConfiguration was removed — users are encouraged to migrate to flowcontrol.apiserver.k8s.io/v1, available since 1.29.
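As a quick illustration of the new status endpoints (a hedged sketch: /statusz and /flagz are alpha in 1.32 and sit behind the ComponentStatusz / ComponentFlagz feature gates, so they only answer if those gates are enabled on the component):

# Query the API server's status/flag pages through kubectl; kube-scheduler and
# kube-controller-manager expose the same paths on their own secure ports.
kubectl get --raw='/statusz'
kubectl get --raw='/flagz'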
Environment initialization
Initialize the cluster nodes: hostname, /etc/hosts mapping, passwordless SSH, time sync, and so on.
vim init.sh

#!/bin/bash
# Node definitions: "IP hostname user"
NODES=("192.168.200.160 master root" "192.168.200.161 worker01 root")
# Password of the current node (the lab uses one common password for the whole cluster)
HOST_PASS="000000"
# Node that acts as the time-sync server
TIME_SERVER=master
# Address range allowed to sync from the time server
TIME_SERVER_IP=192.168.200.0/24

# Message of the day
cat > /etc/motd <<EOF
#################################
#       Welcome  to  k8s        #
#################################
EOF

# Set the hostname
for node in "${NODES[@]}"; do
    ip=$(echo "$node" | awk '{print $1}')
    hostname=$(echo "$node" | awk '{print $2}')
    # Hostname and IP of the node the script is running on
    current_ip=$(hostname -I | awk '{print $1}')
    current_hostname=$(hostname)
    # If this entry matches the current node but the hostname differs, update it
    if [[ "$current_ip" == "$ip" && "$current_hostname" != "$hostname" ]]; then
        echo "Updating hostname to $hostname on $current_ip..."
        hostnamectl set-hostname "$hostname"
        if [ $? -eq 0 ]; then
            echo "Hostname updated successfully."
        else
            echo "Failed to update hostname."
        fi
        break
    fi
done

# Add every node to the hosts file
for node in "${NODES[@]}"; do
    ip=$(echo "$node" | awk '{print $1}')
    hostname=$(echo "$node" | awk '{print $2}')
    # Skip entries that already exist
    if grep -q "$ip $hostname" /etc/hosts; then
        echo "Host entry for $hostname already exists in /etc/hosts."
    else
        sudo sh -c "echo '$ip $hostname' >> /etc/hosts"
        echo "Added host entry for $hostname in /etc/hosts."
    fi
done

# Generate an SSH key pair if one does not exist yet
if [[ ! -s ~/.ssh/id_rsa.pub ]]; then
    ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q -b 2048
fi

# Install the sshpass tool if it is missing
if ! which sshpass > /dev/null; then
    echo "sshpass is not installed, installing it now..."
    sudo apt-get install -y sshpass
fi

# Distribute the public key to every node for passwordless SSH
for node in "${NODES[@]}"; do
    ip=$(echo "$node" | awk '{print $1}')
    hostname=$(echo "$node" | awk '{print $2}')
    user=$(echo "$node" | awk '{print $3}')
    # sshpass supplies the password and the host key is accepted automatically
    sshpass -p "$HOST_PASS" ssh-copy-id -o StrictHostKeyChecking=no -i /root/.ssh/id_rsa.pub "$user@$hostname"
done

# Time synchronization
apt install -y chrony
if [[ $TIME_SERVER_IP == *$(hostname -I)* ]]; then
    # Configure this node as the time-sync source
    sed -i '20,23s/^/#/g' /etc/chrony/chrony.conf
    echo "server $TIME_SERVER iburst maxsources 2" >> /etc/chrony/chrony.conf
    echo "allow $TIME_SERVER_IP" >> /etc/chrony/chrony.conf
    echo "local stratum 10" >> /etc/chrony/chrony.conf
else
    # Configure this node to sync from the time-sync source
    sed -i '20,23s/^/#/g' /etc/chrony/chrony.conf
    echo "pool $TIME_SERVER iburst maxsources 2" >> /etc/chrony/chrony.conf
fi

# Restart and enable the chrony service
systemctl daemon-reload
systemctl restart chrony
systemctl enable chrony

# Disable swap
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

# Kernel modules and sysctl settings required for Kubernetes and bridged container networking
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

sudo sysctl --system

echo "###############################################################"
echo "#################  k8s cluster init finished  #################"
echo "###############################################################"

Run the script on both nodes:
bash init.sh
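After init.sh has finished on both nodes, a quick sanity check can confirm the pieces are in place (a minimal verification sketch using standard tools; not part of the original script):

hostnamectl --static                    # should print master / worker01
grep -E 'master|worker01' /etc/hosts    # both host entries present
ssh worker01 hostname                   # passwordless SSH from the master works
chronyc sources                         # time-sync peers are reachable
swapon --show                           # no output means swap is off
lsmod | grep -E 'overlay|br_netfilter'  # kernel modules loaded
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables   # both should be 1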
Install containerd

containerd is an industry-standard container runtime with an emphasis on simplicity, robustness and portability. It runs as a daemon on Linux and Windows and can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage, network attachments and more. Installing the latest release is fine.
Download the release tarball from GitHub, upload it to the nodes, then configure:
vim ctr.sh

#!/bin/bash
tar -zxf containerd-2.0.2-linux-amd64.tar.gz -C /usr/local/

# Write the systemd unit file
cat > /etc/systemd/system/containerd.service <<eof
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
eof

# Reload systemd and start containerd
systemctl daemon-reload
systemctl enable --now containerd

# Check the version and generate the default config file
ctr version
mkdir /etc/containerd
containerd config default /etc/containerd/config.toml
systemctl restart containerd

Run on both nodes:
bash ctr.sh

Verify:
root@master:~# ctr version
Client:
  Version:  v2.0.2
  Revision: c507a0257ea6462fbd6f5ba4f5c74facb04021f4
  Go version: go1.23.4

Server:
  Version:  v2.0.2
  Revision: c507a0257ea6462fbd6f5ba4f5c74facb04021f4
  UUID: cbb1f812-d4da-4db7-982e-6d14100b1872
root@master:~# systemctl status containerd
● containerd.service - containerd container runtime
     Loaded: loaded (/etc/systemd/system/containerd.service; enabled; preset: enabled)
     Active: active (running) since Sat 2025-02-08 06:13:07 UTC; 56s ago
       Docs: https://containerd.io
   Main PID: 12066 (containerd)
      Tasks: 8
     Memory: 14.5M (peak: 15.0M)
        CPU: 121ms
     CGroup: /system.slice/containerd.service
             └─12066 /usr/local/bin/containerd

Feb 08 06:13:07 master containerd[12066]: time="2025-02-08T06:13:07.665875531Z" level=info msg="Start recovering state"
Feb 08 06:13:07 master containerd[12066]: time="2025-02-08T06:13:07.665955208Z" level=info msg="Start event monitor"
Feb 08 06:13:07 master containerd[12066]: time="2025-02-08T06:13:07.665966555Z" level=info msg="Start cni network conf…
Feb 08 06:13:07 master containerd[12066]: time="2025-02-08T06:13:07.665972388Z" level=info msg="Start streaming server"
Feb 08 06:13:07 master containerd[12066]: time="2025-02-08T06:13:07.665985026Z" level=info msg="Registered namespace …
Feb 08 06:13:07 master containerd[12066]: time="2025-02-08T06:13:07.665992443Z" level=info msg="runtime interface start…
Feb 08 06:13:07 master containerd[12066]: time="2025-02-08T06:13:07.665997209Z" level=info msg="starting plugins..."
Feb 08 06:13:07 master containerd[12066]: time="2025-02-08T06:13:07.666004445Z" level=info msg="Synchronizing NRI (plug…
Feb 08 06:13:07 master containerd[12066]: time="2025-02-08T06:13:07.666080260Z" level=info msg="containerd successfully…
Feb 08 06:13:07 master systemd[1]: Started containerd.service - containerd container runtime.

Install runc
runc is a CLI tool for spawning and running containers on Linux according to the OCI specification. Installing the latest release is fine.
Download the package from GitHub, upload it to both nodes, and install:
install -m 755 runc.amd64 /usr/local/sbin/runc

Verify:
root@master:~# runc -v
runc version 1.2.4
commit: v1.2.4-0-g6c52b3fc
spec: 1.2.0
go: go1.22.10
libseccomp: 2.5.5

Install the CNI plugins
The Container Network Interface (CNI) provides network resources for containers; through the CNI interface Kubernetes can support different network environments. Installing the latest release is fine.
Download the package from GitHub, upload it to the nodes, then configure:
vim cni.sh

#!/bin/bash
mkdir -p /opt/cni/bin
tar -zxf cni-plugins-linux-amd64-v1.6.2.tgz -C /opt/cni/bin

# Point containerd at the registry certificate/host configuration directory
sed -i 's|config_path = .*|config_path = "/etc/containerd/certs.d"|g' /etc/containerd/config.toml
mkdir -p /etc/containerd/certs.d/docker.io

# Configure the registry mirror and image sources
cat > /etc/containerd/certs.d/docker.io/hosts.toml <<EOF
server = "https://docker.io"

# Aliyun mirror accelerator (pull only)
[host."https://o90diikg.mirror.aliyuncs.com"]
  capabilities = ["pull", "resolve"]

# Official Docker Hub (pull + push)
[host."https://registry-1.docker.io"]
  capabilities = ["pull", "push", "resolve"]

# Tencent Cloud registry mirror (pull + push)
[host."https://registry-mirrors.yunyuan.co"]
  capabilities = ["pull", "push", "resolve"]
  skip_verify = true   # self-signed certificate, skip certificate verification
EOF

Set the cgroup driver to SystemdCgroup = true so that containerd manages cgroups through systemd, matching the Kubernetes default.
See the official configuration documentation for details.
vim /etc/containerd/config.toml

    [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes]
      [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc]
        runtime_type = 'io.containerd.runc.v2'
        runtime_path = ''
        pod_annotations = []
        container_annotations = []
        privileged_without_host_devices = false
        privileged_without_host_devices_all_devices_allowed = false
        base_runtime_spec = ''
        cni_conf_dir = ''
        cni_max_conf_num = 0
        snapshotter = ''
        sandboxer = 'podsandbox'
        io_type = ''
        [plugins.'io.containerd.cri.v1.runtime'.containerd.runtimes.runc.options]
          SystemdCgroup = true    # add (under the runc options table)

# Note: the sandbox image configured here must match the name/version of the image pulled from
# Aliyun below, otherwise cluster initialization fails with an image-pull error.
  [plugins.'io.containerd.grpc.v1.cri']
    disable_tcp_service = true
    stream_server_address = '127.0.0.1'
    stream_server_port = '0'
    stream_idle_timeout = '4h0m0s'
    enable_tls_streaming = false
    sandbox_image = 'registry.aliyuncs.com/google_containers/pause:3.10'    # add

  [plugins.'io.containerd.cri.v1.images'.pinned_images]
    sandbox = 'registry.aliyuncs.com/google_containers/pause:3.10'    # change

systemctl daemon-reload ; systemctl restart containerd
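To confirm the edits took effect after the restart, containerd's merged configuration can be dumped and grepped (a quick optional check, not part of the original steps):

containerd config dump | grep -E 'SystemdCgroup|sandbox'   # should show true and the Aliyun pause image
containerd config dump | grep -A 2 'certs.d'               # registry config_path points at /etc/containerd/certs.d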
Run on both nodes:

bash cni.sh

Deploy the k8s cluster
Install crictl

crictl is a CLI and validation tool for the kubelet Container Runtime Interface (CRI). Installing the latest release is fine.
Download the package from GitHub, upload it to the nodes, then configure:
vim cri.sh

#!/bin/bash
tar -zxf crictl-v1.32.0-linux-amd64.tar.gz -C /usr/local/bin/

cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: true
EOF

systemctl daemon-reload; systemctl restart containerd

Run on both nodes:
bash cri.sh

Verify:
root@master:~# crictl -v
crictl version v1.32.0
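With crictl configured it is worth confirming it can talk to containerd before moving on (a small optional check; the image list stays empty until the kubeadm pull step below):

crictl info | head -n 20   # runtime status as seen through the endpoint configured in /etc/crictl.yaml
crictl images              # will list the control-plane images once they have been pulled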
Deploy the cluster with kubeadm

The official k8s package repository is blocked and cannot be downloaded from directly, so the required packages are pulled from the Aliyun mirror instead (see the official mirror guide).
Run on both nodes:
apt update && apt-get install -y apt-transport-https
curl -fsSL https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.32/deb/ /" | tee /etc/apt/sources.list.d/kubernetes.list
apt update

Install kubeadm, kubelet and kubectl at version 1.32.1:

apt install -y kubelet=1.32.1-1.1 kubeadm=1.32.1-1.1 kubectl=1.32.1-1.1
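Optionally, the three packages can be pinned so a routine apt upgrade does not move them to a newer version behind your back; this mirrors the usual kubeadm install guidance and is not part of the original text:

apt-mark hold kubelet kubeadm kubectl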
Verify:

root@master:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"32", GitVersion:"v1.32.1", GitCommit:"e9c9be4007d1664e68796af02b8978640d2c1b26", GitTreeState:"clean", BuildDate:"2025-01-15T14:39:14Z", GoVersion:"go1.23.4", Compiler:"gc", Platform:"linux/amd64"}
root@master:~# kubectl version --client
Client Version: v1.32.1
Kustomize Version: v5.5.0
root@master:~#

On the master node, generate the default configuration file and edit it:
root@master:~# kubeadm config print init-defaults > kubeadm.yaml
root@master:~# vim kubeadm.yaml

apiVersion: kubeadm.k8s.io/v1beta4
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.160   # change to the master node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock   # UNIX domain socket; differs per container runtime
  imagePullPolicy: IfNotPresent
  imagePullSerial: true
  name: master   # change to the control-plane node's hostname
  taints: null
timeouts:
  controlPlaneComponentHealthCheck: 4m0s
  discovery: 5m0s
  etcdAPICall: 2m0s
  kubeletHealthCheck: 4m0s
  kubernetesAPICall: 1m0s
  tlsBootstrap: 5m0s
  upgradeManifests: 5m0s
---
apiServer: {}
apiVersion: kubeadm.k8s.io/v1beta4
caCertificateValidityPeriod: 87600h0m0s
certificateValidityPeriod: 8760h0m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
encryptionAlgorithm: RSA-2048
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # change to the Aliyun image repository
kind: ClusterConfiguration
kubernetesVersion: 1.32.1   # change to the version actually installed
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16   # Pod CIDR; add this line — if omitted, the network plugin's default CIDR is used
proxy: {}
scheduler: {}
# Appended: configure the kubelet cgroup driver as systemd
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
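Before pulling images and initializing, the edited file can optionally be sanity-checked; this is an extra step not in the original walkthrough, and assumes a kubeadm release recent enough to have the validate subcommand:

kubeadm config validate --config kubeadm.yaml   # static validation of the config file
kubeadm init --config kubeadm.yaml --dry-run    # full dry run without changing the host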
Pull the images:

kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.32.1

root@master:~# kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --kubernetes-version=v1.32.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.32.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.32.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.32.1
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.32.1
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.11.3
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.10
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.16-0

Initialize the cluster with kubeadm:

kubeadm init --config kubeadm.yaml

root@master:~# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.32.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action beforehand using kubeadm config images pull
W0209 04:05:45.955712 49522 checks.go:846] detected that the sandbox image of the container runtime is inconsistent with that used by kubeadm.It is recommended to use registry.aliyuncs.com/google_containers/pause:3.10 as the CRI sandbox image.
[certs] Using certificateDir folder /etc/kubernetes/pki
[certs] Generating ca certificate and key
[certs] Generating apiserver certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.96.0.1 192.168.200.160]
[certs] Generating apiserver-kubelet-client certificate and key
[certs] Generating front-proxy-ca certificate and key
[certs] Generating front-proxy-client certificate and key
[certs] Generating etcd/ca certificate and key
[certs] Generating etcd/server certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.200.160 127.0.0.1 ::1]
[certs] Generating etcd/peer certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.200.160 127.0.0.1 ::1]
[certs] Generating etcd/healthcheck-client certificate and key
[certs] Generating apiserver-etcd-client certificate and key
[certs] Generating sa key and public key
[kubeconfig] Using kubeconfig folder /etc/kubernetes
[kubeconfig] Writing admin.conf kubeconfig file
[kubeconfig] Writing super-admin.conf kubeconfig file
[kubeconfig] Writing kubelet.conf kubeconfig file
[kubeconfig] Writing controller-manager.conf kubeconfig file
[kubeconfig] Writing scheduler.conf kubeconfig file
[etcd] Creating static Pod manifest for local etcd in /etc/kubernetes/manifests
[control-plane] Using manifest folder /etc/kubernetes/manifests
[control-plane] Creating static Pod manifest for kube-apiserver
[control-plane] Creating static Pod manifest for kube-controller-manager
[control-plane] Creating static Pod manifest for kube-scheduler
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory /etc/kubernetes/manifests
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 503.750966ms
[api-check] Waiting for a healthy API server. This can take up to 4m0s
[api-check] The API server is healthy after 6.002633068s
[upload-config] Storing the configuration used in ConfigMap kubeadm-config in the kube-system Namespace
[kubelet] Creating a ConfigMap kubelet-config in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the cluster-info ConfigMap in the kube-public namespace
[kubelet-finalize] Updating /etc/kubernetes/kubelet.conf to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.200.160:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fdd4e72c72ad3cb70675d5004cbc193650cfd919621758b74430261d694c628d
root@master:~#

Note: if the initialization fails, check the containerd and kubelet logs, then reset the cluster and initialize again. If the version you deploy differs from this post, make sure the registry.aliyuncs.com/google_containers/pause:3.10 sandbox image address in the containerd config file matches. See the official troubleshooting guide.

rm -rf /etc/kubernetes/*
kubeadm reset

Configure access to the k8s cluster (the exact commands are printed in the init output):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

root@master:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 11m v1.32.1

Join nodes to the cluster
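The bootstrap token printed by kubeadm init is only valid for 24 hours (ttl: 24h0m0s in kubeadm.yaml above). If it has expired by the time a node joins, a fresh join command can be generated on the master first — a standard kubeadm command, shown here as an optional extra:

kubeadm token create --print-join-command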
Use the join command from the initialization output:

kubeadm join 192.168.200.160:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fdd4e72c72ad3cb70675d5004cbc193650cfd919621758b74430261d694c628d

root@worker01:~# kubeadm join 192.168.200.160:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:fdd4e72c72ad3cb70675d5004cbc193650cfd919621758b74430261d694c628d
[preflight] Running pre-flight checks
[preflight] Reading configuration from the kubeadm-config ConfigMap in namespace kube-system...
[preflight] Use kubeadm init phase upload-config --config your-config.yaml to re-upload it.
[kubelet-start] Writing kubelet configuration to file /var/lib/kubelet/config.yaml
[kubelet-start] Writing kubelet environment file with flags to file /var/lib/kubelet/kubeadm-flags.env
[kubelet-start] Starting the kubelet
[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
[kubelet-check] The kubelet is healthy after 1.012874735s
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Because no network plugin has been deployed yet, CoreDNS cannot start and both nodes are still NotReady:
root@master:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 20h v1.32.1
worker01 NotReady <none> 20s v1.32.1
root@master:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6766b7b6bb-d6x7n 0/1 Pending 0 20h
kube-system coredns-6766b7b6bb-wpsq5 0/1 Pending 0 20h
kube-system etcd-master 1/1 Running 0 20h
kube-system kube-apiserver-master 1/1 Running 0 20h
kube-system kube-controller-manager-master 1/1 Running 1 20h
kube-system kube-proxy-bmfm5 1/1 Running 0 20h
kube-system kube-proxy-wctln 1/1 Running 0 4m41s
kube-system kube-scheduler-master 1/1 Running 0 20h

Configure kubectl access on the worker node:
scp -r root@master:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
source /etc/profile

root@worker01:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane 20h v1.32.1
worker01 NotReady <none> 6m56s v1.32.1

If you need to remove a node:
First stop the node from accepting new Pods and evict the Pods running on it with kubectl drain:

kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

Options:
  --ignore-daemonsets      ignore Pods managed by a DaemonSet
  --delete-emptydir-data   force deletion of emptyDir volume data (if any)
  --force                  can be added to force the eviction
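Before deleting the node it can be worth confirming the drain actually emptied it (an optional check; <node-name> is a placeholder):

# Anything still listed here should only be DaemonSet-managed Pods (calico-node, kube-proxy)
kubectl get pods -A -o wide --field-selector spec.nodeName=<node-name>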
Then delete the node:

kubectl delete node <node-name>

Deploy the Calico network
GitHub / official introduction:
The Calico project was created and is maintained by Tigera and is an open-source project with an active developer and user community. Calico Open Source has grown into the most widely adopted container networking and security solution, powering more than 8 million nodes every day across 166 countries. Use the latest version. There are two ways to install it; pick either one.

Option 1: use the official Calico manifest. Suitable for standard environments; Calico runs as a DaemonSet and provides the Pod network, network policy, and so on.

curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/calico.yaml

vi calico.yaml

If you need a custom Pod CIDR, uncomment and edit the following lines (if no podSubnet was set at cluster init, Calico's default is used; if one was set, the two must match — mind the YAML indentation):

6291             - name: CALICO_IPV4POOL_CIDR
6292               value: "192.168.0.0/16"

Import the images Calico needs: calico/node:v3.29.2, calico/kube-controllers:v3.29.2, calico/cni:v3.29.2
root@master:~# ctr -n k8s.io images import all_docker_images.tar
docker.io/calico/cni:v3.29.2 saved
docker.io/calico/node:v3.29.2 saved
docker.io/calico/kube-controllers:v3.29.2 saved
application/vnd.oci.image.manifest.v1+json sha256:38f50349e2d40f94db2c7d7e3edf777a40b884a3903932c07872ead2a4898b85
application/vnd.oci.image.manifest.v1+json sha256:63bf93f2b3698ff702fa325d215f20a22d50a70d1da2a425b8ce1f9767a5e393
application/vnd.oci.image.manifest.v1+json sha256:27942e6de00c7642be5522f45ab3efe3436de8223fc72c4be1df5da7d88a3b22
Importing  elapsed: 25.0s  total: 0.0 B  (0.0 B/s)

Apply the Calico manifest:
root@master:~# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

Check the nodes: they are now Ready, and the default Pod CIDR is 172.17.0.0/16:
root@master:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-77969b7d87-vbgzf 1/1 Running 0 46s
kube-system calico-node-52dd8 0/1 Running 0 46s
kube-system calico-node-d5zc2 1/1 Running 0 46s
kube-system coredns-6766b7b6bb-d6x7n 1/1 Running 0 26h
kube-system coredns-6766b7b6bb-wpsq5 1/1 Running 0 26h
kube-system etcd-master 1/1 Running 0 26h
kube-system kube-apiserver-master 1/1 Running 0 26h
kube-system kube-controller-manager-master 1/1 Running 1 26h
kube-system kube-proxy-bmfm5 1/1 Running 0 26h
kube-system kube-proxy-wctln 1/1 Running 0 5h37m
kube-system kube-scheduler-master 1/1 Running 0 26h
root@master:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 26h v1.32.1
worker01 Ready <none> 5h37m v1.32.1
root@master:~# kubectl get ippool -o yaml | grep cidr
    cidr: 172.17.0.0/16

Option 2: install Calico the way the Calico documentation recommends. Suitable for production; the Tigera Operator manages the Calico deployment centrally and provides automatic upgrades, configuration management, and so on.
Install the Tigera Calico operator and custom resource definitions:

curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/tigera-operator.yaml

kubectl create -f tigera-operator.yaml

root@master:~# kubectl create -f tigera-operator.yaml
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/tiers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/adminnetworkpolicies.policy.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

Install Calico by creating the necessary custom resources:

curl -O https://raw.githubusercontent.com/projectcalico/calico/v3.29.2/manifests/custom-resources.yaml

The Pod CIDR can be set in this file; if none was specified at cluster init, Calico's default is used, but if one was specified the two must match:

cidr: 192.168.0.0/16

kubectl create -f custom-resources.yaml

root@master:~# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created

root@master:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-apiserver calico-apiserver-6966f49c46-5jwmd 1/1 Running 0 31m
calico-apiserver calico-apiserver-6966f49c46-6rlgk 1/1 Running 0 31m
kube-system coredns-69767bd799-lhjr5 1/1 Running 0 5s
kube-system coredns-7c6d7f5fd8-ljgt4 1/1 Running 0 3m40s
kube-system etcd-master 1/1 Running 0 86m
kube-system kube-apiserver-master 1/1 Running 0 86m
kube-system kube-controller-manager-master 1/1 Running 0 86m
kube-system kube-proxy-c8frl 1/1 Running 0 86m
kube-system kube-proxy-rtwtr 1/1 Running 0 86m
kube-system kube-scheduler-master 1/1 Running 0 86m
tigera-operator tigera-operator-ccfc44587-7kgzp 1/1 Running 0 81m

Configure kubectl command completion so that Tab completes commands:
apt install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >> ~/.bashrc
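Optionally, a short alias can reuse the same completion (the standard snippet from the kubectl completion docs; not part of the original text):

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc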
Configure the Dashboard

Kubernetes Dashboard is a general-purpose web UI for Kubernetes clusters. It lets users manage and troubleshoot the applications running in the cluster, as well as the cluster itself.
Since version 7.0.0, Kubernetes Dashboard no longer supports manifest-based installation; only Helm-based installation is supported, because it is faster and gives better control over all the dependencies the Dashboard needs.
GitHub. Install Helm: Helm is a tool that simplifies installing and managing Kubernetes applications — think of it as apt/yum/homebrew for Kubernetes. Download the binary release from GitHub (for other installation methods such as the install script or OS packages, see the Helm website), unpack it and put it on the PATH:
tar -xf helm-v3.17.0-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm

root@master:~# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION

Two installation methods are shown below; pick either one. First import the required images manually:
root@master:~# ctr -n k8s.io images import k8s_dashboard_7.10.4.tar
docker.io/kubernetesui/dashboard-metrics-scraper saved
docker.io/kubernetesui/dashboard-api:1.1 saved
docker.io/kubernetesui/dashboard-web:1.6 saved
docker.io/kubernetesui/dashboard-auth:1. saved
application/vnd.oci.image.manifest.v1+json sha256:0295030c88f838b4119dc12513131e81bfbdd1d04550049649750aec09bf45b4
application/vnd.oci.image.manifest.v1+json sha256:fde4440560bbdd2a5330e225e7adda111097b8a08e351da6e9b1c5d1bd67e0e4
application/vnd.oci.image.manifest.v1+json sha256:17979eb1f79ee098f0b2f663d2fb0bfc55002a2bcf07df87cd28d88a43e2fc3a
application/vnd.oci.image.manifest.v1+json sha256:22ab8b612555e4f76c8757a262b8e1398abc81522af7b0f6d9159e5aa68625c1
Importing  elapsed: 10.9s  total: 0.0 B  (0.0 B/s)
root@master:~# ctr -n k8s.io images import kong-3.8.tar
docker.io/library/kong:3.8 saved
application/vnd.oci.image.manifest.v1+json sha256:d6eea46f36d363d1c27bb4b598a1cd3bbee30e79044c5e3717137bec84616d50
Importing  elapsed: 7.4 s  total: 0.0 B  (0.0 B/s)

Method 1: install from the Helm repository (see the Kubernetes Dashboard docs):
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

root@master:~# helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
Release kubernetes-dashboard does not exist. Installing it now.
NAME: kubernetes-dashboard
LAST DEPLOYED: Tue Feb 11 02:28:55 2025
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************

Congratulations! You have just installed Kubernetes Dashboard in your cluster.

To access Dashboard run:
  kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443

NOTE: In case port-forward command does not work, make sure that kong service name is correct.
Check the services in Kubernetes Dashboard namespace using:
  kubectl -n kubernetes-dashboard get svc

Dashboard will be available at:
  https://localhost:8443

After this installation the Dashboard is only reachable from inside the cluster (or through the local port-forward).
root@master:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-77969b7d87-jsndm 1/1 Running 0 112m
kube-system calico-node-smkh7 1/1 Running 0 112m
kube-system calico-node-tjpg8 1/1 Running 0 112m
kube-system coredns-69767bd799-lhjr5 1/1 Running 0 17h
kube-system coredns-69767bd799-tzd2g 1/1 Running 0 17h
kube-system etcd-master 1/1 Running 0 19h
kube-system kube-apiserver-master 1/1 Running 0 19h
kube-system kube-controller-manager-master 1/1 Running 2 (47m ago) 19h
kube-system kube-proxy-c8frl 1/1 Running 0 19h
kube-system kube-proxy-rtwtr 1/1 Running 0 19h
kube-system kube-scheduler-master 1/1 Running 2 (47m ago) 19h
kubernetes-dashboard kubernetes-dashboard-api-6d65fb674d-2gzcc 1/1 Running 0 2m7s
kubernetes-dashboard kubernetes-dashboard-auth-65569d7fdd-vbr54 1/1 Running 0 2m7s
kubernetes-dashboard kubernetes-dashboard-kong-79867c9c48-6fvhh 1/1 Running 0 2m7s
kubernetes-dashboard kubernetes-dashboard-metrics-scraper-55cc88cbcb-gpdkz 1/1 Running 0 2m7s
kubernetes-dashboard kubernetes-dashboard-web-8f95766b5-qhq4p 1/1 Running 0 2m7s
root@master:~# kubectl -n kubernetes-dashboard get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard-api ClusterIP 10.105.81.232 <none> 8000/TCP 14s
kubernetes-dashboard-auth ClusterIP 10.108.168.165 <none> 8000/TCP 14s
kubernetes-dashboard-kong-proxy ClusterIP 10.109.133.99 <none> 443/TCP 14s
kubernetes-dashboard-metrics-scraper ClusterIP 10.104.172.95 <none> 8000/TCP 14s
kubernetes-dashboard-web ClusterIP 10.106.165.230 <none> 8000/TCP 14s

Expose it outside the cluster:
root@master:~# kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard-kong-proxy -p '{"spec": {"type": "NodePort"}}'
service/kubernetes-dashboard-kong-proxy patched
root@master:~# kubectl -n kubernetes-dashboard get svc kubernetes-dashboard-kong-proxy
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard-kong-proxy NodePort 10.109.133.99 <none> 443:30741/TCP 7m20s
root@master:~#

If you want to customize the port, edit the service (optional):

kubectl edit svc kubernetes-dashboard-kong-proxy -n kubernetes-dashboard

Restart CoreDNS:
root@master:~# kubectl -n kube-system delete pod -l k8s-app=kube-dns
pod "coredns-69767bd799-lhjr5" deleted
pod "coredns-69767bd799-tzd2g" deleted

Open https://<node-IP>:30741 in a browser. Create a service account and generate a token to log in:
root@master:~# kubectl create serviceaccount admin-user -n kubernetes-dashboard
serviceaccount/admin-user created
root@master:~# kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
root@master:~# kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikd4Z2syd3psak1WTGhNYlFKVW5CY09DdFJPTnVPMHdFV1RUTzlOWFNkVDAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzM5MjQ1NjI3LCJpYXQiOjE3MzkyNDIwMjcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiM2M2N2VkNmYtNTJiNS00NGIxLTkwYzMtODAwYzU2Njk4MGIyIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiM2RmZGM2NGEtYWFmYy00ODIxLWE3M2EtMDUwZTA5NGZkYmU2In19LCJuYmYiOjE3MzkyNDIwMjcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.eSt6MaWzSIYjl8AdPq4ndP5kkG-W7IE1fIV1m_zUBmpWMXkyyqOiiF6qzqrNP-6AoHq0mlLgeDdL_bzhTTAr0OWakWswLNXxd_IfUjcbEYzWJen7wvIsMmHMYy7j3WU4yaLefX_TxWuhzCegVvnEPjn3prEl0qVGyqqr5cnRylkcZigucV3dp3-3tXetTm5TAsUAASb8ZVKD3Mdp5TJmebNrvJ0VqnyKENyZvp-vW6zpM4NTpKprQwRWnchGThLMJM0W_QDV4wp_oSKkNsZ9N6XLQHhfCPhTxGuoSKWBBfbJzLv6VWHMRlM1NnXcNhjepinE7weQLUZ6IS7vGBwDUg
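The token above is short-lived (one hour by default). If it keeps expiring while you work in the UI, a longer-lived one can be requested — an optional convenience, not part of the original steps:

kubectl -n kubernetes-dashboard create token admin-user --duration=24h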
Note: if you need to remove the Helm release and redeploy, use the following:

root@master:~# helm list -n kubernetes-dashboard
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
kubernetes-dashboard kubernetes-dashboard 1 2025-02-11 02:28:55.751187857 +0000 UTC deployed kubernetes-dashboard-7.10.4
root@master:~# helm delete kubernetes-dashboard -n kubernetes-dashboard
release "kubernetes-dashboard" uninstalled

Method 2: customized install. Download the chart and edit the values file (see the chart's values documentation for all parameters):

tar -xf kubernetes-dashboard-7.10.4.tgz
vi kubernetes-dashboard/values.yaml

Enable HTTP access and external access:
kong:
  ...
    type: NodePort        # change
    http:
      enabled: true       # enable HTTP access
      nodePort: 30080     # add
    https:                # add
      enabled: true       # add
      nodePort: 30443     # HTTPS access port

Create the namespace:
kubectl create ns kube-dashboard

Install the chart with Helm:

root@master:~# helm install kubernetes-dashboard ./kubernetes-dashboard --namespace kube-dashboard
NAME: kubernetes-dashboard
LAST DEPLOYED: Tue Feb 11 03:16:02 2025
NAMESPACE: kube-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************

Congratulations! You have just installed Kubernetes Dashboard in your cluster.

To access Dashboard run:
  kubectl -n kube-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443

NOTE: In case port-forward command does not work, make sure that kong service name is correct.
Check the services in Kubernetes Dashboard namespace using:
  kubectl -n kube-dashboard get svc

Dashboard will be available at:
  https://localhost:8443

root@master:~# kubectl get svc -n kube-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard-api ClusterIP 10.104.102.123 <none> 8000/TCP 3m29s
kubernetes-dashboard-auth ClusterIP 10.108.76.123 <none> 8000/TCP 3m29s
kubernetes-dashboard-kong-proxy NodePort 10.111.226.77 <none> 80:30080/TCP,443:30443/TCP 3m29s
kubernetes-dashboard-metrics-scraper ClusterIP 10.111.158.35 <none> 8000/TCP 3m29s
kubernetes-dashboard-web ClusterIP 10.110.180.200 <none> 8000/TCP 3m29s

Access the UI over HTTP at http://<node-IP>:30080, or over HTTPS at https://<node-IP>:30443.