A Hands-On Guide to a Complete and Efficient Offline Kubernetes (k8s) Deployment

Author: 郝建偉

Background

More and more projects are delivered on customer premises, and occasionally the customer environment has no public internet access at all, so everything has to be deployed on an internal network. This calls for a complete and efficient offline deployment solution.

System Resources

No.  Hostname            IP              OS / Type     CPU  Memory  Disk
01   k8s-master1         10.132.10.91    CentOS-7      4c   8g      40g
02   k8s-master2         10.132.10.92    CentOS-7      4c   8g      40g
03   k8s-master3         10.132.10.93    CentOS-7      4c   8g      40g
04   k8s-worker1         10.132.10.94    CentOS-7      8c   16g     200g
05   k8s-worker2         10.132.10.95    CentOS-7      8c   16g     200g
06   k8s-worker3         10.132.10.96    CentOS-7      8c   16g     200g
07   k8s-worker4         10.132.10.97    CentOS-7      8c   16g     200g
08   k8s-worker5         10.132.10.98    CentOS-7      8c   16g     200g
09   k8s-worker6         10.132.10.99    CentOS-7      8c   16g     200g
10   k8s-harbor&deploy   10.132.10.100   CentOS-7      4c   8g      500g
11   k8s-nfs             10.132.10.101   CentOS-7      2c   4g      2000g
12   k8s-lb              10.132.10.120   internal LB   2c   4g      40g

Parameter Configuration

Note: perform the following operations on all nodes.

Basic system settings

Create the working, log, and data storage directories

$ mkdir -p /export/servers
$ mkdir -p /export/logs
$ mkdir -p /export/data
$ mkdir -p /export/upload

Kernel and network parameter tuning

$ vim /etc/sysctl.conf
# Add the following settings
fs.file-max = 1048576
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 5
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
vm.max_map_count = 262144
# Apply immediately
sysctl -w vm.max_map_count=262144
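The sysctl -w above only applies a single key; the remaining keys written to /etc/sysctl.conf take effect after a reload. A minimal sketch for applying and spot-checking all of them at once:

$ sysctl -p                               # reload every setting from /etc/sysctl.conf
$ sysctl fs.file-max vm.max_map_count     # spot-check a couple of keys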

ulimit tuning

$ vim /etc/security/limits.conf
# Add the following settings
* soft memlock unlimited
* hard memlock unlimited
* soft nproc 102400
* hard nproc 102400
* soft nofile 1048576
* hard nofile 1048576

Basic Environment Preparation

Installing ansible

1. Environment

Name     Value
OS       CentOS Linux release 7.8.2003
ansible  2.9.27
Node     deploy

2. Deployment notes

The IoT management platform involves a large number of machines, so ansible is used to operate on them in batches and save time. Passwordless root SSH is required from the deploy node to every other node.

Note: if the root password is not known, the passwordless login can be set up manually with the following steps:
# Generate a key pair on the deploy machine
$ ssh-keygen -t rsa
# Copy the content of ~/.ssh/id_rsa.pub and append it to ~/.ssh/authorized_keys on every other node
# If authorized_keys does not exist, create it first and then paste the key in
$ touch ~/.ssh/authorized_keys
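If the root password is available, ssh-copy-id automates the same append step. A minimal sketch, assuming the node IPs from the resource table above:

$ for ip in 10.132.10.9{1..9} 10.132.10.100 10.132.10.101; do
      ssh-copy-id -i ~/.ssh/id_rsa.pub root@${ip}
  done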

3. Deployment steps

1) Online installation

$ yum -y install https://releases.ansible.com/ansible/rpm/release/epel-7-x86_64/ansible-2.9.27-1.el7.ans.noarch.rpm

2) Offline installation

# Upload ansible and all of its dependency rpm packages in advance, then change into the rpm directory
$ yum -y install ./*.rpm
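The offline rpm bundle itself has to be produced on a host that does have internet access. A minimal sketch of one way to collect ansible 2.9.27 and its dependencies for CentOS 7, assuming the EPEL repository that carries ansible is reachable there (directory and archive names are only examples):

# On an internet-connected CentOS 7 host
$ mkdir -p /export/download/ansible-rpm
$ yum install -y ansible-2.9.27 --downloadonly --downloaddir=/export/download/ansible-rpm
$ tar czvf ansible-rpm.tgz -C /export/download ansible-rpm
# Copy ansible-rpm.tgz to the offline deploy node and install as shown above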

3) Check the version

$ ansible --version
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]

4) Configure the managed host inventory

$ vim /etc/ansible/hosts
[master]
10.132.10.91 node_name=k8s-master1
10.132.10.92 node_name=k8s-master2
10.132.10.93 node_name=k8s-master3
[worker]
10.132.10.94 node_name=k8s-worker1
10.132.10.95 node_name=k8s-worker2
10.132.10.96 node_name=k8s-worker3
10.132.10.97 node_name=k8s-worker4
10.132.10.98 node_name=k8s-worker5
10.132.10.99 node_name=k8s-worker6
[etcd]
10.132.10.91 etcd_name=etcd1
10.132.10.92 etcd_name=etcd2
10.132.10.93 etcd_name=etcd3
[k8s:children]
master
worker
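Before going further it is worth checking that the inventory works. A minimal sketch:

$ ansible k8s -m ping          # every host should answer with "pong"
$ ansible k8s --list-hosts     # confirm the group expands to the expected nine nodes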

5) Disable SSH host key checking

$ vi /etc/ansible/ansible.cfg
# Change the following setting
# uncomment this to disable SSH key host checking
host_key_checking = False

6) Disable SELinux, relax the firewall and turn off swap

$ ansible k8s -m command -a "setenforce 0"
$ ansible k8s -m command -a "sed --follow-symlinks -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"
$ ansible k8s -m command -a "firewall-cmd --set-default-zone=trusted"
$ ansible k8s -m command -a "firewall-cmd --complete-reload"
$ ansible k8s -m command -a "swapoff -a"
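swapoff -a only disables swap until the next reboot. To keep kubelet happy across reboots, the swap entries in /etc/fstab also need to be commented out; a minimal sketch:

$ ansible k8s -m shell -a "sed -ri 's/.*swap.*/#&/' /etc/fstab"
$ ansible k8s -m shell -a "grep swap /etc/fstab"   # verify the swap lines are commented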

7) Configure /etc/hosts

$ cd /export/upload && vim hosts_set.sh
# Script content:
#!/bin/bash
cat > /etc/hosts << EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.132.10.100 deploy harbor
10.132.10.91 master01
10.132.10.92 master02
10.132.10.93 master03
10.132.10.94 worker01
10.132.10.95 worker02
10.132.10.96 worker03
10.132.10.97 worker04
10.132.10.98 worker05
10.132.10.99 worker06
EOF
# Distribute and run the script on all k8s nodes
$ ansible k8s -m copy -a 'src=/export/upload/hosts_set.sh dest=/export/upload'
$ ansible k8s -m command -a 'sh /export/upload/hosts_set.sh'

Installing docker

1. Environment

Name    Value
OS      CentOS Linux release 7.8.2003
docker  docker-ce-20.10.17
Node    deploy

2. Deployment notes

Deploy docker as the container runtime.

3. Deployment method

1) Online installation

$ yum -y install docker-ce-20.10.17

2) Offline installation

# Upload docker and all of its dependency rpm packages in advance, then change into the rpm directory
$ yum -y install ./*.rpm
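The docker-rpm.tgz bundle used later for the cluster nodes can be produced the same way on an internet-connected CentOS 7 host that has the docker-ce repository configured; a minimal sketch (paths are only examples):

$ mkdir -p /export/download/docker-rpm
$ yum install -y docker-ce-20.10.17 --downloadonly --downloaddir=/export/download/docker-rpm
$ tar czvf docker-rpm.tgz -C /export/download docker-rpm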

3) Start docker and check its status

$ systemctl start docker
$ systemctl status docker

4) Enable start on boot

$ systemctl enable docker

5) Check the version

$ docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun 6 23:05:12 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun 6 23:03:33 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

Installing docker-compose

1. Environment

Name            Value
OS              CentOS Linux release 7.8.2003
docker-compose  docker-compose-linux-x86_64
Node            deploy

2. Deployment notes

Required by the harbor private image registry.

3. Deployment method

1) Download docker-compose and upload it to the server

$ curl -L https://github.com/docker/compose/releases/download/v2.9.0/docker-compose-linux-x86_64 -o docker-compose

2) Install docker-compose and make it executable

$ mv docker-compose /usr/local/bin/
$ chmod +x /usr/local/bin/docker-compose

3) Check the version

$ docker-compose version
Docker Compose version v2.9.0

Installing harbor

1. Environment

Name    Value
OS      CentOS Linux release 7.8.2003
harbor  harbor-offline-installer-v2.4.3
Node    harbor

2. Deployment notes

Private image registry.

3. Download the harbor offline installer and upload it to the server

$ wget https://github.com/goharbor/harbor/releases/download/v2.4.3/harbor-offline-installer-v2.4.3.tgz

4. Extract the installer

$ tar -xzvf harbor-offline-installer-v2.4.3.tgz -C /export/servers/
$ cd /export/servers/harbor

5. Edit the configuration file

$ mv harbor.yml.tmpl harbor.yml
$ vim harbor.yml

6. Set the following values

hostname: 10.132.10.100
http:
  port: 8090
data_volume: /export/data/harbor
log:
  location: /export/logs/harbor

7. Load the harbor images

$ docker load -i harbor.v2.4.3.tar.gz
# Wait for the harbor dependency images to be imported
$ docker images
REPOSITORY                      TAG      IMAGE ID       CREATED       SIZE
goharbor/harbor-exporter        v2.4.3   776ac6ee91f4   4 weeks ago   81.5MB
goharbor/chartmuseum-photon     v2.4.3   f39a9694988d   4 weeks ago   172MB
goharbor/redis-photon           v2.4.3   b168e9750dc8   4 weeks ago   154MB
goharbor/trivy-adapter-photon   v2.4.3   a406a715461c   4 weeks ago   251MB
goharbor/notary-server-photon   v2.4.3   da89404c7cf9   4 weeks ago   109MB
goharbor/notary-signer-photon   v2.4.3   38468ac13836   4 weeks ago   107MB
goharbor/harbor-registryctl    v2.4.3   61243a84642b   4 weeks ago   135MB
goharbor/registry-photon        v2.4.3   9855479dd6fa   4 weeks ago   77.9MB
goharbor/nginx-photon           v2.4.3   0165c71ef734   4 weeks ago   44.4MB
goharbor/harbor-log             v2.4.3   57ceb170dac4   4 weeks ago   161MB
goharbor/harbor-jobservice      v2.4.3   7fea87c4b884   4 weeks ago   219MB
goharbor/harbor-core            v2.4.3   d864774a3b8f   4 weeks ago   197MB
goharbor/harbor-portal          v2.4.3   85f00db66862   4 weeks ago   53.4MB
goharbor/harbor-db              v2.4.3   7693d44a2ad6   4 weeks ago   225MB
goharbor/prepare                v2.4.3   c882d74725ee   4 weeks ago   268MB

8. Start harbor

$ ./prepare                        # run again whenever harbor.yml is modified so the changes take effect
$ ./install.sh --help              # list the startup options
$ ./install.sh --with-chartmuseum
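Because harbor is published over plain HTTP on 10.132.10.100:8090, the docker daemon on any host that pushes to or pulls from it (including this deploy node) typically has to list that address under insecure-registries before docker login will succeed. A minimal sketch, assuming the default /etc/docker/daemon.json location and no existing content in that file:

$ cat > /etc/docker/daemon.json << EOF
{
  "insecure-registries": ["10.132.10.100:8090"]
}
EOF
$ systemctl restart docker
# Verify the registry is reachable (credentials are whatever was configured in harbor.yml / the harbor UI)
$ docker login 10.132.10.100:8090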

Runtime Environment Setup

Installing docker on the cluster nodes

1. Environment

Name    Value
OS      CentOS Linux release 7.8.2003
docker  docker-ce-20.10.17
Node    all k8s cluster nodes

2. Deployment notes

Deploy docker as the container runtime for k8s.

3. Deployment method

1) Upload docker and its dependency rpm packages

$ ls /export/upload/docker-rpm.tgz

2) Distribute the package

$ ansible k8s -m copy -a "src=/export/upload/docker-rpm.tgz dest=/export/upload/"
# Every node returns output like the following
CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "acd3897edb624cd18a197bcd026e6769797f4f05",
    "dest": "/export/upload/docker-rpm.tgz",
    "gid": 0,
    "group": "root",
    "md5sum": "3ba6d9fe6b2ac70860b6638b88d3c89d",
    "mode": "0644",
    "owner": "root",
    "secontext": "system_u:object_r:usr_t:s0",
    "size": 103234394,
    "src": "/root/.ansible/tmp/ansible-tmp-1661836788.82-13591-17885284311930/source",
    "state": "file",
    "uid": 0
}

3) Extract and install

$ ansible k8s -m shell -a "tar xzvf /export/upload/docker-rpm.tgz -C /export/upload/ && yum -y install /export/upload/docker-rpm/*"

4) Enable on boot and start

$ ansible k8s -m shell -a "systemctl enable docker && systemctl start docker"

5) Check the version

$ ansible k8s -m shell -a "docker version"
# Every node returns output like the following
CHANGED | rc=0 >>
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun 6 23:05:12 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun 6 23:03:33 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
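As with the deploy node, every cluster node that will pull images from harbor over HTTP typically needs the registry in its insecure-registries list. A minimal sketch that distributes the setting with ansible (it assumes no existing /etc/docker/daemon.json that would be overwritten):

$ cat > /export/upload/daemon.json << EOF
{
  "insecure-registries": ["10.132.10.100:8090"]
}
EOF
$ ansible k8s -m copy -a "src=/export/upload/daemon.json dest=/etc/docker/daemon.json"
$ ansible k8s -m shell -a "systemctl restart docker"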

Installing kubernetes

With internet access

# Add the Alibaba Cloud YUM repository:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Download the offline packages

# Create a directory for the rpm packages:
mkdir -p /export/download/kubeadm-rpm
# Download the packages and their dependencies without installing them:
yum install -y kubelet-1.22.4 kubeadm-1.22.4 kubectl-1.22.4 --downloadonly --downloaddir /export/download/kubeadm-rpm
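To match the kubeadm-rpm.tgz file name used in the offline steps below, the downloaded rpms can simply be packed into a tarball; a minimal sketch:

# Still on the internet-connected host
$ tar czvf kubeadm-rpm.tgz -C /export/download kubeadm-rpm
# Copy kubeadm-rpm.tgz to /export/upload/ on the deploy node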

Without internet access (offline)

1) Upload kubeadm and its dependency rpm packages

$ ls /export/upload/kubeadm-rpm.tgz

2) Distribute the package

$ ansible k8s -m copy -a "src=/export/upload/kubeadm-rpm.tgz dest=/export/upload/"
# Every node returns output like the following
CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "checksum": "3fe96fe1aa7f4a09d86722f79f36fb8fde69facb",
    "dest": "/export/upload/kubeadm-rpm.tgz",
    "gid": 0,
    "group": "root",
    "md5sum": "80d5bda420db6ea23ad75dcf0f76e858",
    "mode": "0644",
    "owner": "root",
    "secontext": "system_u:object_r:usr_t:s0",
    "size": 67423355,
    "src": "/root/.ansible/tmp/ansible-tmp-1661840257.4-33361-139823848282879/source",
    "state": "file",
    "uid": 0
}

3) Extract and install

$ ansible k8s -m shell -a "tar xzvf /export/upload/kubeadm-rpm.tgz -C /export/upload/ && yum -y install /export/upload/kubeadm-rpm/*"

4) Enable kubelet on boot and start it

$ ansible k8s -m shell -a "systemctl enable kubelet && systemctl start kubelet"

Note: at this point kubelet fails to start and keeps restarting. This is expected; it resolves itself once kubeadm init or kubeadm join has been run, and the official documentation describes this behaviour, so kubelet.service can be ignored for now. The kubelet status can be inspected with the following command.

$ journalctl -xefu kubelet

5) Distribute the dependency images to the cluster nodes

# Pull the images in advance in an environment with internet access
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
$ docker pull rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker pull rancher/mirrored-flannelcni-flannel:v0.19.1

# Export the images, upload them to the deploy node and load them
$ ls /export/upload
$ docker load -i google_containers-coredns-v1.8.4.tar
$ docker load -i google_containers-etcd-3.5.0-0.tar
$ docker load -i google_containers-kube-apiserver-v1.22.4.tar
$ docker load -i google_containers-kube-controller-manager-v1.22.4.tar
$ docker load -i google_containers-kube-proxy-v1.22.4.tar
$ docker load -i google_containers-kube-scheduler-v1.22.4.tar
$ docker load -i google_containers-pause-3.5.tar
$ docker load -i rancher-mirrored-flannelcni-flannel-cni-plugin-v1.1.0.tar
$ docker load -i rancher-mirrored-flannelcni-flannel-v0.19.1.tar

# Tag the images for the harbor registry
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 10.132.10.100:8090/community/coredns:v1.8.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 10.132.10.100:8090/community/etcd:3.5.0-0
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4 10.132.10.100:8090/community/kube-apiserver:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4 10.132.10.100:8090/community/kube-controller-manager:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4 10.132.10.100:8090/community/kube-proxy:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4 10.132.10.100:8090/community/kube-scheduler:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 10.132.10.100:8090/community/pause:3.5
$ docker tag rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker tag rancher/mirrored-flannelcni-flannel:v0.19.1 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1

# Push to the harbor registry
$ docker push 10.132.10.100:8090/community/coredns:v1.8.4
$ docker push 10.132.10.100:8090/community/etcd:3.5.0-0
$ docker push 10.132.10.100:8090/community/kube-apiserver:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-controller-manager:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-proxy:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-scheduler:v1.22.4
$ docker push 10.132.10.100:8090/community/pause:3.5
$ docker push 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker push 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
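The repetitive tag/push commands above can also be scripted. A minimal sketch that loops over the same image list (registry address and project name as configured earlier; the "community" project must already exist in harbor):

$ REG=10.132.10.100:8090/community
$ for img in coredns:v1.8.4 etcd:3.5.0-0 kube-apiserver:v1.22.4 kube-controller-manager:v1.22.4 kube-proxy:v1.22.4 kube-scheduler:v1.22.4 pause:3.5; do
      docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${img} ${REG}/${img}
      docker push ${REG}/${img}
  done
$ for img in mirrored-flannelcni-flannel-cni-plugin:v1.1.0 mirrored-flannelcni-flannel:v0.19.1; do
      docker tag rancher/${img} ${REG}/${img}
      docker push ${REG}/${img}
  done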

6) Deploy the first master

$ kubeadm init --control-plane-endpoint "10.132.10.91:6443" --image-repository 10.132.10.100:8090/community --kubernetes-version v1.22.4 --service-cidr=172.16.0.0/16 --pod-network-cidr=10.244.0.0/16 --token "abcdef.0123456789abcdef" --token-ttl "0" --upload-certs
# Output:
[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [172.16.0.1 10.132.10.91]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [10.132.10.91 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [10.132.10.91 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.008638 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 --control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2

7) Generate the kubeconfig file

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

8) Configure the flannel network plugin

# Create the flannel.yml file
$ touch /export/servers/kubernetes/flannel.yml
$ vim /export/servers/kubernetes/flannel.yml
# Set the following content; note the image addresses that must be switched between online and offline environments
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        # In an online environment you can switch to the following address
        # image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        # In an offline environment use the private harbor address
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        # In an online environment you can switch to the following address
        # image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        # In an offline environment use the private harbor address
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        # In an online environment you can switch to the following address
        # image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        # In an offline environment use the private harbor address
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

9) Install the flannel network plugin

# Apply the yml file
$ kubectl apply -f /export/servers/kubernetes/flannel.yml
# Check the pod status
$ kubectl get pods -A
NAMESPACE      NAME                               READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-kjmt4              1/1     Running   0          148m
kube-system    coredns-7f84d7b4b5-7qr8g           1/1     Running   0          4h18m
kube-system    coredns-7f84d7b4b5-fljws           1/1     Running   0          4h18m
kube-system    etcd-master01                      1/1     Running   0          4h19m
kube-system    kube-apiserver-master01            1/1     Running   0          4h19m
kube-system    kube-controller-manager-master01   1/1     Running   0          4h19m
kube-system    kube-proxy-wzq2t                   1/1     Running   0          4h18m
kube-system    kube-scheduler-master01            1/1     Running   0          4h19m

10) Join the other master nodes

# On master01
# List the tokens
$ kubeadm token list
# The join command generated by the init on master01 is:
$ kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 --control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13

# On each of the other master nodes
# Run the join command above to add the node to the cluster as a control-plane node
$ kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 --control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13

# If this fails, the certificate-key has usually expired; regenerate it on master01 with
$ kubeadm init phase upload-certs --upload-certs
3b647155b06311d39faf70cb094d9a5e102afd1398323e820cfb3cfd868ae58f
# Replace the certificate-key with the value generated above and run the join again on the other master nodes
$ kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 --control-plane --certificate-key 3b647155b06311d39faf70cb094d9a5e102afd1398323e820cfb3cfd868ae58f

# Generate the kubeconfig file
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Check the node status from any master node
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   5h58m   v1.22.4
master02   Ready    control-plane,master   45m     v1.22.4
master03   Ready    control-plane,master   44m     v1.22.4

11) Join the worker nodes

# On each worker node, run the worker join command generated by the init on master01
$ kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2

# If this fails, the token has usually expired; regenerate the join command on master01 with
$ kubeadm token create --print-join-command
kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:cf30ddd3df1c6215b886df1ea378a68ad5a9faad7933d53ca9891ebbdf9a1c3f
# Run the newly generated join command on the remaining worker nodes

# Check the cluster status
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   6h12m   v1.22.4
master02   Ready    control-plane,master   58m     v1.22.4
master03   Ready    control-plane,master   57m     v1.22.4
worker01   Ready    <none>                 5m12s   v1.22.4
worker02   Ready    <none>                 4m10s   v1.22.4
worker03   Ready    <none>                 3m42s   v1.22.4

12) Configure the kubernetes dashboard

# Create /export/servers/kubernetes/dashboard.yml with the following content
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31001
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

13) Generate a self-signed certificate for the dashboard

$ mkdir -p /export/servers/kubernetes/certs && cd /export/servers/kubernetes/certs/
$ openssl genrsa -out dashboard.key 2048
$ openssl req -days 3650 -new -key dashboard.key -out dashboard.csr -subj /C=CN/ST=BEIJING/L=BEIJING/O=JD/OU=JD/CN=172.16.16.42
$ openssl x509 -req -days 3650 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
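Before loading the certificate into a Secret you can sanity-check what was generated; a minimal sketch:

$ openssl x509 -in dashboard.crt -noout -subject -dates   # confirm the subject and the 10-year validity
$ openssl rsa -in dashboard.key -check -noout              # confirm the key is intact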

14) Run the following commands

# Remove the taint from the master nodes
$ kubectl taint nodes --all node-role.kubernetes.io/master-
# Create the namespace
$ kubectl create namespace kubernetes-dashboard
# Create the Secret
$ kubectl create secret tls kubernetes-dashboard-certs -n kubernetes-dashboard --key dashboard.key --cert dashboard.crt

15) Apply the dashboard yml file

$ kubectl apply -f /export/servers/kubernetes/dashboard.yml
# Check the pod status
$ kubectl get pods -A | grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-rbdt4   1/1   Running   0   15m
kubernetes-dashboard   kubernetes-dashboard-764b4dd7-rt66t         1/1   Running   0   15m

16) Access the dashboard page

Open https://<any-node-IP>:31001/#/login in a browser; 31001 is the NodePort configured in the dashboard Service above.

17) Create a login token

# Create the dashboard-adminuser.yaml file
$ touch /export/servers/kubernetes/dashboard-adminuser.yaml && vim /export/servers/kubernetes/dashboard-adminuser.yaml
# Set the following content
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

# Apply the yaml file
$ kubectl create -f /export/servers/kubernetes/dashboard-adminuser.yaml
# Expected output
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

# Explanation: this creates a service account named admin-user in the kubernetes-dashboard namespace and binds the
# cluster-admin role to it, giving the admin-user account administrator privileges. The cluster-admin role already
# exists in a kubeadm-created cluster, so it only needs to be bound.

# Look up the token of the admin-user account
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
# Expected output
Name:         admin-user-token-9fpps
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 72c1aa28-6385-4d1a-b22c-42427b74b4c7
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IjFEckU0NXB5Yno5UV9MUFkxSUpPenJhcTFuektHazM1c2QzTGFmRzNES0EifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTlmcHBzIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI3MmMxYWEyOC02Mzg1LTRkMWEtYjIyYy00MjQyN2I3NGI0YzciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.oA3NLhhTaXd2qvWrPDXat2w9ywdWi_77SINk4vWkfIIzMmxBEHnqvDIBvhRC3frIokNSvT71y6mXN0KHu32hBa1YWi0MuzF165ZNFtM_rSQiq9OnPxeFvLaKS-0Vzr2nWuBx_-fTt7gESReSMLEJStbPb1wOnR6kqtY66ajKK5ILeIQ77I0KXYIi7GlPEyc6q4bIjweZ0HSXDPR4JSnEAhrP8Qslrv3Oft4QZVNj47x7xKC4dyyZOMHUIj9QhkpI2gMbiZ8XDUmNok070yDc0TCxeTZKDuvdsigxCMQx6AesD-8dca5Hb8Sm4mEPkGJekvMzkLkM97y_pOBPkfTAIA

# Copy the token obtained above into the Token input box on the login page to log in to the dashboard

18) Log in to the dashboard




Installing kubectl

1. Environment

Name     Value
OS       CentOS Linux release 7.8.2003
kubectl  kubectl-1.22.4-0.x86_64
Node     deploy

2. Deployment notes

The Kubernetes kubectl client.

3. Extract the previously uploaded kubeadm-rpm package

$ tar xzvf kubeadm-rpm.tgz

4. Install

$ rpm -ivh bc7a9f8e7c6844cfeab2066a84b8fecf8cf608581e56f6f96f80211250f9a5e7-kubectl-1.22.4-0.x86_64.rpm

5. Set up the kubeconfig file

# Create the kubeconfig file
$ mkdir -p $HOME/.kube
$ sudo touch $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Copy the content of /etc/kubernetes/admin.conf from any master node into the file above
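Instead of copy-pasting the file content by hand, the kubeconfig can also be fetched over ssh; a minimal sketch using master01 (any master works):

$ scp root@10.132.10.91:/etc/kubernetes/admin.conf $HOME/.kube/config
$ kubectl get nodes    # should list the three masters and six workers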

6. Check the version

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:42:41Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}

Installing helm

1. Environment

Name  Value
OS    CentOS Linux release 7.8.2003
helm  helm-v3.9.3-linux-amd64.tar.gz
Node  deploy

2. Deployment notes

The Kubernetes package and configuration management tool.

3. Download the helm offline package and upload it to the server

$ wget https://get.helm.sh/helm-v3.9.3-linux-amd64.tar.gz

4. Extract the package

$ tar -zxvf helm-v3.9.3-linux-amd64.tar.gz -C /export/servers/
$ cd /export/servers/linux-amd64

5. Install the helm binary and make it executable

$ cp helm /usr/local/bin/
$ chmod +x /usr/local/bin/helm

6. Check the version

$ helm version
version.BuildInfo{Version:"v3.9.3", GitCommit:"414ff28d4029ae8c8b05d62aa06c7fe3dee2bc58", GitTreeState:"clean", GoVersion:"go1.17.13"}

Set up local-path storage on the NAS mount

$ mkdir -p /export/servers/helm_chart/local-path-storage && cd /export/servers/helm_chart/local-path-storage
$ vim local-path-storage.yaml
# Set the following content; point "paths":["..."] in nodePathMap at the NAS directory (create the directory if it does not exist)
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
- apiGroups: [ "" ]
  resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
  verbs: [ "get", "list", "watch" ]
- apiGroups: [ "" ]
  resources: [ "endpoints", "persistentvolumes", "pods" ]
  verbs: [ "*" ]
- apiGroups: [ "" ]
  resources: [ "events" ]
  verbs: [ "create", "patch" ]
- apiGroups: [ "storage.k8s.io" ]
  resources: [ "storageclasses" ]
  verbs: [ "get", "list", "watch" ]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: local-path-storage

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
      - name: local-path-provisioner
        image: rancher/local-path-provisioner:v0.0.21
        imagePullPolicy: IfNotPresent
        command:
        - local-path-provisioner
        - --debug
        - start
        - --config
        - /etc/config/config.json
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: config-volume
        configMap:
          name: local-path-config

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/nas_data/jdiot/local-path-provisioner"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
    rm -rf ${absolutePath}
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
      - name: helper-pod
        image: busybox

Note: the images referenced above must be downloaded in an internet-connected environment and imported into the private registry, and the corresponding image addresses above must be changed so that they are pulled from the private registry.
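The article does not show how the NAS path itself is provided. A hedged sketch, assuming the k8s-nfs host from the resource table (10.132.10.101) exports a /nas_data share (the export name is an assumption) that every node mounts so the nodePathMap path exists; it also assumes nfs-utils is installed on the nodes:

$ ansible k8s -m shell -a "mkdir -p /nas_data && mount -t nfs 10.132.10.101:/nas_data /nas_data"
$ ansible k8s -m shell -a "mkdir -p /nas_data/jdiot/local-path-provisioner"
# Add the mount to /etc/fstab on each node if it should survive reboots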

Apply the local storage yaml

$ kubectl apply -f local-path-storage.yaml -n local-path-storage

Set the default StorageClass for k8s

$ kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
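A quick way to confirm the change took effect; a minimal sketch:

$ kubectl get storageclass    # local-path should now be marked (default)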

Note: the middleware and services deployed later must have their storage switched to this local storage class: "storageClass": "local-path".
