Category: Cloud Computing
2022-12-20 11:59:11
With more and more on-site project deliveries, we occasionally meet customer environments that have no public internet access at all (fully internal networks), which calls for a complete and efficient offline deployment scheme.
No. | Hostname | IP | Type | CPU | Memory | Disk |
---|---|---|---|---|---|---|
01 | k8s-master1 | 10.132.10.91 | CentOS 7 | 4C | 8G | 40G |
02 | k8s-master2 | 10.132.10.92 | CentOS 7 | 4C | 8G | 40G |
03 | k8s-master3 | 10.132.10.93 | CentOS 7 | 4C | 8G | 40G |
04 | k8s-worker1 | 10.132.10.94 | CentOS 7 | 8C | 16G | 200G |
05 | k8s-worker2 | 10.132.10.95 | CentOS 7 | 8C | 16G | 200G |
06 | k8s-worker3 | 10.132.10.96 | CentOS 7 | 8C | 16G | 200G |
07 | k8s-worker4 | 10.132.10.97 | CentOS 7 | 8C | 16G | 200G |
08 | k8s-worker5 | 10.132.10.98 | CentOS 7 | 8C | 16G | 200G |
09 | k8s-worker6 | 10.132.10.99 | CentOS 7 | 8C | 16G | 200G |
10 | k8s-harbor&deploy | 10.132.10.100 | CentOS 7 | 4C | 8G | 500G |
11 | k8s-nfs | 10.132.10.101 | CentOS 7 | 2C | 4G | 2000G |
12 | k8s-lb | 10.132.10.120 | internal LB | 2C | 4G | 40G |
Note: run the following on all nodes.
Working, log, and data storage directories
$ mkdir -p /export/servers
$ mkdir -p /export/logs
$ mkdir -p /export/data
$ mkdir -p /export/upload
Kernel and network parameter tuning
$ vim /etc/sysctl.conf
# add the following
fs.file-max = 1048576
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 5
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
vm.max_map_count = 262144
# apply immediately
$ sysctl -p
$ vim /etc/security/limits.conf
# add the following
* soft memlock unlimited
* hard memlock unlimited
* soft nproc 102400
* hard nproc 102400
* soft nofile 1048576
* hard nofile 1048576
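The limits.conf changes only apply to new login sessions. After logging in again, the new limits can be spot-checked against the values set above:
# verify the per-process limits in a fresh shell
$ ulimit -n   # expect 1048576 (nofile)
$ ulimit -u   # expect 102400 (nproc)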
Name | Description |
---|---|
OS | CentOS Linux release 7.8.2003 |
ansible | 2.9.27 |
Node | deploy |
The IoT management platform spans many machines, so Ansible is used for batch operations to save time; this requires passwordless root SSH from the deploy node to every other node.
Note: when the root password is unknown, passwordless login can be set up manually as follows:
# generate a key pair on the deploy machine
$ ssh-keygen -t rsa
# copy the contents of ~/.ssh/id_rsa.pub and append it to ~/.ssh/authorized_keys on each of the other nodes
# if authorized_keys does not exist, create it first and then paste
$ touch ~/.ssh/authorized_keys
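If the root password is known, the same key distribution can be scripted with ssh-copy-id instead of manual pasting. A minimal sketch, assuming sshpass is installed and PASSWORD is replaced with the actual root password:
# push the deploy node's public key to every node in one loop
$ for ip in 10.132.10.9{1..9} 10.132.10.100 10.132.10.101; do \
    sshpass -p 'PASSWORD' ssh-copy-id -o StrictHostKeyChecking=no root@${ip}; \
  done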
1) Online installation
$ yum -y install ansible
2) Offline installation
# upload ansible and all dependent RPMs in advance, then change into the RPM directory
$ yum -y install ./*.rpm
3) Check the version
$ ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Apr 2 2020, 13:16:51) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
4) Configure the managed host inventory
$ vim /etc/ansible/hosts
[master]
10.132.10.91 node_name=k8s-master1
10.132.10.92 node_name=k8s-master2
10.132.10.93 node_name=k8s-master3
[worker]
10.132.10.94 node_name=k8s-worker1
10.132.10.95 node_name=k8s-worker2
10.132.10.96 node_name=k8s-worker3
10.132.10.97 node_name=k8s-worker4
10.132.10.98 node_name=k8s-worker5
10.132.10.99 node_name=k8s-worker6
[etcd]
10.132.10.91 etcd_name=etcd1
10.132.10.92 etcd_name=etcd2
10.132.10.93 etcd_name=etcd3
[k8s:children]
master
worker
5) Disable SSH host key checking
$ vi /etc/ansible/ansible.cfg
# change the following setting
# uncomment this to disable SSH key host checking
host_key_checking = False
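With host key checking disabled, it is worth confirming that Ansible can actually reach every host in the inventory before running batch operations:
# every node should reply with "ping": "pong"
$ ansible k8s -m ping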
6) Disable SELinux, open up the firewall, and turn off swap
$ ansible k8s -m command -a "setenforce 0"
$ ansible k8s -m command -a "sed --follow-symlinks -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config"
$ ansible k8s -m command -a "firewall-cmd --set-default-zone=trusted"
$ ansible k8s -m command -a "firewall-cmd --complete-reload"
$ ansible k8s -m command -a "swapoff -a"
7) /etc/hosts setup
$ cd /export/upload && vim hosts_set.sh
# script contents:
#!/bin/bash
cat > /etc/hosts << EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.132.10.100 deploy harbor
10.132.10.91 master01
10.132.10.92 master02
10.132.10.93 master03
10.132.10.94 worker01
10.132.10.95 worker02
10.132.10.96 worker03
10.132.10.97 worker04
10.132.10.98 worker05
10.132.10.99 worker06
EOF
$ ansible k8s -m copy -a 'src=/export/upload/hosts_set.sh dest=/export/upload'
$ ansible k8s -m command -a 'sh /export/upload/hosts_set.sh'
Name | Description |
---|---|
OS | CentOS Linux release 7.8.2003 |
docker | docker-ce-20.10.17 |
Node | deploy |
Docker here is used by Harbor for image operations, i.e. tagging and pushing images, so that Pods can later pull them directly from the private Harbor registry.
1) Online installation
$ yum -y install docker-ce-20.10.17
2) Offline installation
# upload docker and all dependent RPMs in advance, then change into the RPM directory
$ yum -y install ./*.rpm
3) Start Docker and check its status
$ systemctl start docker
$ systemctl status docker
4) Enable at boot
$ systemctl enable docker
5) Check the version
$ docker version
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun 6 23:05:12 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun 6 23:03:33 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
Name | Description |
---|---|
OS | CentOS Linux release 7.8.2003 |
docker-compose | docker-compose-linux-x86_64 |
Node | deploy |
Dependency of the Harbor private registry.
1) Download docker-compose and upload it to the server
$ curl -L -o docker-compose
2) Install docker-compose and make it executable
$ mv docker-compose /usr/local/bin/
$ chmod +x /usr/local/bin/docker-compose
3) Check the version
$ docker-compose version
Docker Compose version v2.9.0
Name | Description |
---|---|
OS | CentOS Linux release 7.8.2003 |
harbor | harbor-offline-installer-v2.4.3 |
Node | harbor |
Private image registry.
$ wget
$ tar -xzvf harbor-offline-installer-v2.4.3.tgz -C /export/servers/
$ cd /export/servers/harbor
$ mv harbor.yml.tmpl harbor.yml
$ vim harbor.yml
hostname: 10.132.10.100
http.port: 8090
data_volume: /export/data/harbor
log.location: /export/logs/harbor
$ docker load -i harbor.v2.4.3.tar.gz
# wait for the Harbor dependency images to finish importing
$ docker images
REPOSITORY                      TAG      IMAGE ID       CREATED       SIZE
goharbor/harbor-exporter        v2.4.3   776ac6ee91f4   4 weeks ago   81.5MB
goharbor/chartmuseum-photon     v2.4.3   f39a9694988d   4 weeks ago   172MB
goharbor/redis-photon           v2.4.3   b168e9750dc8   4 weeks ago   154MB
goharbor/trivy-adapter-photon   v2.4.3   a406a715461c   4 weeks ago   251MB
goharbor/notary-server-photon   v2.4.3   da89404c7cf9   4 weeks ago   109MB
goharbor/notary-signer-photon   v2.4.3   38468ac13836   4 weeks ago   107MB
goharbor/harbor-registryctl    v2.4.3   61243a84642b   4 weeks ago   135MB
goharbor/registry-photon        v2.4.3   9855479dd6fa   4 weeks ago   77.9MB
goharbor/nginx-photon           v2.4.3   0165c71ef734   4 weeks ago   44.4MB
goharbor/harbor-log             v2.4.3   57ceb170dac4   4 weeks ago   161MB
goharbor/harbor-jobservice      v2.4.3   7fea87c4b884   4 weeks ago   219MB
goharbor/harbor-core            v2.4.3   d864774a3b8f   4 weeks ago   197MB
goharbor/harbor-portal          v2.4.3   85f00db66862   4 weeks ago   53.4MB
goharbor/harbor-db              v2.4.3   7693d44a2ad6   4 weeks ago   225MB
goharbor/prepare                v2.4.3   c882d74725ee   4 weeks ago   268MB
./prepare # re-run to regenerate the configuration if harbor.yml is modified again
./install.sh --help # list the startup options
./install.sh --with-chartmuseum
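Because Harbor is exposed over plain HTTP on port 8090, the Docker daemon on the deploy node has to trust it before any push will succeed, and pushes must be authenticated. A minimal sketch (default daemon.json path; substitute your own Harbor credentials):
# allow the local Docker daemon to use the HTTP-only registry
$ cat > /etc/docker/daemon.json << EOF
{
  "insecure-registries": ["10.132.10.100:8090"]
}
EOF
$ systemctl restart docker
# authenticate so docker push is accepted (Harbor's default admin account shown)
$ docker login 10.132.10.100:8090 -u admin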
Name | Description |
---|---|
OS | CentOS Linux release 7.8.2003 |
docker | docker-ce-20.10.17 |
Node | all k8s cluster nodes |
Deploy Docker as the container runtime on the k8s nodes.
1) Upload the docker RPM bundle
$ ls /export/upload/docker-rpm.tgz
2) Distribute the bundle
$ ansible k8s -m copy -a "src=/export/upload/docker-rpm.tgz dest=/export/upload/"
# every node returns output like the following
changed => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "acd3897edb624cd18a197bcd026e6769797f4f05",
"dest": "/export/upload/docker-rpm.tgz",
"gid": 0,
"group": "root",
"md5sum": "3ba6d9fe6b2ac70860b6638b88d3c89d",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:usr_t:s0",
"size": 103234394,
"src": "/root/.ansible/tmp/ansible-tmp-1661836788.82-13591-17885284311930/source",
"state": "file",
"uid": 0
}
3) Extract and install
$ ansible k8s -m shell -a "tar xzvf /export/upload/docker-rpm.tgz -C /export/upload/ && yum -y install /export/upload/docker-rpm/*.rpm"
4) Enable at boot and start
$ ansible k8s -m shell -a "systemctl enable docker && systemctl start docker"
5) Check the version
$ ansible k8s -m shell -a "docker version"
# every node returns output like the following
changed | rc=0 >>
Client: Docker Engine - Community
 Version:           20.10.17
 API version:       1.41
 Go version:        go1.17.11
 Git commit:        100c701
 Built:             Mon Jun 6 23:05:12 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true
Server: Docker Engine - Community
 Engine:
  Version:          20.10.17
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.17.11
  Git commit:       a89b842
  Built:            Mon Jun 6 23:03:33 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.8
  GitCommit:        9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
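The same insecure-registry setting is required on every k8s node, otherwise pulls from the HTTP-only Harbor will be refused. A sketch reusing the daemon.json prepared on the deploy node (assumes no other daemon.json customizations are in play):
$ ansible k8s -m copy -a "src=/etc/docker/daemon.json dest=/etc/docker/daemon.json"
$ ansible k8s -m shell -a "systemctl restart docker"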
Installation with internet access
# add the Alibaba Cloud yum repository for Kubernetes:
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=kubernetes
baseurl=
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=
EOF
Download the offline packages
# create a directory for the downloaded RPMs:
mkdir -p /export/download/kubeadm-rpm
# run:
yum install -y kubelet-1.22.4 kubeadm-1.22.4 kubectl-1.22.4 --downloadonly --downloaddir=/export/download/kubeadm-rpm
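To produce the kubeadm-rpm.tgz archive referenced in the offline steps below, the downloaded RPMs can be bundled so that extraction yields a kubeadm-rpm/ directory (a sketch matching the paths used later):
# package the RPMs for transfer to the air-gapped environment
$ cd /export/download && tar czvf kubeadm-rpm.tgz kubeadm-rpm/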
Installation without internet access
1) Upload the kubeadm RPM bundle
$ ls /export/upload/
kubeadm-rpm.tgz
2) Distribute the bundle
$ ansible k8s -m copy -a "src=/export/upload/kubeadm-rpm.tgz dest=/export/upload/"
# every node returns output like the following
changed => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": true,
"checksum": "3fe96fe1aa7f4a09d86722f79f36fb8fde69facb",
"dest": "/export/upload/kubeadm-rpm.tgz",
"gid": 0,
"group": "root",
"md5sum": "80d5bda420db6ea23ad75dcf0f76e858",
"mode": "0644",
"owner": "root",
"secontext": "system_u:object_r:usr_t:s0",
"size": 67423355,
"src": "/root/.ansible/tmp/ansible-tmp-1661840257.4-33361-139823848282879/source",
"state": "file",
"uid": 0
}
3) Extract and install
$ ansible k8s -m shell -a "tar xzvf /export/upload/kubeadm-rpm.tgz -C /export/upload/ && yum -y install /export/upload/kubeadm-rpm/*.rpm"
4) Enable at boot and start
$ ansible k8s -m shell -a "systemctl enable kubelet && systemctl start kubelet"
Note: kubelet fails to start at this point and keeps restarting; this is expected and resolves itself once kubeadm init or join runs, as the official documentation notes. kubelet.service can therefore be ignored for now, and its status inspected with:
$ journalctl -xefu kubelet
5) Distribute the dependency images to the cluster nodes
# in an internet-connected environment, the images can be pulled in advance
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4
$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5
$ docker pull rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker pull rancher/mirrored-flannelcni-flannel:v0.19.1
# export the image files, upload them to the deploy node, and load them into its local image store
$ ls /export/upload
$ docker load -i google_containers-coredns-v1.8.4.tar
$ docker load -i google_containers-etcd-3.5.0-0.tar
$ docker load -i google_containers-kube-apiserver-v1.22.4.tar
$ docker load -i google_containers-kube-controller-manager-v1.22.4.tar
$ docker load -i google_containers-kube-proxy-v1.22.4.tar
$ docker load -i google_containers-kube-scheduler-v1.22.4.tar
$ docker load -i google_containers-pause-3.5.tar
$ docker load -i rancher-mirrored-flannelcni-flannel-cni-plugin-v1.1.0.tar
$ docker load -i rancher-mirrored-flannelcni-flannel-v0.19.1.tar
# tag the images for the Harbor registry
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.8.4 10.132.10.100:8090/community/coredns:v1.8.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.0-0 10.132.10.100:8090/community/etcd:3.5.0-0
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.22.4 10.132.10.100:8090/community/kube-apiserver:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.22.4 10.132.10.100:8090/community/kube-controller-manager:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.22.4 10.132.10.100:8090/community/kube-proxy:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.22.4 10.132.10.100:8090/community/kube-scheduler:v1.22.4
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5 10.132.10.100:8090/community/pause:3.5
$ docker tag rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker tag rancher/mirrored-flannelcni-flannel:v0.19.1 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
# push to the Harbor registry
$ docker push 10.132.10.100:8090/community/coredns:v1.8.4
$ docker push 10.132.10.100:8090/community/etcd:3.5.0-0
$ docker push 10.132.10.100:8090/community/kube-apiserver:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-controller-manager:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-proxy:v1.22.4
$ docker push 10.132.10.100:8090/community/kube-scheduler:v1.22.4
$ docker push 10.132.10.100:8090/community/pause:3.5
$ docker push 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
$ docker push 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
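The tag-and-push pairs above can also be driven by a loop to avoid copy/paste slips; a sketch over the same google_containers image list (the two rancher/ flannel images follow the same pattern with their own source prefix):
$ HARBOR=10.132.10.100:8090/community
$ for img in coredns:v1.8.4 etcd:3.5.0-0 kube-apiserver:v1.22.4 \
    kube-controller-manager:v1.22.4 kube-proxy:v1.22.4 kube-scheduler:v1.22.4 pause:3.5; do \
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${img} ${HARBOR}/${img}; \
    docker push ${HARBOR}/${img}; \
  done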
6) Deploy the first master
$ kubeadm init \
--control-plane-endpoint "10.132.10.91:6443" \
--image-repository 10.132.10.100/community \
--kubernetes-version v1.22.4 \
--service-cidr=172.16.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--token "abcdef.0123456789abcdef" \
--token-ttl "0" \
--upload-certs
# output like the following is shown
[init] Using Kubernetes version: v1.22.4
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [172.16.0.1 10.132.10.91]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [10.132.10.91 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [10.132.10.91 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.008638 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13
[mark-control-plane] Marking the node master01 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

You can now join any number of the control-plane node running the following command on each as root:

kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
--control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; if necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.132.10.91:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2
7) Generate the kubeconfig for kubectl
# run:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
8) Configure the flannel network plugin
# create the flannel.yml file
$ touch /export/servers/kubernetes/flannel.yml
$ vim /export/servers/kubernetes/flannel.yml
# paste the following; note the image addresses that must be switched between online and offline environments
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        # in an online environment the public address below can be used instead
        # image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        # in an offline environment the private Harbor address is required
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        # in an online environment the public address below can be used instead
        # image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        # in an offline environment the private Harbor address is required
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        # in an online environment the public address below can be used instead
        # image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        # in an offline environment the private Harbor address is required
        image: 10.132.10.100:8090/community/mirrored-flannelcni-flannel:v0.19.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
9) Install the flannel network plugin
# apply the yml file
$ kubectl apply -f /export/servers/kubernetes/flannel.yml
# check pod status
$ kubectl get pods -A
NAMESPACE      NAME                               READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-kjmt4              1/1     Running   0          148m
kube-system    coredns-7f84d7b4b5-7qr8g           1/1     Running   0          4h18m
kube-system    coredns-7f84d7b4b5-fljws           1/1     Running   0          4h18m
kube-system    etcd-master01                      1/1     Running   0          4h19m
kube-system    kube-apiserver-master01            1/1     Running   0          4h19m
kube-system    kube-controller-manager-master01   1/1     Running   0          4h19m
kube-system    kube-proxy-wzq2t                   1/1     Running   0          4h18m
kube-system    kube-scheduler-master01            1/1     Running   0          4h19m
10) Join the remaining master nodes
# on master01:
# list the tokens
$ kubeadm token list
# the join command generated by kubeadm init on master01 is as follows
$ kubeadm join 10.132.10.91:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
--control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13
# on each of the other master nodes:
# run the join command from the previous step to add the node to the cluster as a control plane
$ kubeadm join 10.132.10.91:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
--control-plane --certificate-key 9151ea7bb260a42297f2edc486d5792f67d9868169310b82ef1eb18f6e4c0f13
# if this fails, the certificate-key has usually expired; regenerate it on master01 with:
$ kubeadm init phase upload-certs --upload-certs
3b647155b06311d39faf70cb094d9a5e102afd1398323e820cfb3cfd868ae58f
# substitute the value generated above for certificate-key and run the join again on the other master nodes
$ kubeadm join 10.132.10.91:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2 \
--control-plane \
--certificate-key 3b647155b06311d39faf70cb094d9a5e102afd1398323e820cfb3cfd868ae58f
# generate the kubeconfig for kubectl
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# on any master node, check the node status
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   5h58m   v1.22.4
master02   Ready    control-plane,master   45m     v1.22.4
master03   Ready    control-plane,master   44m     v1.22.4
11) Join the worker nodes
# on each worker node, run the join command generated by kubeadm init on master01
# this adds the worker nodes to the cluster
$ kubeadm join 10.132.10.91:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:4884a98b0773bc89c36dc5fa51569293103ff093e9124431c4c8c2d5801a96a2
# if this fails, the token has usually expired; regenerate the join command on master01 with:
$ kubeadm token create --print-join-command
kubeadm join 10.132.10.91:6443 \
--token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:cf30ddd3df1c6215b886df1ea378a68ad5a9faad7933d53ca9891ebbdf9a1c3f
# run the regenerated join command on the remaining worker nodes
# check the cluster status
$ kubectl get nodes
NAME       STATUS   ROLES                  AGE     VERSION
master01   Ready    control-plane,master   6h12m   v1.22.4
master02   Ready    control-plane,master   58m     v1.22.4
master03   Ready    control-plane,master   57m     v1.22.4
worker01   Ready    <none>                 5m12s   v1.22.4
worker02   Ready    <none>                 4m10s   v1.22.4
worker03   Ready    <none>                 3m42s   v1.22.4
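Worker nodes join with an empty ROLES column. If a visible worker role is wanted purely for readability, it can be added by hand (optional; this label has no scheduling effect):
# give each worker a role label so kubectl get nodes shows it
$ for n in worker0{1..6}; do kubectl label node ${n} node-role.kubernetes.io/worker=; done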
12) Configure the Kubernetes dashboard
# save the following as /export/servers/kubernetes/dashboard.yml (applied in step 15)
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31001
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
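In a fully offline cluster the two dashboard images referenced above must also be staged into Harbor first, and the image: lines pointed at the private address, following the same pattern as the flannel images (a sketch):
# stage the dashboard images for air-gapped pulls
$ docker tag kubernetesui/dashboard:v2.5.0 10.132.10.100:8090/community/dashboard:v2.5.0
$ docker push 10.132.10.100:8090/community/dashboard:v2.5.0
$ docker tag kubernetesui/metrics-scraper:v1.0.7 10.132.10.100:8090/community/metrics-scraper:v1.0.7
$ docker push 10.132.10.100:8090/community/metrics-scraper:v1.0.7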
13) Generate a self-signed certificate for the dashboard
$ mkdir -p /export/servers/kubernetes/certs && cd /export/servers/kubernetes/certs/
$ openssl genrsa -out dashboard.key 2048
$ openssl req -days 3650 -new -key dashboard.key -out dashboard.csr -subj /C=CN/ST=Beijing/L=Beijing/O=jd/OU=jd/CN=172.16.16.42
$ openssl x509 -req -days 3650 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
14) Run the following commands
# remove the taint from the master nodes
$ kubectl taint nodes --all node-role.kubernetes.io/master-
# create the namespace
$ kubectl create namespace kubernetes-dashboard
# create the certificate secret
$ kubectl create secret tls kubernetes-dashboard-certs -n kubernetes-dashboard --key dashboard.key \
--cert dashboard.crt
15) Apply the dashboard yml
$ kubectl apply -f /export/servers/kubernetes/dashboard.yml
# check pod status
$ kubectl get pods -A | grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper-c45b7869d-rbdt4   1/1   Running   0   15m
kubernetes-dashboard   kubernetes-dashboard-764b4dd7-rt66t         1/1   Running   0   15m
16) Access the dashboard
# open https://<node-ip>:31001 in a web browser; the IP can be any cluster node (or the LB address)
17) Create a login token
# create the configuration file dashboard-adminuser.yaml
$ touch /export/servers/kubernetes/dashboard-adminuser.yaml && vim /export/servers/kubernetes/dashboard-adminuser.yaml
# paste the following
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
# apply the yaml
$ kubectl create -f /export/servers/kubernetes/dashboard-adminuser.yaml
# expected output
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# Note: this creates a ServiceAccount named admin-user in the kubernetes-dashboard namespace and binds the cluster-admin role to it, giving the account administrator privileges. kubeadm already creates the cluster-admin role when it builds the cluster, so binding to it is all that is needed.
# look up the admin-user token
$ kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
# expected output
Name:         admin-user-token-9fpps
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 72c1aa28-6385-4d1a-b22c-42427b74b4c7
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token: eyjhbgcioijsuzi1niisimtpzci6ijfecku0nxb5yno5uv9mufkxsuppenjhctfuekthazm1c2qztgfmrznes0eifq.eyjpc3mioijrdwjlcm5ldgvzl3nlcnzpy2vhy2nvdw50iiwia3vizxjuzxrlcy5pby9zzxj2awnlywnjb3vudc9uyw1lc3bhy2uioijrdwjlcm5ldgvzlwrhc2hib2fyzcisimt1ymvybmv0zxmuaw8vc2vydmljzwfjy291bnqvc2vjcmv0lm5hbwuioijhzg1pbi11c2vylxrva2vultlmchbziiwia3vizxjuzxrlcy5pby9zzxj2awnlywnjb3vudc9zzxj2awnllwfjy291bnqubmftzsi6imfkbwlulxvzzxiilcjrdwjlcm5ldgvzlmlvl3nlcnzpy2vhy2nvdw50l3nlcnzpy2utywnjb3vudc51awqioii3mmmxyweyoc02mzg1ltrkmwetyjiyyy00mjqyn2i3ngi0yzcilcjzdwiioijzexn0zw06c2vydmljzwfjy291bnq6a3vizxjuzxrlcy1kyxnoym9hcmq6ywrtaw4tdxnlcij9.oa3nlhhtaxd2qvwrpdxat2w9ywdwi_77sink4vwkfiizmmxbehnqvdibvhrc3frioknsvt71y6mxn0khu32hba1ywi0muzf165znftm_rsqiq9onpxefvlaks-0vzr2nwubx_-ftt7gesresmlejstbpb1wonr6kqty66ajkk5ileiq77i0kxyii7glpeyc6q4bijwez0hsxdpr4jsneahrp8qslrv3oft4qzvnj47x7xkc4dyyzomhuij9qhkpi2gmbiz8xdumnok070ydc0tcxetzkduvdsigxcmqx6aesd-8dca5hb8sm4mepkgjekvmzklkm97y_pobpkftaia
# paste the token obtained above into the token field on the login page to log in to the dashboard
18) The dashboard after login
1. Environment
Name | Description |
---|---|
OS | CentOS Linux release 7.8.2003 |
kubectl | kubectl-1.22.4-0.x86_64 |
Node | deploy |
2. Overview
The Kubernetes kubectl command-line client.
3. Extract the previously uploaded kubeadm-rpm bundle
$ tar xzvf kubeadm-rpm.tgz
4. Install
$ rpm -ivh bc7a9f8e7c6844cfeab2066a84b8fecf8cf608581e56f6f96f80211250f9a5e7-kubectl-1.22.4-0.x86_64.rpm
5. Configure kubeconfig
# create the kubeconfig file
$ mkdir -p $HOME/.kube
$ sudo touch $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
# copy the kubeconfig content from any master node into the file above
6. Check the version
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:42:41Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
1. Environment
Name | Description |
---|---|
OS | CentOS Linux release 7.8.2003 |
helm | helm-v3.9.3-linux-amd64.tar.gz |
Node | deploy |
2. Overview
Package and configuration management tool for Kubernetes resources.
3. Download the helm offline package and upload it to the server
$ wget
4. Extract the package
$ tar -zxvf helm-v3.9.3-linux-amd64.tar.gz -C /export/servers/
$ cd /export/servers/
5. Install and make it executable
$ cp linux-amd64/helm /usr/local/bin/
$ chmod +x /usr/local/bin/helm
6. Check the version
$ helm version
version.BuildInfo{Version:"v3.9.3", GitCommit:"414ff28d4029ae8c8b05d62aa06c7fe3dee2bc58", GitTreeState:"clean", GoVersion:"go1.17.13"}
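In an air-gapped environment, Helm charts are usually fetched wherever internet access exists and carried over as .tgz archives. A hedged sketch (the redis chart, the bitnami repo, and the global.imageRegistry value are illustrative; the repo must have been added beforehand with helm repo add):
# on a connected machine: download the chart archive
$ helm pull bitnami/redis
# on the deploy node: install from the local archive, routing image pulls through Harbor
$ helm install redis ./redis-<version>.tgz --set global.imageRegistry=10.132.10.100:8090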
Configure local storage mounted on NAS
$ mkdir -p /export/servers/helm_chart/local-path-storage && cd /export/servers/helm_chart/local-path-storage
$ vim local-path-storage.yaml
# paste the following; point "paths" at the NAS directory (the manifest below uses /nas_data/jdiot/local-path-provisioner), creating it if it does not exist
apiVersion: v1
kind: Namespace
metadata:
  name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
  - apiGroups: [ "" ]
    resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
    verbs: [ "get", "list", "watch" ]
  - apiGroups: [ "" ]
    resources: [ "endpoints", "persistentvolumes", "pods" ]
    verbs: [ "*" ]
  - apiGroups: [ "" ]
    resources: [ "events" ]
    verbs: [ "create", "patch" ]
  - apiGroups: [ "storage.k8s.io" ]
    resources: [ "storageclasses" ]
    verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: local-path-storage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      containers:
        - name: local-path-provisioner
          image: rancher/local-path-provisioner:v0.0.21
          imagePullPolicy: IfNotPresent
          command:
            - local-path-provisioner
            - --debug
            - start
            - --config
            - /etc/config/config.json
          volumeMounts:
            - name: config-volume
              mountPath: /etc/config/
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
      volumes:
        - name: config-volume
          configMap:
            name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/nas_data/jdiot/local-path-provisioner"]
        }
      ]
    }
  setup: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
    mkdir -m 0777 -p ${absolutePath}
  teardown: |-
    #!/bin/sh
    while getopts "m:s:p:" opt
    do
        case $opt in
            p)
            absolutePath=$OPTARG
            ;;
            s)
            sizeInBytes=$OPTARG
            ;;
            m)
            volMode=$OPTARG
            ;;
        esac
    done
    rm -rf ${absolutePath}
  helperPod.yaml: |-
    apiVersion: v1
    kind: Pod
    metadata:
      name: helper-pod
    spec:
      containers:
        - name: helper-pod
          image: busybox
Note: the images referenced above must be downloaded in an internet-connected environment, imported into the private registry, and the image addresses above pointed at the private registry.
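Following the earlier pattern, a sketch of staging those two images into Harbor (tags match the manifest above; the busybox tag is assumed to be latest):
$ docker tag rancher/local-path-provisioner:v0.0.21 10.132.10.100:8090/community/local-path-provisioner:v0.0.21
$ docker push 10.132.10.100:8090/community/local-path-provisioner:v0.0.21
$ docker tag busybox:latest 10.132.10.100:8090/community/busybox:latest
$ docker push 10.132.10.100:8090/community/busybox:latest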
Apply the local storage yaml
$ kubectl apply -f local-path-storage.yaml -n local-path-storage
Set the default k8s storage class
$ kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Note: middleware and services deployed later must have their storage set to this local storage class: "storageClass": "local-path"
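A quick end-to-end check of the default StorageClass is a throwaway PVC (names are illustrative). Because of WaitForFirstConsumer, the claim stays Pending until a pod mounts it:
# confirm local-path is marked (default)
$ kubectl get storageclass
# create a 1Gi test claim against the default class
$ cat << EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
EOF
$ kubectl get pvc test-pvc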