Installing Kubernetes 1.23.5 with kubeadm


1. Environment Preparation

1.1 Configure the Aliyun yum repository

  yum -y install wget
  mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
  wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
  yum clean all
  yum makecache

1.2 Disable the firewall

  # Check firewall status
  firewall-cmd --state
  # Stop the firewall for now
  systemctl stop firewalld.service
  # Prevent the firewall from starting at boot
  systemctl disable firewalld.service

1.3 Disable SELinux

  # Check SELinux status
  getenforce
  # Disable SELinux temporarily
  setenforce 0
  # Disable SELinux permanently
  sed -i 's/^ *SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

1.4 Disable swap

  # Disable swap temporarily
  swapoff -a
  # Disable swap permanently
  sed -i.bak '/swap/s/^/#/' /etc/fstab
  # Verify
  free -g

1.5 Adjust kernel parameters and modules

Load the required kernel modules

  cat <<EOF> /etc/modules-load.d/k8s.conf
  br_netfilter
  EOF
  cat <<EOF> /etc/modules-load.d/containerd.conf
  overlay
  br_netfilter
  EOF
  modprobe overlay
  modprobe br_netfilter

Set the required sysctl parameters so that iptables can see bridged traffic; these settings persist across reboots.

  cat <<EOF> /etc/sysctl.d/99-kubernetes-cri.conf
  net.bridge.bridge-nf-call-iptables = 1
  net.ipv4.ip_forward = 1
  net.bridge.bridge-nf-call-ip6tables = 1
  EOF
  # Apply the sysctl parameters without rebooting
  sudo sysctl --system
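
As an optional sanity check (standard commands, not part of the original steps), confirm that the modules are loaded and the parameters took effect:

  lsmod | grep -e br_netfilter -e overlay
  sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.bridge.bridge-nf-call-ip6tables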

1.6 Enable IPVS

If IPVS is not enabled, kube-proxy falls back to iptables for packet forwarding, which is less efficient, so enabling IPVS is recommended (a quick way to confirm the mode later is shown after the commands below).

  cat <<EOF> /etc/sysconfig/modules/ipvs.modules
  #!/bin/bash
  modprobe -- ip_vs
  modprobe -- ip_vs_rr
  modprobe -- ip_vs_wrr
  modprobe -- ip_vs_sh
  modprobe -- nf_conntrack_ipv4
  EOF
  # Load the modules
  chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
  # Install the ipset package
  yum install ipset -y
  # Install the management tool ipvsadm
  yum install ipvsadm -y
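
Once the cluster has been initialized with kube-proxy in IPVS mode (section 2.2), you can confirm IPVS is actually in use; this check is an addition to the original steps, and ipvsadm should then list virtual servers for the cluster services:

  ipvsadm -Ln
  # Alternatively, the kube-proxy logs should mention the ipvs proxier
  kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs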

1.7 Synchronize server time

  yum install chrony -y
  systemctl enable chronyd
  systemctl start chronyd
  [root@master ~]# chronyc sources
  210 Number of sources = 4
  MS Name/IP address Stratum Poll Reach LastRx Last sample
  ===============================================================================
  ^- ntp.wdc1.us.leaseweb.net 2 9 201 329 -8125us[-8125us] +/- 264ms
  ^- ntp5.flashdance.cx 2 9 373 189 -43ms[ -43ms] +/- 223ms
  ^+ time.cloudflare.com 3 8 377 197 +38ms[ +38ms] +/- 121ms
  ^* 119.28.183.184 2 8 155 30m -8460us[ -13ms] +/- 67ms
  [root@master ~]# date
  Sat Mar 26 15:11:32 CST 2022

1.8 Install containerd

  yum install -y yum-utils device-mapper-persistent-data lvm2
  yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
  # Check the latest available version
  yum list containerd --showduplicates | sort -r
  yum install containerd -y
  # This installed `containerd.io-1.5.11-3.1.el7.x86_64`
  containerd config default > /etc/containerd/config.toml
  systemctl start containerd
  systemctl enable containerd

Configuration

  # Switch cgroups to systemd
  sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml
  # Change the sandbox (pause) image
  sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.5"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"#' /etc/containerd/config.toml
  systemctl daemon-reload
  systemctl restart containerd
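
To confirm both sed edits took effect, you can grep the config file (an optional check, not in the original article):

  grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml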

Install the CRI client crictl
Choose a release from https://github.com/kubernetes-sigs/cri-tools/releases/

  wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz
  tar zxvf crictl-v1.23.0-linux-amd64.tar.gz -C /usr/local/bin
  cat <<EOF> /etc/crictl.yaml
  runtime-endpoint: unix:///run/containerd/containerd.sock
  image-endpoint: unix:///run/containerd/containerd.sock
  timeout: 10
  debug: false
  EOF
  # Verify that it works
  crictl pull nginx:alpine
  crictl images
  crictl rmi nginx:alpine

1.9 Set the hostname and hosts file

Set the hostname

  # On the master node
  hostnamectl set-hostname master
  # On node1
  hostnamectl set-hostname node1
  # On node2
  hostnamectl set-hostname node2

Add hosts entries

  # Append (>>) so the default localhost entries in /etc/hosts are preserved
  cat <<EOF>> /etc/hosts
  192.168.4.27 master
  192.168.4.28 node1
  192.168.4.29 node2
  EOF

2. Installing Kubernetes

2.1 Install kubelet, kubeadm and kubectl

Add the Kubernetes repository

  cat <<EOF > /etc/yum.repos.d/kubernetes.repo
  [kubernetes]
  name=Kubernetes
  baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
  enabled=1
  gpgcheck=0
  repo_gpgcheck=0
  gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
         http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
  EOF

Then install kubeadm, kubelet and kubectl

  # Check the available versions; the latest is 1.23.5-0
  yum list kubeadm --showduplicates | sort -r
  yum install -y kubelet-1.23.5-0 kubectl-1.23.5-0 kubeadm-1.23.5-0
  [root@master ~]# kubeadm version
  kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.5", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:57:37Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}

Configure kubelet

  cat <<EOF> /etc/sysconfig/kubelet
  KUBELET_KUBEADM_ARGS="--container-runtime=remote --runtime-request-timeout=15m --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
  EOF

Start kubelet and enable it at boot

  systemctl start kubelet
  systemctl enable kubelet
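
Note (not from the original article): until kubeadm init or kubeadm join has run, kubelet restarts in a loop because it has no node configuration yet; that is expected and resolves itself after initialization. Its state can be watched with:

  systemctl status kubelet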

2.2 Initialize the Kubernetes cluster

2.2.1 Master node (choose one of the two methods below)

Method 1: initialize from a configuration file:

  kubeadm config print init-defaults > kubeadm.yaml
  # Edit the file so that it looks like this:
  cat <<EOF> kubeadm.yaml
  apiVersion: kubeadm.k8s.io/v1beta3
  bootstrapTokens:
  - groups:
    - system:bootstrappers:kubeadm:default-node-token
    token: abcdef.0123456789abcdef
    ttl: 24h0m0s
    usages:
    - signing
    - authentication
  kind: InitConfiguration
  localAPIEndpoint:
    advertiseAddress: 192.168.4.27 # internal IP of the apiserver node
    bindPort: 6443
  nodeRegistration:
    criSocket: /run/containerd/containerd.sock # changed to containerd
    imagePullPolicy: IfNotPresent
    name: master
    taints:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
  ---
  apiServer:
    timeoutForControlPlane: 4m0s
  apiVersion: kubeadm.k8s.io/v1beta3
  certificatesDir: /etc/kubernetes/pki
  clusterName: kubernetes
  controllerManager: {}
  dns:
    type: CoreDNS # DNS type
  etcd:
    local:
      dataDir: /var/lib/etcd
  imageRepository: registry.aliyuncs.com/google_containers # change to a registry the images can actually be pulled from
  kind: ClusterConfiguration
  kubernetesVersion: 1.23.5 # Kubernetes version
  networking:
    dnsDomain: cluster.local
    podSubnet: 10.244.0.0/16
    serviceSubnet: 10.96.0.0/12
  scheduler: {}
  ---
  apiVersion: kubeproxy.config.k8s.io/v1alpha1
  kind: KubeProxyConfiguration
  mode: ipvs # kube-proxy mode
  EOF
  kubeadm init --config kubeadm.yaml

Method 2: initialize directly from the command line:

With this method the kube-proxy mode defaults to iptables; it can be changed afterwards with kubectl edit configmap kube-proxy -n kube-system, as in the sketch below.
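
A minimal sketch of switching an existing cluster to IPVS mode (the pod deletion just makes the kube-proxy DaemonSet recreate its pods with the new setting):

  # change mode: "" (iptables) to mode: "ipvs" in the config map
  kubectl edit configmap kube-proxy -n kube-system
  # restart kube-proxy so the change takes effect
  kubectl -n kube-system delete pods -l k8s-app=kube-proxy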

  kubeadm init \
    --kubernetes-version v1.23.5 \
    --apiserver-advertise-address 192.168.4.27 \
    --control-plane-endpoint master \
    --image-repository registry.aliyuncs.com/google_containers \
    --pod-network-cidr 10.244.0.0/16 \
    --cri-socket /run/containerd/containerd.sock
  • --kubernetes-version: the version of Kubernetes to install
  • --apiserver-advertise-address: the address of the Kubernetes master node
  • --pod-network-cidr: the IP range for the pod network

If your host network is 192.168.x.x, set pod-network-cidr to 10.0.0.0/16; if your host network is 10.0.x.x, use 192.168.0.0/16. This matters when using the Calico network plugin: if the pod CIDR overlaps the host network, CoreDNS will not start after Calico is deployed and reports "coredns Failed to list *v1.Endpoints" (see https://blog.csdn.net/u011663005/article/details/87937800 for a fix).

  • --image-repository: the registry to pull control-plane images from

Copy the kubeconfig file

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
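
At this point kubectl should be able to reach the API server; the master node will typically show NotReady until the network plugin from section 2.3 is installed:

  kubectl get nodes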

2.2.2 Worker nodes

  kubeadm join master:6443 --token f6e3hv.uk6ctfgehstt92jw \
    --discovery-token-ca-cert-hash sha256:9962caed607e31de7b93732347c1ac681f216c290e6b35f91f3f5d67cd12cbcf
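
If the join command printed by kubeadm init has been lost, or the token has expired (the default TTL is 24 hours), a new one can be generated on the master:

  kubeadm token create --print-join-command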

2.3 Install the Calico network plugin (run on the master node)

  mkdir -p /root/i && cd /root/i
  # Download the manifest
  curl https://docs.projectcalico.org/manifests/calico.yaml -o /root/i/calico.yaml
  # Check the image version in the manifest (v3.22.2 here); if it differs, the replacements below will not match
  # Replace the images
  sed -i 's#docker.io/calico/cni:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/cni:v3.22.2#' /root/i/calico.yaml
  sed -i 's#docker.io/calico/pod2daemon-flexvol:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/pod2daemon-flexvol:v3.22.2#' /root/i/calico.yaml
  sed -i 's#docker.io/calico/node:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/node:v3.22.2#' /root/i/calico.yaml
  sed -i 's#docker.io/calico/kube-controllers:v3.22.2#registry.cn-shanghai.aliyuncs.com/wanfei/kube-controllers:v3.22.2#' /root/i/calico.yaml
  # Apply
  kubectl apply -f /root/i/calico.yaml

Wait a few minutes

  [root@master i]# kubectl get pods -n kube-system
  NAME READY STATUS RESTARTS AGE
  calico-kube-controllers-57845f44bb-tpvbr 1/1 Running 0 79s
  calico-node-fpfxj 1/1 Running 0 79s
  calico-node-qcvqx 1/1 Running 0 79s
  calico-node-r4gsf 1/1 Running 0 79s
  coredns-6d8c4cb4d-7bclr 1/1 Running 0 29m
  coredns-6d8c4cb4d-djwxf 1/1 Running 0 29m
  etcd-master 1/1 Running 0 29m
  kube-apiserver-master 1/1 Running 0 29m
  kube-controller-manager-master 1/1 Running 0 29m
  kube-proxy-pjkmd 1/1 Running 0 7m35s
  kube-proxy-snb84 1/1 Running 0 7m46s
  kube-proxy-tp7wm 1/1 Running 0 29m
  kube-scheduler-master 1/1 Running 0 29m
  [root@master i]# kubectl get nodes
  NAME STATUS ROLES AGE VERSION
  master Ready control-plane,master 29m v1.23.5
  node1 Ready <none> 8m4s v1.23.5
  node2 Ready <none> 7m53s v1.23.5

3. Installing Other Tools

3.1 Persistent storage with NFS

3.1.1 Set up the NFS server (any node will do; here it is installed on the master node)

  yum -y install nfs-utils rpcbind
  # Set permissions
  mkdir /nfsdata && chmod 666 /nfsdata && chown nfsnobody /nfsdata
  # Configure the export
  cat <<EOF> /etc/exports
  /nfsdata *(rw,no_root_squash,no_all_squash,sync)
  EOF
  # Start the services
  systemctl start rpcbind.service
  systemctl enable rpcbind.service
  systemctl start nfs.service
  systemctl enable nfs.service
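
Optionally, confirm on the server that the directory is actually exported (an extra check, not part of the original steps):

  exportfs -v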

3.1.2 Install the NFS client (on all worker nodes)

If the client is not installed, the pod for the StorageClass nfs-client auto-provisioner (also called the Provisioner) will stay stuck in ContainerCreating on whichever node it is scheduled to:

  [root@master nfs-client]# kubectl get pods -o wide
  NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
  nfs-client-provisioner-798cfd7476-zrndd 0/1 ContainerCreating 0 3m53s <none> node1 <none> <none>

Install

  yum -y install nfs-utils rpcbind
  systemctl start rpcbind.service
  systemctl enable rpcbind.service
  systemctl start nfs.service
  systemctl enable nfs.service
  [root@node1 ~]# showmount -e 192.168.4.27
  Export list for 192.168.4.27:
  /nfsdata *

3.1.3 Install nfs-client-provisioner

Set up a StorageClass so that PVs are generated automatically.

nfs-rbac.yaml

  cat <<EOF> nfs-rbac.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default # set the namespace to match your environment; the same applies below
  ---
  kind: ClusterRole
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: nfs-client-provisioner-runner
  rules:
    - apiGroups: [""]
      resources: ["persistentvolumes"]
      verbs: ["get", "list", "watch", "create", "delete"]
    - apiGroups: [""]
      resources: ["persistentvolumeclaims"]
      verbs: ["get", "list", "watch", "update"]
    - apiGroups: ["storage.k8s.io"]
      resources: ["storageclasses"]
      verbs: ["get", "list", "watch"]
    - apiGroups: [""]
      resources: ["events"]
      verbs: ["create", "update", "patch"]
  ---
  kind: ClusterRoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: run-nfs-client-provisioner
  subjects:
    - kind: ServiceAccount
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
  roleRef:
    kind: ClusterRole
    name: nfs-client-provisioner-runner
    apiGroup: rbac.authorization.k8s.io
  ---
  kind: Role
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: leader-locking-nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
  rules:
    - apiGroups: [""]
      resources: ["endpoints"]
      verbs: ["get", "list", "watch", "create", "update", "patch"]
  ---
  kind: RoleBinding
  apiVersion: rbac.authorization.k8s.io/v1
  metadata:
    name: leader-locking-nfs-client-provisioner
  subjects:
    - kind: ServiceAccount
      name: nfs-client-provisioner
      # replace with namespace where provisioner is deployed
      namespace: default
  roleRef:
    kind: Role
    name: leader-locking-nfs-client-provisioner
    apiGroup: rbac.authorization.k8s.io
  EOF

nfs-storage.yaml

  cat <<EOF> nfs-storage.yaml
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: nfs-storage
  provisioner: nfs-storage # must match the PROVISIONER_NAME environment variable in the provisioner deployment
  parameters:
    archiveOnDelete: "true"
  reclaimPolicy: Retain
  EOF

nfs-provisioner.yaml

  cat <<EOF> nfs-provisioner.yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: nfs-client-provisioner
    labels:
      app: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default # keep consistent with the namespace in the RBAC manifests
  spec:
    replicas: 1
    strategy:
      type: Recreate
    selector:
      matchLabels:
        app: nfs-client-provisioner
    template:
      metadata:
        labels:
          app: nfs-client-provisioner
      spec:
        serviceAccountName: nfs-client-provisioner
        containers:
          - name: nfs-client-provisioner
            # image: quay.io/external_storage/nfs-client-provisioner:latest
            # Note: on Kubernetes 1.20 and later the image above no longer works properly; after much
            # troubleshooting, advice in an issue on the official GitHub repo suggested using
            # easzlab/nfs-subdir-external-provisioner:v4.0.2 instead, which solved it. It was downloaded
            # and pushed to a personal registry here.
            image: registry.cn-shanghai.aliyuncs.com/wanfei/nfs-subdir-external-provisioner:v4.0.2
            volumeMounts:
              - name: nfs-client-root
                mountPath: /persistentvolumes
            env:
              - name: PROVISIONER_NAME
                value: nfs-storage # provisioner name; must match the provisioner field in nfs-storage.yaml
              - name: NFS_SERVER
                value: 192.168.4.27 # NFS server IP address
              - name: NFS_PATH
                value: "/nfsdata" # exported NFS path
        volumes:
          - name: nfs-client-root
            nfs:
              server: 192.168.4.27 # NFS server IP address
              path: "/nfsdata" # exported NFS path
  EOF

Install

  kubectl apply -f .
  [root@master nfs-client]# kubectl get pods | grep nfs-client
  nfs-client-provisioner-777fbf8b55-2ptbm 1/1 Running 0 34s
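
To verify dynamic provisioning end to end, a small test PVC can be created against the new StorageClass (test-pvc.yaml and test-claim are illustrative names, not from the original article); it should become Bound and a matching directory should appear under /nfsdata:

  cat <<EOF> test-pvc.yaml
  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: test-claim
  spec:
    storageClassName: nfs-storage
    accessModes:
      - ReadWriteMany
    resources:
      requests:
        storage: 1Mi
  EOF
  kubectl apply -f test-pvc.yaml
  kubectl get pvc test-claim
  # Clean up when done
  kubectl delete -f test-pvc.yaml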

Set the default StorageClass (marked as default)

  kubectl patch storageclass nfs-storage -p '{ "metadata" : { "annotations" :{"storageclass.kubernetes.io/is-default-class": "true"}}}'
  [root@master ~]# kubectl get sc | grep nfs-storage
  nfs-storage (default) nfs-storage Retain Immediate false 71s
  # To remove the default flag, set the value to "false"
  kubectl patch storageclass nfs-storage -p '{ "metadata" : { "annotations" :{"storageclass.kubernetes.io/is-default-class": "false"}}}'

Reference: https://blog.csdn.net/m0_48898914/article/details/121752973

3.2 Install Helm

3.2.1 Download

Download from https://github.com/helm/helm/releases

  [root@master helm]# wget https://get.helm.sh/helm-v3.8.1-linux-amd64.tar.gz

If the download is too slow on the server, fetch the archive from a machine with better connectivity (e.g. through a proxy) and upload it to the server; to transfer files with lrzsz, see https://blog.csdn.net/qq_22356995/article/details/104071562

Extract

  [root@master helm]# tar -xvf helm-v3.8.1-linux-amd64.tar.gz
  linux-amd64/
  linux-amd64/helm
  linux-amd64/README.md
  linux-amd64/LICENSE

3.2.2 Install

Move the helm binary into /usr/local/bin

  [root@master helm]# mv linux-amd64/helm /usr/local/bin
  [root@master helm]# helm version
  version.BuildInfo{Version:"v3.8.1", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}

3.2.3 Add a few repositories

  helm repo add apphub https://apphub.aliyuncs.com
  helm repo add stable https://charts.helm.sh/stable
  helm repo add bitnami https://charts.bitnami.com/bitnami
  helm repo update
  # For example, search for a redis chart
  [root@master helm]# helm search repo redis
  NAME CHART VERSION APP VERSION DESCRIPTION
  apphub/prometheus-redis-exporter 3.2.2 1.3.4 Prometheus exporter for Redis metrics
  apphub/redis 10.5.3 5.0.7 Open source, advanced key-value store. It is of...
  apphub/redis-cache 0.5.0 4.0.12-alpine A pure in-memory redis cache, using statefulset...
  apphub/redis-ha 4.3.3 5.0.6 Highly available Kubernetes implementation of R...
  apphub/redis-operator 1.0.0 Redis Operator provides high availability redis...
  apphub/redispapa 0.0.1 0.0.1 A tool that monitors Redis usage based on redis info...
  bitnami/redis 16.6.0 6.2.6 Redis(TM) is an open source, advanced key-value...
  bitnami/redis-cluster 7.4.1 6.2.6 Redis(TM) is an open source, scalable, distribu...
  stable/prometheus-redis-exporter 3.5.1 1.3.4 DEPRECATED Prometheus exporter for Redis metrics
  stable/redis 10.5.7 5.0.7 DEPRECATED Open source, advanced key-value stor...
  stable/redis-ha 4.4.6 5.0.6 DEPRECATED - Highly available Kubernetes implem...
  stable/sensu 0.2.5 0.28 DEPRECATED Sensu monitoring framework backed by...
  apphub/codis 3.2 3.2 A Helm chart for Codis
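
As a quick end-to-end check (the release name my-redis is illustrative, not from the original article), a chart from one of these repositories can be installed; with the default StorageClass from section 3.1 in place, its PersistentVolumeClaims should be provisioned on the NFS share:

  helm install my-redis bitnami/redis
  helm status my-redis
  # Remove the test release afterwards
  helm uninstall my-redis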

This article is reposted from: https://blog.csdn.net/qq_38983728/article/details/123755691
