This document describes how to install a single control-plane Kubernetes cluster (v1.15) with kubeadm on CentOS, then deploy the external OpenStack cloud provider, and use the Cinder CSI plugin to consume Cinder volumes as persistent volumes in Kubernetes.
Preparation in OpenStack
This cluster runs on OpenStack VMs, so let's create a few things in OpenStack first (a CLI sketch follows this list):
- A project/tenant for the Kubernetes cluster
- A user in this project for Kubernetes, to query node information and attach volumes etc.
- A private network and subnet
- A router for this private network, connected to a public network for floating IPs
- A security group for all Kubernetes VMs
- A VM as the control-plane node and a few VMs as worker nodes
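A minimal sketch of the preparation above with the openstack CLI. All names (k8s-cluster, k8s-user, k8s-net, k8s-subnet, k8s-router, k8s-sg, master1, worker1), the CIDR, the flavor and the image are placeholders for illustration; adjust them to your cloud.

```bash
openstack project create k8s-cluster
openstack user create --project k8s-cluster --password-prompt k8s-user
openstack role add --project k8s-cluster --user k8s-user member   # role name may differ (e.g. _member_)

openstack network create k8s-net
openstack subnet create --network k8s-net --subnet-range 192.168.1.0/24 k8s-subnet
openstack router create k8s-router
openstack router set --external-gateway public k8s-router
openstack router add subnet k8s-router k8s-subnet

openstack security group create k8s-sg

# One control-plane node and a few workers, all in the same network and security group
openstack server create --flavor m1.large --image centos7 \
  --network k8s-net --security-group k8s-sg master1
openstack server create --flavor m1.large --image centos7 \
  --network k8s-net --security-group k8s-sg worker1
```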
The security group will have the following rules to open the ports for Kubernetes.

Control-plane node
Protocol | Port Number | Description |
---|---|---|
TCP | 6443 | Kubernetes API server |
TCP | 2379-2380 | etcd server client API |
TCP | 10250 | Kubelet API |
TCP | 10251 | kube-scheduler |
TCP | 10252 | kube-controller-manager |
TCP | 10255 | Read-only Kubelet API |
Worker nodes

Protocol | Port Number | Description |
---|---|---|
TCP | 10250 | Kubelet API |
TCP | 10255 | Read-only Kubelet API |
TCP | 30000-32767 | NodePort Services |
CNI ports on both control-plane and worker nodes

Protocol | Port Number | Description |
---|---|---|
TCP | 179 | Calico BGP network |
TCP | 9099 | Calico felix (health check) |
UDP | 8285 | Flannel |
UDP | 8472 | Flannel |
TCP | 6781-6784 | Weave Net |
UDP | 6783-6784 | Weave Net |
The CNI-specific ports only need to be opened when that particular CNI plugin is used. In this guide we use Weave Net, so only the Weave Net ports (TCP 6781-6784 and UDP 6783-6784) need to be opened in the security group; an example of such rules is sketched below.
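For example, the control-plane ports, the NodePort range and the Weave Net ports listed above could be opened with rules like the following (a sketch reusing the placeholder group name k8s-sg; by default these rules allow ingress from 0.0.0.0/0, so restrict --remote-ip as needed):

```bash
# Kubernetes control-plane and worker ports
openstack security group rule create --protocol tcp --dst-port 6443 k8s-sg
openstack security group rule create --protocol tcp --dst-port 2379:2380 k8s-sg
openstack security group rule create --protocol tcp --dst-port 10250:10252 k8s-sg
openstack security group rule create --protocol tcp --dst-port 10255 k8s-sg
openstack security group rule create --protocol tcp --dst-port 30000:32767 k8s-sg

# Weave Net ports (the CNI used in this guide)
openstack security group rule create --protocol tcp --dst-port 6781:6784 k8s-sg
openstack security group rule create --protocol udp --dst-port 6783:6784 k8s-sg
```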
The control-plane node needs at least 2 cores and 4 GB RAM. After the VM is launched, verify its hostname and make sure it is the same as the node name in Nova. If the hostname is not resolvable, add it to /etc/hosts.

For example, if the VM is named master1 and its internal IP is 192.168.1.4, add it to /etc/hosts and set the hostname to master1:
```bash
echo "192.168.1.4 master1" >> /etc/hosts

hostnamectl set-hostname master1
```
Install Docker and Kubernetes

Next, we'll follow the official documents to install docker and Kubernetes with kubeadm.

Install Docker following the steps from the container runtime documentation.

Note that it is a best practice to use systemd as the cgroup driver for Kubernetes. If you use an internal container registry, add it to the docker config (see the example after the block below).
```bash
# Install Docker CE
## Set up the repository
### Install required packages.

yum install yum-utils device-mapper-persistent-data lvm2

### Add Docker repository.

yum-config-manager \
  --add-repo \
  https://download.docker.com/linux/centos/docker-ce.repo

## Install Docker CE.

yum update && yum install docker-ce-18.06.2.ce

## Create /etc/docker directory.

mkdir /etc/docker

# Configure the Docker daemon

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart Docker
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
```
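The block above does not include a registry entry. If you do use an internal registry, one possible way is to list it under insecure-registries in /etc/docker/daemon.json and restart Docker; the hostname registry.example.internal:5000 below is a placeholder, and if your registry acts as a mirror you would use registry-mirrors instead.

```bash
# Hypothetical internal-registry entry added to the daemon.json shown above
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "insecure-registries": ["registry.example.internal:5000"]
}
EOF
systemctl restart docker
```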
Install kubeadm following the steps from the kubeadm installation documentation.
```bash
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Set SELinux in permissive mode (effectively disabling it)
# Caveat: In a production environment you may not want to disable SELinux, please refer to Kubernetes documents about SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

systemctl enable --now kubelet

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# check if br_netfilter module is loaded
lsmod | grep br_netfilter

# if not, load it explicitly with
modprobe br_netfilter
```
Now, following the official document on creating a single control-plane cluster with kubeadm, let's create the single control-plane cluster.

We mostly follow that document, but add a few extra things for the cloud provider. To make things clearer, we use a kubeadm-config.yml configuration file for the control-plane node. In this config we specify the external OpenStack cloud provider and where to find its configuration. We also enable the storage API in the API server's runtime config, so we can use OpenStack volumes as persistent volumes in Kubernetes.
```yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "external"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: "v1.15.1"
apiServer:
  extraArgs:
    enable-admission-plugins: NodeRestriction
    runtime-config: "storage.k8s.io/v1=true"
controllerManager:
  extraArgs:
    external-cloud-volume-plugin: openstack
  extraVolumes:
  - name: "cloud-config"
    hostPath: "/etc/kubernetes/cloud-config"
    mountPath: "/etc/kubernetes/cloud-config"
    readOnly: true
    pathType: File
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.224.0.0/16"
  dnsDomain: "cluster.local"
```
Now we'll create the cloud provider configuration for OpenStack, /etc/kubernetes/cloud-config. Note that the tenant here is the one we created for all Kubernetes VMs at the beginning; all VMs should be launched in this project/tenant. You also need to create a user in this tenant for Kubernetes to do its queries. The ca-file is the CA root certificate for OpenStack's API endpoint, for example https://openstack.cloud:5000/v3; at the time of writing the cloud provider doesn't allow insecure connections (skipping the CA check).
```
[Global]
region=RegionOne
username=username
password=password
auth-url=https://openstack.cloud:5000/v3
tenant-id=14ba698c0aec4fd6b7dc8c310f664009
domain-id=default
ca-file=/etc/kubernetes/ca.pem

[LoadBalancer]
subnet-id=b4a9a292-ea48-4125-9fb2-8be2628cb7a1
floating-network-id=bc8a590a-5d65-4525-98f3-f7ef29c727d5

[BlockStorage]
bs-version=v2

[Networking]
public-network-name=public
ipv6-support-disabled=false
```
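The IDs referenced in this configuration can be looked up with the OpenStack CLI; a short sketch, assuming the placeholder names k8s-cluster and k8s-net from the preparation step:

```bash
# tenant-id in [Global]
openstack project show k8s-cluster -f value -c id
# subnet-id in [LoadBalancer]
openstack subnet list --network k8s-net
# floating-network-id in [LoadBalancer] (external networks)
openstack network list --external
```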
Next, let's run kubeadm to initiate the control-plane node:
```bash
kubeadm init --config=kubeadm-config.yml
```
With the initialization completed, copy the admin config to .kube:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
At this stage, the control-plane node is created but not ready. All nodes have the taint node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule and are waiting to be initialized by the cloud-controller-manager.
```
# kubectl describe no master1
Name:               master1
Roles:              master
......
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
......
```
Now, following the document on using the controller manager with kubeadm, deploy the OpenStack cloud controller manager into the cluster.

Create a secret with the cloud-config for the openstack cloud provider:
```bash
kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat /etc/kubernetes/cloud-config)" --dry-run -o yaml > cloud-config-secret.yaml
kubectl apply -f cloud-config-secret.yaml
```
Get the CA certificate for OpenStack's API endpoint and put it into /etc/kubernetes/ca.pem.
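The root CA is normally distributed by the cloud operator. If you only need to check which CA signed the endpoint's certificate, one way is to dump the presented chain with openssl, and then save the root certificate (or the file from your cloud team) as /etc/kubernetes/ca.pem:

```bash
# Show the certificate chain presented by the OpenStack API endpoint
openssl s_client -connect openstack.cloud:5000 -showcerts </dev/null
```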
Create the RBAC resources:
```bash
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-roles.yaml
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml
```
We'll run the OpenStack cloud controller manager as a DaemonSet rather than a pod. The manager only runs on control-plane nodes, so if there are multiple control-plane nodes, multiple pods will run for high availability. Create the following openstack-cloud-controller-manager-ds.yaml and apply it.
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openstack-cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: openstack-cloud-controller-manager
spec:
  selector:
    matchLabels:
      k8s-app: openstack-cloud-controller-manager
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: openstack-cloud-controller-manager
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      securityContext:
        runAsUser: 1001
      tolerations:
      - key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
      serviceAccountName: cloud-controller-manager
      containers:
        - name: openstack-cloud-controller-manager
          image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.15.0
          args:
            - /bin/openstack-cloud-controller-manager
            - --v=1
            - --cloud-config=$(CLOUD_CONFIG)
            - --cloud-provider=openstack
            - --use-service-account-credentials=true
            - --address=127.0.0.1
          volumeMounts:
            - mountPath: /etc/kubernetes/pki
              name: k8s-certs
              readOnly: true
            - mountPath: /etc/ssl/certs
              name: ca-certs
              readOnly: true
            - mountPath: /etc/config
              name: cloud-config-volume
              readOnly: true
            - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
              name: flexvolume-dir
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
          resources:
            requests:
              cpu: 200m
          env:
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
      hostNetwork: true
      volumes:
      - hostPath:
          path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
          type: DirectoryOrCreate
        name: flexvolume-dir
      - hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
        name: k8s-certs
      - hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
        name: ca-certs
      - name: cloud-config-volume
        secret:
          secretName: cloud-config
      - name: ca-cert
        secret:
          secretName: openstack-ca-cert
```
When the controller manager is running, it queries OpenStack for information about the nodes and removes the taint. In the node info you will see the UUID of the VM in OpenStack.
```
# kubectl describe no master1
Name:               master1
Roles:              master
......
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
......
sage:docker: network plugin is not ready: cni config uninitialized
......
PodCIDR:            10.224.0.0/24
ProviderID:         openstack:///548e3c46-2477-4ce2-968b-3de1314560a5
```
Now install your favourite CNI and the control-plane node will become ready.

For example, to install Weave Net, run the following command:
```bash
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
Next, we'll set up the worker nodes.

First, install docker and kubeadm in the same way as on the control-plane node. To join the workers to the cluster we need the token and ca cert hash from the output of the control-plane installation. If the token has expired or was lost, we can recreate it with the following commands.
```bash
# check if token is expired
kubeadm token list

# re-create token and show join command
kubeadm token create --print-join-command
```
Create a kubeadm-config.yml configuration file for the worker nodes with the token and ca cert hash above.
```yaml
apiVersion: kubeadm.k8s.io/v1beta2
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.1.7:6443
    token: 0c0z4p.dnafh6vnmouus569
    caCertHashes: ["sha256:fcb3e956a6880c05fc9d09714424b827f57a6fdc8afc44497180905946527adf"]
kind: JoinConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "external"
```
apiServerEndpoint is the control-plane node; token and caCertHashes can be taken from the join command printed in the output of the kubeadm token create command above.
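If the hash is lost, it can also be recomputed on the control-plane node from the cluster CA certificate:

```bash
# sha256 hash of the cluster CA public key, as used in caCertHashes
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```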
Run kubeadm and the worker nodes will join the cluster:
```bash
kubeadm join --config kubeadm-config.yml
```
At this stage we have a working Kubernetes cluster with the external OpenStack cloud provider. The provider tells Kubernetes about the mapping between Kubernetes nodes and OpenStack VMs. If Kubernetes wants to attach a persistent volume to a pod, it can find out from this mapping which OpenStack VM the pod is running on, and attach the underlying OpenStack volume to that VM accordingly.
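An optional way to inspect this mapping is to print each node's ProviderID, which embeds the UUID of the backing OpenStack VM:

```bash
kubectl get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID
```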
Deploy Cinder CSI

The integration with Cinder is provided by the external Cinder CSI plugin, as described in the Cinder CSI documentation.

We'll perform the following steps to install the Cinder CSI plugin.

First, create a secret with the CA certificate for OpenStack's API endpoint. It is the same certificate file we used for the cloud provider above.
```bash
kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run -o yaml > openstack-ca-cert.yaml
kubectl apply -f openstack-ca-cert.yaml
```
Then create the RBAC resources:
```bash
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/release-1.15/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml
```
The Cinder CSI plugin includes a controller plugin and a node plugin. The controller communicates with the Kubernetes APIs and the Cinder APIs to create/attach/detach/delete Cinder volumes. The node plugin in turn runs on each worker node to bind the storage device (the attached volume) to a pod, and unbind it on deletion. Create cinder-csi-controllerplugin.yaml and apply it to create the CSI controller:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: csi-cinder-controller-service
  namespace: kube-system
  labels:
    app: csi-cinder-controllerplugin
spec:
  selector:
    app: csi-cinder-controllerplugin
  ports:
    - name: dummy
      port: 12345
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-cinder-controllerplugin
  namespace: kube-system
spec:
  serviceName: "csi-cinder-controller-service"
  replicas: 1
  selector:
    matchLabels:
      app: csi-cinder-controllerplugin
  template:
    metadata:
      labels:
        app: csi-cinder-controllerplugin
    spec:
      serviceAccount: csi-cinder-controller-sa
      containers:
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v1.0.1
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v1.0.1
          args:
            - "--provisioner=csi-cinderplugin"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-snapshotter
          image: quay.io/k8scsi/csi-snapshotter:v1.0.1
          args:
            - "--connection-timeout=15s"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/csi/sockets/pluginproxy/
              name: socket-dir
        - name: cinder-csi-plugin
          image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
          args:
            - /bin/cinder-csi-plugin
            - "--v=5"
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
            - "--cluster=$(CLUSTER_NAME)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
            - name: CLUSTER_NAME
              value: kubernetes
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/csi/sockets/pluginproxy/
            type: DirectoryOrCreate
        - name: secret-cinderplugin
          secret:
            secretName: cloud-config
        - name: ca-cert
          secret:
            secretName: openstack-ca-cert
```
Create cinder-csi-nodeplugin.yaml and apply it to create the CSI node plugin:
```yaml
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-cinder-nodeplugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-cinder-nodeplugin
  template:
    metadata:
      labels:
        app: csi-cinder-nodeplugin
    spec:
      serviceAccount: csi-cinder-node-sa
      hostNetwork: true
      containers:
        - name: node-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/cinder.csi.openstack.org /registration/cinder.csi.openstack.org-reg.sock"]
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: cinder-csi-plugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
          args:
            - /bin/cinder-csi-plugin
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: kubelet-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: pods-cloud-data
              mountPath: /var/lib/cloud/data
              readOnly: true
            - name: pods-probe-dir
              mountPath: /dev
              mountPropagation: "HostToContainer"
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/cinder.csi.openstack.org
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: kubelet-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - name: pods-cloud-data
          hostPath:
            path: /var/lib/cloud/data
            type: Directory
        - name: pods-probe-dir
          hostPath:
            path: /dev
            type: Directory
        - name: secret-cinderplugin
          secret:
            secretName: cloud-config
        - name: ca-cert
          secret:
            secretName: openstack-ca-cert
```
When they are both running, create a storage class for Cinder:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-cinderplugin
provisioner: csi-cinderplugin
```
Then we can create a PVC with this storage class:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myvol
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-sc-cinderplugin
```
When the PVC is created, a Cinder volume is created correspondingly:
```
# kubectl get pvc
NAME    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
myvol   Bound    pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad   1Gi        RWO            csi-sc-cinderplugin   3s
```
In OpenStack the volume name matches the generated name of the Kubernetes persistent volume. In this example it would be: pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad
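Optionally, this can be checked from the OpenStack side (assuming OpenStack CLI credentials for the Kubernetes project are loaded in the environment):

```bash
openstack volume list | grep pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad
```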
Now we can create a pod that uses the PVC:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx
      ports:
        - name: web
          containerPort: 80
          hostPort: 8081
          protocol: TCP
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myvol
```
When the pod is running, the volume is attached to the pod. If we go back to OpenStack, we can see that the Cinder volume is mounted to the worker node where the pod runs.
```
# openstack volume show 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f
+--------------------------------+--------------------------------------------------------------------+
| Field                          | Value                                                              |
+--------------------------------+--------------------------------------------------------------------+
| attachments                    | [{u"server_id": u"1c5e1439-edfa-40ed-91fe-2a0e12bc7eb4",           |
|                                |   u"attachment_id": u"11a15b30-5c24-41d4-86d9-d92823983a32",       |
|                                |   u"attached_at": u"2019-07-24T05:02:34.000000",                   |
|                                |   u"host_name": u"compute-6",                                      |
|                                |   u"volume_id": u"6b5f3296-b0eb-40cd-bd4f-2067a0d6287f",           |
|                                |   u"device": u"/dev/vdb",                                          |
|                                |   u"id": u"6b5f3296-b0eb-40cd-bd4f-2067a0d6287f"}]                 |
| availability_zone              | nova                                                               |
| bootable                       | false                                                              |
| consistencygroup_id            | None                                                               |
| created_at                     | 2019-07-24T05:02:18.000000                                         |
| description                    | Created by OpenStack Cinder CSI driver                             |
| encrypted                      | False                                                              |
| id                             | 6b5f3296-b0eb-40cd-bd4f-2067a0d6287f                               |
| migration_status               | None                                                               |
| multiattach                    | False                                                              |
| name                           | pvc-14b8bc68-6c4c-4dc6-ad79-4cb29a81faad                           |
| os-vol-host-attr:host          | rbd:volumes@rbd#rbd                                                |
| os-vol-mig-status-attr:migstat | None                                                               |
| os-vol-mig-status-attr:name_id | None                                                               |
| os-vol-tenant-attr:tenant_id   | 14ba698c0aec4fd6b7dc8c310f664009                                   |
| properties                     | attached_mode="rw", cinder.csi.openstack.org/cluster="kubernetes"  |
| replication_status             | None                                                               |
| size                           | 1                                                                  |
| snapshot_id                    | None                                                               |
| source_volid                   | None                                                               |
| status                         | in-use                                                             |
| type                           | rbd                                                                |
| updated_at                     | 2019-07-24T05:02:35.000000                                         |
| user_id                        | 5f6a7a06f4e3456c890130d56babf591                                   |
+--------------------------------+--------------------------------------------------------------------+
```
Summary

In this walk-through, we deployed a Kubernetes cluster on OpenStack VMs and integrated it with the external OpenStack cloud provider. Then, on this Kubernetes cluster, we deployed the Cinder CSI plugin, which can create Cinder volumes and expose them to Kubernetes as persistent volumes.
Original post: https://kubernetes.io/blog/2020/02/07/deploying-external-openstack-cloud-provider-with-kubeadm/