Lab environment
Lab architecture
<span class="line">lab1: etcd master haproxy keepalived 11.11.11.111</span> <span class="line">lab2: etcd master haproxy keepalived 11.11.11.112</span> <span class="line">lab3: etcd master haproxy keepalived 11.11.11.113</span> <span class="line">lab4: node 11.11.11.114</span> <span class="line">lab5: node 11.11.11.115</span> <span class="line">lab6: node 11.11.11.116</span> <span class="line">vip(loadblancer ip): 11.11.11.110</span>
Vagrantfile used for the lab
<span class="line"><span class="comment"># -*- mode: ruby -*-</span></span> <span class="line"><span class="comment"># vi: set ft=ruby :</span></span> <span class="line">ENV[<span class="string">"LC_ALL"</span>] = <span class="string">"en_US.UTF-8"</span></span> <span class="line">Vagrant.configure(<span class="string">"2"</span>) <span class="keyword">do</span> <span class="params">|config|</span></span> <span class="line"> (<span class="number">1</span>..<span class="number">6</span>).each <span class="keyword">do</span> <span class="params">|i|</span></span> <span class="line"> config.vm.define <span class="string">"lab<span class="subst">#{i}</span>"</span> <span class="keyword">do</span> <span class="params">|node|</span></span> <span class="line"> node.vm.box = <span class="string">"centos-7.4-docker-17"</span></span> <span class="line"> node.ssh.insert_key = <span class="literal">false</span></span> <span class="line"> node.vm.hostname = <span class="string">"lab<span class="subst">#{i}</span>"</span></span> <span class="line"> node.vm.network <span class="string">"private_network"</span>, <span class="symbol">ip:</span> <span class="string">"11.11.11.11<span class="subst">#{i}</span>"</span></span> <span class="line"> node.vm.provision <span class="string">"shell"</span>,</span> <span class="line"> <span class="symbol">inline:</span> <span class="string">"echo hello from node <span class="subst">#{i}</span>"</span></span> <span class="line"> node.vm.provider <span class="string">"virtualbox"</span> <span class="keyword">do</span> <span class="params">|v|</span></span> <span class="line"> v.cpus = <span class="number">2</span></span> <span class="line"> v.customize [<span class="string">"modifyvm"</span>, <span class="symbol">:id</span>, <span class="string">"--name"</span>, <span class="string">"lab<span class="subst">#{i}</span>"</span>, <span class="string">"--memory"</span>, <span class="string">"2048"</span>]</span> <span class="line"> <span class="keyword">end</span></span> <span class="line"> <span class="keyword">end</span></span> <span class="line"> <span class="keyword">end</span></span> <span class="line"><span class="keyword">end</span></span>
Install and configure Docker
Kubernetes v1.11.0 recommends Docker 17.03; Docker 1.11, 1.12, and 1.13 also work, but newer Docker releases may not. In testing, 17.09 did not work correctly: resource limits (memory/CPU) could not be applied. Run the following steps on all nodes.
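If you want to confirm which docker-ce builds your configured yum mirrors actually provide before pinning a version, a quick check (output depends on your repositories):

```bash
# List every docker-ce build the enabled repositories offer, newest first
yum list docker-ce --showduplicates | sort -r
```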
Install Docker
<span class="line"><span class="comment"># 卸载安装指定版本docker-ce</span></span> <span class="line">yum remove -y docker-ce docker-ce-selinux container-selinux</span> <span class="line">yum install -y --<span class="built_in">setopt</span>=obsoletes=0 </span> <span class="line">docker-ce-17.03.1.ce-1.el7.centos </span> <span class="line">docker-ce-selinux-17.03.1.ce-1.el7.centos</span>
Start Docker
<span class="line">systemctl <span class="built_in">enable</span> docker && systemctl restart docker</span>
Install kubeadm, kubelet, and kubectl
Run the following on all nodes
Install from the Aliyun mirror
<span class="line"><span class="comment"># 配置源</span></span> <span class="line">cat <<EOF > /etc/yum.repos.d/kubernetes.repo</span> <span class="line">[kubernetes]</span> <span class="line">name=Kubernetes</span> <span class="line">baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64</span> <span class="line">enabled=1</span> <span class="line">gpgcheck=1</span> <span class="line">repo_gpgcheck=1</span> <span class="line">gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg</span> <span class="line">EOF</span> <span class="line"><span class="comment"># 安装</span></span> <span class="line">yum install -y kubelet kubeadm kubectl ipvsadm</span>
Configure system parameters
<span class="line"><span class="comment"># 临时禁用selinux</span></span> <span class="line"><span class="comment"># 永久关闭 修改/etc/sysconfig/selinux文件设置</span></span> <span class="line">sed -i <span class="string">"s/SELINUX=permissive/SELINUX=disabled/"</span> /etc/sysconfig/selinux</span> <span class="line">setenforce 0</span> <span class="line"><span class="comment"># 临时关闭swap</span></span> <span class="line"><span class="comment"># 永久关闭 注释/etc/fstab文件里swap相关的行</span></span> <span class="line">swapoff -a</span> <span class="line"><span class="comment"># 开启forward</span></span> <span class="line"><span class="comment"># Docker从1.13版本开始调整了默认的防火墙规则</span></span> <span class="line"><span class="comment"># 禁用了iptables filter表中FOWARD链</span></span> <span class="line"><span class="comment"># 这样会引起Kubernetes集群中跨Node的Pod无法通信</span></span> <span class="line">iptables -P FORWARD ACCEPT</span> <span class="line"><span class="comment"># 配置转发相关参数,否则可能会出错</span></span> <span class="line">cat <<EOF > /etc/sysctl.d/k8s.conf</span> <span class="line">net.bridge.bridge-nf-call-ip6tables = 1</span> <span class="line">net.bridge.bridge-nf-call-iptables = 1</span> <span class="line">vm.swappiness=0</span> <span class="line">EOF</span> <span class="line">sysctl --system</span> <span class="line"><span class="comment"># 加载ipvs相关内核模块</span></span> <span class="line"><span class="comment"># 如果重新开机,需要重新加载</span></span> <span class="line">modprobe ip_vs</span> <span class="line">modprobe ip_vs_rr</span> <span class="line">modprobe ip_vs_wrr</span> <span class="line">modprobe ip_vs_sh</span> <span class="line">modprobe nf_conntrack_ipv4</span> <span class="line">lsmod | grep ip_vs</span>
Configure hosts resolution
Run the following on all nodes
<span class="line">cat >>/etc/hosts<<EOF</span> <span class="line">11.11.11.111 lab1</span> <span class="line">11.11.11.112 lab2</span> <span class="line">11.11.11.113 lab3</span> <span class="line">11.11.11.114 lab4</span> <span class="line">11.11.11.115 lab5</span> <span class="line">11.11.11.116 lab6</span> <span class="line">EOF</span>
Configure the haproxy proxy and keepalived
Run the following on nodes lab1, lab2, and lab3
<span class="line"><span class="comment"># 拉取haproxy镜像</span></span> <span class="line">docker pull haproxy:1.7.8-alpine</span> <span class="line">mkdir /etc/haproxy</span> <span class="line">cat >/etc/haproxy/haproxy.cfg<<EOF</span> <span class="line">global</span> <span class="line"> <span class="built_in">log</span> 127.0.0.1 local0 err</span> <span class="line"> maxconn 50000</span> <span class="line"> uid 99</span> <span class="line"> gid 99</span> <span class="line"> <span class="comment">#daemon</span></span> <span class="line"> nbproc 1</span> <span class="line"> pidfile haproxy.pid</span> <span class="line">defaults</span> <span class="line"> mode http</span> <span class="line"> <span class="built_in">log</span> 127.0.0.1 local0 err</span> <span class="line"> maxconn 50000</span> <span class="line"> retries 3</span> <span class="line"> timeout connect 5s</span> <span class="line"> timeout client 30s</span> <span class="line"> timeout server 30s</span> <span class="line"> timeout check 2s</span> <span class="line">listen admin_stats</span> <span class="line"> mode http</span> <span class="line"> <span class="built_in">bind</span> 0.0.0.0:1080</span> <span class="line"> <span class="built_in">log</span> 127.0.0.1 local0 err</span> <span class="line"> stats refresh 30s</span> <span class="line"> stats uri /haproxy-status</span> <span class="line"> stats realm Haproxy Statistics</span> <span class="line"> stats auth will:will</span> <span class="line"> stats hide-version</span> <span class="line"> stats admin <span class="keyword">if</span> TRUE</span> <span class="line">frontend k8s-https</span> <span class="line"> <span class="built_in">bind</span> 0.0.0.0:8443</span> <span class="line"> mode tcp</span> <span class="line"> <span class="comment">#maxconn 50000</span></span> <span class="line"> default_backend k8s-https</span> <span class="line">backend k8s-https</span> <span class="line"> mode tcp</span> <span class="line"> balance roundrobin</span> <span class="line"> server lab1 11.11.11.111:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3</span> <span class="line"> server lab2 11.11.11.112:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3</span> <span class="line"> server lab3 11.11.11.113:6443 weight 1 maxconn 1000 check inter 2000 rise 2 fall 3</span> <span class="line">EOF</span> <span class="line"><span class="comment"># 启动haproxy</span></span> <span class="line">docker run -d --name my-haproxy </span> <span class="line">-v /etc/haproxy:/usr/<span class="built_in">local</span>/etc/haproxy:ro </span> <span class="line">-p 8443:8443 </span> <span class="line">-p 1080:1080 </span> <span class="line">--restart always </span> <span class="line">haproxy:1.7.8-alpine</span> <span class="line"><span class="comment"># 查看日志</span></span> <span class="line">docker logs my-haproxy</span> <span class="line"><span class="comment"># 浏览器查看状态</span></span> <span class="line">http://11.11.11.111:1080/haproxy-status</span> <span class="line">http://11.11.11.112:1080/haproxy-status</span> <span class="line"><span class="comment"># 拉取keepalived镜像</span></span> <span class="line">docker pull osixia/keepalived:1.4.4</span> <span class="line"><span class="comment"># 启动</span></span> <span class="line"><span class="comment"># 载入内核相关模块</span></span> <span class="line">lsmod | grep ip_vs</span> <span class="line">modprobe ip_vs</span> <span class="line"><span class="comment"># 启动keepalived</span></span> <span class="line"><span class="comment"># 
eth1为本次实验11.11.11.0/24网段的所在网卡</span></span> <span class="line">docker run --net=host --<span class="built_in">cap</span>-add=NET_ADMIN </span> <span class="line">-e KEEPALIVED_INTERFACE=eth1 </span> <span class="line">-e KEEPALIVED_VIRTUAL_IPS=<span class="string">"#PYTHON2BASH:["11.11.11.110"]"</span> </span> <span class="line">-e KEEPALIVED_UNICAST_PEERS=<span class="string">"#PYTHON2BASH:["11.11.11.111","11.11.11.112","11.11.11.113"]"</span> </span> <span class="line">-e KEEPALIVED_PASSWORD=hello </span> <span class="line">--name k8s-keepalived </span> <span class="line">--restart always </span> <span class="line">-d osixia/keepalived:1.4.4</span> <span class="line"><span class="comment"># 查看日志</span></span> <span class="line"><span class="comment"># 会看到两个成为backup 一个成为master</span></span> <span class="line">docker logs k8s-keepalived</span> <span class="line"><span class="comment"># 此时会配置 11.11.11.110 到其中一台机器</span></span> <span class="line"><span class="comment"># ping测试</span></span> <span class="line">ping -c4 11.11.11.110</span> <span class="line"><span class="comment"># 如果失败后清理后,重新实验</span></span> <span class="line">docker rm -f k8s-keepalived</span> <span class="line">ip a del 11.11.11.110/32 dev eth1</span>
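To see which node currently holds the VIP, and whether the haproxy frontend is reachable through it, something like the following can be run on lab1-lab3 (a sketch; before any apiserver exists the probe will fail, and afterwards any HTTP status code simply proves the path works):

```bash
# Show whether this node currently owns the VIP
ip addr show dev eth1 | grep 11.11.11.110

# Probe the haproxy frontend through the VIP
# (fails until at least one apiserver backend is up; any HTTP status afterwards means the path works)
curl -k -o /dev/null -w '%{http_code}\n' https://11.11.11.110:8443/
```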
Configure and start kubelet
Run the following on all nodes
<span class="line"><span class="comment"># 配置kubelet使用国内pause镜像</span></span> <span class="line"><span class="comment"># 配置kubelet的cgroups</span></span> <span class="line"><span class="comment"># 获取docker的cgroups</span></span> <span class="line">DOCKER_CGROUPS=$(docker info | grep <span class="string">"Cgroup"</span> | cut -d<span class="string">" "</span> -f3)</span> <span class="line"><span class="built_in">echo</span> <span class="variable">$DOCKER_CGROUPS</span></span> <span class="line">cat >/etc/sysconfig/kubelet<<EOF</span> <span class="line">KUBELET_EXTRA_ARGS=<span class="string">"--cgroup-driver=<span class="variable">$DOCKER_CGROUPS</span> --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1"</span></span> <span class="line">EOF</span> <span class="line"><span class="comment"># 启动</span></span> <span class="line">systemctl daemon-reload</span> <span class="line">systemctl <span class="built_in">enable</span> kubelet && systemctl restart kubelet</span>
Configure the masters
Configure the first master node
Run the following on the lab1 node
<span class="line"><span class="comment"># 1.11 版本 centos 下使用 ipvs 模式会出问题</span></span> <span class="line"><span class="comment"># 参考 https://github.com/kubernetes/kubernetes/issues/65461</span></span> <span class="line"><span class="comment"># 生成配置文件</span></span> <span class="line">CP0_IP=<span class="string">"11.11.11.111"</span></span> <span class="line">CP0_HOSTNAME=<span class="string">"lab1"</span></span> <span class="line">cat >kubeadm-master.config<<EOF</span> <span class="line">apiVersion: kubeadm.k8s.io/v1alpha2</span> <span class="line">kind: MasterConfiguration</span> <span class="line">kubernetesVersion: v1.11.0</span> <span class="line">imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers</span> <span class="line">apiServerCertSANs:</span> <span class="line">- <span class="string">"lab1"</span></span> <span class="line">- <span class="string">"lab2"</span></span> <span class="line">- <span class="string">"lab3"</span></span> <span class="line">- <span class="string">"11.11.11.111"</span></span> <span class="line">- <span class="string">"11.11.11.112"</span></span> <span class="line">- <span class="string">"11.11.11.113"</span></span> <span class="line">- <span class="string">"11.11.11.110"</span></span> <span class="line">- <span class="string">"127.0.0.1"</span></span> <span class="line">api:</span> <span class="line"> advertiseAddress: <span class="variable">$CP0_IP</span></span> <span class="line"> controlPlaneEndpoint: 11.11.11.110:8443</span> <span class="line">etcd:</span> <span class="line"> <span class="built_in">local</span>:</span> <span class="line"> extraArgs:</span> <span class="line"> listen-client-urls: <span class="string">"https://127.0.0.1:2379,https://<span class="variable">$CP0_IP</span>:2379"</span></span> <span class="line"> advertise-client-urls: <span class="string">"https://<span class="variable">$CP0_IP</span>:2379"</span></span> <span class="line"> listen-peer-urls: <span class="string">"https://<span class="variable">$CP0_IP</span>:2380"</span></span> <span class="line"> initial-advertise-peer-urls: <span class="string">"https://<span class="variable">$CP0_IP</span>:2380"</span></span> <span class="line"> initial-cluster: <span class="string">"<span class="variable">$CP0_HOSTNAME</span>=https://<span class="variable">$CP0_IP</span>:2380"</span></span> <span class="line"> serverCertSANs:</span> <span class="line"> - <span class="variable">$CP0_HOSTNAME</span></span> <span class="line"> - <span class="variable">$CP0_IP</span></span> <span class="line"> peerCertSANs:</span> <span class="line"> - <span class="variable">$CP0_HOSTNAME</span></span> <span class="line"> - <span class="variable">$CP0_IP</span></span> <span class="line">controllerManagerExtraArgs:</span> <span class="line"> node-monitor-grace-period: 10s</span> <span class="line"> pod-eviction-timeout: 10s</span> <span class="line">networking:</span> <span class="line"> podSubnet: 10.244.0.0/16</span> <span class="line">kubeProxy:</span> <span class="line"> config:</span> <span class="line"> <span class="comment"># mode: ipvs</span></span> <span class="line"> mode: iptables</span> <span class="line">EOF</span> <span class="line"><span class="comment"># 提前拉取镜像</span></span> <span class="line"><span class="comment"># 如果执行失败 可以多次执行</span></span> <span class="line">kubeadm config images pull --config kubeadm-master.config</span> <span class="line"><span class="comment"># 初始化</span></span> <span class="line"><span class="comment"># 注意保存返回的 join 命令</span></span> <span 
class="line">kubeadm init --config kubeadm-master.config</span> <span class="line"><span class="comment"># 打包ca相关文件上传至其他master节点</span></span> <span class="line"><span class="built_in">cd</span> /etc/kubernetes && tar cvzf k8s-key.tgz admin.conf pki/ca.* pki/sa.* pki/front-proxy-ca.* pki/etcd/ca.*</span> <span class="line">scp k8s-key.tgz lab2:~/</span> <span class="line">scp k8s-key.tgz lab3:~/</span> <span class="line">ssh lab2 <span class="string">"tar xf k8s-key.tgz -C /etc/kubernetes/"</span></span> <span class="line">ssh lab3 <span class="string">"tar xf k8s-key.tgz -C /etc/kubernetes/"</span></span>
Configure the second master node
Run the following on the lab2 node
<span class="line"><span class="comment"># 1.11 版本 centos 下使用 ipvs 模式会出问题</span></span> <span class="line"><span class="comment"># 参考 https://github.com/kubernetes/kubernetes/issues/65461</span></span> <span class="line"><span class="comment"># 生成配置文件</span></span> <span class="line">CP0_IP=<span class="string">"11.11.11.111"</span></span> <span class="line">CP0_HOSTNAME=<span class="string">"lab1"</span></span> <span class="line">CP1_IP=<span class="string">"11.11.11.112"</span></span> <span class="line">CP1_HOSTNAME=<span class="string">"lab2"</span></span> <span class="line">cat >kubeadm-master.config<<EOF</span> <span class="line">apiVersion: kubeadm.k8s.io/v1alpha2</span> <span class="line">kind: MasterConfiguration</span> <span class="line">kubernetesVersion: v1.11.0</span> <span class="line">imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers</span> <span class="line">apiServerCertSANs:</span> <span class="line">- <span class="string">"lab1"</span></span> <span class="line">- <span class="string">"lab2"</span></span> <span class="line">- <span class="string">"lab3"</span></span> <span class="line">- <span class="string">"11.11.11.111"</span></span> <span class="line">- <span class="string">"11.11.11.112"</span></span> <span class="line">- <span class="string">"11.11.11.113"</span></span> <span class="line">- <span class="string">"11.11.11.110"</span></span> <span class="line">- <span class="string">"127.0.0.1"</span></span> <span class="line">api:</span> <span class="line"> advertiseAddress: <span class="variable">$CP1_IP</span></span> <span class="line"> controlPlaneEndpoint: 11.11.11.110:8443</span> <span class="line">etcd:</span> <span class="line"> <span class="built_in">local</span>:</span> <span class="line"> extraArgs:</span> <span class="line"> listen-client-urls: <span class="string">"https://127.0.0.1:2379,https://<span class="variable">$CP1_IP</span>:2379"</span></span> <span class="line"> advertise-client-urls: <span class="string">"https://<span class="variable">$CP1_IP</span>:2379"</span></span> <span class="line"> listen-peer-urls: <span class="string">"https://<span class="variable">$CP1_IP</span>:2380"</span></span> <span class="line"> initial-advertise-peer-urls: <span class="string">"https://<span class="variable">$CP1_IP</span>:2380"</span></span> <span class="line"> initial-cluster: <span class="string">"<span class="variable">$CP0_HOSTNAME</span>=https://<span class="variable">$CP0_IP</span>:2380,<span class="variable">$CP1_HOSTNAME</span>=https://<span class="variable">$CP1_IP</span>:2380"</span></span> <span class="line"> initial-cluster-state: existing</span> <span class="line"> serverCertSANs:</span> <span class="line"> - <span class="variable">$CP1_HOSTNAME</span></span> <span class="line"> - <span class="variable">$CP1_IP</span></span> <span class="line"> peerCertSANs:</span> <span class="line"> - <span class="variable">$CP1_HOSTNAME</span></span> <span class="line"> - <span class="variable">$CP1_IP</span></span> <span class="line">controllerManagerExtraArgs:</span> <span class="line"> node-monitor-grace-period: 10s</span> <span class="line"> pod-eviction-timeout: 10s</span> <span class="line">networking:</span> <span class="line"> podSubnet: 10.244.0.0/16</span> <span class="line">kubeProxy:</span> <span class="line"> config:</span> <span class="line"> <span class="comment"># mode: ipvs</span></span> <span class="line"> mode: iptables</span> <span class="line">EOF</span> <span class="line"><span class="comment"># 配置kubelet</span></span> 
<span class="line">kubeadm alpha phase certs all --config kubeadm-master.config</span> <span class="line">kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config</span> <span class="line">kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config</span> <span class="line">kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config</span> <span class="line">systemctl restart kubelet</span> <span class="line"><span class="comment"># 添加etcd到集群中</span></span> <span class="line">CP0_IP=<span class="string">"11.11.11.111"</span></span> <span class="line">CP0_HOSTNAME=<span class="string">"lab1"</span></span> <span class="line">CP1_IP=<span class="string">"11.11.11.112"</span></span> <span class="line">CP1_HOSTNAME=<span class="string">"lab2"</span></span> <span class="line">KUBECONFIG=/etc/kubernetes/admin.conf kubectl <span class="built_in">exec</span> -n kube-system etcd-<span class="variable">${CP0_HOSTNAME}</span> -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://<span class="variable">${CP0_IP}</span>:2379 member add <span class="variable">${CP1_HOSTNAME}</span> https://<span class="variable">${CP1_IP}</span>:2380</span> <span class="line">kubeadm alpha phase etcd <span class="built_in">local</span> --config kubeadm-master.config</span> <span class="line"><span class="comment"># 提前拉取镜像</span></span> <span class="line"><span class="comment"># 如果执行失败 可以多次执行</span></span> <span class="line">kubeadm config images pull --config kubeadm-master.config</span> <span class="line"><span class="comment"># 部署</span></span> <span class="line">kubeadm alpha phase kubeconfig all --config kubeadm-master.config</span> <span class="line">kubeadm alpha phase controlplane all --config kubeadm-master.config</span> <span class="line">kubeadm alpha phase mark-master --config kubeadm-master.config</span>
Configure the third master node
Run the following on the lab3 node
<span class="line"><span class="comment"># 1.11 版本 centos 下使用 ipvs 模式会出问题</span></span> <span class="line"><span class="comment"># 参考 https://github.com/kubernetes/kubernetes/issues/65461</span></span> <span class="line"><span class="comment"># 生成配置文件</span></span> <span class="line">CP0_IP=<span class="string">"11.11.11.111"</span></span> <span class="line">CP0_HOSTNAME=<span class="string">"lab1"</span></span> <span class="line">CP1_IP=<span class="string">"11.11.11.112"</span></span> <span class="line">CP1_HOSTNAME=<span class="string">"lab2"</span></span> <span class="line">CP2_IP=<span class="string">"11.11.11.113"</span></span> <span class="line">CP2_HOSTNAME=<span class="string">"lab3"</span></span> <span class="line">cat >kubeadm-master.config<<EOF</span> <span class="line">apiVersion: kubeadm.k8s.io/v1alpha2</span> <span class="line">kind: MasterConfiguration</span> <span class="line">kubernetesVersion: v1.11.0</span> <span class="line">imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers</span> <span class="line">apiServerCertSANs:</span> <span class="line">- <span class="string">"lab1"</span></span> <span class="line">- <span class="string">"lab2"</span></span> <span class="line">- <span class="string">"lab3"</span></span> <span class="line">- <span class="string">"11.11.11.111"</span></span> <span class="line">- <span class="string">"11.11.11.112"</span></span> <span class="line">- <span class="string">"11.11.11.113"</span></span> <span class="line">- <span class="string">"11.11.11.110"</span></span> <span class="line">- <span class="string">"127.0.0.1"</span></span> <span class="line">api:</span> <span class="line"> advertiseAddress: <span class="variable">$CP2_IP</span></span> <span class="line"> controlPlaneEndpoint: 11.11.11.110:8443</span> <span class="line">etcd:</span> <span class="line"> <span class="built_in">local</span>:</span> <span class="line"> extraArgs:</span> <span class="line"> listen-client-urls: <span class="string">"https://127.0.0.1:2379,https://<span class="variable">$CP2_IP</span>:2379"</span></span> <span class="line"> advertise-client-urls: <span class="string">"https://<span class="variable">$CP2_IP</span>:2379"</span></span> <span class="line"> listen-peer-urls: <span class="string">"https://<span class="variable">$CP2_IP</span>:2380"</span></span> <span class="line"> initial-advertise-peer-urls: <span class="string">"https://<span class="variable">$CP2_IP</span>:2380"</span></span> <span class="line"> initial-cluster: <span class="string">"<span class="variable">$CP0_HOSTNAME</span>=https://<span class="variable">$CP0_IP</span>:2380,<span class="variable">$CP1_HOSTNAME</span>=https://<span class="variable">$CP1_IP</span>:2380,<span class="variable">$CP2_HOSTNAME</span>=https://<span class="variable">$CP2_IP</span>:2380"</span></span> <span class="line"> initial-cluster-state: existing</span> <span class="line"> serverCertSANs:</span> <span class="line"> - <span class="variable">$CP2_HOSTNAME</span></span> <span class="line"> - <span class="variable">$CP2_IP</span></span> <span class="line"> peerCertSANs:</span> <span class="line"> - <span class="variable">$CP2_HOSTNAME</span></span> <span class="line"> - <span class="variable">$CP2_IP</span></span> <span class="line">controllerManagerExtraArgs:</span> <span class="line"> node-monitor-grace-period: 10s</span> <span class="line"> pod-eviction-timeout: 10s</span> <span class="line">networking:</span> <span class="line"> podSubnet: 10.244.0.0/16</span> <span 
class="line">kubeProxy:</span> <span class="line"> config:</span> <span class="line"> <span class="comment"># mode: ipvs</span></span> <span class="line"> mode: iptables</span> <span class="line">EOF</span> <span class="line"><span class="comment"># 配置kubelet</span></span> <span class="line">kubeadm alpha phase certs all --config kubeadm-master.config</span> <span class="line">kubeadm alpha phase kubelet config write-to-disk --config kubeadm-master.config</span> <span class="line">kubeadm alpha phase kubelet write-env-file --config kubeadm-master.config</span> <span class="line">kubeadm alpha phase kubeconfig kubelet --config kubeadm-master.config</span> <span class="line">systemctl restart kubelet</span> <span class="line"><span class="comment"># 添加etcd到集群中</span></span> <span class="line">CP0_IP=<span class="string">"11.11.11.111"</span></span> <span class="line">CP0_HOSTNAME=<span class="string">"lab1"</span></span> <span class="line">CP2_IP=<span class="string">"11.11.11.113"</span></span> <span class="line">CP2_HOSTNAME=<span class="string">"lab3"</span></span> <span class="line">KUBECONFIG=/etc/kubernetes/admin.conf kubectl <span class="built_in">exec</span> -n kube-system etcd-<span class="variable">${CP0_HOSTNAME}</span> -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://<span class="variable">${CP0_IP}</span>:2379 member add <span class="variable">${CP2_HOSTNAME}</span> https://<span class="variable">${CP2_IP}</span>:2380</span> <span class="line">kubeadm alpha phase etcd <span class="built_in">local</span> --config kubeadm-master.config</span> <span class="line"><span class="comment"># 提前拉取镜像</span></span> <span class="line"><span class="comment"># 如果执行失败 可以多次执行</span></span> <span class="line">kubeadm config images pull --config kubeadm-master.config</span> <span class="line"><span class="comment"># 部署</span></span> <span class="line">kubeadm alpha phase kubeconfig all --config kubeadm-master.config</span> <span class="line">kubeadm alpha phase controlplane all --config kubeadm-master.config</span> <span class="line">kubeadm alpha phase mark-master --config kubeadm-master.config</span>
Configure kubectl
Run the following on any master node
<span class="line">rm -rf <span class="variable">$HOME</span>/.kube</span> <span class="line">mkdir -p <span class="variable">$HOME</span>/.kube</span> <span class="line">sudo cp -i /etc/kubernetes/admin.conf <span class="variable">$HOME</span>/.kube/config</span> <span class="line">sudo chown $(id -u):$(id -g) <span class="variable">$HOME</span>/.kube/config</span> <span class="line"><span class="comment"># 查看node节点</span></span> <span class="line">kubectl get nodes</span> <span class="line"><span class="comment"># 只有网络插件也安装配置完成之后,才能会显示为ready状态</span></span> <span class="line"><span class="comment"># 设置master允许部署应用pod,参与工作负载,现在可以部署其他系统组件</span></span> <span class="line"><span class="comment"># 如 dashboard, heapster, efk等</span></span> <span class="line">kubectl taint nodes --all node-role.kubernetes.io/master-</span>
Configure the network plugin
Run the following on any master node
<span class="line"><span class="comment"># 下载配置</span></span> <span class="line">mkdir flannel && <span class="built_in">cd</span> flannel</span> <span class="line">wget https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml</span> <span class="line"><span class="comment"># 修改配置</span></span> <span class="line"><span class="comment"># 此处的ip配置要与上面kubeadm的pod-network一致</span></span> <span class="line"> net-conf.json: |</span> <span class="line"> {</span> <span class="line"> <span class="string">"Network"</span>: <span class="string">"10.244.0.0/16"</span>,</span> <span class="line"> <span class="string">"Backend"</span>: {</span> <span class="line"> <span class="string">"Type"</span>: <span class="string">"vxlan"</span></span> <span class="line"> }</span> <span class="line"> }</span> <span class="line"><span class="comment"># 修改镜像</span></span> <span class="line">image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64</span> <span class="line"><span class="comment"># 如果Node有多个网卡的话,参考flannel issues 39701,</span></span> <span class="line"><span class="comment"># https://github.com/kubernetes/kubernetes/issues/39701</span></span> <span class="line"><span class="comment"># 目前需要在kube-flannel.yml中使用--iface参数指定集群主机内网网卡的名称,</span></span> <span class="line"><span class="comment"># 否则可能会出现dns无法解析。容器无法通信的情况,需要将kube-flannel.yml下载到本地,</span></span> <span class="line"><span class="comment"># flanneld启动参数加上--iface=<iface-name></span></span> <span class="line"> containers:</span> <span class="line"> - name: kube-flannel</span> <span class="line"> image: registry.cn-shanghai.aliyuncs.com/gcr-k8s/flannel:v0.10.0-amd64</span> <span class="line"> <span class="built_in">command</span>:</span> <span class="line"> - /opt/bin/flanneld</span> <span class="line"> args:</span> <span class="line"> - --ip-masq</span> <span class="line"> - --kube-subnet-mgr</span> <span class="line"> - --iface=eth1</span> <span class="line"><span class="comment"># 启动</span></span> <span class="line">kubectl apply -f kube-flannel.yml</span> <span class="line"><span class="comment"># 查看</span></span> <span class="line">kubectl get pods --namespace kube-system</span> <span class="line">kubectl get svc --namespace kube-system</span>
Join the worker nodes to the cluster
Run the following on all worker (node) machines
<span class="line"><span class="comment"># 此命令为初始化master成功后返回的结果</span></span> <span class="line">kubeadm join 11.11.11.110:8443 --token yzb7v7.dy40mhlljt1d48i9 --discovery-token-ca-cert-hash sha256:61ec309e6f942305006e6622dcadedcc64420e361231eff23cb535a183c0e77a</span>
Basic tests
Test inter-container communication and DNS
Once the network plugin is configured, kubeadm deploys coredns automatically.
The following tests can be run on any node where kubectl is configured.
Launch
<span class="line">kubectl run nginx --replicas=2 --image=nginx:alpine --port=80</span> <span class="line">kubectl expose deployment nginx --<span class="built_in">type</span>=NodePort --name=example-service-nodeport</span> <span class="line">kubectl expose deployment nginx --name=example-service</span>
Check the status
<span class="line">kubectl get deploy</span> <span class="line">kubectl get pods</span> <span class="line">kubectl get svc</span> <span class="line">kubectl describe svc example-service</span>
DNS resolution
<span class="line">kubectl run curl --image=radial/busyboxplus:curl -i --tty</span> <span class="line">nslookup kubernetes</span> <span class="line">nslookup example-service</span> <span class="line">curl example-service</span>
Access test
<span class="line"><span class="comment"># 10.96.59.56 为查看svc时获取到的clusterip</span></span> <span class="line">curl <span class="string">"10.96.59.56:80"</span></span> <span class="line"><span class="comment"># 32223 为查看svc时获取到的 nodeport</span></span> <span class="line">http://11.11.11.112:32223/</span> <span class="line">http://11.11.11.113:32223/</span>
Clean up
<span class="line">kubectl delete svc example-service example-service-nodeport</span> <span class="line">kubectl delete deploy nginx curl</span>
High-availability test
Shut down any one master node and check whether the cluster can still run the basic tests from the previous step, then inspect the cluster state. Do not shut down two masters at the same time: an etcd cluster of 3 members can tolerate at most one member being down.
<span class="line"><span class="comment"># 查看组件状态</span></span> <span class="line">kubectl get pod --all-namespaces -o wide</span> <span class="line">kubectl get pod --all-namespaces -o wide | grep lab1</span> <span class="line">kubectl get pod --all-namespaces -o wide | grep lab2</span> <span class="line">kubectl get pod --all-namespaces -o wide | grep lab3</span> <span class="line">kubectl get nodes -o wide</span> <span class="line">kubectl get deploy</span> <span class="line">kubectl get pods</span> <span class="line">kubectl get svc</span> <span class="line"><span class="comment"># 访问测试</span></span> <span class="line">CURL_POD=$(kubectl get pods | grep curl | grep Running | cut -d <span class="string">" "</span> -f1)</span> <span class="line">kubectl <span class="built_in">exec</span> -ti <span class="variable">$CURL_POD</span> -- sh --tty</span> <span class="line">nslookup kubernetes</span> <span class="line">nslookup example-service</span> <span class="line">curl example-service</span>
Tips
What to do if you forgot the node join command printed when the first master was initialized
<span class="line"><span class="comment"># 简单方法</span></span> <span class="line">kubeadm token create --<span class="built_in">print</span>-join-command</span> <span class="line"><span class="comment"># 第二种方法</span></span> <span class="line">token=$(kubeadm token generate)</span> <span class="line">kubeadm token create <span class="variable">$token</span> --<span class="built_in">print</span>-join-command --ttl=0</span>
References
- https://kubernetes.io/docs/setup/independent/install-kubeadm/
- https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
- https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
- https://kubernetes.io/docs/setup/independent/high-availability/
- https://sealyun.com/post/k8s-ipvs/
- http://www.maogx.win/posts/33/