Abstract: kubeadm 1.13 is production-ready, which makes deploying a highly available cluster considerably simpler. This post walks through deploying an HA Kubernetes cluster on AWS: creating a load balancer for kube-apiserver, joining control-plane and worker nodes, deploying Calico, and tagging subnets so the ingress controller can auto-discover the subnets it uses.
Preface
kubeadm 1.13 is production-ready, and it makes deploying a highly available cluster much simpler. But since we are deploying on AWS, we need to enable cloud-provider=aws and integrate deeply with the IaaS layer, mainly AWS ELB and EBS. Reference material is scarce: the existing documents are either out of date or incomplete, and many articles stop at demo level, nowhere near production-ready, so the deployment took quite a few twists and turns.
Component versions and cluster environment

Cluster components and versions:

Kubernetes 1.13.1
Docker 18.06.0-ce
Etcd 3.2.24
Calico 3.4.0 (networking)
Cluster machines

master:
172.31.22.208
172.31.17.44
172.31.22.135
node:
172.31.29.58
PS: the etcd cluster is not containerized; it runs under systemd.
Configure passwordless SSH login among the three master hosts.
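A minimal sketch of the key exchange (run on each master; the peer IPs are the master list above, and ssh-copy-id assumes password auth is still enabled at this point):

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa   # generate a key pair without a passphrase
for ip in 172.31.22.208 172.31.17.44 172.31.22.135; do
  ssh-copy-id root@${ip}                   # push the public key to each master
done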
Host setup

Disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:
# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i "s/^SELINUX=enforcing$/SELINUX=permissive/" /etc/selinux/config

Enable net.bridge.bridge-nf-call-ip6tables and net.bridge.bridge-nf-call-iptables:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system

Disable swap:
swapoff -a
Edit the /etc/fstab file and comment out the automatic mounting of the swap partition.
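For instance, this one-liner comments out the swap entry (assuming a standard fstab where the swap line contains the word "swap" surrounded by whitespace):

sed -i '/ swap / s/^/#/' /etc/fstab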
Confirm that swap is off with free -m.
Load the IPVS kernel modules:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

The script above creates /etc/sysconfig/modules/ipvs.modules so the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check that the required kernel modules were loaded correctly.
Next, make sure the ipset package is installed on every node: yum install ipset. To make it easier to inspect the ipvs proxy rules, it is also worth installing the management tool ipvsadm: yum install ipvsadm.
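Once kube-proxy is running in ipvs mode, ipvsadm can dump the rules it programmed — a quick check that ipvs is actually in use:

ipvsadm -Ln   # list virtual services and their real servers, numeric output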
Grant IAM permissions

Master policy:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups", "autoscaling:DescribeLaunchConfigurations", "autoscaling:DescribeTags", "ec2:DescribeInstances", "ec2:DescribeRegions", "ec2:DescribeRouteTables", "ec2:DescribeSecurityGroups", "ec2:DescribeSubnets", "ec2:DescribeVolumes", "ec2:CreateSecurityGroup", "ec2:CreateTags", "ec2:CreateVolume", "ec2:ModifyInstanceAttribute", "ec2:ModifyVolume", "ec2:AttachVolume", "ec2:AuthorizeSecurityGroupIngress", "ec2:CreateRoute", "ec2:DeleteRoute", "ec2:DeleteSecurityGroup", "ec2:DeleteVolume", "ec2:DetachVolume", "ec2:RevokeSecurityGroupIngress", "ec2:DescribeVpcs", "elasticloadbalancing:AddTags", "elasticloadbalancing:AttachLoadBalancerToSubnets", "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer", "elasticloadbalancing:CreateLoadBalancer", "elasticloadbalancing:CreateLoadBalancerPolicy", "elasticloadbalancing:CreateLoadBalancerListeners", "elasticloadbalancing:ConfigureHealthCheck", "elasticloadbalancing:DeleteLoadBalancer", "elasticloadbalancing:DeleteLoadBalancerListeners", "elasticloadbalancing:DescribeLoadBalancers", "elasticloadbalancing:DescribeLoadBalancerAttributes", "elasticloadbalancing:DetachLoadBalancerFromSubnets", "elasticloadbalancing:DeregisterInstancesFromLoadBalancer", "elasticloadbalancing:ModifyLoadBalancerAttributes", "elasticloadbalancing:RegisterInstancesWithLoadBalancer", "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer", "elasticloadbalancing:AddTags", "elasticloadbalancing:CreateListener", "elasticloadbalancing:CreateTargetGroup", "elasticloadbalancing:DeleteListener", "elasticloadbalancing:DeleteTargetGroup", "elasticloadbalancing:DescribeListeners", "elasticloadbalancing:DescribeLoadBalancerPolicies", "elasticloadbalancing:DescribeTargetGroups", "elasticloadbalancing:DescribeTargetHealth", "elasticloadbalancing:ModifyListener", "elasticloadbalancing:ModifyTargetGroup", "elasticloadbalancing:RegisterTargets", "elasticloadbalancing:SetLoadBalancerPoliciesOfListener", "iam:CreateServiceLinkedRole", "kms:DescribeKey" ], "Resource": [ "*" ] }, ] }Node Policy
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "ec2:DescribeInstances", "ec2:DescribeRegions", "ecr:GetAuthorizationToken", "ecr:BatchCheckLayerAvailability", "ecr:GetDownloadUrlForLayer", "ecr:GetRepositoryPolicy", "ecr:DescribeRepositories", "ecr:ListImages", "ecr:BatchGetImage", "sts:AssumeRole" ], "Resource": "*" } ] }tag標簽需要為ec2實例, route table, subnet,安全組 打下面的標簽:
kubernetes.io/cluster/<cluster-name> = "owned"

cluster-name naming convention:
k8s-{region}-{env}-{num}

For example: k8s-us-west-2-test-1
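The tag can be applied with the AWS CLI, for example (the resource IDs here are placeholders for your instance, subnet, route table, and security group):

aws ec2 create-tags \
  --resources i-0123456789abcdef0 subnet-0abc1234 rtb-0abc1234 sg-0abc1234 \
  --tags Key=kubernetes.io/cluster/k8s-us-west-2-test-1,Value=owned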
Install Docker and kubeadm

Install the pinned Docker version:
yum install docker-18.06.1ce-5.amzn2 -y
systemctl enable docker

Change the Docker Root Dir

Move the Docker Root Dir from /var/lib/docker to /data/docker, making sure /data is a separately mounted data volume.
Edit the /etc/sysconfig/docker file:
OPTIONS="--default-ulimit nofile=1024:4096 -g /data/docker"

Alternatively, set the root directory in /etc/docker/daemon.json instead (set it in only one of the two places, otherwise dockerd will refuse to start because the directive is specified twice):
cat > /etc/docker/daemon.json <<EOF
{
  "data-root": "/data/docker"
}
EOF

Verify:
[root@ip-172-31-22-208 ~]# ls -lrt /var/lib/docker
total 0
[root@ip-172-31-22-208 ~]# ls -lrt /data/docker/
total 0
drwx------ 3 root root 20 Dec 11 10:44 containerd
drwx------ 2 root root  6 Dec 11 10:44 tmp
drwx------ 2 root root  6 Dec 11 10:44 runtimes
drwx------ 4 root root 32 Dec 11 10:44 plugins
drwx------ 2 root root  6 Dec 11 10:44 containers
drwx------ 2 root root 25 Dec 11 10:44 volumes
drwx------ 3 root root 22 Dec 11 10:44 image
drwx------ 2 root root  6 Dec 11 10:44 trust
drwxr-x--- 3 root root 19 Dec 11 10:44 network
drwx------ 3 root root 40 Dec 11 10:44 overlay2
drwx------ 2 root root  6 Dec 11 10:44 swarm
drwx------ 2 root root 24 Dec 11 10:44 builder
drwx------ 4 root root 92 Dec 11 10:44 buildkit

Restart the docker service:
systemctl start docker

Verify docker:
[root@ip-172-31-22-208 ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.14.70-72.55.amzn2.x86_64
Operating System: Amazon Linux 2
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.67GiB
Name: ip-172-31-22-208.us-west-2.compute.internal
ID: CG7S:P5XD:FLU6:MULI:2TSI:OLRY:A6EX:SM3D:FXNB:CMEQ:MU6R:XSCW
Docker Root Dir: /data/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Install kubeadm and related packages

Add the k8s repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Install kubeadm, kubelet, and kubectl:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet

Verify the kubeadm version:
[root@ip-172-31-22-208 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Update the kubelet config

To reserve resources on each node, and to support cloud-provider, the kubelet configuration needs some changes. First, add KUBELET_EXTRA_ARGS to /etc/sysconfig/kubelet:
KUBELET_EXTRA_ARGS=--cloud-provider=aws

Reserved resources

Create the cgroups:
mkdir -p /sys/fs/cgroup/cpu/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/cpuacct/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/memory/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/devices/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/blkio/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/systemd/system.slice/kubelet.service

Add the following to /var/lib/kubelet/config.yaml:
enforceNodeAllocatable:
- pods
- kube-reserved
- system-reserved
kubeReservedCgroup: /system.slice/kubelet.service
systemReservedCgroup: /system.slice
systemReserved:
  cpu: 500m
  memory: 1Gi
  ephemeral-storage: 5Gi
kubeReserved:
  cpu: 500m
  memory: 1Gi
  ephemeral-storage: 5Gi

Deploy the highly available etcd cluster

Kubernetes stores all of its data in etcd. This section describes how to deploy a three-node highly available etcd cluster. The three nodes reuse the Kubernetes master machines and are named infra0, infra1, and infra2:
infra0: 172.31.22.208
infra1: 172.31.17.44
infra2: 172.31.22.135
Variables used

The variables used in this document are defined as follows:
On infra0:

export NODE_NAME=infra0 # name of the machine being deployed (any value works, as long as machines are distinguishable)
export NODE_IP=172.31.22.208 # IP of the machine being deployed
export NODE_IPS="172.31.22.208 172.31.17.44 172.31.22.135" # IPs of all etcd cluster machines
# IPs and ports used for communication within the etcd cluster
export ETCD_NODES=infra0=https://172.31.22.208:2380,infra1=https://172.31.17.44:2380,infra2=https://172.31.22.135:2380

On infra1:

export NODE_NAME=infra1
export NODE_IP=172.31.17.44
export NODE_IPS="172.31.22.208 172.31.17.44 172.31.22.135"
export ETCD_NODES=infra0=https://172.31.22.208:2380,infra1=https://172.31.17.44:2380,infra2=https://172.31.22.135:2380

On infra2:

export NODE_NAME=infra2
export NODE_IP=172.31.22.135
export NODE_IPS="172.31.22.208 172.31.17.44 172.31.22.135"
export ETCD_NODES=infra0=https://172.31.22.208:2380,infra1=https://172.31.17.44:2380,infra2=https://172.31.22.135:2380

Download the binaries

Download the release binaries (v3.2.24 here) from https://github.com/coreos/etcd/releases:
wget https://github.com/coreos/etcd/releases/download/v3.2.24/etcd-v3.2.24-linux-amd64.tar.gz
tar -xvf etcd-v3.2.24-linux-amd64.tar.gz
mv etcd-v3.2.24-linux-amd64/etcd* /usr/bin

Create keys and certificates with kubeadm

Create kubeadm config files

Use the following script to generate a kubeadm configuration file for every host that will run an etcd member.
# Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
export HOST0=172.31.22.208
export HOST1=172.31.17.44
export HOST2=172.31.22.135

# Create temp directories to store files that will end up on other hosts.
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("infra0" "infra1" "infra2")

for i in "${!ETCDHOSTS[@]}"; do
  HOST=${ETCDHOSTS[$i]}
  NAME=${NAMES[$i]}
  cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
    - "${HOST}"
    peerCertSANs:
    - "${HOST}"
    extraArgs:
      initial-cluster: infra0=https://${ETCDHOSTS[0]}:2380,infra1=https://${ETCDHOSTS[1]}:2380,infra2=https://${ETCDHOSTS[2]}:2380
      initial-cluster-state: new
      name: ${NAME}
      listen-peer-urls: https://${HOST}:2380
      listen-client-urls: https://${HOST}:2379
      advertise-client-urls: https://${HOST}:2379
      initial-advertise-peer-urls: https://${HOST}:2380
EOF
done

Generate the certificate authority

Run the following command:
kubeadm init phase certs etcd-ca

This generates the following two files:
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
Create certificates for each member:

kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# cleanup non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0

# clean up certs that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete

Copy the certificates to the corresponding hosts:

USER=root
CONTROL_PLANE_IPS="172.31.17.44 172.31.22.135"
for host in ${CONTROL_PLANE_IPS}; do
  scp -r /tmp/${host}/pki "${USER}"@$host:
done

For example, the complete list of files required on HOST0 is:
/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    ├── ca.crt
    ├── ca.key
    ├── healthcheck-client.crt
    ├── healthcheck-client.key
    ├── peer.crt
    ├── peer.key
    ├── server.crt
    └── server.key

The other two hosts follow the same pattern.
Create the etcd systemd unit file

mkdir -p /var/lib/etcd # the working directory must be created first
cat > etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/bin/etcd \\
  --name=${NODE_NAME} \\
  --cert-file=/etc/kubernetes/pki/etcd/server.crt \\
  --key-file=/etc/kubernetes/pki/etcd/server.key \\
  --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \\
  --peer-key-file=/etc/kubernetes/pki/etcd/peer.key \\
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \\
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \\
  --initial-advertise-peer-urls=https://${NODE_IP}:2380 \\
  --listen-peer-urls=https://${NODE_IP}:2380 \\
  --listen-client-urls=https://${NODE_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls=https://${NODE_IP}:2379 \\
  --initial-cluster-token=etcd-cluster-0 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Notes:

The etcd working directory and data directory are both /var/lib/etcd; this directory must be created before starting the service;
To secure communication, specify etcd's server key pair (cert-file and key-file), the peer-communication key pair and CA certificate (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA certificate for clients (trusted-ca-file);
When --initial-cluster-state is new, the value of --name must appear in the --initial-cluster list.
Start the etcd service

mv etcd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

The first etcd process to start will block for a while, waiting for the etcd processes on the other nodes to join the cluster; this is normal.
Repeat the steps above on all etcd nodes until the etcd service is running on every machine.
Verify the service

After the etcd cluster is deployed, run the following on any etcd node:
for ip in ${NODE_IPS}; do
  ETCDCTL_API=3 /usr/bin/etcdctl \
    --endpoints=https://${ip}:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    endpoint health
done

Expected result:
https://172.31.22.208:2379 is healthy: successfully committed proposal: took = 1.543275ms
https://172.31.17.44:2379 is healthy: successfully committed proposal: took = 1.883033ms
https://172.31.22.135:2379 is healthy: successfully committed proposal: took = 2.026367ms

The cluster is working when all three etcd endpoints report healthy (warnings can be ignored).
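Listing the members is another quick check, using the same TLS flags as above (not part of the original flow, just a sanity check):

ETCDCTL_API=3 /usr/bin/etcdctl \
  --endpoints=https://172.31.22.208:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list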
Deploy the highly available master cluster

Create a TCP load balancer for kube-apiserver

An AWS NLB is used here. The creation steps themselves are not covered in detail.
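If you prefer the CLI over the console, the rough shape is as follows — a sketch only, where the subnet/VPC IDs and the ARNs returned by the first two commands are placeholders, and the scheme should match your topology:

aws elbv2 create-load-balancer --name k8s-apiserver-test --type network \
  --scheme internal --subnets subnet-0abc1234
aws elbv2 create-target-group --name k8s-apiserver-tg --protocol TCP --port 6443 \
  --vpc-id vpc-0abc1234 --target-type ip
aws elbv2 register-targets --target-group-arn <target-group-arn> \
  --targets Id=172.31.22.208 Id=172.31.17.44 Id=172.31.22.135
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
  --protocol TCP --port 6443 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>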
The created NLB's DNS name is nlb-sgt-k8sapiserver-test-4748f2f556591bb7.elb.us-west-2.amazonaws.com. Add it to the variables:
export LOAD_BALANCER_DNS=nlb-sgt-k8sapiserver-test-4748f2f556591bb7.elb.us-west-2.amazonaws.com
export ETCD_0_IP=172.31.22.208
export ETCD_1_IP=172.31.17.44
export ETCD_2_IP=172.31.22.135

Create kubeadm-config.yaml with the aws cloud-provider enabled (a minimal configuration along these lines — the cert SANs and controlPlaneEndpoint point at the NLB, etcd is external with the client certs generated earlier; adjust kubernetesVersion to your patch release):

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
apiServer:
  certSANs:
  - "${LOAD_BALANCER_DNS}"
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws
controlPlaneEndpoint: "${LOAD_BALANCER_DNS}:6443"
etcd:
  external:
    endpoints:
    - https://${ETCD_0_IP}:2379
    - https://${ETCD_1_IP}:2379
    - https://${ETCD_2_IP}:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: "192.168.0.0/16"
EOF

To create the config without the aws cloud-provider, drop the two extraArgs blocks:

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
apiServer:
  certSANs:
  - "${LOAD_BALANCER_DNS}"
controlPlaneEndpoint: "${LOAD_BALANCER_DNS}:6443"
etcd:
  external:
    endpoints:
    - https://${ETCD_0_IP}:2379
    - https://${ETCD_1_IP}:2379
    - https://${ETCD_2_IP}:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
networking:
  podSubnet: "192.168.0.0/16"
EOF

Create the first master

Run:
kubeadm init --config=kubeadm-config.yaml

When it finishes, the output ends with a kubeadm join command carrying the token and discovery hash (like the ones used in the join steps below); save it.
Set up access credentials:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Create the remaining masters

Copy the certificates:
USER=root # customizable
CONTROL_PLANE_IPS="172.31.17.44 172.31.22.135"
for host in ${CONTROL_PLANE_IPS}; do
  scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
  scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
  scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
  scp /etc/kubernetes/admin.conf "${USER}"@$host:
done

On each remaining host run:
USER=root # customizable
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/admin.conf /etc/kubernetes/admin.conf

Join as control-plane nodes:
kubeadm join nlb-sgt-k8sapiserver-test-4748f2f556591bb7.elb.us-west-2.amazonaws.com:6443 --token u9hmb3.gwfozvsz90k3yt9g --discovery-token-ca-cert-hash sha256:24c354cce46de9c1eb1a8358b9ba064166e87cf6c011fecaae3350c3910c215a --experimental-control-plane

Forgot the discovery-token-ca-cert-hash? Recompute it:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed "s/^.* //"

Deploy the Calico network

First, check whether src/dst checks have been disabled on the AWS EC2 instances; Calico routing requires them to be off.
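Disabling the check from the CLI looks like this (the instance ID is a placeholder; repeat per node):

aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check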
Configure calicoctl

Download calicoctl:

curl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.4.0/calicoctl
chmod +x calicoctl
mv calicoctl /usr/bin/

Create the calico config file, pointing calicoctl at the same etcd cluster and the kubeadm-generated certificates:

mkdir -p /etc/calico
cat > /etc/calico/calicoctl.cfg <<EOF
apiVersion: projectcalico.org/v3
kind: CalicoAPIConfig
metadata:
spec:
  datastoreType: "etcdv3"
  etcdEndpoints: "https://172.31.22.208:2379,https://172.31.17.44:2379,https://172.31.22.135:2379"
  etcdKeyFile: "/etc/kubernetes/pki/etcd/server.key"
  etcdCertFile: "/etc/kubernetes/pki/etcd/server.crt"
  etcdCACertFile: "/etc/kubernetes/pki/etcd/ca.crt"
EOF

Variables used:

export ETCD_KEY=$(cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n')
export ETCD_CERT=$(cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n')
export ETCD_CA=$(cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n')

Create calico.yml:

cat > calico.yml <<EOF
# Calico Version v3.4.0
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Configure this with the location of your etcd cluster.
  etcd_endpoints: "https://172.31.22.208:2379,https://172.31.17.44:2379,https://172.31.22.135:2379"

  # Paths inside the pods where the etcd TLS secrets are mounted.
  etcd_ca: "/calico-secrets/etcd-ca"
  etcd_cert: "/calico-secrets/etcd-cert"
  etcd_key: "/calico-secrets/etcd-key"

  # Configure the Calico backend to use.
  calico_backend: "bird"

  # Configure the MTU to use.
  veth_mtu: "1440"

  # The CNI network configuration to install on each node.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.0",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "etcd_endpoints": "__ETCD_ENDPOINTS__",
          "etcd_key_file": "__ETCD_KEY_FILE__",
          "etcd_cert_file": "__ETCD_CERT_FILE__",
          "etcd_ca_cert_file": "__ETCD_CA_CERT_FILE__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        }
      ]
    }

---
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: calico-etcd-secrets
  namespace: kube-system
data:
  # The etcd TLS material, base64-encoded.
  # Example command to encode a file's contents: cat <file> | base64 -w 0
  etcd-key: ${ETCD_KEY}
  etcd-cert: ${ETCD_CERT}
  etcd-ca: ${ETCD_CA}

---
# This manifest installs the calico/node container, as well
# as the Calico CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
      annotations:
        # This, along with the CriticalAddonsOnly toleration below,
        # marks the pod as a critical add-on, ensuring it gets
        # priority scheduling and that its resources are reserved
        # if it ever gets evicted.
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      initContainers:
        # This container installs the Calico CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: quay.io/calico/cni:v3.4.0
          command: ["/install-cni.sh"]
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
            - mountPath: /calico-secrets
              name: etcd-certs
      containers:
        # Runs calico/node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: quay.io/calico/node:v3.4.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Set noderef for node controller.
            - name: CALICO_K8S_NODE_REF
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              value: "Always"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within --cluster-cidr.
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
            # Disable file logging so kubectl logs works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            # Set Felix logging to "info"
            - name: FELIX_LOGSEVERITYSCREEN
              value: "info"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            httpGet:
              path: /liveness
              port: 9099
              host: localhost
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - /bin/calico-node
              - -bird-ready
              - -felix-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - mountPath: /calico-secrets
              name: etcd-certs
      volumes:
        # Used by calico/node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-node
  namespace: kube-system

---
# This manifest deploys the Calico Kubernetes controllers.
# See https://github.com/projectcalico/kube-controllers
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        beta.kubernetes.io/os: linux
      # The controllers must run in the host network namespace so that
      # it isn't governed by policy that would prevent it from working.
      hostNetwork: true
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      containers:
        - name: calico-kube-controllers
          image: quay.io/calico/kube-controllers:v3.4.0
          env:
            # The location of the Calico etcd cluster.
            - name: ETCD_ENDPOINTS
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_endpoints
            # Location of the CA certificate for etcd.
            - name: ETCD_CA_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_ca
            # Location of the client key for etcd.
            - name: ETCD_KEY_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_key
            # Location of the client certificate for etcd.
            - name: ETCD_CERT_FILE
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: etcd_cert
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: policy,namespace,serviceaccount,workloadendpoint,node
          volumeMounts:
            # Mount in the etcd TLS secrets.
            - mountPath: /calico-secrets
              name: etcd-certs
          readinessProbe:
            exec:
              command:
              - /usr/bin/check-status
              - -r
      volumes:
        # Mount in the etcd TLS secrets with mode 400.
        # See https://kubernetes.io/docs/concepts/configuration/secret/
        - name: etcd-certs
          secret:
            secretName: calico-etcd-secrets
            defaultMode: 0400

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: calico-kube-controllers
  namespace: kube-system

---
# Include a clusterrole for the kube-controllers component,
# and bind it to the calico-kube-controllers serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
rules:
  # Pods are monitored for changing labels.
  # The node controller monitors Kubernetes nodes.
  # Namespace and serviceaccount labels are used for policy.
  - apiGroups:
      - ""
    resources:
      - pods
      - nodes
      - namespaces
      - serviceaccounts
    verbs:
      - watch
      - list
  # Watch for changes to Kubernetes NetworkPolicies.
  - apiGroups:
      - networking.k8s.io
    resources:
      - networkpolicies
    verbs:
      - watch
      - list
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-kube-controllers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-kube-controllers
subjects:
- kind: ServiceAccount
  name: calico-kube-controllers
  namespace: kube-system
---
# Include a clusterrole for the calico-node DaemonSet,
# and bind it to the calico-node serviceaccount.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: calico-node
rules:
  # The CNI plugin needs to get pods, nodes, and namespaces.
  - apiGroups: [""]
    resources:
      - pods
      - nodes
      - namespaces
    verbs:
      - get
  - apiGroups: [""]
    resources:
      - endpoints
      - services
    verbs:
      # Used to discover service IPs for advertisement.
      - watch
      - list
  - apiGroups: [""]
    resources:
      - nodes/status
    verbs:
      # Needed for clearing NodeNetworkUnavailable flag.
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: calico-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: calico-node
subjects:
- kind: ServiceAccount
  name: calico-node
  namespace: kube-system
---
EOF

Deploy Calico:

kubectl apply -f calico.yml

Set the IP pool

Run:
calicoctl apply -f - << EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  ipipMode: CrossSubnet
  natOutgoing: true
EOF
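The pool can be checked with calicoctl afterwards (a quick verification, not part of the original sequence):

calicoctl get ippool -o wide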
Deploy the worker nodes

Perform all of the host-setup steps above on each node, then run the join command:
kubeadm join nlb-sgt-k8sapiserver-test-4748f2f556591bb7.elb.us-west-2.amazonaws.com:6443 --token u9hmb3.gwfozvsz90k3yt9g --discovery-token-ca-cert-hash sha256:24c354cce46de9c1eb1a8358b9ba064166e87cf6c011fecaae3350c3910c215a

Verify:
kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-172-31-17-44.us-west-2.compute.internal    Ready    master   4m2s    v1.13.0
ip-172-31-22-135.us-west-2.compute.internal   Ready    master   3m59s   v1.13.0
ip-172-31-22-208.us-west-2.compute.internal   Ready    master   16h     v1.13.0
ip-172-31-29-58.us-west-2.compute.internal    Ready    <none>   14h     v1.13.0

Deploy add-ons

Deploy the AWS default StorageClass:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/storage-class/aws/default.yaml

Create the alb-ingress-controller

Tag the subnets

Tag the AWS subnets so the ingress controller can auto-discover the subnets used for ALBs (see the CLI sketch after this list):
kubernetes.io/cluster/${cluster-name} must be set to owned or shared
kubernetes.io/role/internal-elb must be set to 1 or an empty tag value for internal LoadBalancers
kubernetes.io/role/elb must be set to 1 or an empty tag value for internet-facing LoadBalancers
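For example, tagging one public subnet for internet-facing ALBs (the subnet ID is a placeholder; the cluster name matches the one used below):

aws ec2 create-tags --resources subnet-0abc1234 --tags \
  Key=kubernetes.io/cluster/k8s-us-west-test-1,Value=shared \
  Key=kubernetes.io/role/elb,Value=1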
RBAC:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.1/docs/examples/rbac-role.yaml

Create the controller from the following yaml:

# Application Load Balancer (ALB) Ingress Controller Deployment Manifest.
# This manifest details sensible defaults for deploying an ALB Ingress Controller.
# GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: alb-ingress-controller
  name: alb-ingress-controller
  # Namespace the ALB Ingress Controller should run in. Does not impact which
  # namespaces it's able to resolve ingress resource for. For limiting ingress
  # namespace scope, see --watch-namespace.
  namespace: kube-system
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alb-ingress-controller
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        iam.amazonaws.com/role: arn:aws:iam::1234567:role/Role-KubernetesIngressController-test
      labels:
        app: alb-ingress-controller
    spec:
      containers:
        - args:
            # Limit the namespace where this ALB Ingress Controller deployment will
            # resolve ingress resources. If left commented, all namespaces are used.
            # - --watch-namespace=your-k8s-namespace
            # Setting the ingress-class flag below ensures that only ingress resources with the
            # annotation kubernetes.io/ingress.class: "alb" are respected by the controller. You may
            # choose any class you'd like for this controller to respect.
            - --ingress-class=alb
            # Name of your cluster. Used when naming resources created
            # by the ALB Ingress Controller, providing distinction between
            # clusters.
            - --cluster-name=k8s-us-west-test-1
            # AWS VPC ID this ingress controller will use to create AWS resources.
            # If unspecified, it will be discovered from ec2metadata.
            # - --aws-vpc-id=vpc-xxxxxx
            # AWS region this ingress controller will operate in.
            # If unspecified, it will be discovered from ec2metadata.
            # List of regions: http://docs.aws.amazon.com/general/latest/gr/rande.html#vpc_region
            # - --aws-region=us-west-1
            # Enables logging on all outbound requests sent to the AWS API.
            # If logging is desired, set to true.
            # - --aws-api-debug
            # Maximum number of times to retry the aws calls.
            # defaults to 10.
            # - --aws-max-retries=10
          env:
            # AWS key id for authenticating with the AWS API.
            # This is only here for examples. It's recommended you instead use
            # a project like kube2iam for granting access.
            #- name: AWS_ACCESS_KEY_ID
            #  value: KEYVALUE
            # AWS key secret for authenticating with the AWS API.
            # This is only here for examples. It's recommended you instead use
            # a project like kube2iam for granting access.
            #- name: AWS_SECRET_ACCESS_KEY
            #  value: SECRETVALUE
          # Repository location of the ALB Ingress Controller.
          image: 894847497797.dkr.ecr.us-west-2.amazonaws.com/aws-alb-ingress-controller:v1.0.1
          imagePullPolicy: Always
          name: server
          resources: {}
          terminationMessagePath: /dev/termination-log
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      serviceAccountName: alb-ingress
      serviceAccount: alb-ingress

Note that cluster-name must be set to your cluster's name.
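Once the pod is up, the controller logs should show it discovering the cluster's VPC and subnets (the deployment name is the one from the manifest above):

kubectl -n kube-system logs deploy/alb-ingress-controller --tail=50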
Create the dashboard:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

You then need to create an admin user and grant it an admin role binding. Use the yaml below to create the admin user with administrator privileges, then log in to the dashboard with its token; the file is admin-role.yaml:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

Get the token:
kubectl -n kube-system get secret | grep admin-token
admin-token-cs4gs   kubernetes.io/service-account-token   3   10m
kubectl describe secret admin-token-cs4gs -n kube-system

Redeploying from scratch

kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear
ifconfig tunl0 down
ip link delete tunl0

Upgrading kubeadm and related binaries

Upgrade kubeadm:
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
chmod a+rx /usr/bin/kubeadm

Upgrade kubectl:
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubectl > /usr/bin/kubectl
chmod a+rx /usr/bin/kubectl

Upgrade kubelet:
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubelet > /usr/bin/kubelet
chmod a+rx /usr/bin/kubelet
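Replacing the binary in place does not restart the running kubelet; pick up the new version with:

systemctl daemon-reload
systemctl restart kubelet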