

etcd management: certificate configuration, expansion, migration and recovery, and adding nodes with certificates

張漢慶

Abstract: Configuring certificates for etcd is critical in production. Without them, the cluster is easily hijacked by attackers, for example for cryptomining. The details matter enormously: get a single port or IP wrong and you will hit all kinds of errors, down to small gotchas like having to wipe the etcd data on newly added nodes.

Ad | kubernetes offline installation packages for every version
etcd certificate configuration

Configuring certificates for etcd in production is critical. Without them, a k8s cluster is easily exploited, for example hijacked for cryptomining. The attack is simple: say you pull an unsafe image that scans for etcd's IP and port; an attacker can then bypass apiserver authentication entirely and write data into etcd directly, creating deployments, pods, and so on, which the apiserver reads and dutifully runs. One of our clusters was hijacked for mining exactly this way. Security is no small matter, and a malicious attacker could just as easily delete all your data, so certificates and regular backups both matter, even with multiple etcd nodes. This article digs into the key aspects of etcd management.

Certificate generation

Install cfssl:

mkdir ~/bin
curl -s -L -o ~/bin/cfssl https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
curl -s -L -o ~/bin/cfssljson https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x ~/bin/{cfssl,cfssljson}
export PATH=$PATH:~/bin
mkdir ~/cfssl
cd ~/cfssl

Write the following JSON files, replacing the IPs with your own. The server profile is for certificates etcd presents to its clients, the peer profile for member-to-member TLS (note it carries both server auth and client auth usages), and the client profile for clients such as etcdctl and the apiserver.

[root@dev-86-201 cfssl]# cat ca-config.json
{
    "signing": {
        "default": {
            "expiry": "43800h"
        },
        "profiles": {
            "server": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "43800h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
[root@dev-86-201 cfssl]# cat ca-csr.json
{
    "CN": "My own CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "O": "My Company Name",
            "ST": "San Francisco",
            "OU": "Org Unit 1"
        }
    ]
}
[root@dev-86-201 cfssl]# cat server.json
{
    "CN": "etcd0",
    "hosts": [
        "127.0.0.1",
        "0.0.0.0",
        "10.1.86.201",
        "10.1.86.203",
        "10.1.86.202"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}

[root@dev-86-201 cfssl]# cat member1.json  # use this node's own IP
{
    "CN": "etcd0",
    "hosts": [
        "10.1.86.201"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}

[root@dev-86-201 cfssl]# cat client.json
{
    "CN": "client",
    "hosts": [
       ""
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}

Generate the certificates:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server server.json | cfssljson -bare server
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer member1.json | cfssljson -bare member1
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client.json | cfssljson -bare client
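To sanity-check what cfssl produced, a quick look at each certificate's subject and expiry catches a bad IP or profile early. A minimal helper sketch (assumes the .pem files generated above sit in the current directory; show_cert is a name of my choosing, not part of cfssl):

```shell
# Print the subject and expiry date of one PEM certificate (hypothetical helper).
show_cert() {
    openssl x509 -in "$1" -noout -subject -enddate
}
# e.g.: for c in ca server member1 client; do show_cert "$c.pem"; done
```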
Start etcd

Copy the cfssl directory to /etc/kubernetes/pki/cfssl:

[root@dev-86-201 manifests]# cat etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.1.86.201:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.pem
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.1.86.201:2380
    - --initial-cluster=etcd0=https://10.1.86.201:2380
    - --key-file=/etc/kubernetes/pki/etcd/server-key.pem
    - --listen-client-urls=https://10.1.86.201:2379
    - --listen-peer-urls=https://10.1.86.201:2380
    - --name=etcd0
    - --peer-cert-file=/etc/kubernetes/pki/etcd/member1.pem
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/member1-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
   #livenessProbe:
   #  exec:
   #    command:
   #    - /bin/sh
   #    - -ec
   #    - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.201]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.pem
   #      --cert=/etc/kubernetes/pki/etcd/client.pem --key=/etc/kubernetes/pki/etcd/client-key.pem
   #      get foo
   #  failureThreshold: 8
   #  initialDelaySeconds: 15
   #  timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/cfssl
      type: DirectoryOrCreate
    name: etcd-certs
status: {}

Exec into the etcd container and run:

alias etcdv3="ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.201]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/client.pem --key=/etc/kubernetes/pki/etcd/client-key.pem"
etcdv3 member add etcd1 --peer-urls="https://10.1.86.202:2380"
Adding a node

Copy the certificates from the etcd0 node (10.1.86.201) to the etcd1 node (10.1.86.202), then edit member1.json:

{
    "CN": "etcd1",
    "hosts": [
        "10.1.86.202"
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}

Regenerate the member1 certificate on etcd1:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer member1.json | cfssljson -bare member1

Start etcd1:

[root@dev-86-202 manifests]# cat etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.1.86.202:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.pem
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.1.86.202:2380
    - --initial-cluster=etcd0=https://10.1.86.201:2380,etcd1=https://10.1.86.202:2380
    - --key-file=/etc/kubernetes/pki/etcd/server-key.pem
    - --listen-client-urls=https://10.1.86.202:2379
    - --listen-peer-urls=https://10.1.86.202:2380
    - --name=etcd1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/member1.pem
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/member1-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --initial-cluster-state=existing  # do NOT put double quotes around this value; that mistake cost me dearly
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
  # livenessProbe:
  #   exec:
  #     command:
  #     - /bin/sh
  #     - -ec
  #     - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.202]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
  #       --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
  #       get foo
  #   failureThreshold: 8
  #   initialDelaySeconds: 15
  #   timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/cfssl
      type: DirectoryOrCreate
    name: etcd-certs
status: {}

Or test it first with docker:

docker run --net=host -v /etc/kubernetes/pki/cfssl:/etc/kubernetes/pki/etcd k8s.gcr.io/etcd-amd64:3.2.18 etcd \
--advertise-client-urls=https://10.1.86.202:2379 \
--cert-file=/etc/kubernetes/pki/etcd/server.pem \
--data-dir=/var/lib/etcd \
--initial-advertise-peer-urls=https://10.1.86.202:2380 \
--initial-cluster=etcd0=https://10.1.86.201:2380,etcd1=https://10.1.86.202:2380 \
--key-file=/etc/kubernetes/pki/etcd/server-key.pem \
--listen-client-urls=https://10.1.86.202:2379 \
--listen-peer-urls=https://10.1.86.202:2380 --name=etcd1 \
--peer-cert-file=/etc/kubernetes/pki/etcd/member1.pem \
--peer-key-file=/etc/kubernetes/pki/etcd/member1-key.pem \
--peer-client-cert-auth=true \
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem --snapshot-count=10000 \
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem --initial-cluster-state="existing"

Check cluster health from etcd0:

# etcdctl --endpoints=https://[10.1.86.201]:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.pem --cert-file=/etc/kubernetes/pki/etcd/client.pem --key-file=/etc/kubernetes/pki/etcd/client-key.pem cluster-health
member 5856099674401300 is healthy: got healthy result from https://10.1.86.201:2379
member df99f445ac908d15 is healthy: got healthy result from https://10.1.86.202:2379
cluster is healthy

Adding etcd2 works the same way and is omitted here.

apiserver etcd certificate configuration:

- --etcd-cafile=/etc/kubernetes/pki/cfssl/ca.pem
- --etcd-certfile=/etc/kubernetes/pki/cfssl/client.pem
- --etcd-keyfile=/etc/kubernetes/pki/cfssl/client-key.pem
Snapshots and node expansion

etcd snapshot restore

Note: on a cluster with certificates enabled, every command below must carry the certificate flags, or it cannot connect:

--cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key

endpoints defaults to 127.0.0.1:2379; to target a remote etcd, pass the address explicitly:

--endpoints 172.16.154.81:2379

1. Take a data snapshot

ETCDCTL_API=3 etcdctl snapshot save snapshot.db

2. Restore the data from the snapshot

ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --data-dir=/var/lib/etcd/

3. Start the new etcd node with --data-dir=/var/lib/etcd/
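Since a snapshot only helps if you actually have a recent one, the save step is worth scripting. Below is a sketch of a timestamped-backup helper suitable for cron (it assumes the certificate paths used throughout this article; the function name backup_etcd, the backup directory, and the retention count of 7 are my own choices):

```shell
# Save a timestamped etcd snapshot and prune old ones (hypothetical helper).
backup_etcd() {
    local dir=${1:-/var/backups/etcd}
    mkdir -p "$dir"
    ETCDCTL_API=3 etcdctl \
        --endpoints=https://127.0.0.1:2379 \
        --cacert=/etc/kubernetes/pki/etcd/ca.pem \
        --cert=/etc/kubernetes/pki/etcd/client.pem \
        --key=/etc/kubernetes/pki/etcd/client-key.pem \
        snapshot save "$dir/snapshot-$(date +%Y%m%d-%H%M%S).db"
    # keep only the 7 most recent snapshots
    ls -1t "$dir"/snapshot-*.db 2>/dev/null | tail -n +8 | xargs -r rm -f
}
```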

etcd node expansion

Node name | IP | Notes
infra0 | 172.16.154.81 | initial node; the k8s master, running the kubeadm-deployed single-node etcd
infra1 | 172.16.154.82 | node to be added; a k8s worker node
infra2 | 172.16.154.83 | node to be added; a k8s worker node

1. Take a data snapshot from the initial etcd node

ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --endpoints=https://127.0.0.1:2379 snapshot save snapshot.db

2. Copy the snapshot file snapshot.db to the infra1 node and run the restore command:

ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --data-dir=/var/lib/etcd/

Note: running the command above requires etcdctl on the machine. (etcdctl snapshot restore also accepts --name, --initial-cluster, and --initial-advertise-peer-urls to stamp the new member metadata into the restored data; worth knowing if the restored node refuses to start with a name mismatch.)

On success, the snapshot data is written into the /var/lib/etcd directory.

3. Start etcd on infra1
Place the following yaml into /etc/kubernetes/manifests:

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: etcd
    tier: control-plane
  name: etcd-172.16.154.82
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --name=infra0
    - --initial-advertise-peer-urls=http://172.16.154.82:2380
    - --listen-peer-urls=http://172.16.154.82:2380
    - --listen-client-urls=http://172.16.154.82:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://172.16.154.82:2379
    - --data-dir=/var/lib/etcd
    - --initial-cluster-token=etcd-cluster-1
    - --initial-cluster=infra0=http://172.16.154.82:2380
    - --initial-cluster-state=new
    image: hub.xfyun.cn/k8s/etcd-amd64:3.1.12
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd
  hostNetwork: true
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data

4. Join infra2 to the etcd cluster
Run inside the etcd container on infra1:

ETCDCTL_API=3 etcdctl member add infra2 --peer-urls="http://172.16.154.83:2380"

Then place the following yaml into /etc/kubernetes/manifests; kubelet will start the etcd container:

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: etcd
    tier: control-plane
  name: etcd-172.16.154.83
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --name=infra2
    - --initial-advertise-peer-urls=http://172.16.154.83:2380
    - --listen-peer-urls=http://172.16.154.83:2380
    - --listen-client-urls=http://172.16.154.83:2379,http://127.0.0.1:2379
    - --advertise-client-urls=http://172.16.154.83:2379
    - --data-dir=/var/lib/etcd
    - --initial-cluster-token=etcd-cluster-1
    - --initial-cluster=infra0=http://172.16.154.82:2380,infra2=http://172.16.154.83:2380
    - --initial-cluster-state=existing
    image: hub.xfyun.cn/k8s/etcd-amd64:3.1.12
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2379
        scheme: HTTP
      failureThreshold: 8
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    volumeMounts:
    - name: etcd-data
      mountPath: /var/lib/etcd
  hostNetwork: true
  volumes:
  - hostPath:
      path: /home/etcd
      type: DirectoryOrCreate
    name: etcd-data

To add infra0 back into the cluster, repeat the steps above. Note: before a node joins the cluster, delete its old /var/lib/etcd/ data.

Practice: adding etcd nodes to a kubeadm single-etcd cluster

Environment

10.1.86.201  the existing single etcd node, etcd0

10.1.86.202  node to add, etcd1

10.1.86.203  node to add, etcd2

Install k8s

First bring k8s up on the etcd0 node, naturally with the sealyun installer; its three-step install needs no explanation here.

Update the certificates

Generate the certificates as described above and copy them to the right directory:

cp -r cfssl/ /etc/kubernetes/pki/

Edit the etcd configuration:

cd /etc/kubernetes/manifests/
mv etcd.yaml ..   # do not edit it in place, or k8s may pick up the vim swap file
vim ../etcd.yaml

Do a global replace in vim to change 127.0.0.1 to the node's IP:

:%s/127.0.0.1/10.1.86.201/g
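If you prefer doing this non-interactively, sed performs the same substitution (a sketch, shown here on a sample line; run it with -i against ../etcd.yaml to edit the file in place):

```shell
# The same global replace vim does, as a sed pipeline.
echo '- --listen-client-urls=https://127.0.0.1:2379' \
    | sed 's/127\.0\.0\.1/10.1.86.201/g'
# → - --listen-client-urls=https://10.1.86.201:2379
```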

Comment out the liveness probe; otherwise the health check will knock etcd0 over while the new node is joining:

#   livenessProbe:
#     exec:
#       command:
#       - /bin/sh
#       - -ec
#       - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.201]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
#         --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
#         get foo
#     failureThreshold: 8
#     initialDelaySeconds: 15
#     timeoutSeconds: 15

Point the certificate volume at the cfssl directory:

  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/cfssl
      type: DirectoryOrCreate
    name: etcd-certs

With all the certificate changes in place, the full file looks like this:

[root@dev-86-201 manifests]# cat ../etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.1.86.201:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.pem
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.1.86.201:2380
    - --initial-cluster=etcd0=https://10.1.86.201:2380
    - --key-file=/etc/kubernetes/pki/etcd/server-key.pem
    - --listen-client-urls=https://10.1.86.201:2379
    - --listen-peer-urls=https://10.1.86.201:2380
    - --name=dev-86-201
    - --peer-cert-file=/etc/kubernetes/pki/etcd/member1.pem
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/member1-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
#   livenessProbe:
#     exec:
#       command:
#       - /bin/sh
#       - -ec
#       - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.201]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
#         --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
#         get foo
#     failureThreshold: 8
#     initialDelaySeconds: 15
#     timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki/cfssl
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}

Start etcd by moving the yaml file back into place:

mv ../etcd.yaml .

Update the apiserver flags:

mv kube-apiserver.yaml ..
vim ../kube-apiserver.yaml
    - --etcd-cafile=/etc/kubernetes/pki/cfssl/ca.pem
    - --etcd-certfile=/etc/kubernetes/pki/cfssl/client.pem
    - --etcd-keyfile=/etc/kubernetes/pki/cfssl/client-key.pem
    - --etcd-servers=https://10.1.86.201:2379

Start the apiserver:

mv ../kube-apiserver.yaml .

Verify:

kubectl get pod -n kube-system  # a normal pod listing means success

That completes the work on etcd0.

Add the new node: exec into the etcd container:

[root@dev-86-201 ~]# docker exec -it a7001397e1e5 sh
/ # alias etcdv3="ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.201]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/client.pem --key=/etc/kubernetes/pki/etcd/client-key.pem"
/ # etcdv3 member update a874c87fd42044f  --peer-urls="https://10.1.86.201:2380" # update the peer URL first; this step matters
/ # etcdv3 member add etcd1 --peer-urls="https://10.1.86.202:2380"
Member 20c2a99381581958 added to cluster c9be114fc2da2776

ETCD_NAME="etcd1"
ETCD_INITIAL_CLUSTER="dev-86-201=https://127.0.0.1:2380,etcd1=https://10.1.86.202:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"

/ # alias etcdv2="ETCDCTL_API=2 etcdctl --endpoints=https://[10.1.86.201]:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.pem --cert-file=/etc/kubernetes/pki/etcd/client.pem --key-file=/etc/kubernetes/pki/etcd/client-key.pem"
/ # etcdv2 cluster-health
Adding the etcd node on etcd1

As before, first install k8s on etcd1 (10.1.86.202), the same way as on etcd0.

Copy the cfssl certificate directory from etcd0 to etcd1 for later use:

scp -r root@10.1.86.201:/etc/kubernetes/pki/cfssl /etc/kubernetes/pki

Edit member1.json:

[root@dev-86-202 cfssl]# cat member1.json
{
    "CN": "etcd1",      # change the CN
    "hosts": [
        "10.1.86.202"   # most importantly, set this node's own IP
    ],
    "key": {
        "algo": "ecdsa",
        "size": 256
    },
    "names": [
        {
            "C": "US",
            "L": "CA",
            "ST": "San Francisco"
        }
    ]
}

Regenerate the member1 certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer member1.json | cfssljson -bare member1

Verify the certificate:

openssl x509 -in member1.pem -text -noout
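Beyond eyeballing the full dump, the one field that actually bites is the SAN list: if this node's IP is missing there, peers will reject the TLS handshake. A small sketch for checking it (check_san is a hypothetical helper name, not an openssl subcommand):

```shell
# Print the Subject Alternative Name entries of a certificate.
check_san() {
    openssl x509 -in "$1" -noout -text | grep -A1 'Subject Alternative Name'
}
# e.g.: check_san member1.pem   # should list this node's IP
```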

Edit etcd1's etcd configuration:

mv etcd.yaml ..
rm /var/lib/etcd/ -rf # this joining node must sync etcd0's data, so delete its own data first
vim ../etcd.yaml

The modified yaml file:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://10.1.86.202:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.pem
    - --data-dir=/var/lib/etcd
    - --initial-advertise-peer-urls=https://10.1.86.202:2380
    - --initial-cluster=etcd0=https://10.1.86.201:2380,etcd1=https://10.1.86.202:2380
    - --key-file=/etc/kubernetes/pki/etcd/server-key.pem
    - --listen-client-urls=https://10.1.86.202:2379
    - --listen-peer-urls=https://10.1.86.202:2380
    - --name=etcd1
    - --peer-cert-file=/etc/kubernetes/pki/etcd/member1.pem
    - --peer-client-cert-auth=true
    - --peer-key-file=/etc/kubernetes/pki/etcd/member1-key.pem
    - --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --snapshot-count=10000
    - --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.pem
    - --initial-cluster-state=existing  # do NOT put double quotes around this value; that mistake cost me dearly
    image: k8s.gcr.io/etcd-amd64:3.2.18
    imagePullPolicy: IfNotPresent
  # livenessProbe:
  #   exec:
  #     command:
  #     - /bin/sh
  #     - -ec
  #     - ETCDCTL_API=3 etcdctl --endpoints=https://[10.1.86.202]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt
  #       --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key
  #       get foo
  #   failureThreshold: 8
  #   initialDelaySeconds: 15
  #   timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/etcd
      name: etcd-data
    - mountPath: /etc/kubernetes/pki/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
  - hostPath:
      path: /etc/kubernetes/pki/cfssl
      type: DirectoryOrCreate
    name: etcd-certs
status: {}

From inside the container, we can see the cluster is now running healthily:

/ # alias etcdv2="ETCDCTL_API=2 etcdctl --endpoints=https://[10.1.86.201]:2379 --ca-file=/etc/kubernetes/pki/etcd/ca.pem --cert-file=/etc/kubernetes/pki/etcd/client.pem --key-file=/etc/kubernetes/pki/etcd/client-key.pem"
/ # etcdv2 cluster-health
member a874c87fd42044f is healthy: got healthy result from https://10.1.86.201:2379
member bbbbf223ec75e000 is healthy: got healthy result from https://10.1.86.202:2379
cluster is healthy

Then add etcd1 to the apiserver startup flags:

    - --etcd-servers=https://10.1.86.201:2379
    - --etcd-servers=https://10.1.86.202:2379
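Repeating the flag works here because, to the best of my knowledge, kube-apiserver treats --etcd-servers as an appendable list; the more common kubeadm style is a single comma-separated flag, which should be equivalent (a hedged note, not something this setup requires):

```yaml
    - --etcd-servers=https://10.1.86.201:2379,https://10.1.86.202:2379
```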

The third node is added the same way as the second, so it is not repeated here.

The details matter enormously: get a single port or IP wrong and you will run into all kinds of errors, down to small gotchas like clearing the etcd data on newly added nodes.
Done!
