

Deploying a Kubernetes 1.13.1 Cluster with kubeadm: A Hands-On Record

IntMain / 4,460 reads

Abstract: Although the binary installation method helps us understand a Kubernetes cluster, it is far too tedious. This article instead deploys a cluster with kubeadm, uses the classic flannel network add-on for the Pod network, and finishes by installing kubernetes-dashboard v1.10.0 for visual management of the cluster.


Overview

There are several ways to set up a Kubernetes cluster. In my earlier article "Building a Personal Private Cloud with the K8S Stack (Part: K8S Cluster Setup)", for example, I used the binary installation method. Although that approach helps us understand a k8s cluster, it is far too tedious. kubeadm, on the other hand, is the tool officially provided by Kubernetes for quickly bootstrapping a cluster; it has matured considerably over time, is very easy to get started with, and makes the whole operation much simpler, so this article walks through the process in detail.

Note: this article was first published on My Personal Blog: CodeSheep. You are welcome to visit.

Node Planning

This article deploys a three-node Kubernetes cluster with one master and two workers, planned as follows:

Hostname     IP              Role
k8s-master   192.168.39.79   k8s master node
k8s-node-1   192.168.39.77   k8s worker node
k8s-node-2   192.168.39.78   k8s worker node

The software versions on each node are as follows:

Operating system: CentOS-7.4-64Bit

Docker version: 1.13.1

Kubernetes version: 1.13.1

The following components need to be installed on every node:

Docker: needs no further introduction

kubelet: runs on every node and is responsible for starting containers and Pods

kubeadm: responsible for initializing the cluster

kubectl: the k8s command-line tool, used to deploy and manage applications and to perform CRUD operations on all kinds of resources


Preparation

Disable the firewall on all nodes

systemctl disable firewalld.service 
systemctl stop firewalld.service

Disable SELinux

setenforce 0

vi /etc/selinux/config
SELINUX=disabled

Disable swap on all nodes

swapoff -a
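Note that swapoff -a only disables swap until the next reboot. To keep it off permanently, the swap entry in /etc/fstab can also be commented out; the one-liner below is not from the original article and is only an illustrative sketch:

sed -ri '/\sswap\s/s/^#?/#/' /etc/fstab   # comment out any fstab line containing " swap "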

Set the hostname on each node

hostnamectl --static set-hostname  k8s-master
hostnamectl --static set-hostname  k8s-node-1
hostnamectl --static set-hostname  k8s-node-2

Add hostname/IP resolution to the hosts file on all nodes

Edit the /etc/hosts file and add the following entries:

192.168.39.79 k8s-master
192.168.39.77 k8s-node-1
192.168.39.78 k8s-node-2

Component Installation

0x01. Install Docker (all nodes)

No need to go into detail here!!!

0x02. Install kubelet, kubeadm, and kubectl (all nodes)

First prepare the yum repo. The body of the repo file was truncated in the original page, so the entries below use a commonly seen Aliyun mirror configuration and should be treated as an assumption:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes Repository
# NOTE: baseurl is an assumed Aliyun mirror; the original repo body was truncated
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

Then run the following commands to install them:

setenforce 0
sed -i "s/^SELINUX=enforcing$/SELINUX=disabled/" /etc/selinux/config

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
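The command above installs the newest packages available in the repo. Since this walkthrough targets Kubernetes 1.13.1, the versions can also be pinned explicitly; this is a hedged variant, assuming the repo carries the 1.13.1 builds:

yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
systemctl enable kubelet && systemctl start kubelet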


Master Node Configuration

0x01. Initialize the k8s cluster

To work around connectivity problems, in a mainland-China network environment we have to manually pull the required images from mirrors in advance and re-tag them:

docker pull mirrorgooglecontainers/kube-apiserver:v1.13.1
docker pull mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker pull mirrorgooglecontainers/kube-scheduler:v1.13.1
docker pull mirrorgooglecontainers/kube-proxy:v1.13.1
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker pull registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64

docker tag mirrorgooglecontainers/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
docker tag mirrorgooglecontainers/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
docker tag mirrorgooglecontainers/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
docker tag mirrorgooglecontainers/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

docker rmi mirrorgooglecontainers/kube-apiserver:v1.13.1
docker rmi mirrorgooglecontainers/kube-controller-manager:v1.13.1
docker rmi mirrorgooglecontainers/kube-scheduler:v1.13.1
docker rmi mirrorgooglecontainers/kube-proxy:v1.13.1
docker rmi mirrorgooglecontainers/pause:3.1
docker rmi mirrorgooglecontainers/etcd:3.2.24
docker rmi coredns/coredns:1.2.6
docker rmi registry.cn-shenzhen.aliyuncs.com/cp_m/flannel:v0.10.0-amd64
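The pull/tag/cleanup sequence above can also be written as a short loop over the image list; the following is an equivalent sketch using the same images and tags (coredns and flannel, which come from other registries, are left as in the original commands):

images=(kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 \
        kube-scheduler:v1.13.1 kube-proxy:v1.13.1 pause:3.1 etcd:3.2.24)
for img in "${images[@]}"; do
  docker pull "mirrorgooglecontainers/${img}"
  docker tag  "mirrorgooglecontainers/${img}" "k8s.gcr.io/${img}"
  docker rmi  "mirrorgooglecontainers/${img}"
done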

Then run the following command on the Master node to initialize the k8s cluster:

kubeadm init --kubernetes-version=v1.13.1 --apiserver-advertise-address 192.168.39.79 --pod-network-cidr=10.244.0.0/16

--kubernetes-version: specifies the k8s version

--apiserver-advertise-address: specifies which of the Master's network interfaces to use for communication; if omitted, kubeadm automatically picks the interface that has the default gateway

--pod-network-cidr: specifies the Pod network range. How this parameter is used depends on the chosen network add-on; this article uses the classic flannel add-on (an equivalent config-file form is sketched after this list).
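The console log below shows that the author actually ran kubeadm init --config kubeadm-config.yaml rather than passing the flags directly. That config file is not reproduced in the article; a minimal equivalent of the three flags above, written against the v1beta1 schema that kubeadm 1.13 uses, would look roughly like this (a hedged sketch, not the author's actual file):

cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.39.79      # same as --apiserver-advertise-address
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1             # same as --kubernetes-version
networking:
  podSubnet: "10.244.0.0/16"           # same as --pod-network-cidr
EOF
kubeadm init --config kubeadm-config.yaml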

After the command runs, the console prints the detailed cluster initialization process shown below:

[root@localhost ~]# kubeadm init --config kubeadm-config.yaml
W1224 11:01:25.408209   10137 strict.go:54] error unmarshaling configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1beta1", Kind:"ClusterConfiguration"}: error unmarshaling JSON: while decoding JSON: json: unknown field "\u00a0podSubnet"
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [192.168.39.79 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.39.79]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005638 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 26uprk.t7vpbwxojest0tvq
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.39.79:6443 --token 26uprk.t7vpbwxojest0tvq --discovery-token-ca-cert-hash sha256:028727c0c21f22dd29d119b080dcbebb37f5545e7da1968800140ffe225b0123

[root@localhost ~]#
0x02. Configure kubectl

On the Master, run the following commands as root to configure kubectl:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG
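To confirm that kubectl can actually reach the new API server, a couple of quick checks can be run (standard kubectl commands, not part of the original article):

kubectl cluster-info
kubectl get componentstatuses   # controller-manager, scheduler and etcd-0 should report Healthy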
0x03. Install the Pod network

Installing a Pod network is a prerequisite for Pods to communicate with each other. k8s supports many network add-ons; here we again choose the classic flannel.

First set the required system parameter:

sysctl net.bridge.bridge-nf-call-iptables=1
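The sysctl setting above does not survive a reboot. A common companion step (not in the original article) is to load the br_netfilter module and persist the parameters under /etc/sysctl.d:

modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system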

Then run the following command on the Master node:

kubectl apply -f kube-flannel.yaml
The kube-flannel.yaml file can be found here (link in the original post).
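If that link is unavailable, the upstream flannel v0.10.0 manifest serves the same purpose; the URL below is an assumption based on the coreos/flannel repository layout at that tag:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml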

Once the Pod network is installed, run the following command to check whether the CoreDNS Pods are now running; once they are, you can continue with the next steps:

kubectl get pods --all-namespaces -o wide

We can also see that the master node is now Ready:

kubectl get nodes


Adding the Slave Nodes

On each of the two slave nodes, run the following command to join it to the k8s cluster that is already up on the Master:

kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

If you have forgotten the token, you can retrieve it by running the following on the Master:

kubeadm token list
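Tokens expire after 24 hours by default, so on a cluster that has been up for a while the list may be empty. In that case a fresh join command can be generated on the Master; both commands below are standard kubeadm/openssl usage and are not from the original article:

kubeadm token create --print-join-command

# Recompute the CA certificate hash by hand if needed:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'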

The output of the kubeadm join command above is as follows:

[root@localhost ~]# kubeadm join 192.168.39.79:6443 --token ynffffdp.oamgloerxuune80q --discovery-token-ca-cert-hash sha256:7a45c40b5302aba7d8b9cbd3afc6d25c6bb8536dd6317aebcd2909b0427677c8
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.39.79:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.39.79:6443"
[discovery] Requesting info from "https://192.168.39.79:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.39.79:6443"
[discovery] Successfully established connection with API Server "192.168.39.79:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "localhost.localdomain" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run "kubectl get nodes" on the master to see this node join the cluster.

Verifying the Results

Check the node status:

kubectl get nodes

Check the status of all Pods:

kubectl get pods --all-namespaces -o wide

Good, the cluster is now up and running. Next, let's look at how to tear it down properly.


Tearing Down the Cluster

First drain and remove each node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

Once the nodes have been removed, the cluster can be reset with the following command:

kubeadm reset
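kubeadm reset does not clean up everything it created. A commonly recommended follow-up (not part of the original article) is to flush iptables rules and remove leftover CNI configuration on each node:

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
rm -rf /etc/cni/net.d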

Installing the dashboard

Just as we would pair elasticsearch with a visual management tool, it is best to give the k8s cluster one as well, to make the cluster easier to manage.

So next we install kubernetes-dashboard v1.10.0 for visual management of the cluster.

First manually pull the image and re-tag it (on all nodes):

docker pull registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0
docker tag registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
docker image rm registry.cn-qingdao.aliyuncs.com/wangxiaoke/kubernetes-dashboard-amd64:v1.10.0

Install the dashboard:

kubectl create -f dashboard.yaml

The dashboard.yaml file can be found here (link in the original post).
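If that link is unavailable, the upstream v1.10.0 recommended manifest can be used instead; the URL below is an assumption based on the kubernetes/dashboard repository layout at that tag:

kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.0/src/deploy/recommended/kubernetes-dashboard.yaml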

Check whether the dashboard Pod has started correctly; if it has, the installation succeeded:

 kubectl get pods --namespace=kube-system
[root@k8s-master ~]# kubectl get pods --namespace=kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-4rds2                1/1     Running   0          81m
coredns-86c58d9df4-rhtgq                1/1     Running   0          81m
etcd-k8s-master                         1/1     Running   0          80m
kube-apiserver-k8s-master               1/1     Running   0          80m
kube-controller-manager-k8s-master      1/1     Running   0          80m
kube-flannel-ds-amd64-8qzpx             1/1     Running   0          78m
kube-flannel-ds-amd64-jvp59             1/1     Running   0          77m
kube-flannel-ds-amd64-wztbk             1/1     Running   0          78m
kube-proxy-crr7k                        1/1     Running   0          81m
kube-proxy-gk5vf                        1/1     Running   0          78m
kube-proxy-ktr27                        1/1     Running   0          77m
kube-scheduler-k8s-master               1/1     Running   0          80m
kubernetes-dashboard-79ff88449c-v2jnc   1/1     Running   0          21s

Check the port on which the dashboard is exposed externally:

kubectl get service --namespace=kube-system
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
kube-dns               ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP   5h38m
kubernetes-dashboard   NodePort    10.99.242.186   <none>        443:31234/TCP   14
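Since the service is exposed as a NodePort on 31234 (see the output above), the dashboard should be reachable over HTTPS at any node's IP; a quick reachability check (illustrative only, using the master's IP):

curl -k https://192.168.39.79:31234/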

Generate a private key and a certificate signing request:

openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
rm dashboard.pass.key
openssl req -new -key dashboard.key -out dashboard.csr   # if prompted for input, just press Enter through every prompt

Generate the SSL certificate:

openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt

Then place the generated dashboard.key and dashboard.crt under the path /home/share/certs. This path is configured in the dashboard-user-role.yaml file that we are about to use.

Create the dashboard user:

 kubectl create -f dashboard-user-role.yaml

The dashboard-user-role.yaml file can be found here (link in the original post).
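The linked dashboard-user-role.yaml is not reproduced in the article. Judging from the token output below (a ServiceAccount named admin in kube-system), it most likely creates an admin ServiceAccount bound to the cluster-admin role; the following is a hedged reconstruction, not the author's actual file:

cat <<EOF > dashboard-user-role.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
EOF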

Get the login token:

kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
[root@k8s-master ~]# kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
Name:         admin-token-9d4vl
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin
              kubernetes.io/service-account.uid: a320b00f-07ed-11e9-93f2-000c2978f207

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi05ZDR2bCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImEzMjBiMDBmLTA3ZWQtMTFlOS05M2YyLTAwMGMyOTc4ZjIwNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbiJ9.WbaHx-BfZEd0SvJwA9V_vGUe8jPMUHjKlkT7MWJ4JcQldRFY8Tdpv5GKCY25JsvT_GM3ob303r0yE6vjQdKna7EfQNO_Wb2j1Yu5UvZnWw52HhNudHNOVL_fFRKxkSVjAILA_C_HvW6aw6TG5h7zHARgl71I0LpW1VESeHeThipQ-pkt-Dr1jWcpPgE39cwxSgi-5qY4ssbyYBc2aPYLsqJibmE-KUhwmyOheF4Lxpg7E3SQEczsig2HjXpNtJizCu0kPyiR4qbbsusulH-kdgjhmD9_XWP9k0BzgutXWteV8Iqe4-uuRGHZAxgutCvaL5qENv4OAlaArlZqSgkNWw

Now that the token has been generated successfully, we can open a browser and enter the token to log in to the cluster management page:


Postscript
My abilities are limited, so if there are any mistakes or inappropriate points, please point them out so we can learn from each other!

My Personal Blog: CodeSheep

My six-month journey of technical blogging


