

CentOS 8: Online and Offline K8S Cluster Setup

IT那活兒


1. Configuration

OS: CentOS 8

Kernel: 4.18.0-147.8.1.el8_1.x86_64

IP addresses:

192.168.37.128 k8s1

192.168.37.130 k8s2

192.168.37.131 k8s3

Note: Kubernetes requires a Linux kernel of 3.10 or later; the installation will fail on older kernels.


2. Deploying a Kubernetes Cluster with kubeadm

(this article mainly covers the online installation)

2.1 Configure the hostnames (run each command on its corresponding node)

hostnamectl set-hostname k8s1

hostnamectl set-hostname k8s2

hostnamectl set-hostname k8s3


2.2 Configure the IP address

Edit the interface configuration file (on CentOS 8 this is typically /etc/sysconfig/network-scripts/ifcfg-eth0; adjust the device name to match your system):

DEVICE=eth0

TYPE=Ethernet

ONBOOT=yes

BOOTPROTO=static

IPADDR=192.168.37.XXX

NETMASK=255.255.255.0

GATEWAY=192.168.37.2
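
On CentOS 8 the ifcfg files are read by NetworkManager, so the change has to be reloaded to take effect. A minimal sketch, assuming the connection shares the device name eth0:

nmcli connection reload

nmcli connection up eth0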


2.3 Hostname resolution

cat /etc/hosts

127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4

::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

#install_add

192.168.37.128 k8s1

192.168.37.130 k8s2

192.168.37.131 k8s3


2.4 Host security configuration

Disable firewalld

systemctl stop firewalld

systemctl disable firewalld

firewall-cmd --state


SELinux configuration (requires a host reboot to take full effect)

sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
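
To switch SELinux off for the current session as well, without waiting for the reboot, the usual companion commands are:

setenforce 0

getenforce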


Permanently disable the swap partition (kubeadm requires swap to be disabled; reboot the OS after editing the file)

cat /etc/fstab


#

# /etc/fstab

# Created by anaconda on Sun May 10 07:55:21 2020

#

# Accessible filesystems, by reference, are maintained under /dev/disk/.

# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.

#

# After editing this file, run systemctl daemon-reload to update systemd

# units generated from this file.

#

/dev/mapper/cl-root / xfs defaults 0 0

UUID=ed5f7f26-6aef-4bb2-b4df-27e46ee612bf /boot ext4 defaults 1 2

/dev/mapper/cl-home /home xfs defaults 0 0

#/dev/mapper/cl-swap swap swap defaults 0 0

In /etc/fstab, comment out the line for the swap filesystem by adding a # at the beginning of that line, then confirm that swap shows 0:

free -m
              total        used        free      shared  buff/cache   available
Mem:           1965        1049          85           9         830         771
Swap:             0           0           0
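
The fstab edit only takes effect at the next boot; to turn swap off immediately in the current session, run:

swapoff -a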


2.5 Add bridge filtering

Add bridge filtering and IP forwarding settings:

cat /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.ipv4.ip_forward = 1

vm.swappiness = 0
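
If the file does not exist yet, it can be created in one step with a here-document; this is just a sketch that mirrors the four settings above:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF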


Load the br_netfilter module

modprobe br_netfilter


Verify that the module is loaded

lsmod | grep br_netfilter


Apply the configuration

sysctl -p /etc/sysctl.d/k8s.conf


2.6 Enable IPVS

Install ipset and ipvsadm

yum -y install ipset ipvsadm


Add the IPVS kernel modules (run on every node):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF

#!/bin/bash

modprobe -- ip_vs

modprobe -- ip_vs_rr

modprobe -- ip_vs_wrr

modprobe -- ip_vs_sh

modprobe -- nf_conntrack_ipv4

EOF


Load and verify the modules (note: on some CentOS 8 kernels nf_conntrack_ipv4 has been merged into nf_conntrack; if that modprobe fails, use nf_conntrack instead)

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4


2.7 Install docker-ce

Configure the Docker yum repository

wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo


Check the available Docker versions; this installation uses the latest one

yum list docker-ce.x86_64 --showduplicates | sort -r


Install Docker

yum -y install docker-ce
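
After the installation, enable and start the Docker service so the configuration steps below have a running daemon to act on:

systemctl enable --now docker

docker version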


2.8 Modify the Docker configuration

1. Mainly change the ExecStart line to move the default Docker storage location.

cat /usr/lib/systemd/system/docker.service


[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

BindsTo=containerd.service

After=network-online.target firewalld.service containerd.service

Wants=network-online.target

Requires=docker.socket


[Service]

Type=notify

# the default is not to use systemd for cgroups because the delegate issues still

# exists and systemd currently does not support the cgroup feature set required

# for containers run by docker

ExecStart=/usr/bin/dockerd --graph /data/docker

ExecReload=/bin/kill -s HUP $MAINPID

TimeoutSec=0

RestartSec=2

Restart=always


# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.

# Both the old, and new location are accepted by systemd 229 and up, so using the old location

# to make them work for either version of systemd.

StartLimitBurst=3


# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.

# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make

# this option work for either version of systemd.

StartLimitInterval=60s


# Having non-zero Limit*s causes performance problems due to accounting overhead

# in the kernel. We recommend using cgroups to do container-local accounting.

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity


# Comment TasksMax if your systemd version does not support it.

# Only systemd 226 and above support this option.

TasksMax=infinity


# set delegate yes so that systemd does not reset the cgroups of docker containers

Delegate=yes


# kill only the docker process, not all processes in the cgroup

KillMode=process


[Install]

WantedBy=multi-user.target


2. Add or modify the daemon.json file to set the cgroup driver, the storage driver and domestic registry mirrors

cat /etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": [
    "https://registry.docker-cn.com",
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn"
  ]
}


3. After configuring, reload the systemd units and restart Docker

systemctl daemon-reload

systemctl restart docker


Run docker info and check whether Registry Mirrors reflects the change.
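
For example, to show just that part of the output:

docker info | grep -A 5 "Registry Mirrors"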


2.9 Install kubectl, kubeadm and kubelet

Configure the Aliyun Kubernetes yum repository (note: the second gpgkey URL must be indented to line up with the first one, otherwise the repository will fail to load)

cat /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/

enabled=1

gpgcheck=1

repo_gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg


Run the installation

yum -y install kubectl kubeadm kubelet


2.10 Software configuration

The main configuration is for kubelet; without it the K8S cluster may fail to start. To keep the cgroup driver used by kubelet consistent with the one used by Docker, modify the following file.

vim /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"


Just enable it to start at boot; because no configuration file has been generated yet, kubelet will start automatically after the cluster is initialized

systemctl enable kubelet


2.11 Prepare the container images for the K8S cluster

1. Run kubeadm config images list to see which Docker images the cluster needs

kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.18.2

k8s.gcr.io/kube-controller-manager:v1.18.2

k8s.gcr.io/kube-scheduler:v1.18.2

k8s.gcr.io/kube-proxy:v1.18.2

k8s.gcr.io/pause:3.2

k8s.gcr.io/etcd:3.4.3-0

k8s.gcr.io/coredns:1.6.7


2. Pull the images above with docker pull (from the Aliyun registry)

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.18.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.18.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.18.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.3-0

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.6.7


3. Check the downloaded images

docker images

REPOSITORY TAG IMAGE ID CREATED SIZE

calico/node latest 7695a13607d9 7 days ago 263MB

calico/cni latest c6f3d2c436a7 7 days ago 225MB

haproxy latest c033852569f1 3 weeks ago 92.4MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy v1.18.2 0d40868643c6 4 weeks ago 117MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver v1.18.2 6ed75ad404bd 4 weeks ago 173MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler v1.18.2 a3099161e137 4 weeks ago 95.3MB

registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager v1.18.2 ace0a8c17ba9 4 weeks ago 162MB

osixia/keepalived latest d04966a100a7 2 months ago 72.9MB

registry.cn-hangzhou.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 3 months ago 683kB

registry.cn-hangzhou.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 3 months ago 43.8MB

registry.cn-hangzhou.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 6 months ago 288MB

calico/pod2daemon-flexvol v3.9.0 aa79ce3237eb 8 months ago 9.78MB

calico/cni v3.9.0 56c7969ed8e6 8 months ago 160MB

calico/kube-controllers v3.9.0 f5cc48269a09 8 months ago 50.4MB


2.12 Pull the Docker images on the worker nodes

The worker nodes only need the kube-proxy and pause images (run the following on each worker node)

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.18.2

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2


After pulling, run docker images to verify


2.13 Initialize the K8S cluster

kubeadm init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --pod-network-cidr=172.16.0.0/16 --apiserver-advertise-address=192.168.37.128


The output log is as follows:

I0920 13:31:38.444013 59901 version.go:252] remote version is much newer: v1.19.2; falling back to: stable-1.18

W0920 13:31:40.534993 59901 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

[init] Using Kubernetes version: v1.18.9

[preflight] Running pre-flight checks

   [WARNING FileExisting-tc]: tc not found in system path

[preflight] Pulling images required for setting up a Kubernetes cluster

[preflight] This might take a minute or two, depending on the speed of your internet connection

[preflight] You can also perform this action in beforehand using kubeadm config images pull


[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Starting the kubelet

[certs] Using certificateDir folder "/etc/kubernetes/pki"

[certs] Generating "ca" certificate and key

[certs] Generating "apiserver" certificate and key

[certs] apiserver serving cert is signed for DNS names [k8s1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.37.128]

[certs] Generating "apiserver-kubelet-client" certificate and key

[certs] Generating "front-proxy-ca" certificate and key

[certs] Generating "front-proxy-client" certificate and key

[certs] Generating "etcd/ca" certificate and key

[certs] Generating "etcd/server" certificate and key

[certs] etcd/server serving cert is signed for DNS names [k8s1 localhost] and IPs [192.168.37.128 127.0.0.1 ::1]

[certs] Generating "etcd/peer" certificate and key

[certs] etcd/peer serving cert is signed for DNS names [k8s1 localhost] and IPs [192.168.37.128 127.0.0.1 ::1]

[certs] Generating "etcd/healthcheck-client" certificate and key

[certs] Generating "apiserver-etcd-client" certificate and key

[certs] Generating "sa" key and public key

[kubeconfig] Using kubeconfig folder "/etc/kubernetes"

[kubeconfig] Writing "admin.conf" kubeconfig file

[kubeconfig] Writing "kubelet.conf" kubeconfig file

[kubeconfig] Writing "controller-manager.conf" kubeconfig file

[kubeconfig] Writing "scheduler.conf" kubeconfig file

[control-plane] Using manifest folder "/etc/kubernetes/manifests"

[control-plane] Creating static Pod manifest for "kube-apiserver"

[control-plane] Creating static Pod manifest for "kube-controller-manager"

W0920 13:33:01.598426 59901 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

[control-plane] Creating static Pod manifest for "kube-scheduler"

W0920 13:33:01.606176 59901 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s

[apiclient] All control plane components are healthy after 19.504561 seconds

[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster

[upload-certs] Skipping phase. Please see --upload-certs

[mark-control-plane] Marking the node k8s1 as control-plane by adding the label "node-role.kubernetes.io/master="

[mark-control-plane] Marking the node k8s1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

[bootstrap-token] Using token: alu9wy.79pfunrsnxgvle0b

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy


Your Kubernetes control-plane has initialized successfully!


To start using your cluster, you need to run the following as a regular user:


mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config


You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/


Then you can join any number of worker nodes by running the following on each as root:


kubeadm join 192.168.37.128:6443 --token alu9wy.79pfunrsnxgvle0b \
    --discovery-token-ca-cert-hash sha256:8bc468f16a049ea94b4659bc2c58a6ddb5b4a2a53eff98051442363d585e3358


Parameter explanation:

--image-repository: the images were pulled from the Aliyun registry, so the repository must be specified explicitly

--pod-network-cidr: the CIDR range used for the Pod network

--apiserver-advertise-address: the IP address this host advertises for the API server


Once it finishes, run the steps shown in the output:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config
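
At this point the API server should already respond; a quick check (the node will show NotReady and the CoreDNS pods will stay Pending until the network add-on from the next section is applied):

kubectl get nodes

kubectl get pods -n kube-system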


2.14 Pull the Calico images and manifest

Pull the Calico images with docker pull

docker pull calico/node

docker pull calico/cni

docker pull calico/pod2daemon-flexvol

docker pull calico/kube-controllers


Download the calico.yaml manifest

wget https://docs.projectcalico.org/manifests/calico.yaml


2.15 Modify the calico.yaml file

Under the autodetect-related env entries in the manifest, add the following (be careful to indent with spaces, not tabs; YAML is whitespace-sensitive):

- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"    # matches the name of the NIC carrying this host's IP address

Change the CIDR to the Pod network range specified at cluster initialization (172.16.0.0/16 in this walkthrough; adjust it if you initialized with a different range):

- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"


After the changes, apply the manifest

kubectl apply -f calico.yaml
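
To watch the Calico and CoreDNS pods come up (all of them should reach Running before joining the worker nodes):

kubectl get pods -n kube-system -w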


2.16 Join the other worker nodes to the master

kubeadm join 192.168.37.128:6443 --token alu9wy.79pfunrsnxgvle0b \
    --discovery-token-ca-cert-hash sha256:8bc468f16a049ea94b4659bc2c58a6ddb5b4a2a53eff98051442363d585e3358


After this completes, run kubectl get nodes on the master to check the cluster status

NAME   STATUS   ROLES    AGE    VERSION
k8s1   Ready    master   3d6h   v1.18.2
k8s2   Ready    <none>   3d6h   v1.18.2
k8s3   Ready    <none>   23h    v1.18.2


Check the component status

kubectl get cs

NAME                 STATUS    MESSAGE   ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}


That completes the K8S cluster setup.


Now for the offline installation. Production environments usually have no Internet access, so the installation has to be done offline.

3. Offline K8S Installation


1. The offline installation works mainly by saving the Docker images above and loading them on the hosts that have no network access.

Save a Docker image with docker save -o

docker save -o calico_node.tar calico/node:latest


Load a Docker image with docker load -i

docker load -i calico_node.tar
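
To export every image on the prepared host in one pass, a small loop like the following can be used (a sketch; the tar file names are derived from the image names and are purely illustrative):

for img in $(docker images --format '{{.Repository}}:{{.Tag}}'); do
  # replace "/" and ":" so the image name becomes a safe file name
  docker save -o "$(echo "$img" | tr '/:' '__').tar" "$img"
done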


2. The K8S RPM packages for offline use can be downloaded locally as follows; upload everything that was downloaded to the internal network before installing, so the installation does not fail because of missing dependencies

yumdownloader --resolve kubelet kubeadm kubectl
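
yumdownloader is provided by the yum-utils package; one possible offline workflow looks like this (a sketch; the directory name is only an example):

yum -y install yum-utils

mkdir -p /tmp/k8s-rpms && cd /tmp/k8s-rpms

yumdownloader --resolve kubelet kubeadm kubectl

Copy /tmp/k8s-rpms to the offline hosts, then install from the local files:

yum -y localinstall ./*.rpm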


3. Offline installation steps

The offline installation steps and the K8S initialization are the same as in the online installation, so they are not repeated here.


4. Common Problems During K8S Installation and Their Solutions


1. kubectl commands return an error

Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

Solution:

This is caused by a stale or mismatched admin.conf. Delete the $HOME/.kube directory and copy /etc/kubernetes/admin.conf into it again:

rm -rf $HOME/.kube

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config


2. Errors in the kubelet log

Failed to get system container stats for "/system.slice/docker.service": failed to get cgroup stats for "/system.slice/docker.service": failed to get container info for "/system.slice/docker.service": unknown container "/system.slice/docker.service"

Solution:

On lower OS versions, the cgroup-related parameters should be supplied through the kubelet configuration file.

Edit the kubelet file

vim /etc/sysconfig/kubelet

Append the following parameter to KUBELET_EXTRA_ARGS

--kubelet-cgroups=/systemd/system.slice
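
After the change, /etc/sysconfig/kubelet would look roughly like this; treat the --runtime-cgroups flag as an optional assumption that is commonly set alongside it rather than something required by this walkthrough:

KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --runtime-cgroups=/systemd/system.slice --kubelet-cgroups=/systemd/system.slice"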

Restart kubelet

systemctl restart kubelet


END



