
Practical Guide | TiDB Operator in Practice


Abstract: 1. Environment; 2. Installation. Configure passwordless login and prepare the image files needed by the nodes. Since some images cannot be accessed from within China, they must first be downloaded locally through a proxy, then pushed to a local registry or DockerHub, with the configuration files modified accordingly; a few components are hosted elsewhere and require setting up a new server to distribute files. Reposted from the WeChat public account 北京IT爺們兒.

K8s and TiDB are both active open source products in today's open source community, and the TiDB Operator project orchestrates and manages TiDB clusters on K8s. This article records in detail the process of deploying K8s and installing TiDB Operator, in the hope of helping those who have just gotten started.
1. Environment

Ubuntu 16.04
K8s 1.14.1

2. Installing K8s with Kubespray

Configure passwordless SSH login
yum -y install expect

vi /tmp/autocopy.exp

#!/usr/bin/expect
# usage: autocopy.exp user@host password

set timeout 30
set user_hostname [lindex $argv 0]
set password [lindex $argv 1]
spawn ssh-copy-id $user_hostname
expect {
    "(yes/no)?"
    {
        send "yes\n"
        expect "*assword:" { send "$password\n" }
    }
    "*assword:"
    {
        send "$password\n"
    }
}
expect eof
ssh-keyscan addedip >> ~/.ssh/known_hosts

ssh-keygen -t rsa -P ""

for i in 10.0.0.{31,32,33,40,10,20,50}; do ssh-keyscan $i >> ~/.ssh/known_hosts ; done

/tmp/autocopy.exp root@addedip
ssh-copy-id addedip

/tmp/autocopy.exp root@10.0.0.31
/tmp/autocopy.exp root@10.0.0.32
/tmp/autocopy.exp root@10.0.0.33
/tmp/autocopy.exp root@10.0.0.40
/tmp/autocopy.exp root@10.0.0.10
/tmp/autocopy.exp root@10.0.0.20
/tmp/autocopy.exp root@10.0.0.50
Configure Kubespray
pip install -r requirements.txt
cp -rfp inventory/sample inventory/mycluster

inventory/mycluster/inventory.ini

# ## Configure "ip" variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
# node1 ansible_host=95.54.0.12  # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13  # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14  # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15  # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16  # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17  # ip=10.3.0.6 etcd_member_name=etcd6
etcd1 ansible_host=10.0.0.31 etcd_member_name=etcd1
etcd2 ansible_host=10.0.0.32 etcd_member_name=etcd2
etcd3 ansible_host=10.0.0.33 etcd_member_name=etcd3
master1 ansible_host=10.0.0.40
node1 ansible_host=10.0.0.10
node2 ansible_host=10.0.0.20
node3 ansible_host=10.0.0.50

# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user

[kube-master]
# node1
# node2
master1

[etcd]
# node1
# node2
# node3
etcd1
etcd2
etcd3

[kube-node]
# node2
# node3
# node4
# node5
# node6
node1
node2
node3

[k8s-cluster:children]
kube-master
kube-node
Image files required by the nodes

Because some images cannot be accessed from within China, they must first be downloaded locally through a proxy and then pushed to a local image registry or DockerHub, and the configuration files must be modified accordingly. A few components are hosted at https://storage.googleapis.com, so a new Nginx server is needed to distribute those files.

Set up an Nginx server

~/distribution/docker-compose.yml

Create the file directory and the Nginx configuration directory

~/distribution/conf.d/open_distribute.conf

Start the server

Download and upload the required files. For the exact version numbers, refer to the kubeadm_version, kube_version, and image_arch parameters in roles/download/defaults/main.yml.
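For example, a quick way to list those version parameters from the kubespray checkout (a minimal sketch):

grep -E 'kubeadm_version|kube_version|image_arch' roles/download/defaults/main.yml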

Install Docker and Docker Compose

apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"

apt-get update

apt-get install docker-ce docker-ce-cli containerd.io

sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

Create the Nginx docker-compose.yml

mkdir ~/distribution
vi ~/distribution/docker-compose.yml
# distribute
version: "2"
services:
    distribute:
        image: nginx:1.15.12
        volumes:
            - ./conf.d:/etc/nginx/conf.d
            - ./distributedfiles:/usr/share/nginx/html
        network_mode: "host"
        container_name: nginx_distribute
mkdir ~/distribution/distributedfiles
mkdir ~/distribution/conf.d
vi ~/distribution/conf.d/open_distribute.conf
# open_distribute.conf

server {
    # server_name distribute.search.leju.com;
    listen 8888;

    root /usr/share/nginx/html;

    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Headers X-Requested-With;
    add_header Access-Control-Allow-Methods GET,POST,OPTIONS;

    location / {
        # index index.html;
        autoindex on;
    }
    expires off;
    location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|eot|ttf|woff|woff2|svg)$ {
        expires -1;
    }

    location ~ .*\.(js|css)?$ {
        expires -1;
    }
} # end of public static files domain : [ distribute.search.leju.com ]
docker-compose up -d
wget https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubeadm

scp /tmp/kubeadm 10.0.0.60:/root/distribution/distributedfiles

wget https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/hyperkube
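To confirm the distribution server is actually serving the uploaded binaries, a quick header check helps (a sketch, assuming the host 10.0.0.60 and port 8888 configured above):

curl -I http://10.0.0.60:8888/kubeadm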

Images that need to be downloaded and pushed to the private registry

docker pull k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0
docker tag k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0 jiashiwen/cluster-proportional-autoscaler-amd64:1.4.0
docker push jiashiwen/cluster-proportional-autoscaler-amd64:1.4.0

docker pull k8s.gcr.io/k8s-dns-node-cache:1.15.1
docker tag k8s.gcr.io/k8s-dns-node-cache:1.15.1 jiashiwen/k8s-dns-node-cache:1.15.1
docker push jiashiwen/k8s-dns-node-cache:1.15.1

docker pull gcr.io/google_containers/pause-amd64:3.1
docker tag gcr.io/google_containers/pause-amd64:3.1 jiashiwen/pause-amd64:3.1
docker push jiashiwen/pause-amd64:3.1

docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1 jiashiwen/kubernetes-dashboard-amd64:v1.10.1
docker push jiashiwen/kubernetes-dashboard-amd64:v1.10.1

docker pull gcr.io/google_containers/kube-apiserver:v1.14.1
docker tag gcr.io/google_containers/kube-apiserver:v1.14.1 jiashiwen/kube-apiserver:v1.14.1
docker push jiashiwen/kube-apiserver:v1.14.1

docker pull gcr.io/google_containers/kube-controller-manager:v1.14.1
docker tag gcr.io/google_containers/kube-controller-manager:v1.14.1 jiashiwen/kube-controller-manager:v1.14.1
docker push jiashiwen/kube-controller-manager:v1.14.1

docker pull gcr.io/google_containers/kube-scheduler:v1.14.1
docker tag gcr.io/google_containers/kube-scheduler:v1.14.1 jiashiwen/kube-scheduler:v1.14.1
docker push jiashiwen/kube-scheduler:v1.14.1

docker pull gcr.io/google_containers/kube-proxy:v1.14.1
docker tag gcr.io/google_containers/kube-proxy:v1.14.1 jiashiwen/kube-proxy:v1.14.1
docker push jiashiwen/kube-proxy:v1.14.1

docker pull gcr.io/google_containers/pause:3.1
docker tag gcr.io/google_containers/pause:3.1 jiashiwen/pause:3.1
docker push jiashiwen/pause:3.1

docker pull gcr.io/google_containers/coredns:1.3.1
docker tag gcr.io/google_containers/coredns:1.3.1 jiashiwen/coredns:1.3.1
docker push jiashiwen/coredns:1.3.1

Script for downloading and uploading the images

#!/bin/bash

privaterepo=jiashiwen

k8sgcrimages=(
cluster-proportional-autoscaler-amd64:1.4.0
k8s-dns-node-cache:1.15.1
)

gcrimages=(
pause-amd64:3.1
kubernetes-dashboard-amd64:v1.10.1
kube-apiserver:v1.14.1
kube-controller-manager:v1.14.1
kube-scheduler:v1.14.1
kube-proxy:v1.14.1
pause:3.1
coredns:1.3.1
)

for k8sgcrimageName in "${k8sgcrimages[@]}" ; do
    echo $k8sgcrimageName
    docker pull k8s.gcr.io/$k8sgcrimageName
    docker tag k8s.gcr.io/$k8sgcrimageName $privaterepo/$k8sgcrimageName
    docker push $privaterepo/$k8sgcrimageName
done

for gcrimageName in "${gcrimages[@]}" ; do
    echo $gcrimageName
    docker pull gcr.io/google_containers/$gcrimageName
    docker tag gcr.io/google_containers/$gcrimageName $privaterepo/$gcrimageName
    docker push $privaterepo/$gcrimageName
done

Edit inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml to change the K8s image repository:

# kube_image_repo: "gcr.io/google-containers"
kube_image_repo: "jiashiwen"

Edit roles/download/defaults/main.yml:

# dnsautoscaler_image_repo: "k8s.gcr.io/cluster-proportional-autoscaler-{{ image_arch }}"
dnsautoscaler_image_repo: "jiashiwen/cluster-proportional-autoscaler-{{ image_arch }}"

# kube_image_repo: "gcr.io/google-containers"
kube_image_repo: "jiashiwen"

# pod_infra_image_repo: "gcr.io/google_containers/pause-{{ image_arch }}"
pod_infra_image_repo: "jiashiwen/pause-{{ image_arch }}"

# dashboard_image_repo: "gcr.io/google_containers/kubernetes-dashboard-{{ image_arch }}"
dashboard_image_repo: "jiashiwen/kubernetes-dashboard-{{ image_arch }}"

# nodelocaldns_image_repo: "k8s.gcr.io/k8s-dns-node-cache"
nodelocaldns_image_repo: "jiashiwen/k8s-dns-node-cache"

# kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
kubeadm_download_url: "http://10.0.0.60:8888/kubeadm"

# hyperkube_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/hyperkube"
hyperkube_download_url: "http://10.0.0.60:8888/hyperkube"
3. Run the installation

Installation command

ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml

Reset command

ansible-playbook -i inventory/mycluster/inventory.ini reset.yml
4. Verify the K8s cluster

Install kubectl

Open https://storage.googleapis.com/kubernetes-release/release/stable.txt in a local browser to get the latest stable version number.

Replace the version number in the download URL with the one obtained in the previous step. This article uses v1.14.1: https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubectl
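The same lookup and download can be done from the command line (a minimal sketch; /tmp/kubectl matches the upload step below):

curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt
wget https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubectl -O /tmp/kubectl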

Upload the downloaded kubectl

scp /tmp/kubectl root@xxx:/root

Adjust permissions

chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl

Ubuntu

sudo snap install kubectl --classic

CentOS

Copy the ~/.kube/config file from the master node to every client that needs to access the cluster:

scp 10.0.0.40:/root/.kube/config ~/.kube/config

Run commands to verify the cluster

kubectl get nodes
kubectl cluster-info
5. Deploy TiDB Operator

Install Helm

Reference: https://blog.csdn.net/bbwangj...

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh

Check the Helm version

helm version

Initialize

helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
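After initialization it is worth confirming that the Tiller pod is running (a quick check; the label selector assumes the standard Tiller deployment labels):

kubectl get pods -n kube-system -l app=helm,name=tiller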
Provide local volumes for K8s

Reference: https://github.com/kubernetes...
When tidb-operator starts, it binds PVs for pd and tikv, so multiple directories need to be created under the discovery directory.

Format and mount the disk

mkfs.ext4 /dev/vdb
DISK_UUID=$(blkid -s UUID -o value /dev/vdb)
mkdir /mnt/$DISK_UUID
mount -t ext4 /dev/vdb /mnt/$DISK_UUID

Persist the mount in /etc/fstab

echo UUID=`sudo blkid -s UUID -o value /dev/vdb` /mnt/$DISK_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab

Create multiple directories and bind-mount them into the discovery directory

for i in $(seq 1 10); do
    sudo mkdir -p /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
    sudo mount --bind /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
done

Persist the bind mounts in /etc/fstab

for i in $(seq 1 10); do
    echo /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i} none bind 0 0 | sudo tee -a /etc/fstab
done
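To verify the ten bind mounts are in place (a quick check, not from the original article):

mount | grep /mnt/disks
ls /mnt/disks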

Create the local-volume-provisioner for tidb-operator

kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
kubectl get po -n kube-system -l app=local-volume-provisioner
kubectl get pv --all-namespaces | grep local-storage
6. Install TiDB Operator

The project uses gcr.io/google-containers/hyperkube, which cannot be accessed from within China. The simple workaround is to re-push the image to DockerHub and then modify charts/tidb-operator/values.yaml:

scheduler:
  # With rbac.create=false, the user is responsible for creating this account
  # With rbac.create=true, this service account will be created
  # Also see rbac.create and clusterScoped
  serviceAccount: tidb-scheduler
  logLevel: 2
  replicas: 1
  schedulerName: tidb-scheduler
  resources:
    limits:
      cpu: 250m
      memory: 150Mi
    requests:
      cpu: 80m
      memory: 50Mi
  # kubeSchedulerImageName: gcr.io/google-containers/hyperkube
  kubeSchedulerImageName: yourrepo/hyperkube
  # This will default to matching your kubernetes version
  # kubeSchedulerImageTag: latest

TiDB Operator uses CRDs to extend Kubernetes, so before using TiDB Operator you must first create the TidbCluster custom resource type.

kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml
kubectl get crd tidbclusters.pingcap.com

Install TiDB Operator

git clone https://github.com/pingcap/tidb-operator.git
cd tidb-operator
helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin
kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
7. Deploy TiDB
helm install charts/tidb-cluster --name=demo --namespace=tidb
watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide
8. Verification

Install the MySQL client

Reference: https://dev.mysql.com/doc/ref...

CentOS installation

wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
yum localinstall mysql80-community-release-el7-3.noarch.rpm -y
yum repolist all | grep mysql
yum-config-manager --disable mysql80-community
yum-config-manager --enable mysql57-community
yum install mysql-community-client

Ubuntu installation

wget https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb
dpkg -i mysql-apt-config_0.8.13-1_all.deb
apt update

# select the MySQL version
dpkg-reconfigure mysql-apt-config
apt install mysql-client -y
9. Map the TiDB port

Check the TiDB Service

kubectl get svc --all-namespaces

Map the TiDB port

# local access only
kubectl port-forward svc/demo-tidb 4000:4000 --namespace=tidb

# accessible from other hosts
kubectl port-forward --address 0.0.0.0 svc/demo-tidb 4000:4000 --namespace=tidb

Log in to MySQL for the first time

mysql -h 127.0.0.1 -P 4000 -u root -D test

Change the TiDB root password

SET PASSWORD FOR 'root'@'%' = 'wD3cLpyO5M'; FLUSH PRIVILEGES;
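To confirm the change took effect, reconnect with the new password (a quick check, not in the original; -p prompts for the password set above):

mysql -h 127.0.0.1 -P 4000 -u root -p -D test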

Pitfall notes

1. Installing K8s in China

Most K8s images live on gcr.io, which is unreachable from within China. The basic approach is to import the images into DockerHub or a private registry; this was covered in detail in the K8s deployment section, so it is not repeated here.

2. TiDB Operator local storage configuration

When the Operator starts a cluster, pd and TiKV need to bind local storage. If there are not enough mount points, pods cannot find a PV to bind during startup and stay in Pending or Creating state. For details see the "Sharing a disk filesystem by multiple filesystem PVs" section of https://github.com/kubernetes...: bind multiple mount directories on the same disk to give the Operator enough PVs to bind.
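A quick way to spot this situation (a diagnostic sketch, not from the original article; substitute a real pod name):

kubectl get pods -n tidb | grep -E 'Pending|Creating'
kubectl describe pod <pending-pod-name> -n tidb    # the events should mention unbound PersistentVolumeClaims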

3. MySQL client version issue

TiDB currently supports only MySQL 5.7 clients; a MySQL 8.0 client fails with ERROR 1105 (HY000): Unknown charset id 255.
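First check which client version is installed; if only an 8.0 client is available, forcing the older utf8 character set is a commonly suggested workaround (an assumption, not verified in the original article):

mysql --version
mysql --default-character-set=utf8 -h 127.0.0.1 -P 4000 -u root -p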



Reposted from the WeChat public account "北京IT爺們兒".

