Knowledge point description
Standard operating guide
| Hostname | IP | Role | k8s version |
| --- | --- | --- | --- |
| master | 192.168.158.136 | master | v1.17.4 |
| node01 | 192.168.158.137 | node | v1.17.4 |
| node02 | 192.168.158.138 | node | v1.17.4 |
1) A working k8s cluster is assumed and its installation is not covered here. A single-node NFS server needs to be prepared; if the k8s cluster has multiple masters, an NFS cluster can be prepared instead.
# Install the NFS service on the master and enable it at boot.
[root@master ~]# yum install nfs-utils -y
[root@master ~]# systemctl restart nfs
[root@master ~]# systemctl enable nfs
# Install nfs-utils on every node as well; note that the service does not need to be started there.
[root@node01 ~]# yum install nfs-utils -y
2) Prepare the shared directories and export them read-write to every host on the 192.168.158.0/24 subnet (in production the directories should be exported only to the cluster machines). The directories must be exported, otherwise pod creation fails with an error that no PV can be found!
# Create the shared directories.
[root@master ~]# for x in $(seq 1 6);
> do
> mkdir -p /data/redis-cluster/pv${x}
> done
# Export the shared directories. To export to a single host instead, replace "192.168.158.0/24" with that host's IP.
[root@master ~]# vim /etc/exports
/data/redis-cluster/pv1 192.168.158.0/24(rw,no_root_squash)
/data/redis-cluster/pv2 192.168.158.0/24(rw,no_root_squash)
/data/redis-cluster/pv3 192.168.158.0/24(rw,no_root_squash)
/data/redis-cluster/pv4 192.168.158.0/24(rw,no_root_squash)
/data/redis-cluster/pv5 192.168.158.0/24(rw,no_root_squash)
/data/redis-cluster/pv6 192.168.158.0/24(rw,no_root_squash)
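The six export lines differ only in the directory index, so they can be generated with a loop instead of typed by hand (a sketch; review the output before appending it to /etc/exports on the NFS server):

```shell
# Print one export entry per pv directory; redirect with ">> /etc/exports" once verified.
for x in $(seq 1 6); do
  echo "/data/redis-cluster/pv${x} 192.168.158.0/24(rw,no_root_squash)"
done
```

After /etc/exports changes, restart the nfs service (or run `exportfs -ra`) so the new exports take effect; `showmount -e localhost` on the NFS server confirms what is actually exported.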
3) Create six PVs with kubectl apply -f redis-pv.yaml; the contents of redis-pv.yaml are as follows.
apiVersion: v1
kind: PersistentVolume                     # create a PV
metadata:
  name: redis-pv1                          # name
spec:
  capacity:
    storage: 3Gi                           # 3Gi of disk space
  accessModes:
    - ReadWriteOnce                        # read-write by a single node
  persistentVolumeReclaimPolicy: Recycle   # reclaim policy: scrub the data on release
  storageClassName: "redis-cluster"        # storage class; only PVCs with the same class can bind to it
  nfs:
    path: /data/redis-cluster/pv1
    server: 192.168.158.136
# ...
# redis-pv2 through redis-pv5 are omitted; only metadata.name and spec.nfs.path change.
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv6
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "redis-cluster"
  nfs:
    path: /data/redis-cluster/pv6
    server: 192.168.158.136
[root@master ~]# kubectl apply -f redis-pv.yaml
# On success, the following six PVs are visible via kubectl:
[root@master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
redis-pv1 3Gi RWO Recycle Bound default/data-redis-cluster-0 redis-cluster 115m
redis-pv2 3Gi RWO Recycle Bound default/data-redis-cluster-2 redis-cluster 115m
redis-pv3 3Gi RWO Recycle Bound default/data-redis-cluster-3 redis-cluster 115m
redis-pv4 3Gi RWO Recycle Bound default/data-redis-cluster-4 redis-cluster 115m
redis-pv5 3Gi RWO Recycle Bound default/data-redis-cluster-1 redis-cluster 115m
redis-pv6 3Gi RWO Recycle Bound default/data-redis-cluster-5 redis-cluster 115m
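Because the six PV manifests differ only in metadata.name and spec.nfs.path, redis-pv.yaml itself can also be generated with a loop (a sketch matching the values used above):

```shell
# Emit six PV documents, one per NFS directory, into redis-pv.yaml.
for x in $(seq 1 6); do
cat <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv${x}
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: "redis-cluster"
  nfs:
    path: /data/redis-cluster/pv${x}
    server: 192.168.158.136
EOF
done > redis-pv.yaml
```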
4) Create the ConfigMap and StatefulSet:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
data:
  update-node.sh: |
    #!/bin/sh
    REDIS_NODES="/data/nodes.conf"
    sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${REDIS_NODES}
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /data/nodes.conf
    cluster-migration-barrier 1
    maxmemory 3gb
    port 6379
    appendonly yes
    protected-mode no
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:                  # label selector
    matchLabels:
      app: redis-cluster
  template:                  # pod template
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: redis:5.0.5-alpine
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/conf/update-node.sh", "redis-server", "/conf/redis.conf"]
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /conf
          readOnly: false
        - name: data
          mountPath: /data
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 3Gi
      storageClassName: redis-cluster
[root@master ~]# kubectl apply -f redis-StatefulSets.yaml
# Check:
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-58777cc9fd-cwj77 1/1 Running 1 29h
redis-cluster-0 1/1 Running 0 76m
redis-cluster-1 1/1 Running 0 76m
redis-cluster-2 1/1 Running 0 76m
redis-cluster-3 1/1 Running 0 75m
redis-cluster-4 1/1 Running 0 74m
redis-cluster-5 1/1 Running 0 74m
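What update-node.sh does can be checked outside the cluster: its sed expression rewrites the IP on the "myself" line of nodes.conf to the pod's current IP, which is what lets a restarted pod (with a new IP) rejoin under its old node ID. A local simulation with made-up values (the node ID and IPs below are illustrative):

```shell
# Simulate update-node.sh against a sample nodes.conf line.
POD_IP="10.244.1.8"        # normally injected via the Downward API
REDIS_NODES="$(mktemp)"    # stand-in for /data/nodes.conf
echo "aaf12abf3906e40d7c1084fa7228e99a49fc02df 10.244.2.8:6379@16379 myself,master - 0 0 1 connected 0-5460" > "$REDIS_NODES"
# Interval braces and dots are escaped because sed uses basic regular expressions.
sed -i -e "/myself/ s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" "$REDIS_NODES"
cat "$REDIS_NODES"         # the old IP on the myself line is now replaced by ${POD_IP}
```

Without the `/g` flag only the first IP on the line is replaced, which is exactly the node's own address.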
5) Create a service to expose the ports with kubectl apply -f redis-svc.yaml; if machines outside the cluster need access, change the type value in the yaml below to NodePort.
---
apiVersion: v1
kind: Service
metadata:
  name: redis-cluster
spec:
  type: ClusterIP          # reachable from inside the cluster; change the type to NodePort for external access
  clusterIP: 10.96.97.97   # if omitted, an IP is assigned automatically
  ports:
  - port: 6379
    targetPort: 6379
    name: client
  - port: 16379
    targetPort: 16379
    name: gossip
  selector:
    app: redis-cluster
[root@master ~]# kubectl apply -f redis-svc.yaml
[root@master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-cluster ClusterIP 10.96.97.97 <none> 6379/TCP,16379/TCP 124m
6) Initialize redis and create the redis cluster:
[root@master ~]# kubectl exec -it redis-cluster-0 -- redis-cli --cluster create --cluster-replicas 1 $(kubectl get pods -l app=redis-cluster -o jsonpath='{range .items[*]}{.status.podIP}:6379 {end}')
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 10.244.2.10:6379 to 10.244.2.8:6379
Adding replica 10.244.1.10:6379 to 10.244.1.8:6379
Adding replica 10.244.1.9:6379 to 10.244.2.9:6379
M: aaf12abf3906e40d7c1084fa7228e99a49fc02df 10.244.2.8:6379
slots:[0-5460] (5461 slots) master
M: 547619c817623c71502e52413e46bf33bfb307bc 10.244.1.8:6379
slots:[5461-10922] (5462 slots) master
M: cd3abc406759315a814820dc0a6ce53b93a919a8 10.244.2.9:6379
slots:[10923-16383] (5461 slots) master
S: b0e40a1d30b397bacefd0f4c4d8584246ce52fc6 10.244.1.9:6379
replicates cd3abc406759315a814820dc0a6ce53b93a919a8
S: d8f1a35fc156598c4d9871a607f2639206072782 10.244.2.10:6379
replicates aaf12abf3906e40d7c1084fa7228e99a49fc02df
S: 372b0086cccdcdea81be571df557b958712529a5 10.244.1.10:6379
replicates 547619c817623c71502e52413e46bf33bfb307bc
Can I set the above configuration? (type yes to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
......
>>> Performing Cluster Check (using node 10.244.2.8:6379)
M: aaf12abf3906e40d7c1084fa7228e99a49fc02df 10.244.2.8:6379
slots:[0-5460] (5461 slots) master
1 additional replica(s)
S: 372b0086cccdcdea81be571df557b958712529a5 10.244.1.10:6379
slots: (0 slots) slave
replicates 547619c817623c71502e52413e46bf33bfb307bc
S: d8f1a35fc156598c4d9871a607f2639206072782 10.244.2.10:6379
slots: (0 slots) slave
replicates aaf12abf3906e40d7c1084fa7228e99a49fc02df
M: 547619c817623c71502e52413e46bf33bfb307bc 10.244.1.8:6379
slots:[5461-10922] (5462 slots) master
1 additional replica(s)
S: b0e40a1d30b397bacefd0f4c4d8584246ce52fc6 10.244.1.9:6379
slots: (0 slots) slave
replicates cd3abc406759315a814820dc0a6ce53b93a919a8
M: cd3abc406759315a814820dc0a6ce53b93a919a8 10.244.2.9:6379
slots:[10923-16383] (5461 slots) master
1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
7) Verify the cluster with redis-cli cluster info; it reports 6 known nodes, so creation succeeded.
[root@master ~]# kubectl exec -it redis-cluster-0 -- redis-cli cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:6454
cluster_stats_messages_pong_sent:6664
cluster_stats_messages_sent:13118
cluster_stats_messages_ping_received:6664
cluster_stats_messages_pong_received:6438
cluster_stats_messages_received:13102
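For scripted monitoring, the same check can key off the cluster_state field (a sketch; the stubbed output mirrors the fields above, while in the cluster the text would come from `kubectl exec -it redis-cluster-0 -- redis-cli cluster info`; note that real redis-cli output ends each line with a carriage return, hence the `tr -d '\r'`):

```shell
# Parse cluster_state out of `cluster info`-style output.
info="cluster_state:ok
cluster_slots_assigned:16384
cluster_known_nodes:6"
state=$(printf '%s\n' "$info" | awk -F: '/^cluster_state/{print $2}' | tr -d '\r')
if [ "$state" = "ok" ]; then
  echo "cluster healthy"
else
  echo "cluster NOT healthy: ${state}"
fi
```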
8) To change the configuration, edit redis.conf in the ConfigMap; restarting the pods makes the new configuration take effect (the StatefulSet recreates each deleted pod).
[root@master ~]# kubectl delete pods redis-cluster-0
[root@master ~]# kubectl delete pods redis-cluster-1
[root@master ~]# kubectl delete pods redis-cluster-2
[root@master ~]# kubectl delete pods redis-cluster-3
[root@master ~]# kubectl delete pods redis-cluster-4
[root@master ~]# kubectl delete pods redis-cluster-5
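Deleting all six pods in quick succession risks taking masters and their replicas down together; a gentler variant deletes one pod at a time and waits for its replacement to become Ready before moving on (a sketch; set KUBECTL=echo to dry-run the loop without a cluster):

```shell
# One-at-a-time rolling restart of the StatefulSet pods.
KUBECTL="${KUBECTL:-kubectl}"
for x in $(seq 0 5); do
  "$KUBECTL" delete pod "redis-cluster-${x}"
  "$KUBECTL" wait --for=condition=Ready "pod/redis-cluster-${x}" --timeout=120s
done
```

On kubectl v1.15+, `kubectl rollout restart statefulset/redis-cluster` achieves the same rolling behavior in a single command.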