
k8s and HPA: Custom Monitoring Metrics via the Prometheus Adapter


Abstract: Autoscaling is a way to automatically scale a workload up or down based on resource usage. This guide configures the Horizontal Pod Autoscaler first on CPU and memory via the Metrics Server, then on application-specific metrics exposed to the Custom Metrics API through the Prometheus adapter. It also covers how the adapter renames counter metrics and how quickly the autoscaler reacts to usage spikes.


Autoscaling is a way to automatically scale a workload up or down based on resource usage. Autoscaling in Kubernetes has two dimensions: the Cluster Autoscaler handles node scaling operations, while the Horizontal Pod Autoscaler automatically scales the number of pods in a deployment or replica set. Cluster Autoscaling works together with the Horizontal Pod Autoscaler to dynamically adjust compute capacity along with the level of parallelism the system needs to meet its SLAs. While the Cluster Autoscaler depends heavily on the underlying capabilities of the cloud provider hosting your cluster, the HPA can operate independently of your IaaS/PaaS provider.

The Horizontal Pod Autoscaler was first introduced in Kubernetes v1.1 and has evolved a lot since then. Version 1 of the HPA scaled pods based on observed CPU utilization and, later, on memory usage. In Kubernetes 1.6, a new API called the Custom Metrics API was introduced, enabling the HPA to access arbitrary metrics. Kubernetes 1.7 introduced the aggregation layer, which allows third-party applications to extend the Kubernetes API by registering themselves as API add-ons. The Custom Metrics API together with the aggregation layer makes it possible for monitoring systems like Prometheus to expose application-specific metrics to the HPA controller.

The Horizontal Pod Autoscaler is implemented as a control loop that periodically queries the Resource Metrics API for core metrics like CPU and memory, and the Custom Metrics API for application-specific metrics.

What follows is a step-by-step guide to configuring HPA v2 for Kubernetes 1.9 or later. You will install the Metrics Server add-on that supplies the core metrics, then use a demo application to show pod autoscaling based on CPU and memory usage. In the second part of the guide you will deploy Prometheus and a custom API server, register the custom API server with the aggregator layer, and then configure the HPA with custom metrics supplied by the demo application.

Before you begin, you need Go 1.8 or later installed, and the k8s-prom-hpa repo cloned into your GOPATH:

cd $GOPATH
git clone https://github.com/stefanprodan/k8s-prom-hpa
Deploy the Metrics Server

The Kubernetes Metrics Server is a cluster-wide aggregator of resource usage data and the successor to Heapster. The Metrics Server collects CPU and memory usage for nodes and pods by aggregating data from kubernetes.summary_api. The Summary API is a memory-efficient API for passing data from the kubelet/cAdvisor to the Metrics Server.
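
If you want to inspect the raw data the Metrics Server aggregates, you can query the kubelet Summary API directly through the API server's node proxy. A quick sketch, using one of the node names that appears in the output below:

kubectl get --raw "/api/v1/nodes/ip-10-1-50-61.ec2.internal/proxy/stats/summary" | jq .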

In the first version of the HPA you needed Heapster to supply CPU and memory metrics. With HPA v2 and Kubernetes 1.8, the Metrics Server is required only if horizontal-pod-autoscaler-use-rest-clients is enabled. The HPA REST client is enabled by default in Kubernetes 1.9. GKE 1.9 ships with the Metrics Server pre-installed.
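
If you manage the control plane yourself, this behavior is controlled by a kube-controller-manager flag; a sketch of enabling it explicitly on Kubernetes 1.8 (on 1.9 it already defaults to true):

kube-controller-manager --horizontal-pod-autoscaler-use-rest-clients=true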

Deploy the Metrics Server in the kube-system namespace:

kubectl create -f ./metrics-server

After one minute, the Metrics Server starts reporting CPU and memory usage for nodes and pods.

View the node metrics:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes" | jq .

The result looks like this:

{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "ip-10-1-50-61.ec2.internal",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/ip-10-1-50-61.ec2.internal",
        "creationTimestamp": "2019-02-13T08:34:05Z"
      },
      "timestamp": "2019-02-13T08:33:38Z",
      "window": "30s",
      "usage": {
        "cpu": "78322168n",
        "memory": "563180Ki"
      }
    },
    {
      "metadata": {
        "name": "ip-10-1-57-40.ec2.internal",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/ip-10-1-57-40.ec2.internal",
        "creationTimestamp": "2019-02-13T08:34:05Z"
      },
      "timestamp": "2019-02-13T08:33:42Z",
      "window": "30s",
      "usage": {
        "cpu": "48926263n",
        "memory": "554472Ki"
      }
    },
    {
      "metadata": {
        "name": "ip-10-1-62-29.ec2.internal",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/ip-10-1-62-29.ec2.internal",
        "creationTimestamp": "2019-02-13T08:34:05Z"
      },
      "timestamp": "2019-02-13T08:33:36Z",
      "window": "30s",
      "usage": {
        "cpu": "36700681n",
        "memory": "326088Ki"
      }
    }
  ]
}

View the pod metrics:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods" | jq .

The result looks like this:

{
  "kind": "PodMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/pods"
  },
  "items": [
    {
      "metadata": {
        "name": "kube-proxy-77nt2",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-proxy-77nt2",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:00Z",
      "window": "30s",
      "containers": [
        {
          "name": "kube-proxy",
          "usage": {
            "cpu": "2370555n",
            "memory": "13184Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "cluster-autoscaler-n2xsl",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/cluster-autoscaler-n2xsl",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:12Z",
      "window": "30s",
      "containers": [
        {
          "name": "cluster-autoscaler",
          "usage": {
            "cpu": "1477997n",
            "memory": "54584Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "core-dns-autoscaler-b4785d4d7-j64xd",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/core-dns-autoscaler-b4785d4d7-j64xd",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:08Z",
      "window": "30s",
      "containers": [
        {
          "name": "autoscaler",
          "usage": {
            "cpu": "191293n",
            "memory": "7956Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "spot-interrupt-handler-8t2xk",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/spot-interrupt-handler-8t2xk",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:04Z",
      "window": "30s",
      "containers": [
        {
          "name": "spot-interrupt-handler",
          "usage": {
            "cpu": "844907n",
            "memory": "4608Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube-proxy-t5kqm",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-proxy-t5kqm",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:08Z",
      "window": "30s",
      "containers": [
        {
          "name": "kube-proxy",
          "usage": {
            "cpu": "1194766n",
            "memory": "12204Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube-proxy-zxmqb",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-proxy-zxmqb",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:06Z",
      "window": "30s",
      "containers": [
        {
          "name": "kube-proxy",
          "usage": {
            "cpu": "3021117n",
            "memory": "13628Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "aws-node-rcz5c",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/aws-node-rcz5c",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:15Z",
      "window": "30s",
      "containers": [
        {
          "name": "aws-node",
          "usage": {
            "cpu": "1217989n",
            "memory": "24976Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "aws-node-z2qxs",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/aws-node-z2qxs",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:15Z",
      "window": "30s",
      "containers": [
        {
          "name": "aws-node",
          "usage": {
            "cpu": "1025780n",
            "memory": "46424Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "php-apache-899d75b96-8ppk4",
        "namespace": "default",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/php-apache-899d75b96-8ppk4",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:08Z",
      "window": "30s",
      "containers": [
        {
          "name": "php-apache",
          "usage": {
            "cpu": "24612n",
            "memory": "27556Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "load-generator-779c5f458c-9sglg",
        "namespace": "default",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/load-generator-779c5f458c-9sglg",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:34:56Z",
      "window": "30s",
      "containers": [
        {
          "name": "load-generator",
          "usage": {
            "cpu": "0",
            "memory": "336Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "aws-node-v9jxs",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/aws-node-v9jxs",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:00Z",
      "window": "30s",
      "containers": [
        {
          "name": "aws-node",
          "usage": {
            "cpu": "1303458n",
            "memory": "28020Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube2iam-m2ktt",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube2iam-m2ktt",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:11Z",
      "window": "30s",
      "containers": [
        {
          "name": "kube2iam",
          "usage": {
            "cpu": "1328864n",
            "memory": "9724Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube2iam-w9cqf",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube2iam-w9cqf",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:03Z",
      "window": "30s",
      "containers": [
        {
          "name": "kube2iam",
          "usage": {
            "cpu": "1294379n",
            "memory": "8812Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "custom-metrics-apiserver-657644489c-pk8rb",
        "namespace": "monitoring",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/monitoring/pods/custom-metrics-apiserver-657644489c-pk8rb",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:04Z",
      "window": "30s",
      "containers": [
        {
          "name": "custom-metrics-apiserver",
          "usage": {
            "cpu": "22409370n",
            "memory": "42468Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube2iam-qghgt",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube2iam-qghgt",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:11Z",
      "window": "30s",
      "containers": [
        {
          "name": "kube2iam",
          "usage": {
            "cpu": "2078992n",
            "memory": "16356Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "spot-interrupt-handler-ps745",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/spot-interrupt-handler-ps745",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:10Z",
      "window": "30s",
      "containers": [
        {
          "name": "spot-interrupt-handler",
          "usage": {
            "cpu": "611566n",
            "memory": "4336Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "coredns-68fb7946fb-2xnpp",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/coredns-68fb7946fb-2xnpp",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:12Z",
      "window": "30s",
      "containers": [
        {
          "name": "coredns",
          "usage": {
            "cpu": "1610381n",
            "memory": "10480Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "coredns-68fb7946fb-9ctjf",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/coredns-68fb7946fb-9ctjf",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:13Z",
      "window": "30s",
      "containers": [
        {
          "name": "coredns",
          "usage": {
            "cpu": "1418850n",
            "memory": "9852Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "prometheus-7d4f6d4454-v4fnd",
        "namespace": "monitoring",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/monitoring/pods/prometheus-7d4f6d4454-v4fnd",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:00Z",
      "window": "30s",
      "containers": [
        {
          "name": "prometheus",
          "usage": {
            "cpu": "17951807n",
            "memory": "202316Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "metrics-server-7cdd54ccb4-k2x7m",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/metrics-server-7cdd54ccb4-k2x7m",
        "creationTimestamp": "2019-02-13T08:35:19Z"
      },
      "timestamp": "2019-02-13T08:35:04Z",
      "window": "30s",
      "containers": [
        {
          "name": "metrics-server-nanny",
          "usage": {
            "cpu": "144656n",
            "memory": "5716Ki"
          }
        },
        {
          "name": "metrics-server",
          "usage": {
            "cpu": "568327n",
            "memory": "16268Ki"
          }
        }
      ]
    }
  ]
}
Auto Scaling Based on CPU and Memory Usage

You will use a small Golang-based web application to test the Horizontal Pod Autoscaler (HPA).

Deploy podinfo in the default namespace:

kubectl create -f ./podinfo/podinfo-svc.yaml,./podinfo/podinfo-dep.yaml

Access podinfo through its NodePort service at http://<node-ip>:31198.

Next, define an HPA that maintains a minimum of two replicas and scales up to ten if the CPU average exceeds 80% or the memory goes over 200Mi:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageUtilization: 80
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 200Mi
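
Note that targetAverageUtilization is computed as a percentage of the pods' CPU requests, so the target deployment must declare resource requests for the HPA to work. A minimal sketch of the relevant container section (the exact values in ./podinfo/podinfo-dep.yaml may differ):

        resources:
          requests:
            cpu: 100m
            memory: 64Mi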

Create the HPA:

kubectl create -f ./podinfo/podinfo-hpa.yaml

After a few seconds, the HPA controller contacts the Metrics Server and fetches the CPU and memory usage:

kubectl get hpa

NAME      REFERENCE            TARGETS                      MINPODS   MAXPODS   REPLICAS   AGE
podinfo   Deployment/podinfo   2826240 / 200Mi, 15% / 80%   2         10        2          5m

To increase the CPU usage, run a load test with rakyll/hey:

# install hey
go get -u github.com/rakyll/hey

# do 10K requests
hey -n 10000 -q 10 -c 5 http://<node-ip>:31198/

You can monitor the HPA events with:

$ kubectl describe hpa

Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  7m    horizontal-pod-autoscaler  New size: 4; reason: cpu resource utilization (percentage of request) above target
  Normal  SuccessfulRescale  3m    horizontal-pod-autoscaler  New size: 8; reason: cpu resource utilization (percentage of request) above target   

Remove podinfo for now; it will be deployed again later in this tutorial:

kubectl delete -f ./podinfo/podinfo-hpa.yaml,./podinfo/podinfo-dep.yaml,./podinfo/podinfo-svc.yaml
Deploy the Custom Metrics Server

To scale based on custom metrics you need two components: one that collects metrics from your applications and stores them in the Prometheus time series database, and a second that extends the Kubernetes Custom Metrics API with the metrics supplied by the collector, the k8s-prometheus-adapter.

You will deploy Prometheus and the adapter in a dedicated namespace.

Create the monitoring namespace:

kubectl create -f ./namespaces.yaml

Deploy Prometheus v2 in the monitoring namespace:

kubectl create -f ./prometheus
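
For the adapter to find application metrics, Prometheus must scrape the pods that expose them. A minimal sketch of the annotation-driven pod discovery job commonly used for this (the manifests under ./prometheus may be configured differently):

scrape_configs:
- job_name: kubernetes-pods
  kubernetes_sd_configs:
  - role: pod
  relabel_configs:
  # keep only pods annotated with prometheus.io/scrape: "true"
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true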

Generate the TLS certificates needed by the Prometheus adapter:

make certs

This generates the following files:

# ls output
apiserver.csr  apiserver-key.pem  apiserver.pem
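
The adapter reads these certificates from a Kubernetes secret. Assuming the layout of the upstream k8s-prom-hpa repo, the certs target also creates that secret, roughly like this (the secret name is the one the adapter deployment mounts):

kubectl -n monitoring create secret generic cm-adapter-serving-certs \
  --from-file=serving.crt=./output/apiserver.pem \
  --from-file=serving.key=./output/apiserver-key.pem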

Deploy the Prometheus custom metrics API adapter:

kubectl create -f ./custom-metrics-api
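
Behind the scenes, the manifests in ./custom-metrics-api register the adapter with the aggregation layer through an APIService object. A sketch of what that registration typically looks like (field values are illustrative):

apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  service:
    name: custom-metrics-apiserver
    namespace: monitoring
  group: custom.metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100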

List the custom metrics provided by Prometheus:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq .

Get the filesystem usage for all the pods in the monitoring namespace:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/*/fs_usage_bytes" | jq .

The query result looks like this:

{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/monitoring/pods/%2A/fs_usage_bytes"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "custom-metrics-apiserver-657644489c-pk8rb",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-02-13T08:52:30Z",
      "value": "94253056"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "monitoring",
        "name": "prometheus-7d4f6d4454-v4fnd",
        "apiVersion": "/v1"
      },
      "metricName": "fs_usage_bytes",
      "timestamp": "2019-02-13T08:52:30Z",
      "value": "24576"
    }
  ]
}
Auto Scaling Based on Custom Metrics

Create the podinfo NodePort service and deployment in the default namespace:

kubectl create -f ./podinfo/podinfo-svc.yaml,./podinfo/podinfo-dep.yaml

The podinfo application exposes a custom metric named http_requests_total. The Prometheus adapter strips the _total suffix and marks the metric as a counter metric.
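
This renaming is driven by the adapter's discovery rules. A sketch of a rule that matches counters ending in _total, strips the suffix, and exposes them as a per-second rate (the rules actually shipped in ./custom-metrics-api may differ in detail):

rules:
- seriesQuery: 'http_requests_total{kubernetes_namespace!="",kubernetes_pod_name!=""}'
  resources:
    overrides:
      kubernetes_namespace: {resource: "namespace"}
      kubernetes_pod_name: {resource: "pod"}
  name:
    matches: "^(.*)_total$"
    as: "${1}"
  metricsQuery: 'sum(rate(<<.Series>>{<<.LabelMatchers>>}[2m])) by (<<.GroupBy>>)'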

Get the total requests per second from the Custom Metrics API:

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests" | jq .
{
  "kind": "MetricValueList",
  "apiVersion": "custom.metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/%2A/http_requests"
  },
  "items": [
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-kv5g9",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "901m"
    },
    {
      "describedObject": {
        "kind": "Pod",
        "namespace": "default",
        "name": "podinfo-6b86c8ccc9-nm7bl",
        "apiVersion": "/__internal"
      },
      "metricName": "http_requests",
      "timestamp": "2018-01-10T16:49:07Z",
      "value": "898m"
    }
  ]
}

Note the m suffix in the values above: the API reports quantities in milli-units, so 901m is roughly 0.9 requests per second. Now create an HPA that scales up the podinfo deployment if the number of requests exceeds 10 per second:

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: podinfo
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: podinfo
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 10
    

Deploy the podinfo HPA in the default namespace:

kubectl create -f ./podinfo/podinfo-hpa-custom.yaml

After a few seconds, the HPA fetches the http_requests value from the metrics API:

kubectl get hpa

NAME      REFERENCE            TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
podinfo   Deployment/podinfo   899m / 10   2         10        2          1m

Apply some load on the podinfo service at 25 requests per second:

# install hey
go get -u github.com/rakyll/hey

# do 10K requests rate limited at 25 QPS
hey -n 10000 -q 5 -c 5 http://<node-ip>:31198/healthz

After a few minutes, the HPA starts to scale up the deployment:

kubectl describe hpa

Name:                       podinfo
Namespace:                  default
Reference:                  Deployment/podinfo
Metrics:                    ( current / target )
  "http_requests" on pods:  9059m / 10
Min replicas:               2
Max replicas:               10

Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  2m    horizontal-pod-autoscaler  New size: 3; reason: pods metric http_requests above target

At the current rate of requests per second, the deployment never reaches the maximum of 10 pods. Three replicas are enough to keep the RPS per pod under 10: 25 RPS spread across three pods is roughly 8.3 RPS each.

When the load test finishes, the HPA scales the deployment back down to its initial number of replicas:

Events:
  Type    Reason             Age   From                       Message
  ----    ------             ----  ----                       -------
  Normal  SuccessfulRescale  5m    horizontal-pod-autoscaler  New size: 3; reason: pods metric http_requests above target
  Normal  SuccessfulRescale  21s   horizontal-pod-autoscaler  New size: 2; reason: All metrics below target

You may have noticed that the autoscaler doesn't react immediately to usage spikes. By default, the metrics sync happens once every 30 seconds, and scaling up or down can only take place if there was no rescaling within the last 3 to 5 minutes. In this way, the HPA prevents rapid execution of conflicting decisions and gives the Cluster Autoscaler time to kick in.
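
These intervals can be tuned on the kube-controller-manager. A sketch of the relevant flags as they existed around Kubernetes 1.9, with values mirroring the defaults described above (newer releases replace the delay flags with --horizontal-pod-autoscaler-downscale-stabilization):

kube-controller-manager \
  --horizontal-pod-autoscaler-sync-period=30s \
  --horizontal-pod-autoscaler-upscale-delay=3m \
  --horizontal-pod-autoscaler-downscale-delay=5m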

Conclusion

Not all systems can meet their SLAs by relying on CPU/memory usage metrics alone; most web and mobile backends need to autoscale based on requests per second to handle any traffic bursts. For ETL apps, autoscaling can be triggered by the job queue length exceeding some threshold, and so on. By instrumenting your applications with Prometheus and exposing the right metrics for autoscaling, you can fine-tune your apps to better handle bursts and ensure high availability.
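
As a concrete illustration of the queue-length case, here is a sketch of an HPA scaling a hypothetical etl-worker deployment on a queue_length metric attached to a hypothetical queue-exporter service (both names are placeholders, not part of this tutorial's manifests):

apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: etl-worker
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: etl-worker
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      target:
        kind: Service
        name: queue-exporter
      metricName: queue_length
      targetValue: 100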
