Horizontal Pod Autoscaler does not work: `unable to get metrics for resource cpu: no metrics returned from heapster`

I am trying to create a Horizontal Pod Autoscaler after installing Kubernetes with kubeadm.

The main symptom is that kubectl get hpa reports the CPU metric in the TARGETS column as <unknown>:

$ kubectl get hpa
NAME        REFERENCE              TARGETS           MINPODS   MAXPODS   REPLICAS   AGE
fibonacci   Deployment/fibonacci   <unknown> / 50%   1         3         1          1h

On further investigation, it appears that the HPA is trying to fetch the CPU metric from Heapster, but in my configuration the CPU metric is provided by cAdvisor.

I base this assumption on the output of kubectl describe hpa fibonacci:

Name:                           fibonacci
Namespace:                      default
Labels:                         <none>
Annotations:                        <none>
CreationTimestamp:                  Sun, 14 May 2017 18:08:53 +0000
Reference:                      Deployment/fibonacci
Metrics:                        ( current / target )
  resource cpu on pods  (as a percentage of request):   <unknown> / 50%
Min replicas:                       1
Max replicas:                       3
Events:
  FirstSeen LastSeen    Count   From                SubObjectPath   Type        Reason              Message
  --------- --------    -----   ----                -------------   --------    ------              -------
  1h        3s      148 horizontal-pod-autoscaler           Warning     FailedGetResourceMetric     unable to get metrics for resource cpu: no metrics returned from heapster
  1h        3s      148 horizontal-pod-autoscaler           Warning     FailedComputeMetricsReplicas    failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from heapster

Why does the HPA try to fetch this metric from Heapster rather than from cAdvisor?

How can I fix this?

Please find below my deployment, along with the contents of /var/log/container/kube-controller-manager.log and the output of kubectl get pods --namespace=kube-system and kubectl describe pods.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: fibonacci
  labels:
    app: fibonacci
spec:
  template:
    metadata:
      labels:
        app: fibonacci
    spec:
      containers:
      - name: fibonacci
        image: oghma/fibonacci
        ports:
          - containerPort: 8088
        resources:
          requests:
            memory: "64Mi"
            cpu: "75m"
          limits:
            memory: "128Mi"
            cpu: "100m"

---
kind: Service
apiVersion: v1
metadata:
  name: fibonacci
spec:
  selector:
    app: fibonacci
  ports:
    - protocol: TCP
      port: 8088
      targetPort: 8088
  externalIPs: 
    - 192.168.66.103

---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: fibonacci
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: fibonacci
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 50

$ kubectl describe pods
Name:       fibonacci-1503002127-3k755
Namespace:  default
Node:       kubernetesnode1/192.168.66.101
Start Time: Sun, 14 May 2017 17:47:08 +0000
Labels:     app=fibonacci
        pod-template-hash=1503002127
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"fibonacci-1503002127","uid":"59ea64bb-38cd-11e7-b345-fa163edb1ca...
Status:     Running
IP:     192.168.202.1
Controllers:    ReplicaSet/fibonacci-1503002127
Containers:
  fibonacci:
    Container ID:   docker://315375c6a978fd689f4ba61919c15f15035deb9139982844cefcd46092fbec14
    Image:      oghma/fibonacci
    Image ID:       docker://sha256:26f9b6b2c0073c766b472ec476fbcd2599969b6e5e7f564c3c0a03f8355ba9f6
    Port:       8088/TCP
    State:      Running
      Started:      Sun, 14 May 2017 17:47:16 +0000
    Ready:      True
    Restart Count:  0
    Limits:
      cpu:  100m
      memory:   128Mi
    Requests:
      cpu:      75m
      memory:       64Mi
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-45kp8 (ro)
Conditions:
  Type      Status
  Initialized   True 
  Ready     True 
  PodScheduled  True 
Volumes:
  default-token-45kp8:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-45kp8
    Optional:   false
QoS Class:  Burstable
Node-Selectors: <none>
Tolerations:    node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
        node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:     <none>

$ kubectl get pods --namespace=kube-system

NAME                                        READY     STATUS    RESTARTS   AGE
calico-etcd-k1g53                           1/1       Running   0          2h
calico-node-6n4gp                           2/2       Running   1          2h
calico-node-nhmz7                           2/2       Running   0          2h
calico-policy-controller-1324707180-65m78   1/1       Running   0          2h
etcd-kubernetesmaster                       1/1       Running   0          2h
heapster-1428305041-zjzd1                   1/1       Running   0          1h
kube-apiserver-kubernetesmaster             1/1       Running   0          2h
kube-controller-manager-kubernetesmaster    1/1       Running   0          2h
kube-dns-3913472980-gbg5h                   3/3       Running   0          2h
kube-proxy-1dt3c                            1/1       Running   0          2h
kube-proxy-tfhr9                            1/1       Running   0          2h
kube-scheduler-kubernetesmaster             1/1       Running   0          2h
monitoring-grafana-3975459543-9q189         1/1       Running   0          1h
monitoring-influxdb-3480804314-7bvr3        1/1       Running   0          1h

$ cat /var/log/container/kube-controller-manager.log

"log":"I0514 17:47:08.631314       1 event.go:217] Event(v1.ObjectReference{Kind:\"Deployment\", Namespace:\"default\", Name:\"fibonacci\", UID:\"59e980d9-38cd-11e7-b345-fa163edb1ca6\", APIVersion:\"extensions\", ResourceVersion:\"1303\", FieldPath:\"\"}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set fibonacci-1503002127 to 1\n","stream":"stderr","time":"2017-05-14T17:47:08.63177467Z"}
{"log":"I0514 17:47:08.650662       1 event.go:217] Event(v1.ObjectReference{Kind:\"ReplicaSet\", Namespace:\"default\", Name:\"fibonacci-1503002127\", UID:\"59ea64bb-38cd-11e7-b345-fa163edb1ca6\", APIVersion:\"extensions\", ResourceVersion:\"1304\", FieldPath:\"\"}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: fibonacci-1503002127-3k755\n","stream":"stderr","time":"2017-05-14T17:47:08.650826398Z"}
{"log":"E0514 17:49:00.873703       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:49:00.874034952Z"}
{"log":"E0514 17:49:30.884078       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:49:30.884546461Z"}
{"log":"E0514 17:50:00.896563       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:50:00.89688734Z"}
{"log":"E0514 17:50:30.906293       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:50:30.906825794Z"}
{"log":"E0514 17:51:00.915996       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:51:00.916348218Z"}
{"log":"E0514 17:51:30.926043       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:51:30.926367623Z"}
{"log":"E0514 17:52:00.936574       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:52:00.936903072Z"}
{"log":"E0514 17:52:30.944724       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:52:30.945120508Z"}
{"log":"E0514 17:53:00.954785       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:53:00.955126309Z"}
{"log":"E0514 17:53:30.970454       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:53:30.972996568Z"}
{"log":"E0514 17:54:00.980735       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:54:00.981098832Z"}
{"log":"E0514 17:54:30.993176       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:54:30.993538841Z"}
{"log":"E0514 17:55:01.002941       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:55:01.003265908Z"}
{"log":"W0514 17:55:06.511756       1 reflector.go:323] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:192: watch of \u003cnil\u003e ended with: etcdserver: mvcc: required revision has been compacted\n","stream":"stderr","time":"2017-05-14T17:55:06.511957851Z"}
{"log":"E0514 17:55:31.013415       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:55:31.013776243Z"}
{"log":"E0514 17:56:01.024507       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:56:01.0248332Z"}
{"log":"E0514 17:56:31.036191       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:56:31.036606698Z"}
{"log":"E0514 17:57:01.049277       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:57:01.049616359Z"}
{"log":"E0514 17:57:31.064104       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:57:31.064489485Z"}
{"log":"E0514 17:58:01.073988       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:58:01.074339488Z"}
{"log":"E0514 17:58:31.084511       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:58:31.084839352Z"}
{"log":"E0514 17:59:01.096507       1 horizontal.go:201] failed to compute desired number of replicas based on listed metrics for Deployment/default/fibonacci: failed to get cpu utilization: unable to get metrics for resource cpu: failed to get pod resource metrics: the server could not find the requested resource (get services http:heapster:)\n","stream":"stderr","time":"2017-05-14T17:59:01.096896254Z"}

7 Answers

Autoscaling can be enabled on the cluster's node pool, so first make sure it is turned on.

Then apply your HPA, and do not forget to set CPU and memory requests and limits on your Kubernetes controllers.

Note that if your pod runs several containers, you should specify CPU and memory requests and limits for each container.

If your deployment has more than one container, make sure you specify resource requests and limits for all of them.
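As a sketch, a multi-container pod template with requests and limits declared on every container might look like this (the second container's name, image, and values are illustrative, not from the question):

```yaml
spec:
  template:
    spec:
      containers:
      - name: fibonacci
        image: oghma/fibonacci
        resources:
          requests:
            cpu: "75m"
            memory: "64Mi"
          limits:
            cpu: "100m"
            memory: "128Mi"
      - name: sidecar            # illustrative second container
        image: busybox
        resources:
          requests:              # the HPA needs a cpu request on every container
            cpu: "50m"
            memory: "32Mi"
          limits:
            cpu: "100m"
            memory: "64Mi"
```

If any container in the pod lacks a cpu request, the HPA cannot compute a percentage for the pod as a whole.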

TL;DR: If you are using AWS EKS and specifying .spec.template.spec.containers[].<resources|limits> did not work, the problem may be that you do not have the Kubernetes Metrics Server installed.

I ran into this problem with the Kubernetes HPA while using AWS EKS. While searching for solutions, I came across the command below and ran it to check whether I had the Metrics Server installed:

kubectl get pods -n kube-system

I had not installed it. It turns out that AWS has a document stating that the Metrics Server is not installed by default on EKS clusters. So I followed the steps recommended in the document to install it:

- Deploy the Metrics Server with the following command:

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

- Verify that the metrics-server deployment is running the desired number of pods with the following command:

    kubectl get deployment metrics-server -n kube-system

Output

NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1 

That was the solution for me. Once the Metrics Server was in my cluster, I was able to create HPAs that could fetch usage information about their target pods and resources.

PS: You can run kubectl get pods -n kube-system again to confirm the installation.
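Once metrics are flowing, the HPA controller picks the replica count as desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue). A small shell sketch of that arithmetic, with made-up utilization numbers:

```shell
# HPA scaling formula: desired = ceil(current * currentUtil / targetUtil)
current_replicas=1
current_utilization=80   # percent of cpu request actually in use (made-up value)
target_utilization=50    # from targetCPUUtilizationPercentage in the HPA spec
# integer ceiling division: (a + b - 1) / b
desired=$(( (current_replicas * current_utilization + target_utilization - 1) / target_utilization ))
echo "desired replicas: $desired"   # ceil(1 * 80 / 50) = 2
```

With the fibonacci HPA above (target 50%, max 3), one pod running at 80% of its request would therefore be scaled to 2 replicas.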

I have seen this with other applications as well: there seems to be a bug in the HPA API.

A workaround may be to use a ReplicationController scaleRef instead:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: fibonacci
  namespace: ....
spec:
  scaleRef:
    kind: ReplicationController
    name: fibonacci
    subresource: scale
  minReplicas: 1
  maxReplicas: 3
  targetCPUUtilizationPercentage: 50

This is untested, so it may need some editing of scaleRef (you used scaleTargetRef).

You can remove the LIMITS from your deployments and try that. In my deployment I used only REQUESTS for RESOURCES, and it worked. If you see that the Horizontal Pod Autoscaler (HPA) works, you can experiment with LIMITS later as well. This discussion tells you that using only REQUESTS is sufficient for the HPA to work.
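Requests alone suffice because the "percentage of request" in the HPA output is computed against the container's cpu request, not its limit. A small sketch with a made-up usage figure and the 75m request from the deployment above:

```shell
# CPU utilization as the HPA sees it: actual usage divided by the request.
cpu_usage_milli=60     # e.g. as reported by kubectl top pods (made-up value)
cpu_request_milli=75   # the cpu request in the fibonacci deployment
utilization=$(( cpu_usage_milli * 100 / cpu_request_milli ))
echo "utilization: ${utilization}%"   # 60m of a 75m request = 80%
```

The limit only caps how far the container can burst; it plays no part in the utilization percentage the autoscaler compares against its target.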

In case you are using GKE 1.9.x:

There is a bug where you first need to disable autoscaling and then enable it again. That will give you a current value instead of unknown.

Also try upgrading to the latest available GKE version.

I ran into a similar problem; hopefully this helps:

  1. Make sure the apiVersion of the HPA is correct, as the syntax changes slightly from version to version.
  2. Run kubectl autoscale deploy <deployment> -n <namespace> --cpu-percent=<percent> --min=<min> --max=<max> --dry-run -o yaml (the angle-bracket placeholders stand in for values stripped from the original command).

This will give you the exact HPA syntax matching your cluster's apiVersion. Adjust your helm hpa.yaml file according to the output, and that should do the trick.
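For reference, on newer clusters the same fibonacci HPA expressed in autoscaling/v2 syntax looks roughly like this (check kubectl api-versions to see which versions your cluster actually serves):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: fibonacci
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fibonacci
  minReplicas: 1
  maxReplicas: 3
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50    # replaces targetCPUUtilizationPercentage
```

The v2 API replaces the single targetCPUUtilizationPercentage field with a metrics list, which is one of the syntax differences the dry-run command above will surface.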
