• Creating an HPA
  • Viewing HPA information
  • Deleting an HPA
  • Configuring an HPA to scale on CPU or memory utilization
  • Configuring an HPA to scale on custom metrics, for example from a third-party tool such as Prometheus

In Rancher v2.3.x, you can create, view, and delete HPAs from the Rancher UI. You can also configure them in the Rancher UI to scale based on CPU or memory usage. For more information, refer to the Rancher documentation. If you need to scale an HPA based on metrics other than CPU or memory, you still need to use kubectl.

Notes for Rancher versions before v2.0.7

Clusters created with earlier versions of Rancher do not automatically meet all the requirements for creating HPAs. To install HPAs on these clusters, refer to Manual HPA Installation for Clusters Created Before Rancher v2.0.7.

If you have an HPA manifest file, you can use kubectl to create, manage, and delete HPAs:

  • Creating an HPA

    • With a manifest file: kubectl create -f <HPA_MANIFEST>

    • Without a manifest file (CPU only): kubectl autoscale deployment hello-world --min=2 --max=5 --cpu-percent=50

  • Getting HPA information

    • Basic information: kubectl get hpa hello-world

  • Deleting an HPA (see the lifecycle sketch below)
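
As a quick reference, the following is a hedged sketch of the full lifecycle against the hello-world Deployment used above; the manifest filename hpa.yaml is illustrative, not a file provided by this document.

  # Create the HPA from a manifest file (hpa.yaml is an assumed filename)
  $ kubectl create -f hpa.yaml
  # Inspect current status and detailed scaling events
  $ kubectl get hpa hello-world
  $ kubectl describe hpa hello-world
  # Remove the HPA when it is no longer needed
  $ kubectl delete hpa hello-world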

HPA Manifest Definition Example

An HPA manifest is the configuration file used to manage an HPA with kubectl.

The following snippet demonstrates how the different directives in an HPA manifest are used; see the comments in the example for the purpose of each directive.
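
A minimal sketch of such a manifest, assuming the hello-world Deployment used elsewhere in this document; the autoscaling/v2beta1 API version matches the Kubernetes releases of this era, and the replica bounds and target values are illustrative:

  apiVersion: autoscaling/v2beta1
  kind: HorizontalPodAutoscaler
  metadata:
    name: hello-world        # name of the HPA object
    namespace: default       # namespace of the target workload
  spec:
    scaleTargetRef:          # workload whose replica count the HPA manages
      apiVersion: apps/v1
      kind: Deployment
      name: hello-world
    minReplicas: 1           # lower bound on replicas
    maxReplicas: 10          # upper bound on replicas
    metrics:
    - type: Resource         # scale on a built-in resource metric
      resource:
        name: cpu
        targetAverageUtilization: 50   # target average CPU utilization across pods (%)
    - type: Resource
      resource:
        name: memory
        targetAverageValue: 100Mi      # target average memory usage per pod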

Clusters created in Rancher v2.0.7 and later have all the requirements needed to use the Horizontal Pod Autoscaler (metrics-server and the required Kubernetes cluster configuration). Run the following commands to check that the metrics components were installed successfully:

  $ kubectl top nodes
  NAME                CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
  node-controlplane   196m         9%     1623Mi          42%
  node-worker         64m          3%     1146Mi          29%
  $ kubectl -n kube-system top pods
  NAME                                   CPU(cores)   MEMORY(bytes)
  canal-pgldr                            18m          46Mi
  canal-vhkgr                            20m          45Mi
  canal-x5q5v                            17m          37Mi
  canal-xknnz                            20m          37Mi
  kube-dns-7588d5b5f5-298j2              0m           22Mi
  kube-dns-autoscaler-5db9bbb766-t24hw   0m           5Mi
  metrics-server-97bc649d5-jxrlt         0m           12Mi
  $ kubectl -n kube-system logs -l k8s-app=metrics-server
  I1002 12:55:32.172841 1 heapster.go:71] /metrics-server --source=kubernetes.summary_api:https://kubernetes.default.svc?kubeletHttps=true&kubeletPort=10250&useServiceAccount=true&insecure=true
  I1002 12:55:32.172994 1 heapster.go:72] Metrics Server version v0.2.1
  I1002 12:55:32.173378 1 configs.go:61] Using Kubernetes client with master "https://kubernetes.default.svc" and version
  I1002 12:55:32.173401 1 configs.go:62] Using kubelet port 10250
  I1002 12:55:32.173946 1 heapster.go:128] Starting with Metric Sink
  I1002 12:55:32.592703 1 serving.go:308] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
  I1002 12:55:32.925630 1 heapster.go:101] Starting Heapster API server...
  I1002 12:55:32.928597 1 serve.go:85] Serving securely on 0.0.0.0:443
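
As an additional, hedged check (assuming metrics-server registers the standard v1beta1.metrics.k8s.io APIService), you can also confirm that the resource metrics API itself is registered and responding:

  # Confirm the resource metrics APIService is registered and marked Available
  $ kubectl get apiservice v1beta1.metrics.k8s.io
  # Query the API directly; it should return node metrics as JSON
  $ kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes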

If your cluster was created in Rancher v2.0.6 or earlier, refer to Manual HPA Installation for Clusters Created Before Rancher v2.0.7.

Configuring HPA to Scale Using Custom Metrics from Prometheus

You can configure an HPA to autoscale based on custom metrics provided by third-party software. The most common use case for autoscaling with third-party software is application-level metrics, such as HTTP requests per second. The HPA consumes these metrics through the custom.metrics.k8s.io API, which is enabled by deploying a custom metrics adapter for your metrics collection solution.
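
To illustrate, the following is a hedged sketch of what an HPA that consumes such a custom metric can look like; the metric name http_requests and the target values are assumptions for illustration, not values defined by this document:

  apiVersion: autoscaling/v2beta1
  kind: HorizontalPodAutoscaler
  metadata:
    name: hello-world
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: hello-world
    minReplicas: 2
    maxReplicas: 10
    metrics:
    - type: Pods                    # a per-pod metric served through custom.metrics.k8s.io
      pods:
        metricName: http_requests   # assumed metric name exposed by the metrics adapter
        targetAverageValue: 50      # scale out when the per-pod average exceeds this value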

In this example, we use Prometheus. We start with the following assumptions:

  • Prometheus is deployed in the cluster.
  • Prometheus is configured correctly and collects the appropriate metrics from pods, nodes, namespaces, and so on.
  • The Prometheus service is exposed at the following URL and port: http://prometheus.mycompany.io:80

Prometheus is available in the Rancher v2.0 catalog. If it is not already running in your cluster, deploy it from the Rancher catalog.

Initialize Helm (tiller) in the cluster, then clone the banzai-charts repository:

  # kubectl -n kube-system create serviceaccount tiller
  # kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
  # helm init --service-account tiller

  # git clone https://github.com/banzaicloud/banzai-charts
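
The Prometheus adapter can then be installed from the cloned repository. The following is a hedged sketch: the prometheus-adapter chart path and the prometheus.url / prometheus.port values are assumptions, pointing at the Prometheus service assumed above:

  # helm install --name prometheus-adapter banzai-charts/prometheus-adapter --set prometheus.url=http://prometheus.mycompany.io,prometheus.port=80 --namespace kube-system
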
  1. Check that the adapter pod is running. Enter the following command:

     # kubectl get pods -n kube-system

     In the resulting output, look for the adapter pod with a status of "Running":

     NAME                                                      READY   STATUS    RESTARTS   AGE
     ...
     prometheus-adapter-prometheus-adapter-568674d97f-hbzfx    1/1     Running   0          7h
     ...

  2. Enter the following command to check the pod logs and make sure the service is working correctly:

     # kubectl logs prometheus-adapter-prometheus-adapter-568674d97f-hbzfx -n kube-system

     Then review the Prometheus Adapter log output to confirm that the service is running.

  • If you access the cluster directly, enter the server URL in your kubectl configuration in the following format: https://<Kubernetes_URL>:6443.

    # kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1

    If the API is accessible, you should receive output similar to the following.

    1. {"kind":"APIResourceList", "apiVersion":"v1", "groupVersion":"custom.metrics.k8s.io/v1beta1", "resources":[{"name":"pods/fs_usage_bytes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/memory_rss", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/spec_cpu_period", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/cpu_cfs_throttled", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_io_time", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_read", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_sector_writes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/cpu_user", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/last_seen", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/tasks_state", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/spec_cpu_quota", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/start_time_seconds", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_limit_bytes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_write", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/memory_cache", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/memory_usage_bytes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/cpu_cfs_periods", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/cpu_cfs_throttled_periods", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_reads_merged", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/memory_working_set_bytes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/network_udp_usage", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_inodes_free", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_inodes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_io_time_weighted", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/memory_failures", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/memory_swap", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/spec_cpu_shares", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/spec_memory_swap_limit_bytes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/cpu_usage", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_io_current", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_writes", 
"singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/memory_failcnt", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_reads", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_writes_bytes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_writes_merged", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/network_tcp_usage", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/memory_max_usage_bytes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/spec_memory_limit_bytes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/spec_memory_reservation_limit_bytes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/cpu_load_average_10s", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/cpu_system", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_reads_bytes", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}, {"name":"pods/fs_sector_reads", "singularName":"", "namespaced":true, "kind":"MetricValueList", "verbs":["get"]}]}
  • If you access the cluster through Rancher, enter the server URL in your kubectl configuration in the following format: https://<RANCHER_URL>/k8s/clusters/<CLUSTER_ID>. In other words, add the suffix /k8s/clusters/<CLUSTER_ID> to the API path.

    # kubectl get --raw /k8s/clusters/<CLUSTER_ID>/apis/custom.metrics.k8s.io/v1beta1

    If the API is accessible, you should receive output similar to the following.