Purpose

Modify the Docker logging driver

First, change the Docker logging driver to json-file. For docker-1.12 installed via yum on CentOS, configure it as follows.

Edit the logging configuration file, creating it if it does not exist: Shell># vi /etc/docker/daemon.json

  1. "log-driver": "json-file",
  2. "log-opts": {
  3. "max-size": "10m",
  4. "max-file": "3"
  5. }
  6. }
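After saving daemon.json, restart the Docker daemon so the json-file driver and the rotation options take effect; note that on docker-1.12 this also restarts running containers unless live-restore is enabled. A quick way to apply and verify the change:

    # restart the daemon and confirm the active logging driver
    systemctl restart docker
    docker info | grep -i 'logging driver'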

Deployment

    apiVersion: v1
    kind: Namespace
    metadata:
      name: kube-logging

Here a Deployment is used to run Elasticsearch as a single-node application, and a Service is published inside the Kubernetes cluster on ports 9200 and 9300.
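The Elasticsearch manifest itself is not reproduced in this section. For orientation only, a minimal sketch of such a single-node Deployment and Service is shown below; the image tag and the discovery.type=single-node setting are assumptions and should be adjusted to the version actually in use. The Service must be named elasticsearch in the kube-logging namespace so that the address referenced later in kibana.yml (elasticsearch.kube-logging.svc.cluster.local) resolves.

    ---
    kind: Deployment
    apiVersion: apps/v1beta2
    metadata:
      name: elasticsearch
      namespace: kube-logging
      labels:
        k8s-app: elasticsearch
    spec:
      replicas: 1
      selector:
        matchLabels:
          k8s-app: elasticsearch
      template:
        metadata:
          labels:
            k8s-app: elasticsearch
        spec:
          containers:
          - name: elasticsearch
            # assumed image; use the version matching your Kibana
            image: docker.elastic.co/elasticsearch/elasticsearch:6.5.4
            env:
            - name: discovery.type      # run as a standalone single-node instance
              value: single-node
            ports:
            - name: http
              containerPort: 9200
            - name: transport
              containerPort: 9300
    ---
    kind: Service
    apiVersion: v1
    metadata:
      name: elasticsearch
      namespace: kube-logging
    spec:
      selector:
        k8s-app: elasticsearch
      ports:
      - name: http
        port: 9200
        targetPort: 9200
      - name: transport
        port: 9300
        targetPort: 9300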

Here we create a ConfigMap holding the configuration file Kibana needs at startup; when Kibana is deployed, it is mounted into the pod. The file defines the Kibana server name and host address and, most importantly, the Elasticsearch service URL and port. The commented-out lines are the security settings used when X-Pack is enabled on Elasticsearch; they are not used here.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kibana
      namespace: kube-logging
    data:
      kibana.yml: |
        server.name: kibana
        server.host: "0"
        elasticsearch.url: http://elasticsearch.kube-logging.svc.cluster.local:9200
        #elasticsearch.username: elastic
        #elasticsearch.password: changeme
        #xpack.monitoring.ui.container.elasticsearch.enabled: true
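Assuming the manifest above is saved as kibana-configmap.yaml (the file name is arbitrary), it can be applied and checked with:

    kubectl apply -f kibana-configmap.yaml
    # confirm that kibana.yml is stored in the ConfigMap
    kubectl -n kube-logging get configmap kibana -o yaml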

Create the Kibana Deployment and Service

    ---
    kind: Deployment
    apiVersion: apps/v1beta2
    metadata:
      labels:
        k8s-app: kubernetes-kibana
      name: kubernetes-kibana
      namespace: kube-logging
    spec:
      replicas: 1
      revisionHistoryLimit: 10
      selector:
        matchLabels:
          k8s-app: kubernetes-kibana
      template:
        metadata:
          labels:
            k8s-app: kubernetes-kibana
        spec:
          containers:
          - name: kubernetes-kibana
            # assumed image; pick the Kibana version that matches your Elasticsearch
            image: docker.elastic.co/kibana/kibana:6.5.4
            ports:
            - name: kibana-web
              containerPort: 5601
              protocol: TCP
            volumeMounts:
            - name: config
              mountPath: /usr/share/kibana/config/kibana.yml
              subPath: kibana.yml
          volumes:
          - name: config
            configMap:
              name: kibana
    ---
    kind: Service
    apiVersion: v1
    metadata:
      labels:
        k8s-app: kubernetes-kibana
      name: kibana
      namespace: kube-logging
    spec:
      type: ClusterIP
      clusterIP: 10.254.0.203
      ports:
      - name: kibana-web
        port: 5601
        targetPort: 5601
      selector:
        k8s-app: kubernetes-kibana
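Once this manifest is applied, the Kibana pod and the fixed ClusterIP can be verified before the ingress is set up (the file name below is an assumption):

    kubectl apply -f kibana.yaml
    kubectl -n kube-logging get pods -l k8s-app=kubernetes-kibana
    # the service should show CLUSTER-IP 10.254.0.203 and port 5601
    kubectl -n kube-logging get svc kibana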

Note: before running the commands below, the Kubernetes nginx ingress must be configured first; see "12-A-接入点-nginx ingress". We publish a virtual host with the domain kibana.k8s.com for Kibana; the backing service is kibana on port 5601.
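The ingress manifest itself belongs to the referenced document; for orientation, a minimal sketch of such a rule using the extensions/v1beta1 API of this Kubernetes generation could look like the following, with the host and backend values taken from the text above:

    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: kibana
      namespace: kube-logging
    spec:
      rules:
      - host: kibana.k8s.com
        http:
          paths:
          - path: /
            backend:
              serviceName: kibana
              servicePort: 5601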

Create RBAC

Create a ServiceAccount named fluentd and bind it to a ClusterRole that grants access to the pods resource in the core ("") API group, with the get, list, and watch verbs.

    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: fluentd
      namespace: kube-logging
    ---
    apiVersion: rbac.authorization.k8s.io/v1beta1
    kind: ClusterRole
    metadata:
      name: fluentd
    rules:
    - apiGroups:
      - ""
      resources:
      - pods
      verbs:
      - get
      - list
      - watch
    ---
    kind: ClusterRoleBinding
    apiVersion: rbac.authorization.k8s.io/v1beta1
    metadata:
      name: fluentd
    roleRef:
      kind: ClusterRole
      name: fluentd
      apiGroup: rbac.authorization.k8s.io
    subjects:
    - kind: ServiceAccount
      name: fluentd
      namespace: kube-logging
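The three objects above can be applied together and the binding verified, for example (the file name is an assumption):

    kubectl apply -f fluentd-rbac.yaml
    kubectl -n kube-logging get serviceaccount fluentd
    kubectl get clusterrolebinding fluentd -o wide

Next, create the ConfigMap with fluentd's main configuration: fluent.conf includes kubernetes.conf (the input and filter configuration typically shipped with the fluentd Kubernetes image) and forwards everything caught by <match **> to Elasticsearch.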
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: fluentd
      namespace: kube-logging
    data:
      fluent.conf: |
        @include kubernetes.conf
        <match **>
          type elasticsearch
          log_level info
          include_tag_key true
          host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
          port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
          scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
          user "#{ENV['FLUENT_ELASTICSEARCH_USER']}"
          password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD']}"
          reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'true'}"
          logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'logstash'}"
          logstash_format true
          buffer_chunk_limit 2M
          buffer_queue_limit 32
          flush_interval 5s
          max_retry_wait 30
          disable_retry_limit
          num_threads 8
        </match>
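The "#{ENV['...']}" expressions are evaluated by fluentd when the configuration is loaded, so the Elasticsearch address and credentials come from environment variables rather than being hard-coded. Those variables are expected to be set on the fluentd container in the DaemonSet created next; an assumed fragment of that container spec could look like:

    env:
    - name: FLUENT_ELASTICSEARCH_HOST
      value: "elasticsearch.kube-logging.svc.cluster.local"
    - name: FLUENT_ELASTICSEARCH_PORT
      value: "9200"
    - name: FLUENT_ELASTICSEARCH_SCHEME
      value: "http"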

Create the DaemonSet