
Deploying ELK on a Kubernetes Cluster

1 Prepare the Kubernetes environment

Deploy a Kubernetes cluster with kubeadm or any other method.

Create a namespace in the cluster: halashow

2 ELK deployment architecture

3 Deploy Elasticsearch

3.1 Prepare the resource manifests

The Deployment contains the Elasticsearch application container plus an init container whose only job is to set vm.max_map_count=262144.

The Service exposes port 9200; other services can reach Elasticsearch via the service name plus that port.
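The in-cluster address other services use follows the Kubernetes service DNS convention. A minimal sketch of how such an address is assembled (plain Python, no cluster required; the service and namespace names are taken from the manifest below):

```python
def service_url(service: str, namespace: str, port: int, scheme: str = "http") -> str:
    """Build the cluster-internal URL for a Kubernetes Service.

    Pods in the same namespace can use just the service name;
    the fully qualified form works from any namespace.
    """
    return f"{scheme}://{service}.{namespace}.svc.cluster.local:{port}"

# The Elasticsearch Service below would be reachable at:
print(service_url("elasticsearch", "halashow", 9200))
# http://elasticsearch.halashow.svc.cluster.local:9200
```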

apiVersion: v1
kind: Service
metadata:
  namespace: halashow
  name: elasticsearch
  labels:
    app: elasticsearch-logging
spec:
  type: ClusterIP
  ports:
  - port: 9200
    name: elasticsearch
  selector:
    app: elasticsearch-logging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: elasticsearch-logging
    version: v1
  name: elasticsearch
  namespace: halashow
spec:
  minReadySeconds: 10
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: elasticsearch-logging
      version: v1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: elasticsearch-logging
        version: v1
    spec:
      affinity:
        nodeAffinity: {}
      containers:
      - env:
        - name: discovery.type
          value: single-node
        - name: ES_JAVA_OPTS
          value: -Xms512m -Xmx512m
        - name: MINIMUM_MASTER_NODES
          value: "1"
        image: docker.elastic.co/elasticsearch/elasticsearch:7.12.0-amd64
        imagePullPolicy: IfNotPresent
        name: elasticsearch-logging
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 1Gi
          requests:
            cpu: "1"
            memory: 1Gi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: es-persistent-storage
      dnsPolicy: ClusterFirst
      imagePullSecrets:
      - name: user-1-registrysecret
      initContainers:
      - command:
        - /sbin/sysctl
        - -w
        - vm.max_map_count=262144
        image: alpine:3.6
        imagePullPolicy: IfNotPresent
        name: elasticsearch-logging-init
        resources: {}
        securityContext:
          privileged: true
          procMount: Default
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - hostPath:
          path: /data/elk/elasticsearch-logging
          type: DirectoryOrCreate
        name: es-persistent-storage
      nodeSelector:
        alibabacloud.com/is-edge-worker: 'false'
        beta.kubernetes.io/arch: amd64
        beta.kubernetes.io/os: linux
      tolerations:
      - effect: NoSchedule
        key: node-role.alibabacloud.com/addon
        operator: Exists

For a persistent Elasticsearch deployment, see this reference: https://www.51cto.com/article/673023.html
apiVersion: v1
kind: Service
metadata:
  name: es
  namespace: default
  labels:
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Elasticsearch"
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: es
---
# RBAC authn and authz
apiVersion: v1
kind: ServiceAccount
metadata:
  name: es
  namespace: default
  labels:
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: es
  labels:
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: es
  labels:
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
subjects:
- kind: ServiceAccount
  name: es
  namespace: default
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: es
  apiGroup: ""
---
# Elasticsearch deployment itself
apiVersion: apps/v1
kind: StatefulSet          # a StatefulSet gives each pod a stable, ordered identity
metadata:
  name: es
  namespace: default
  labels:
    k8s-app: es
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    srv: srv-elasticsearch
spec:
  serviceName: es          # ties the StatefulSet to the Service, so each pod gets a stable DNS name (es-0.es.default.svc.cluster.local)
  replicas: 1              # single-node deployment
  selector:
    matchLabels:
      k8s-app: es          # must match the pod template labels
  template:
    metadata:
      labels:
        k8s-app: es
        kubernetes.io/cluster-service: "true"
    spec:
      serviceAccountName: es
      containers:
      - image: docker.io/library/elasticsearch:7.10.1
        name: es
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 500Mi
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: es
          mountPath: /usr/share/elasticsearch/data/   # data directory
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: "discovery.type"   # single-node mode
          value: "single-node"
        - name: ES_JAVA_OPTS       # JVM heap settings; increase as needed
          value: "-Xms1024m -Xmx4g"
      volumes:
      - name: es
        hostPath:
          path: /data/es/
      nodeSelector:                # add a nodeSelector if the pod must follow its hostPath data
        es: data
      tolerations:
      - effect: NoSchedule
        operator: Exists
      # Elasticsearch requires vm.max_map_count to be at least 262144.
      # If your OS already sets up this number to a higher value, feel free
      # to remove this init container.
      initContainers:              # runs before the main container starts
      - name: es-init
        image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]   # raise the mmap count limit; too low a value causes out-of-memory errors
        securityContext:           # applies only to this container, not to volumes
          privileged: true         # run as a privileged container
      - name: increase-fd-ulimit
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "ulimit -n 65536"]   # raise the maximum number of file descriptors
        securityContext:
          privileged: true
      - name: elasticsearch-volume-init   # make the hostPath data directory writable (mode 777)
        image: alpine:3.6
        command:
        - chmod
        - -R
        - "777"
        - /usr/share/elasticsearch/data/
        volumeMounts:
        - name: es
          mountPath: /usr/share/elasticsearch/data/
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: default
  labels:
    k8s-app: kibana
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "Kibana"
    srv: srv-kibana
spec:
  type: NodePort             # exposed via NodePort 30561
  ports:
  - port: 5601
    nodePort: 30561
    protocol: TCP
    targetPort: ui
  selector:
    k8s-app: kibana
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: default
  labels:
    k8s-app: kibana
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    srv: srv-kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      containers:
      - name: kibana
        image: docker.io/kubeimages/kibana:7.9.3   # this image supports both arm64 and amd64
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://es:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: networking.k8s.io/v1    # extensions/v1beta1 Ingress was removed in newer clusters
kind: Ingress
metadata:
  name: kibana
  namespace: default                # must match the namespace of the kibana Service above
spec:
  rules:
  - host: kibana.ctnrs.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana
            port:
              number: 5601

4 Deploy Logstash

Create a ConfigMap that defines the Logstash configuration, which consists mainly of the following:

  input: defines the sources Logstash reads events from.

  filter: defines parsing and filtering rules.

  output: can write events to Elasticsearch, Redis, Kafka, and so on.
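The dissect filter used below splits each log line on literal delimiters rather than regular expressions. A rough Python equivalent of the mapping `[%{Time}] %{LogLevel} %{message}` (an illustration only, not the real dissect implementation):

```python
def dissect(line: str) -> dict:
    """Approximate the dissect mapping '[%{Time}] %{LogLevel} %{message}'."""
    # Time is everything between the leading '[' and the first '] '
    time_part, _, rest = line.lstrip("[").partition("] ")
    # LogLevel is the next whitespace-delimited token; message is the remainder
    level, _, message = rest.partition(" ")
    return {"Time": time_part, "LogLevel": level, "message": message}

print(dissect("[2024-05-01 10:00:00] ERROR connection refused"))
# {'Time': '2024-05-01 10:00:00', 'LogLevel': 'ERROR', 'message': 'connection refused'}
```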

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: halashow
data:
  logstash.conf: |-
    input {
      redis {
        host => "10.36.21.220"
        port => 30079
        db => 0
        key => "localhost"
        password => "123456"
        data_type => "list"
        threads => 4
        batch_count => "1"
        #tags => "user.log"
      }
    }
    filter {
      dissect {
        mapping => { "message" => "[%{Time}] %{LogLevel} %{message}" }
      }
    }
    output {
      if "nginx.log" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "nginx.log"
        }
      }
      if "osale-uc-test" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "osale-uc-test"
        }
      }
      if "osale-jindi-client-test" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "osale-jindi-client-test"
        }
      }
      if "osale-admin-weixin" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "osale-admin-weixin"
        }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash
  namespace: halashow
  labels:
    name: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      name: logstash
  template:
    metadata:
      labels:
        app: logstash
        name: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.12.0
        ports:
        - containerPort: 5044
          protocol: TCP
        - containerPort: 9600
          protocol: TCP
        volumeMounts:
        - name: logstash-config
          mountPath: /usr/share/logstash/pipeline/logstash.conf
          subPath: logstash.conf
      volumes:
      - name: logstash-config
        configMap:
          name: logstash-config
---
apiVersion: v1
kind: Service
metadata:
  namespace: halashow
  name: logstash
  labels:
    app: logstash
spec:
  type: ClusterIP
  ports:
  - port: 5044
    name: logstash
  selector:
    app: logstash
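The output section above routes each event to a per-tag index; an event carrying several known tags is written to several indexes, since the conditionals are independent. The routing logic can be read as a simple lookup, sketched here in Python (illustrative only):

```python
# Index names mirror the tags used in the logstash output conditionals above.
TAG_TO_INDEX = {
    "nginx.log": "nginx.log",
    "osale-uc-test": "osale-uc-test",
    "osale-jindi-client-test": "osale-jindi-client-test",
    "osale-admin-weixin": "osale-admin-weixin",
}

def route(event: dict) -> list:
    """Return the indexes an event is written to, one per matching tag."""
    return [TAG_TO_INDEX[t] for t in event.get("tags", []) if t in TAG_TO_INDEX]

print(route({"tags": ["nginx.log"], "message": "GET / 200"}))
# ['nginx.log']
```

Events with no recognized tag match none of the conditionals and are silently dropped.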

5 Deploy Redis 5.0

apiVersion: v1
kind: ConfigMap
metadata:
  name: elk-redis
  labels:
    app: elk-redis
data:
  redis.conf: |-
    bind 0.0.0.0
    daemonize no
    pidfile "/var/run/redis.pid"
    port 6379
    timeout 300
    loglevel warning
    logfile "redis.log"
    databases 16
    rdbcompression yes
    dbfilename "redis.rdb"
    dir "/data"
    requirepass "123456"
    masterauth "123456"
    maxclients 10000
    maxmemory 1000mb
    maxmemory-policy allkeys-lru
    appendonly yes
    appendfsync always
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elk-redis
  labels:
    app: elk-redis
spec:
  serviceName: elk-redis   # required by StatefulSets; a matching headless Service is assumed
  replicas: 1
  selector:
    matchLabels:
      app: elk-redis
  template:
    metadata:
      labels:
        app: elk-redis
    spec:
      containers:
      - name: redis
        image: redis:5.0.7
        command:
        - "sh"
        - "-c"
        - "redis-server /usr/local/redis/redis.conf"
        ports:
        - containerPort: 6379
        resources:
          limits:
            cpu: 1000m
            memory: 1024Mi
          requests:
            cpu: 1000m
            memory: 1024Mi
        livenessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 300
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        readinessProbe:
          tcpSocket:
            port: 6379
          initialDelaySeconds: 5
          timeoutSeconds: 1
          periodSeconds: 10
          successThreshold: 1
          failureThreshold: 3
        volumeMounts:
        - name: data
          mountPath: /data
        # timezone setting
        - name: timezone
          mountPath: /etc/localtime
        - name: config
          mountPath: /usr/local/redis/redis.conf
          subPath: redis.conf
      volumes:
      - name: config
        configMap:
          name: elk-redis
      - name: timezone
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: data
        hostPath:
          type: DirectoryOrCreate
          path: /data/elk/elk-redis
      nodeName: gem-yxyw-t-c02

To improve Redis performance, disable persistence. Redis enables persistence by default, and the default mode is RDB.

1. Comment out the existing persistence rule:

# save 3600 1 300 100 60 10000

2. Set the save directive to empty:

save ""

3. Delete the dump.rdb dump file:

rm -f dump.rdb

To disable AOF as well, set appendonly to no.
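The edits above are mechanical enough to script. A sketch that applies them to the text of a redis.conf (illustrative; the sample input line is the default save rule quoted above):

```python
def disable_persistence(conf: str) -> str:
    """Comment out RDB 'save' rules, append save "", and turn AOF off."""
    out = []
    for line in conf.splitlines():
        stripped = line.strip()
        if stripped.startswith("save "):
            out.append("# " + line)       # 1. comment out the old persistence rule
        elif stripped.startswith("appendonly"):
            out.append("appendonly no")   # disable AOF
        else:
            out.append(line)
    out.append('save ""')                 # 2. an empty save directive disables RDB
    return "\n".join(out)

print(disable_persistence("save 3600 1 300 100 60 10000\nappendonly yes"))
```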

 

6 Deploy Filebeat (deploying it on Kubernetes did not work; installing the release tarball directly on the host did)

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.8.0-linux-x86_64.tar.gz
tar -zxvf filebeat-7.8.0-linux-x86_64.tar.gz
vi /data/elk/filebeat/filebeat-7.8.0-linux-x86_64/filebeat.yml

filebeat.yml:

filebeat.inputs:
- type: log
  enabled: true
  paths:
  - /data/test-logs/osale-uc-test/*.log
  tags: ["osale-uc-test"]      # top-level tags, so the logstash [tags] conditionals match
- type: log
  enabled: true
  paths:
  - /data/test-logs/osale-jindi-client-test/*.log
  tags: ["osale-jindi-client-test"]
- type: log
  enabled: true
  paths:
  - /data/test-logs/osale-admin-weixin-test/*/osale-admin-weixin/*.log
  tags: ["osale-admin-weixin"]
- type: log
  enabled: true
  paths:
  - /data/tengine/logs/*.log
  tags: ["nginx.log"]
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 1
setup.kibana:
output.redis:
  enabled: true
  hosts: ["10.36.21.220:30079"]
  password: "123456"
  db: 0
  key: localhost
  worker: 4
  timeout: 5
  max_retries: 3
  datatype: list
processors:
- add_host_metadata: ~
- add_cloud_metadata: ~
- add_docker_metadata: ~
- add_kubernetes_metadata: ~
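With `datatype: list`, Filebeat pushes one JSON document per event onto the Redis key, and the Logstash redis input pops them off the same list. A toy simulation of that handoff, using an in-memory dict in place of Redis (no Redis server required; the key name follows the config above):

```python
import json

fake_redis: dict = {}   # stands in for the Redis server

def filebeat_publish(key: str, message: str, tags: list) -> None:
    """Filebeat's redis output pushes one JSON document per event (~ RPUSH)."""
    event = {"message": message, "tags": tags}
    fake_redis.setdefault(key, []).append(json.dumps(event))

def logstash_consume(key: str):
    """Logstash's redis input pops events off the same list (~ LPOP)."""
    queue = fake_redis.get(key, [])
    return json.loads(queue.pop(0)) if queue else None

filebeat_publish("localhost", "GET /index 200", ["nginx.log"])
print(logstash_consume("localhost"))
# {'message': 'GET /index 200', 'tags': ['nginx.log']}
```

The list acts as a buffer: if Logstash falls behind, events accumulate in Redis instead of being dropped.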
For reference, these are the manifests used in the (unsuccessful) attempt to run Filebeat on Kubernetes, shipping straight to Logstash:

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config-to-logstash
  namespace: halashow
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      paths:
      - /logm/*.log
    output.logstash:
      hosts: ['logstash:5044']
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: filebeat
  namespace: halashow
  labels:
    name: filebeat
spec:
  replicas: 1
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        app: filebeat
        name: filebeat
    spec:
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.12.0
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        volumeMounts:
        - mountPath: /logm
          name: logm
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
      volumes:
      - name: logm
        emptyDir: {}
      - name: config
        configMap:
          defaultMode: 0640
          name: filebeat-config-to-logstash

cd /data/elk/filebeat-7.8.0-linux-x86_64

sudo ./filebeat -e -c filebeat.yml -d "publish"         # start filebeat in the foreground

nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &   # start it in the background
 

7 Deploy Kibana

7.1 Prepare the resource manifests

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: halashow
  labels:
    name: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kibana
  template:
    metadata:
      labels:
        app: kibana
        name: kibana
    spec:
      restartPolicy: Always
      containers:
      - name: kibana
        image: kibana:7.12.0
        imagePullPolicy: Always
        ports:
        - containerPort: 5601
        resources:
          requests:
            memory: 1024Mi
            cpu: 50m
          limits:
            memory: 1024Mi
            cpu: 1000m
        volumeMounts:
        - name: kibana-config
          mountPath: /usr/share/kibana/config/kibana.yml
          subPath: kibana.yml
      volumes:
      - name: kibana-config
        configMap:
          name: kibana-cm
          items:
          - key: "kibana.yml"
            path: "kibana.yml"
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: kibana
  name: kibana
  namespace: halashow
spec:
  type: NodePort
  ports:
  - name: kibana
    port: 5601
    nodePort: 30102
    protocol: TCP
    targetPort: 5601
  selector:
    app: kibana
---
# The kibana-cm ConfigMap referenced by the Deployment holds kibana.yml:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kibana-cm
  namespace: halashow
data:
  kibana.yml: |-
    server.name: kibana
    server.host: "0"
    elasticsearch.hosts: [ "http://elasticsearch:9200" ]
    monitoring.ui.container.elasticsearch.enabled: true
    i18n.locale: "zh-CN"   # Chinese localization for the Kibana UI

If Kibana sits behind an Nginx reverse proxy, the location must be /, otherwise its assets fail to load:

location / {
    proxy_pass http://ip:port;
}

1. Set an expiry time on Elasticsearch indices in Kibana so they are deleted automatically

First create an index pattern whose name ends with * so that indices for every date are matched. Then create a ten-day index lifecycle policy, and an index template that matches all of the log indices to be managed. The template can embed the lifecycle settings directly (the code below can be copied in), or the template can be attached to the policy afterwards from the lifecycle management screen.

 

2. Create an index lifecycle policy

In Kibana, go to Index Management → Index Lifecycle Policies and create a policy, configuring the Delete phase.

3. Create an index template to manage all the indices

In Kibana, go to Index Templates and create the template.

{
  "index": {
    "lifecycle": {
      "name": "gdnb-test-10day"
    }
  }
}
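Attaching a lifecycle policy to a template is just an index-settings document. A sketch that builds an equivalent index-template body programmatically (the policy name follows the text around it; the index pattern "gdnb-test-*" is an assumption for illustration):

```python
import json

def lifecycle_settings(policy_name: str) -> dict:
    """Index settings that bind an ILM policy to indices created from a template."""
    return {"index": {"lifecycle": {"name": policy_name}}}

# Body for PUT _index_template/<name>: match the dated log indices
# (pattern assumed) and attach the ten-day policy.
template = {
    "index_patterns": ["gdnb-test-*"],
    "template": {"settings": lifecycle_settings("gdnb-test-10day")},
}
print(json.dumps(template, indent=2))
```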

You can either paste the lifecycle settings above into the template, or go into Index Lifecycle Policies and attach the gdnb-test-10day policy to this index template directly.

4. Add the index template for the logs that should be kept for ten days to the newly created lifecycle policy.
