
Istio's Circuit Breaking Feature

1. Circuit Breaking: Purpose and Principles

1.1 What Circuit Breaking Does

        Circuit breaking is an important pattern for building resilient microservice applications. It lets your application tolerate failures, latency spikes, and other unpredictable network conditions.

        A circuit breaker is fundamentally a protection mechanism. In a microservice architecture, services are deployed on different nodes; when every service responds successfully, all is well, but in practice that is rarely the case. Downstream clients need protection from slow upstream services, and upstream services in turn need protection from being overloaded by a backlog of requests. Otherwise the whole system can slow down under the pressure and eventually collapse. The circuit breaker pattern is an effective solution to this problem.

1.2 How Circuit Breaking Works

        A circuit breaker has three states: closed, open, and half-open. It starts in the closed state.

        Closed: requests pass through whether they succeed or fail; the breaker does not trip until the preconfigured failure threshold is reached.

        Open: once the threshold is reached, the breaker opens. Calls to a service in the open state are cut off immediately: the breaker returns an error without executing the call. Rejecting downstream requests at the client like this prevents cascading failures in production.

        Half-open: after a preconfigured timeout elapses, the breaker enters the half-open state, giving the failing service time to recover from whatever interrupted it. If requests keep failing in this state, the breaker opens again and continues to block requests; otherwise it closes and the service is allowed to handle requests again.
[Circuit breaker state-transition diagram, reproduced from banzaicloud]
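The three states above can be sketched as a minimal state machine. This is an illustration only, not Istio/Envoy's actual implementation; the names `failure_threshold` and `recovery_timeout` are invented for this sketch.

```python
import time


class CircuitBreaker:
    """Minimal closed/open/half-open state machine (illustration only)."""

    def __init__(self, failure_threshold=3, recovery_timeout=30.0):
        self.failure_threshold = failure_threshold  # failures before opening
        self.recovery_timeout = recovery_timeout    # seconds before half-open
        self.state = "closed"
        self.failures = 0
        self.opened_at = 0.0

    def call(self, fn):
        if self.state == "open":
            if time.monotonic() - self.opened_at >= self.recovery_timeout:
                self.state = "half-open"  # let one trial request through
            else:
                raise RuntimeError("circuit open: request rejected")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            # any failure while half-open, or crossing the threshold, trips it
            if self.state == "half-open" or self.failures >= self.failure_threshold:
                self.state = "open"
                self.opened_at = time.monotonic()
            raise
        else:
            # a success closes the breaker and resets the failure count
            self.state = "closed"
            self.failures = 0
            return result
```

In Istio you never write this logic yourself; the Envoy sidecar implements it, and you only declare the thresholds, as shown in the configuration below.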

1.3 Configuring Circuit Breaking in Istio

        Istio's circuit breaking is configured in the trafficPolicy field of the DestinationRule custom resource. Two TrafficPolicy fields relate to circuit breaking: connectionPool and outlierDetection.

connectionPool: limits the volume of connections to a service, controlling the maximum number of requests, pending requests, retries, and timeouts.

outlierDetection: controls the eviction of unhealthy hosts from the load-balancing pool — the number of errors before a host is ejected, the minimum ejection duration, and the maximum ejection percentage.

Example DestinationRule with circuit breaking configured:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: notifications
spec:
  host: notifications
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutiveErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
```

  The `host` field identifies the target of the rule, here the service named notifications.

  The connectionPool settings allow only a single connection to the notifications service at a time, with at most one pending request per connection. Once those thresholds are exceeded, the circuit breaker opens and blocks further requests.

  The outlierDetection settings check every second whether calls to the service produced errors. If so, the offending host is ejected from the load-balancing pool for at least three minutes (maxEjectionPercent: 100 means that, if necessary, all of the service's instances can be ejected simultaneously).

If Istio mutual TLS authentication is enabled, you must add a TLS traffic policy with mode: ISTIO_MUTUAL to the DestinationRule before applying it; otherwise requests will fail with 503 errors.
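A sketch of where that TLS setting sits inside the trafficPolicy (field names per the Istio DestinationRule API; combine it with the circuit-breaker fields shown earlier):

```yaml
spec:
  host: notifications
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
```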

2. Setting Up the Test Environment

1. Deploy a service that answers requests (httpbin)

2. Create a DestinationRule with a circuit-breaking rule

3. Deploy the Fortio load-testing client

4. Trigger the circuit breaker

2.1 Deploy the httpbin Service

```
[root@k8s-master httpbin]# cat httpbin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: httpbin
---
apiVersion: v1
kind: Service
metadata:
  name: httpbin
  labels:
    app: httpbin
    service: httpbin
spec:
  ports:
  - name: http
    port: 8000
    targetPort: 80
  selector:
    app: httpbin
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpbin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: httpbin
      version: v1
  template:
    metadata:
      labels:
        app: httpbin
        version: v1
    spec:
      serviceAccountName: httpbin
      containers:
      - image: docker.io/kennethreitz/httpbin
        imagePullPolicy: IfNotPresent
        name: httpbin
        ports:
        - containerPort: 80
[root@k8s-master httpbin]# kubectl apply -f httpbin.yaml
serviceaccount/httpbin created
service/httpbin created
deployment.apps/httpbin created
```

Verify that the pod has started successfully:

```
[root@k8s-master httpbin]# kubectl get pod
NAME                       READY   STATUS    RESTARTS       AGE
appv1-5cf75d8d8b-vdvzr     2/2     Running   4 (5d3h ago)   5d4h
appv2-684dd44db7-r6k6k     2/2     Running   4 (5d3h ago)   5d4h
httpbin-74fb669cc6-5hkjz   2/2     Running   0              70s
```

2.2 Create the Circuit-Breaking Rule

```
[root@k8s-master cricuit-breaking]# cat destinationrule.yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: httpbin
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 1
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
    outlierDetection:
      consecutive5xxErrors: 1
      interval: 1s
      baseEjectionTime: 3m
      maxEjectionPercent: 100
[root@k8s-master cricuit-breaking]# kubectl apply -f destinationrule.yaml
destinationrule.networking.istio.io/httpbin created
[root@k8s-master cricuit-breaking]# kubectl get destinationrules.networking.istio.io
NAME      HOST                               AGE
canary    canary.default.svc.cluster.local   5d4h
httpbin   httpbin                            11s
[root@k8s-master cricuit-breaking]# kubectl get destinationrules.networking.istio.io httpbin -o yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  annotations:
  ... ...
spec:
  host: httpbin
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 1
        maxRequestsPerConnection: 1
      tcp:
        maxConnections: 1
    outlierDetection:
      baseEjectionTime: 3m
      consecutive5xxErrors: 1
      interval: 1s
      maxEjectionPercent: 100
[root@k8s-master cricuit-breaking]#
```

2.3 Deploy the Fortio Load-Testing Client

```
[root@k8s-master cricuit-breaking]# cat fortio-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: fortio
  labels:
    app: fortio
    service: fortio
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: fortio
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fortio-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fortio
  template:
    metadata:
      annotations:
        # This annotation causes Envoy to serve cluster.outbound statistics via 15000/stats
        # in addition to the stats normally served by Istio. The Circuit Breaking example task
        # gives an example of inspecting Envoy stats via proxy config.
        proxy.istio.io/config: |-
          proxyStatsMatcher:
            inclusionPrefixes:
            - "cluster.outbound"
            - "cluster_manager"
            - "listener_manager"
            - "server"
            - "cluster.xds-grpc"
      labels:
        app: fortio
    spec:
      containers:
      - name: fortio
        image: fortio/fortio:latest_release
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          name: http-fortio
        - containerPort: 8079
          name: grpc-ping
[root@k8s-master cricuit-breaking]# kubectl apply -f fortio-deploy.yaml
service/fortio created
deployment.apps/fortio-deploy created
[root@k8s-master cricuit-breaking]#
```

Check that the new pod is running:

```
[root@k8s-master cricuit-breaking]# kubectl get pod
NAME                             READY   STATUS    RESTARTS       AGE
appv1-5cf75d8d8b-vdvzr           2/2     Running   4 (5d3h ago)   5d4h
appv2-684dd44db7-r6k6k           2/2     Running   4 (5d3h ago)   5d4h
fortio-deploy-687945c6dc-zjb7s   2/2     Running   0              41s
httpbin-74fb669cc6-5hkjz         2/2     Running   0              14m
[root@k8s-master cricuit-breaking]#
```

Exec into the client pod and use the Fortio tool to call the httpbin service. Note: replace the pod name with the one returned by the `kubectl get pod` command above.

```
[root@k8s-master cricuit-breaking]# kubectl exec fortio-deploy-687945c6dc-zjb7s fortio -- /usr/bin/fortio curl -quiet http://httpbin:8000/get
HTTP/1.1 200 OK
server: envoy
date: Wed, 05 Jan 2022 12:50:06 GMT
content-type: application/json
content-length: 594
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 25

{
  "args": {},
  "headers": {
    "Host": "httpbin:8000",
    "User-Agent": "fortio.org/fortio-1.17.1",
    "X-B3-Parentspanid": "4d6f06146b076eb4",
    "X-B3-Sampled": "1",
    "X-B3-Spanid": "d01931edb506c09b",
    "X-B3-Traceid": "436319cccbb1ae594d6f06146b076eb4",
    "X-Envoy-Attempt-Count": "1",
    "X-Forwarded-Client-Cert": "By=spiffe://cluster.local/ns/default/sa/httpbin;Hash=31f26376fdc1840183774e5be2fe1d56ea196ef69acb2785b6710fe487fcd1df;Subject=\"\";URI=spiffe://cluster.local/ns/default/sa/default"
  },
  "origin": "127.0.0.6",
  "url": "http://httpbin:8000/get"
}
[root@k8s-master cricuit-breaking]#
```

2.4 Trigger the Circuit Breaker

        In the DestinationRule we set maxConnections: 1 and http1MaxPendingRequests: 1, meaning that whenever the concurrent connections and requests exceed one, the surplus requests are rejected with a 503 error.
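That overflow behavior can be illustrated with a toy model. This is a deliberately simplified sketch, not Envoy's actual connection-pool logic: one connection slot plus one pending slot gives a capacity of two, so any further simultaneous request overflows and gets a 503.

```python
import threading


class ConnectionPool:
    """Toy model of maxConnections=1 plus http1MaxPendingRequests=1.

    At most one request is in flight and one is queued; any further
    concurrent request overflows and is rejected with a 503.
    (Illustration only -- Envoy's real pool is far more nuanced.)
    """

    def __init__(self, max_connections=1, max_pending=1):
        self.capacity = max_connections + max_pending
        self.in_use = 0
        self.lock = threading.Lock()

    def handle(self, request_fn):
        with self.lock:
            if self.in_use >= self.capacity:
                return 503        # overflow: the circuit breaker rejects it
            self.in_use += 1
        try:
            return request_fn()   # 200 on success
        finally:
            with self.lock:
                self.in_use -= 1  # free the slot when the request finishes
```

With two slots already occupied, a third concurrent request returns 503 immediately; once a slot frees up, requests succeed again. This matches what the Fortio runs below show: at concurrency 2 only occasional overflows occur, while at concurrency 3 they become frequent.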

First, send 20 requests (-n 20) over two concurrent connections (-c 2):

```
[root@k8s-master cricuit-breaking]# kubectl exec fortio-deploy-687945c6dc-zjb7s fortio -- /usr/bin/fortio load -c 2 -qps 0 -n 20 -loglevel Warning http://httpbin:8000/get
12:53:59 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.17.1 running at 0 queries per second, 2->2 procs, for 20 calls: http://httpbin:8000/get
Starting at max qps with 2 thread(s) [gomax 2] for exactly 20 calls (10 per thread + 0)
12:53:59 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
12:53:59 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
12:53:59 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
Ended after 65.85345ms : 20 calls. qps=303.7
Aggregated Function Time : count 20 avg 0.0059132964 +/- 0.004518 min 0.000490991 max 0.022192328 sum 0.118265928
# range, mid point, percentile, count
>= 0.000490991 <= 0.001 , 0.000745496 , 10.00, 2
> 0.003 <= 0.004 , 0.0035 , 15.00, 1
> 0.004 <= 0.005 , 0.0045 , 60.00, 9
> 0.005 <= 0.006 , 0.0055 , 75.00, 3
> 0.006 <= 0.007 , 0.0065 , 85.00, 2
> 0.008 <= 0.009 , 0.0085 , 90.00, 1
> 0.012 <= 0.014 , 0.013 , 95.00, 1
> 0.02 <= 0.0221923 , 0.0210962 , 100.00, 1
# target 50% 0.00477778
# target 75% 0.006
# target 90% 0.009
# target 99% 0.0217539
# target 99.9% 0.0221485
Sockets used: 5 (for perfect keepalive, would be 2)
Jitter: false
Code 200 : 17 (85.0 %)
Code 503 : 3 (15.0 %)
Response Header Sizes : count 20 avg 195.55 +/- 82.15 min 0 max 231 sum 3911
Response Body/Total Sizes : count 20 avg 736.6 +/- 208.2 min 241 max 825 sum 14732
All done 20 calls (plus 0 warmup) 5.913 ms avg, 303.7 qps
```

Almost all requests got through: at this modest concurrency most were processed (85% returned 200, only 15% returned 503).

Next, raise the concurrency: send 30 requests (-n 30) over three concurrent connections (-c 3):

```
[root@k8s-master cricuit-breaking]# kubectl exec fortio-deploy-687945c6dc-zjb7s fortio -- /usr/bin/fortio load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
13:00:19 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.17.1 running at 0 queries per second, 2->2 procs, for 30 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 2] for exactly 30 calls (10 per thread + 0)
13:00:19 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:00:19 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
Ended after 63.023172ms : 30 calls. qps=476.02
Aggregated Function Time : count 30 avg 0.0046750369 +/- 0.005078 min 0.00040163 max 0.020853069 sum 0.140251107
# range, mid point, percentile, count
>= 0.00040163 <= 0.001 , 0.000700815 , 30.00, 9
> 0.001 <= 0.002 , 0.0015 , 46.67, 5
> 0.003 <= 0.004 , 0.0035 , 50.00, 1
> 0.004 <= 0.005 , 0.0045 , 63.33, 4
> 0.005 <= 0.006 , 0.0055 , 66.67, 1
> 0.006 <= 0.007 , 0.0065 , 76.67, 3
> 0.007 <= 0.008 , 0.0075 , 83.33, 2
> 0.008 <= 0.009 , 0.0085 , 86.67, 1
> 0.009 <= 0.01 , 0.0095 , 93.33, 2
> 0.018 <= 0.02 , 0.019 , 96.67, 1
> 0.02 <= 0.0208531 , 0.0204265 , 100.00, 1
# target 50% 0.004
# target 75% 0.00683333
# target 90% 0.0095
# target 99% 0.0205971
# target 99.9% 0.0208275
Sockets used: 16 (for perfect keepalive, would be 3)
Jitter: false
Code 200 : 15 (50.0 %)
Code 503 : 15 (50.0 %)
Response Header Sizes : count 30 avg 115 +/- 115 min 0 max 230 sum 3450
Response Body/Total Sizes : count 30 avg 532.5 +/- 291.5 min 241 max 824 sum 15975
All done 30 calls (plus 0 warmup) 4.675 ms avg, 476.0 qps
[root@k8s-master cricuit-breaking]#
```

The circuit breaker is now tripping. Run the test a few more times and the effect compounds: once outlier detection ejects the single httpbin pod from the load-balancing pool, every request is rejected with a 503 until the ejection time expires.

```
[root@k8s-master cricuit-breaking]# kubectl exec fortio-deploy-687945c6dc-zjb7s fortio -- /usr/bin/fortio load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
13:05:28 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.17.1 running at 0 queries per second, 2->2 procs, for 30 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 2] for exactly 30 calls (10 per thread + 0)
13:05:28 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:28 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
Ended after 38.111835ms : 30 calls. qps=787.16
Aggregated Function Time : count 30 avg 0.0035141705 +/- 0.008374 min 0.000294241 max 0.033723522 sum 0.105425115
# range, mid point, percentile, count
>= 0.000294241 <= 0.001 , 0.000647121 , 63.33, 19
> 0.001 <= 0.002 , 0.0015 , 90.00, 8
> 0.025 <= 0.03 , 0.0275 , 96.67, 2
> 0.03 <= 0.0337235 , 0.0318618 , 100.00, 1
# target 50% 0.000843165
# target 75% 0.0014375
# target 90% 0.002
# target 99% 0.0326065
# target 99.9% 0.0336118
Sockets used: 28 (for perfect keepalive, would be 3)
Jitter: false
Code 200 : 2 (6.7 %)
Code 503 : 28 (93.3 %)
Response Header Sizes : count 30 avg 15.4 +/- 57.62 min 0 max 231 sum 462
Response Body/Total Sizes : count 30 avg 200.73333 +/- 167.6 min 153 max 825 sum 6022
All done 30 calls (plus 0 warmup) 3.514 ms avg, 787.2 qps
[root@k8s-master cricuit-breaking]#
[root@k8s-master cricuit-breaking]# kubectl exec fortio-deploy-687945c6dc-zjb7s fortio -- /usr/bin/fortio load -c 3 -qps 0 -n 30 -loglevel Warning http://httpbin:8000/get
13:05:32 I logger.go:127> Log level is now 3 Warning (was 2 Info)
Fortio 1.17.1 running at 0 queries per second, 2->2 procs, for 30 calls: http://httpbin:8000/get
Starting at max qps with 3 thread(s) [gomax 2] for exactly 30 calls (10 per thread + 0)
13:05:32 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [0] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [2] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
13:05:32 W http_client.go:806> [1] Non ok http code 503 (HTTP/1.1 503)
Ended after 8.257874ms : 30 calls. qps=3632.9
Aggregated Function Time : count 30 avg 0.00076742657 +/- 0.0002507 min 0.000307242 max 0.001269222 sum 0.023022797
# range, mid point, percentile, count
>= 0.000307242 <= 0.001 , 0.000653621 , 83.33, 25
> 0.001 <= 0.00126922 , 0.00113461 , 100.00, 5
# target 50% 0.000711351
# target 75% 0.000927838
# target 90% 0.00110769
# target 99% 0.00125307
# target 99.9% 0.00126761
Sockets used: 30 (for perfect keepalive, would be 3)
Jitter: false
Code 503 : 30 (100.0 %)
Response Header Sizes : count 30 avg 0 +/- 0 min 0 max 0 sum 0
Response Body/Total Sizes : count 30 avg 153 +/- 0 min 153 max 153 sum 4590
All done 30 calls (plus 0 warmup) 0.767 ms avg, 3632.9 qps
[root@k8s-master cricuit-breaking]#
```
