consul

    First of all, we need to add the following configuration in conf/config.yaml:
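
    For reference, a fuller configuration could look like the sketch below. It is assembled only from options that appear later in this document (the Dump Data section and the debugging output); the concrete values are illustrative and the inline comments are my reading of those options rather than authoritative descriptions:

        discovery:
          consul:
            servers:                     # list of consul server addresses
              - "http://127.0.0.1:8500"
            skip_services:               # services that should not be fetched from consul
              - "service_d"
            timeout:
              connect: 6000              # connect timeout, in milliseconds
              read: 6000                 # read timeout, in milliseconds
              wait: 60                   # long-polling wait time, in seconds
            weight: 1                    # default weight of a discovered node
            fetch_interval: 3            # only effective when keepalive is false
            keepalive: true              # true: long polling (default); false: short polling
            default_service:             # optional default/fallback node and its metadata
              host: "172.19.5.11"
              port: 8899
              metadata:
                fail_timeout: 1
                weight: 1
                max_fails: 1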

    Or you can use the short form, relying on the default values:

        discovery:
          consul:
            servers:
              - "http://127.0.0.1:8500"

    The keepalive option has two possible values:

    • true, the default and recommended value: use long polling to query the consul servers
    • false, not recommended: use short polling to query the consul servers; in this case you can set fetch_interval to control the fetch interval (see the sketch below)
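
    For instance, a minimal sketch of the short-polling mode; the 3-second interval is illustrative and matches the fetch_interval shown in the debugging output later in this document:

        discovery:
          consul:
            servers:
              - "http://127.0.0.1:8500"
            keepalive: false       # use short polling instead of long polling
            fetch_interval: 3      # query the consul servers every 3 seconds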

    Dump Data

    When we need to reload apisix online, the consul module may load data from consul more slowly than routes are loaded from etcd, so until the load from consul succeeds you may see log entries like:

        http_access_phase(): failed to set upstream: no valid upstream node

    So we introduced the dump feature for the consul module: on reload, the previously dumped file is loaded before the data from consul arrives, and whenever the registered nodes in consul are updated, the upstream nodes are automatically dumped to the file. The dump behaviour is controlled by the options below; a combined configuration example is shown after the list.

    • path, the path where the dump file is saved
      • a relative path is supported, e.g. logs/consul.dump
      • an absolute path is supported, e.g. /tmp/consul.dump
      • make sure the dump file's parent directory exists
      • make sure apisix has read-write permission on the dump file, e.g. add the config below in conf/config.yaml:

            nginx_config:   # config for rendering the template to generate nginx.conf
              user: root    # specifies the execution user of the worker process

    • load_on_init, default value is true
      • if true, always try to load the data from the dump file before loading data from consul on startup, regardless of whether the dump file exists
      • if false, do not load data from the dump file
      • either way, you do not need to prepare a dump file for apisix in advance
    • expire, in seconds, avoids loading expired dump data
      • the default is 0, which means the data never expires
      • 2592000 is recommended, which is 30 days (3600 * 24 * 30)
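
    Putting the options above together, the dump settings could be configured roughly as follows; this is a sketch that assumes they live under a dump key of the consul discovery section, reusing the example path and the recommended expire value from the list:

        discovery:
          consul:
            servers:
              - "http://127.0.0.1:8500"
            dump:
              path: "logs/consul.dump"   # relative to the apisix working directory
              load_on_init: true         # load the dump file before consul data on startup
              expire: 2592000            # discard dumped data older than 30 days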

    Now, register nodes into consul:
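
    For instance, the sketch below registers two nodes of a service named service_a through Consul's /v1/agent/service/register HTTP endpoint; the service IDs are made up, while the host and ports match the service_a nodes shown in the debugging output later in this document:

        $ curl -X PUT 'http://127.0.0.1:8500/v1/agent/service/register' -d '
        {
          "ID": "service_a1",
          "Name": "service_a",
          "Address": "127.0.0.1",
          "Port": 30513,
          "Weights": { "Passing": 1, "Warning": 1 }
        }'

        $ curl -X PUT 'http://127.0.0.1:8500/v1/agent/service/register' -d '
        {
          "ID": "service_a2",
          "Name": "service_a",
          "Address": "127.0.0.1",
          "Port": 30514,
          "Weights": { "Passing": 1, "Warning": 1 }
        }'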

    In some cases, the same service name may exist on different consul servers. To avoid confusion, use the full consul key URL path as the service name in practice.

    L7

    Here is an example of routing a request with a URL of "/*" to a service named "service_a" that uses the consul discovery client in the registry:

        $ curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
        {
          "uri": "/*",
          "upstream": {
            "service_name": "service_a",
            "type": "roundrobin",
            "discovery_type": "consul"
          }
        }'

    The formatted response is as below:

        {
          "key": "/apisix/routes/1",
          "value": {
            "uri": "/*",
            "priority": 0,
            "id": "1",
            "upstream": {
              "scheme": "http",
              "type": "roundrobin",
              "discovery_type": "consul",
              "pass_host": "pass"
            },
            "create_time": 1669267329,
            "status": 1,
            "update_time": 1669267329
          }
        }
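
    Once the route is created, requests hitting the gateway are forwarded to whichever service_a nodes are currently registered in consul. A quick check could look like the following, assuming APISIX proxies on its default 9080 port and the registered node actually serves the requested path:

        $ curl -i http://127.0.0.1:9080/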

    You can find more usage examples in the apisix/t/discovery/consul.t file.

    L4

    Here is an example of creating a stream route whose upstream uses the consul discovery client:

        $ curl http://127.0.0.1:9180/apisix/admin/stream_routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -i -d '
        {
          "remote_addr": "127.0.0.1",
          "upstream": {
            "scheme": "tcp",
            "service_name": "service_a",
            "type": "roundrobin",
            "discovery_type": "consul"
          }
        }'

    You can find more usage examples in the apisix/t/discovery/stream/consul.t file.

    It also offers a control API for debugging.

    For example:

        # curl http://127.0.0.1:9090/v1/discovery/consul/dump | jq
        {
          "config": {
            "fetch_interval": 3,
            "timeout": {
              "wait": 60,
              "connect": 6000,
              "read": 6000
            },
            "weight": 1,
            "servers": [
              "http://172.19.5.30:8500",
              "http://172.19.5.31:8500"
            ],
            "keepalive": true,
            "default_service": {
              "host": "172.19.5.11",
              "port": 8899,
              "metadata": {
                "fail_timeout": 1,
                "weight": 1,
                "max_fails": 1
              }
            },
            "skip_services": [
              "service_d"
            ]
          },
          "services": {
            "service_a": [
              {
                "host": "127.0.0.1",
                "port": 30513,
                "weight": 1
              },
              {
                "host": "127.0.0.1",
                "port": 30514,
                "weight": 1
              }
            ],
            "service_b": [
              {
                "host": "172.19.5.51",
                "port": 50051,
                "weight": 1
              }
            ],
            "service_c": [
              {
                "host": "127.0.0.1",
                "port": 30511,
                "weight": 1
              },
              {
                "host": "127.0.0.1",
                "port": 30512,
                "weight": 1
              }
            ]
          }
        }

    It also offers another control API for viewing the dump file; more debugging APIs may be added in the future.

        GET /v1/discovery/consul/show_dump_file

    For example:

        curl http://127.0.0.1:9090/v1/discovery/consul/show_dump_file | jq
        {
          "services": {
            "service_a": [
              {
                "host": "172.19.5.12",
                "port": 8000,
                "weight": 120
              },
              {
                "host": "172.19.5.13",
                "port": 8000,
                "weight": 120
              }
            ]
          },
          "expire": 0
        }