Front Proxy

    The docker compose deployment is organized as follows:

    All incoming requests are routed via the front Envoy, which acts as a reverse proxy at the edge of the envoymesh network. Port 80 is mapped to port 8000 by docker compose (see /examples/front-proxy/docker-compose.yaml). Moreover, notice that all traffic routed by the front Envoy to the service containers is actually routed to the service Envoys (routes set up in /examples/front-proxy/front-envoy.yaml). In turn, the service Envoys route the request to the flask app via the loopback address (routes set up in /examples/front-proxy/service-envoy.yaml). This setup illustrates the advantage of running service Envoys colocated with your services: all requests are handled by the service Envoy and efficiently routed to your services.
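
    To make the routing concrete, here is a sketch of the relevant fragment of the front Envoy's route configuration (illustrative only; the authoritative file is /examples/front-proxy/front-envoy.yaml, and field names follow Envoy's route configuration API):

        # Illustrative fragment; see /examples/front-proxy/front-envoy.yaml
        # for the actual configuration.
        route_config:
          virtual_hosts:
          - name: backend
            domains: ["*"]
            routes:
            - match: { prefix: "/service/1" }
              route: { cluster: service1 }
            - match: { prefix: "/service/2" }
              route: { cluster: service2 }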

    The following documentation runs through the setup of an Envoy cluster organized as described above.

    Step 1: Install Docker

    Ensure that you have recent versions of docker and docker-compose installed.
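
    You can check the installed versions with:

        $ docker --version
        $ docker-compose --version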

    Step 2: Clone the Envoy repo and start all of our containers

    If you have not cloned the Envoy repo, clone it with git clone git@github.com:envoyproxy/envoy or git clone https://github.com/envoyproxy/envoy.git.
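
    Now start all of the containers. A typical sequence, assuming the compose file lives in /examples/front-proxy as referenced above, is:

        $ cd envoy/examples/front-proxy
        $ docker-compose up --build -d
        $ docker-compose ps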

    Step 3: Test Envoy’s routing capabilities

    You can now send a request to each service via the front Envoy.

    For service1:

        $ curl -v localhost:8000/service/1
        * Trying 192.168.99.100...
        * Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
        > GET /service/1 HTTP/1.1
        > Host: 192.168.99.100:8000
        > User-Agent: curl/7.54.0
        > Accept: */*
        >
        < HTTP/1.1 200 OK
        < content-type: text/html; charset=utf-8
        < content-length: 89
        < x-envoy-upstream-service-time: 1
        < server: envoy
        < date: Fri, 26 Aug 2018 19:39:19 GMT
        <
        Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
        * Connection #0 to host 192.168.99.100 left intact

    For service2:

        $ curl -v localhost:8000/service/2
        * Trying 192.168.99.100...
        * Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
        > GET /service/2 HTTP/1.1
        > Host: 192.168.99.100:8000
        > User-Agent: curl/7.54.0
        > Accept: */*
        >
        < HTTP/1.1 200 OK
        < content-type: text/html; charset=utf-8
        < content-length: 89
        < x-envoy-upstream-service-time: 2
        < server: envoy
        < date: Fri, 26 Aug 2018 19:39:23 GMT
        <
        Hello from behind Envoy (service 2)! hostname: 92f4a3737bbc resolvedhostname: 172.19.0.2
        * Connection #0 to host 192.168.99.100 left intact

    Step 4: Test Envoy’s load balancing capabilities

    Now let’s scale up our service1 nodes to demonstrate Envoy’s load balancing capabilities.
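
    One way to do this, assuming the compose service is named service1 (consistent with the cluster names in the stats output below), is docker-compose's scale command:

        $ docker-compose scale service1=3
        # Or, with newer docker-compose releases:
        $ docker-compose up --scale service1=3 -d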

    If we now send a request to service1 multiple times, the front Envoy will load balance the requests round-robin across the three service1 containers:

        $ curl -v localhost:8000/service/1
        * Trying 192.168.99.100...
        * Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
        > GET /service/1 HTTP/1.1
        > Host: 192.168.99.100:8000
        > User-Agent: curl/7.43.0
        > Accept: */*
        >
        < HTTP/1.1 200 OK
        < content-type: text/html; charset=utf-8
        < content-length: 89
        < x-envoy-upstream-service-time: 1
        < server: envoy
        < x-envoy-protocol-version: HTTP/1.1
        <
        Hello from behind Envoy (service 1)! hostname: 85ac151715c6 resolvedhostname: 172.19.0.3
        * Connection #0 to host 192.168.99.100 left intact
        $ curl -v localhost:8000/service/1
        * Trying 192.168.99.100...
        * Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
        > GET /service/1 HTTP/1.1
        > Host: 192.168.99.100:8000
        > User-Agent: curl/7.54.0
        > Accept: */*
        >
        < HTTP/1.1 200 OK
        < content-type: text/html; charset=utf-8
        < content-length: 89
        < x-envoy-upstream-service-time: 1
        < server: envoy
        < date: Fri, 26 Aug 2018 19:40:22 GMT
        <
        Hello from behind Envoy (service 1)! hostname: 20da22cfc955 resolvedhostname: 172.19.0.5
        * Connection #0 to host 192.168.99.100 left intact
        $ curl -v localhost:8000/service/1
        * Trying 192.168.99.100...
        * Connected to 192.168.99.100 (192.168.99.100) port 8000 (#0)
        > GET /service/1 HTTP/1.1
        > Host: 192.168.99.100:8000
        > User-Agent: curl/7.43.0
        > Accept: */*
        >
        < HTTP/1.1 200 OK
        < content-type: text/html; charset=utf-8
        < content-length: 89
        < x-envoy-upstream-service-time: 1
        < server: envoy
        < date: Fri, 26 Aug 2018 19:40:24 GMT
        < x-envoy-protocol-version: HTTP/1.1
        <
        Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
        * Connection #0 to host 192.168.99.100 left intact

    Step 5: Enter containers and curl services

    In addition to using curl from your host machine, you can also enter the containers themselves and curl from inside them. To enter a container, use docker-compose exec <container_name> /bin/bash. For example, we can enter the front-envoy container and curl the services locally:

        $ docker-compose exec front-envoy /bin/bash
        root@81288499f9d7:/# curl localhost:80/service/1
        Hello from behind Envoy (service 1)! hostname: 85ac151715c6 resolvedhostname: 172.19.0.3
        root@81288499f9d7:/# curl localhost:80/service/1
        Hello from behind Envoy (service 1)! hostname: 20da22cfc955 resolvedhostname: 172.19.0.5
        root@81288499f9d7:/# curl localhost:80/service/1
        Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
        root@81288499f9d7:/# curl localhost:80/service/2
        Hello from behind Envoy (service 2)! hostname: 92f4a3737bbc resolvedhostname: 172.19.0.2

    Step 6: Enter containers and curl admin

    1. "version": "3ba949a9cb5b0b1cccd61e76159969a49377fd7d/1.10.0-dev/Clean/RELEASE/BoringSSL",
    2. "state": "LIVE",
    3. "command_line_options": {
    4. "base_id": "0",
    5. "concurrency": 4,
    6. "config_path": "/etc/front-envoy.yaml",
    7. "config_yaml": "",
    8. "allow_unknown_static_fields": false,
    9. "admin_address_path": "",
    10. "local_address_ip_version": "v4",
    11. "log_level": "info",
    12. "component_log_level": "",
    13. "log_format": "[%Y-%m-%d %T.%e][%t][%l][%n] %v",
    14. "log_path": "",
    15. "hot_restart_version": false,
    16. "service_cluster": "front-proxy",
    17. "service_node": "",
    18. "service_zone": "",
    19. "mode": "Serve",
    20. "disable_hot_restart": false,
    21. "enable_mutex_tracing": false,
    22. "restart_epoch": 0,
    23. "cpuset_threads": false,
    24. "file_flush_interval": "10s",
    25. "drain_time": "600s",
    26. "parent_shutdown_time": "900s"
    27. },
    28. "uptime_current_epoch": "401s",
    29. "uptime_all_epochs": "401s"
    30. }

    Likewise, we can curl the admin's stats endpoint:

        root@e654c2c83277:/# curl localhost:8001/stats
        cluster.service1.external.upstream_rq_200: 7
        ...
        cluster.service1.membership_change: 2
        cluster.service1.membership_total: 3
        ...
        cluster.service1.upstream_cx_http2_total: 3
        ...
        cluster.service1.upstream_rq_total: 7
        ...
        cluster.service2.external.upstream_rq_200: 2
        ...
        cluster.service2.membership_change: 1
        cluster.service2.membership_total: 1
        ...
        cluster.service2.upstream_cx_http2_total: 1
        ...

    Notice that we can get the number of members of the upstream clusters, the number of requests they have fulfilled, information about HTTP ingress, and a plethora of other useful stats.
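
    The /stats endpoint also accepts a filter query parameter (a regular expression), which is convenient when you only care about one cluster:

        root@e654c2c83277:/# curl "localhost:8001/stats?filter=cluster.service1"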