Front Proxy

    (Diagram: the docker compose deployment, with the front Envoy, service1, and service2 on the envoymesh network.)

    All incoming requests are routed via the front Envoy, which acts as a reverse proxy sitting on the edge of the envoymesh network. Ports 8080, 8443, and 8001 are exposed by docker compose to handle HTTP calls to the services, HTTPS calls to the services, and requests to /admin, respectively.

    Moreover, notice that all traffic routed by the front Envoy to the service containers is actually routed to the service Envoys (routes set up in /examples/front-proxy/front-envoy.yaml).

    In turn, the service Envoys route the request to the Flask app via the loopback address (routes set up in each service's Envoy configuration). This setup illustrates the advantage of running service Envoys co-located with your services: all requests are handled by the service Envoy and efficiently routed to your services.
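The topology can be sketched as a compose file. This is an illustrative fragment only, not the actual /examples/front-proxy/docker-compose.yaml; the build contexts and image details are omitted, and the service names mirror the ones used in this walkthrough:

```yaml
# Illustrative sketch of the compose topology (not the actual file).
services:
  front-envoy:            # edge reverse proxy
    networks:
      - envoymesh
    ports:
      - "8080:8080"       # HTTP
      - "8443:8443"       # HTTPS
      - "8001:8001"       # Envoy admin
  service1:               # Flask app + co-located service Envoy
    networks:
      - envoymesh
  service2:
    networks:
      - envoymesh
networks:
  envoymesh: {}
```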

    The following documentation runs through the setup of Envoy described above.

    Step 1: Install Docker

    Ensure that you have recent versions of docker and docker-compose installed.

    A simple way to achieve this is via Docker Desktop.

    Step 2: Clone the Envoy repo

    $ git clone https://github.com/envoyproxy/envoy.git
    Step 3: Start all of our containers

    $ pwd
    envoy/examples/front-proxy
    $ docker-compose build --pull
    $ docker-compose up -d
    $ docker-compose ps
                Name                        Command              State                               Ports
    ------------------------------------------------------------------------------------------------------------------------------------------------------
    front-proxy_front-envoy_1   /docker-entrypoint.sh /bin ...   Up      10000/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8001->8001/tcp, 0.0.0.0:8443->8443/tcp
    front-proxy_service1_1      /bin/sh -c /usr/local/bin/ ...   Up      10000/tcp, 8000/tcp
    front-proxy_service2_1      /bin/sh -c /usr/local/bin/ ...   Up      10000/tcp, 8000/tcp

    Step 4: Test Envoy’s routing capabilities

    You can now send a request to both services via the front-envoy.

    For service1:

    $ curl -v localhost:8080/service/1
    *   Trying ::1...
    * TCP_NODELAY set
    * Connected to localhost (::1) port 8080 (#0)
    > GET /service/1 HTTP/1.1
    > Host: localhost:8080
    > User-Agent: curl/7.64.1
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 92
    < server: envoy
    < date: Mon, 06 Jul 2020 06:20:00 GMT
    < x-envoy-upstream-service-time: 2
    <
    Hello from behind Envoy (service 1)! hostname: 36418bc3c824 resolvedhostname: 192.168.160.4

    For service2, the analogous request is curl -v localhost:8080/service/2.

    Notice that each request, while sent to the front Envoy, was correctly routed to the respective application.
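The prefix matching the front Envoy performs to pick an upstream can be sketched in a few lines. The route table and cluster names below are illustrative stand-ins mirroring the compose service names, not the actual front-envoy.yaml contents:

```python
# Minimal sketch of prefix-based routing, in the spirit of what the
# front Envoy does for /service/1 and /service/2 (names illustrative).
ROUTES = [
    ("/service/1", "service1"),
    ("/service/2", "service2"),
]

def match_cluster(path: str):
    """Return the upstream cluster for the first matching path prefix."""
    for prefix, cluster in ROUTES:
        if path.startswith(prefix):
            return cluster
    return None

print(match_cluster("/service/1"))  # service1
print(match_cluster("/service/2"))  # service2
```

Envoy evaluates routes in order and uses the first match, which is why more specific prefixes are listed before broader ones.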

    We can also use HTTPS to call services behind the front Envoy. For example, calling service1:

    $ curl https://localhost:8443/service/1 -k -v
    *   Trying ::1...
    * TCP_NODELAY set
    * Connected to localhost (::1) port 8443 (#0)
    * ALPN, offering h2
    * ALPN, offering http/1.1
    * successfully set certificate verify locations:
    *   CAfile: /etc/ssl/cert.pem
        CApath: none
    * TLSv1.2 (OUT), TLS handshake, Client hello (1):
    * TLSv1.2 (IN), TLS handshake, Server hello (2):
    * TLSv1.2 (IN), TLS handshake, Certificate (11):
    * TLSv1.2 (IN), TLS handshake, Server key exchange (12):
    * TLSv1.2 (IN), TLS handshake, Server finished (14):
    * TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
    * TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
    * TLSv1.2 (OUT), TLS handshake, Finished (20):
    * TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
    * TLSv1.2 (IN), TLS handshake, Finished (20):
    * SSL connection using TLSv1.2 / ECDHE-RSA-CHACHA20-POLY1305
    * ALPN, server did not agree to a protocol
    * Server certificate:
    *  subject: CN=front-envoy
    *  start date: Jul  5 15:18:44 2020 GMT
    *  expire date: Jul  5 15:18:44 2021 GMT
    *  issuer: CN=front-envoy
    *  SSL certificate verify result: self signed certificate (18), continuing anyway.
    > Host: localhost:8443
    > User-Agent: curl/7.64.1
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 92
    < server: envoy
    < date: Mon, 06 Jul 2020 06:17:14 GMT
    < x-envoy-upstream-service-time: 3
    <
    Hello from behind Envoy (service 1)! hostname: 36418bc3c824 resolvedhostname: 192.168.160.4
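The -k flag tells curl to skip certificate verification, which is needed here because the front Envoy presents a self-signed certificate (issuer and subject are both CN=front-envoy). For reference, the Python equivalent of -k is a TLS context with verification disabled; a sketch:

```python
import ssl

# Build a TLS context that, like `curl -k`, skips server certificate
# verification. Acceptable only for local experiments like this demo.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Against a running deployment you could then call, e.g.:
#   import urllib.request
#   urllib.request.urlopen("https://localhost:8443/service/1", context=ctx)
```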

    Step 5: Test Envoy’s load balancing capabilities

    Now let’s scale up our service1 nodes to demonstrate the load balancing abilities of Envoy:

    $ docker-compose scale service1=3
    Creating and starting example_service1_2 ... done
    Creating and starting example_service1_3 ... done
    $ curl -v localhost:8080/service/1
    *   Trying ::1...
    * TCP_NODELAY set
    * Connected to localhost (::1) port 8080 (#0)
    > GET /service/1 HTTP/1.1
    > Host: localhost:8080
    > User-Agent: curl/7.64.1
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 92
    < server: envoy
    < date: Mon, 06 Jul 2020 06:21:47 GMT
    < x-envoy-upstream-service-time: 6
    <
    Hello from behind Envoy (service 1)! hostname: 3dc787578c23 resolvedhostname: 192.168.160.6
    $ curl -v localhost:8080/service/1
    *   Trying 192.168.99.100...
    * Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
    > GET /service/1 HTTP/1.1
    > Host: 192.168.99.100:8080
    > User-Agent: curl/7.54.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 89
    < x-envoy-upstream-service-time: 1
    < server: envoy
    < date: Fri, 26 Aug 2018 19:40:22 GMT
    <
    Hello from behind Envoy (service 1)! hostname: 3a93ece62129 resolvedhostname: 192.168.160.5
    $ curl -v localhost:8080/service/1
    *   Trying 192.168.99.100...
    * Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
    > GET /service/1 HTTP/1.1
    > Host: 192.168.99.100:8080
    > User-Agent: curl/7.43.0
    > Accept: */*
    >
    < HTTP/1.1 200 OK
    < content-type: text/html; charset=utf-8
    < content-length: 89
    < x-envoy-upstream-service-time: 1
    < server: envoy
    < date: Fri, 26 Aug 2018 19:40:24 GMT
    <
    Hello from behind Envoy (service 1)! hostname: 36418bc3c824 resolvedhostname: 192.168.160.4
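The changing hostnames above show Envoy cycling requests across the three service1 instances. The effect of round-robin balancing (the scheme Envoy applies here) can be sketched as follows; the backend addresses are illustrative, taken from the sample output:

```python
from itertools import cycle

# Three service1 backends, as after `docker-compose scale service1=3`
# (addresses illustrative).
backends = ["192.168.160.4", "192.168.160.5", "192.168.160.6"]
picker = cycle(backends)

# Successive requests land on successive backends, wrapping around.
chosen = [next(picker) for _ in range(6)]
print(chosen)
```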

    Step 6: Enter containers and curl services

    In addition to calling the services from your host machine, you can also enter the containers themselves and curl from inside them. To enter a container, use docker-compose exec <container_name> /bin/bash. For example, we can enter the front-envoy container and curl the services locally:

    When Envoy runs, it also exposes an admin interface on the port you configure. In the example configs, the admin interface is bound to port 8001. We can curl it to gain useful information:

    • /server_info provides information about the Envoy version you are running.

    • /stats provides statistics about the Envoy server.

    In this example, we can enter the front-envoy container to query the admin interface:

    $ docker-compose exec front-envoy /bin/bash
    root@e654c2c83277:/# curl localhost:8001/server_info
    {
      "version": "093e2ffe046313242144d0431f1bb5cf18d82544/1.15.0-dev/Clean/RELEASE/BoringSSL",
      "state": "LIVE",
      "hot_restart_version": "11.104",
      "command_line_options": {
        "base_id": "0",
        "use_dynamic_base_id": false,
        "base_id_path": "",
        "concurrency": 8,
        "config_path": "/etc/front-envoy.yaml",
        "config_yaml": "",
        "allow_unknown_static_fields": false,
        "reject_unknown_dynamic_fields": false,
        "ignore_unknown_dynamic_fields": false,
        "admin_address_path": "",
        "local_address_ip_version": "v4",
        "log_level": "info",
        "component_log_level": "",
        "log_format": "[%Y-%m-%d %T.%e][%t][%l][%n] [%g:%#] %v",
        "log_format_escaped": false,
        "log_path": "",
        "service_cluster": "front-proxy",
        "service_node": "",
        "service_zone": "",
        "drain_strategy": "Gradual",
        "mode": "Serve",
        "disable_hot_restart": false,
        "enable_mutex_tracing": false,
        "restart_epoch": 0,
        "cpuset_threads": false,
        "disabled_extensions": [],
        "bootstrap_version": 0,
        "hidden_envoy_deprecated_max_stats": "0",
        "hidden_envoy_deprecated_max_obj_name_len": "0",
        "file_flush_interval": "10s",
        "drain_time": "600s",
        "parent_shutdown_time": "900s"
      },
      "uptime_current_epoch": "188s",
      "uptime_all_epochs": "188s"
    }
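Because the admin endpoint returns JSON, its output is easy to consume programmatically. A sketch, using a trimmed copy of the payload shown above:

```python
import json

# A trimmed server_info payload, echoing the fields shown above.
payload = """
{
  "state": "LIVE",
  "command_line_options": {
    "service_cluster": "front-proxy",
    "concurrency": 8
  },
  "uptime_all_epochs": "188s"
}
"""

info = json.loads(payload)
print(info["state"])                                    # LIVE
print(info["command_line_options"]["service_cluster"])  # front-proxy
```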
    root@e654c2c83277:/# curl localhost:8001/stats
    cluster.service1.external.upstream_rq_200: 7
    ...
    cluster.service1.membership_change: 2
    cluster.service1.membership_total: 3
    ...
    cluster.service1.upstream_cx_http2_total: 3
    ...
    cluster.service1.upstream_rq_total: 7
    ...
    cluster.service2.external.upstream_rq_200: 2
    ...
    cluster.service2.membership_change: 1
    cluster.service2.membership_total: 1
    ...
    cluster.service2.upstream_cx_http2_total: 1
    ...

    Notice that we can get the number of members in upstream clusters, the number of requests they have fulfilled, information about HTTP ingress, and a plethora of other useful stats.
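Since /stats emits one plain-text "name: value" pair per line, a few lines of code suffice to turn it into a lookup table. A sketch, using stat names from the output above:

```python
# Parse Envoy's plain-text /stats format ("name: value" per line)
# into a dict, using a few of the lines shown above.
raw = """\
cluster.service1.membership_total: 3
cluster.service1.upstream_rq_total: 7
cluster.service2.membership_total: 1
"""

stats = {}
for line in raw.splitlines():
    name, _, value = line.partition(": ")
    stats[name] = int(value)

print(stats["cluster.service1.membership_total"])  # 3
```

Note that some real stats (histograms) use a different format, so a production parser would need to handle those separately.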