Set up a local cluster

    For testing and development deployments, the quickest and easiest way is to configure a local cluster. For a production deployment, refer to the clustering section.

    Run the following to deploy an etcd cluster as a standalone cluster:

        $ ./etcd

    If the binary is not present in the current working directory, it might be located either at $GOPATH/bin/etcd or at /usr/local/bin/etcd. Run the command from the appropriate path.
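The lookup described above can be sketched in code. Here is a small Python helper that checks the locations mentioned in the text and then falls back to the PATH; `find_etcd` is a hypothetical helper for illustration, not part of etcd's tooling:

```python
# Sketch: locate the etcd binary in the places mentioned above.
# find_etcd is a hypothetical helper, not part of etcd's tooling.
import os
import shutil

def find_etcd(candidates=None):
    """Return the first executable etcd path among the candidates,
    else fall back to searching PATH; None if nothing is found."""
    if candidates is None:
        gopath = os.environ.get("GOPATH", os.path.expanduser("~/go"))
        candidates = [
            "./etcd",
            os.path.join(gopath, "bin", "etcd"),
            "/usr/local/bin/etcd",
        ]
    for path in candidates:
        # Only accept regular files that are executable.
        if os.path.isfile(path) and os.access(path, os.X_OK):
            return path
    return shutil.which("etcd")
```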

    The running etcd member listens on localhost:2379 for client requests.
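That client endpoint can be probed from code with a plain TCP dial. The sketch below assumes nothing about etcd's protocol; `check_endpoint` is a hypothetical helper, not an etcd or etcdctl API:

```python
# Sketch: probe whether an etcd member is accepting TCP connections on
# its client port (localhost:2379 for the standalone member above).
# check_endpoint is a hypothetical helper, not part of etcd or etcdctl.
import socket

def check_endpoint(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and unreachable hosts.
        return False
```

While the member from the previous step is running, `check_endpoint("localhost", 2379)` should return True.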

    Use etcdctl to interact with the running cluster:

    1. Store an example key-value pair in the cluster:

        $ ./etcdctl put foo bar
        OK

      If OK is printed, the key-value pair was stored successfully.

    2. Retrieve the value of foo:

        $ ./etcdctl get foo
        bar

      If bar is returned, interaction with the etcd cluster is working as expected.

    Local multi-member cluster

    1. Install goreman to control Procfile-based applications.

    2. Start a cluster with goreman using etcd’s stock Procfile:

          $ goreman -f Procfile start

       The members start running. They listen on localhost:2379, localhost:22379, and localhost:32379 respectively for client requests.
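With three client endpoints available, a client can spread requests across them. The round-robin picker below is an illustrative sketch only; etcd's real client libraries ship their own load balancing:

```python
# Sketch: round-robin over the multi-member cluster's client endpoints.
# Illustrative only; etcd's real clients implement their own balancing.
from itertools import cycle

ENDPOINTS = ["localhost:2379", "localhost:22379", "localhost:32379"]

def endpoint_picker(endpoints=ENDPOINTS):
    """Yield endpoints in round-robin order, wrapping around forever."""
    return cycle(endpoints)

picker = endpoint_picker()
first_four = [next(picker) for _ in range(4)]  # wraps back to the first endpoint
```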

      Use etcdctl to interact with the running cluster:

      1. Print the list of members with etcdctl member list (the table below is the --write-out=table format):

        The list of etcd members is displayed as follows:

          +------------------+---------+--------+------------------------+------------------------+
          |        ID        | STATUS  |  NAME  |       PEER ADDRS       |      CLIENT ADDRS      |
          +------------------+---------+--------+------------------------+------------------------+
          | 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:2380  | http://127.0.0.1:2379  |
          | 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
          | fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
          +------------------+---------+--------+------------------------+------------------------+
      2. Store an example key-value pair in the cluster:

          $ etcdctl put foo bar

        If OK is printed, the key-value pair was stored successfully.
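The member table printed earlier is easy to post-process. The small parser below is an illustrative sketch, not part of etcdctl; `TABLE` is the sample output shown above:

```python
# Sketch: parse etcdctl's table-formatted member list into dictionaries.
# Illustrative helper, not part of etcdctl; TABLE is the sample output above.
TABLE = """\
+------------------+---------+--------+------------------------+------------------------+
|        ID        | STATUS  |  NAME  |       PEER ADDRS       |      CLIENT ADDRS      |
+------------------+---------+--------+------------------------+------------------------+
| 8211f1d0f64f3269 | started | infra1 | http://127.0.0.1:2380  | http://127.0.0.1:2379  |
| 91bc3c398fb3c146 | started | infra2 | http://127.0.0.1:22380 | http://127.0.0.1:22379 |
| fd422379fda50e48 | started | infra3 | http://127.0.0.1:32380 | http://127.0.0.1:32379 |
+------------------+---------+--------+------------------------+------------------------+
"""

def parse_member_table(text):
    """Split the ASCII table into one dict per member, keyed by the header row."""
    rows = [line for line in text.splitlines() if line.startswith("|")]
    cells = [[c.strip() for c in row.strip("|").split("|")] for row in rows]
    header, body = cells[0], cells[1:]
    return [dict(zip(header, row)) for row in body]

members = parse_member_table(TABLE)
```

For example, `[m["NAME"] for m in members]` yields the three member names.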

      To exercise etcd’s fault tolerance, stop a member and attempt to retrieve the key.

      1. The Procfile lists the properties of the multi-member cluster. For example, consider the member with the process name etcd2, and stop it with goreman run stop etcd2.

      2. Store a key:

          $ etcdctl put key hello
          OK
      3. Retrieve the key stored in the previous step (etcdctl get key); a running member returns hello.

      4. Retrieve the key from the stopped member:

          $ etcdctl --endpoints=localhost:22379 get key

        The command should display an error caused by connection failure:

          2017/06/18 23:07:35 grpc: Conn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 127.0.0.1:22379: getsockopt: connection refused"; Reconnecting to "localhost:22379"
          Error: grpc: timed out trying to connect
      5. Restart the stopped member:

          $ goreman run restart etcd2
      6. Get the key from the restarted member:

          $ etcdctl --endpoints=localhost:22379 get key

        Restarting the member re-establishes the connection, and etcdctl is again able to retrieve the key successfully. To learn more about interacting with etcd, read the interacting with etcd section.
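The fault-tolerance exercise above can be sketched client-side: try each endpoint in turn and skip the ones that refuse connections. `get_with_failover` and the `dial` callable are illustrative, not etcd's client API:

```python
# Sketch: client-side failover across cluster endpoints, mirroring the
# stop/retrieve exercise above. Illustrative only; real etcd clients
# implement their own endpoint failover.
def get_with_failover(endpoints, dial):
    """Return the first successful dial(endpoint); raise if every endpoint fails."""
    last_err = None
    for endpoint in endpoints:
        try:
            return dial(endpoint)
        except ConnectionRefusedError as err:
            # e.g. the stopped member at localhost:22379; try the next one.
            last_err = err
    raise ConnectionError(f"all endpoints failed: {last_err}")
```

With etcd2 stopped, a dial over ["localhost:22379", "localhost:32379"] would skip the refused endpoint and succeed on the next member.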