Create universe - Multi-region

    • Run the CassandraKeyValue workload
    • Read data from the local data center (low latency timeline consistent reads)
    • Verify the latencies of the overall app

    We are going to create a multi-region universe on the GCP cloud provider. Click Create Universe and enter the following intent.

    • Enter a universe name: helloworld2
    • Enter the set of regions: Oregon, Northern Virginia, Tokyo
    • Change instance type: n1-standard-8
    • Add the following G-Flag for Master and T-Server: leader_failure_max_missed_heartbeat_periods = 10. Since the data is globally replicated, RPC latencies are higher, so we use this flag to increase the failure-detection interval in such a higher-RPC-latency deployment. See the screenshot below.

    Click Create.

    Wait for the universe to be created. Note that YugaWare can manage multiple universes, as shown below.

    Multiple Universes in YugaWare

    Once the universe is created, you should see something like the screenshot below in the universe overview.

    You can browse to the nodes tab of the universe to see a list of nodes. Note that the nodes are across the different geographic regions.

    Browse to the cloud provider’s instances page. In this example, since we are using Google Cloud Platform as the cloud provider, browse to the VM Instances page and search for instances that have helloworld2 in their name. You should see something as follows. It is easy to verify that the instances were created in the appropriate regions.
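    If you prefer the command line, the same check can be sketched with the gcloud CLI (assuming it is installed and configured for the project hosting the universe; the filter expression is a regex match on the instance name):

    ```shell
    # List only this universe's instances; "name~helloworld2" is a regex
    # filter on the instance name. Guarded so the snippet is safe to run
    # on machines without the gcloud CLI installed.
    FILTER="name~helloworld2"
    if command -v gcloud >/dev/null 2>&1; then
      gcloud compute instances list --filter="$FILTER"
    else
      echo "gcloud CLI not installed; use the VM Instances page instead"
    fi
    ```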

    Instances for a Pending Universe

    In this section, we are going to connect to each node and perform the following:

    • Run the CassandraKeyValue workload
    • Write data with global consistency (higher latencies because we chose nodes in far away regions)

    Browse to the nodes tab to find the nodes and click on the Connect button. This should bring up a dialog showing how to connect to the nodes.

    Create three Bash terminals and connect to each of the nodes by running the commands shown in the popup above. We are going to start a workload from each of the nodes. Below is a screenshot of the terminals.

    Multi-region universe node terminals

    On each of the terminals, do the following.

    • Install Java.
    • Switch to the yugabyte user.
    • Export the environment variable with the node addresses.

    Export this into a shell variable on each of the database nodes you connected to. Remember to replace the IP addresses below with those shown by YugaWare.
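    As a sketch, the setup bullets plus the export might look like the following on each node (the yum package name and the 10.x addresses are assumptions and placeholders; use the values YugaWare shows for your universe):

    ```shell
    # Setup on each node (run the sudo steps interactively first; yum is
    # an assumption for these CentOS-based hosts):
    #   sudo yum install -y java
    #   sudo su - yugabyte
    # Placeholder IPs -- replace with the private IPs of your nodes as
    # shown in YugaWare; 9042 is the YCQL port.
    export CIP_ADDR=10.138.0.3:9042,10.146.0.4:9042,10.150.0.5:9042
    echo "$CIP_ADDR"
    ```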

    Run the following command on each of the nodes. Remember to substitute <REGION> with the region code for each node.
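    A hedged sketch of that command, based on the yb-sample-apps workload generator bundled on each node (the jar path and flag names are assumptions; confirm with --help before running):

    ```shell
    # Build the workload command; substitute the region code for this node.
    # Flags sketched from the yb-sample-apps generator (confirm with --help):
    # --multi_region / --with_local_dc direct reads to the local data center.
    REGION="<REGION>"   # e.g. us-west1 on the Oregon node
    CMD="java -jar /home/yugabyte/tserver/java/yb-sample-apps.jar \
     --workload CassandraKeyValue --nodes $CIP_ADDR --nouuid \
     --num_threads_write 1 --num_threads_read 32 \
     --multi_region --with_local_dc $REGION"
    echo "$CMD"   # review the command, then launch it with: eval "$CMD"
    ```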

    You can find the region codes for each of the nodes by browsing to the nodes tab for this universe in YugaWare. A screenshot is shown below. In this example, the value for <REGION> is:

    • us-east4 for node yb-dev-helloworld2-n1
    • asia-northeast1 for node yb-dev-helloworld2-n2
    • us-west1 for node yb-dev-helloworld2-n3

    Region Codes For Universe Nodes

    Recall that we expect the app to have the following characteristics based on its deployment configuration:

    • Global consistency on writes, which would cause higher latencies in order to replicate data across multiple geographic regions.
    • Low latency reads from the nearest data center, which offers timeline consistency (similar to async replication).

    Let us verify this by browsing to the metrics tab of the universe in YugaWare to see the overall performance of the app. It should look similar to the screenshot below.

    • Read latency is 0.23 ms across all regions. Note that the app is performing 100K reads/sec across the regions (about 33K reads/sec in each region).

    It is possible to repeat the same experiment with the RedisKeyValue app and get similar results.