Node

    You can manage additional nodes with Pigsty and use them to deploy various databases or your applications.

    The nodes managed by Pigsty are adjusted by the nodes.yml playbook to the state described by the config inventory, and the node monitoring and log collection components are installed, so you can check node status and logs from the monitoring system.
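
    For example, a typical invocation might look like this (a sketch, assuming you run it from the Pigsty home directory; -l is the standard Ansible limit flag):

    ```bash
    # bring all nodes of the pg-test cluster to the state described in the inventory
    ./nodes.yml -l pg-test
    ```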

    Each node has identity parameters that are configured by parameters in <cluster>.hosts and <cluster>.vars.

    Pigsty also uses the IP address as a unique node identifier: it is the inventory_hostname, reflected as the key of the <cluster>.hosts object. A node may have multiple interfaces and IP addresses, but you must explicitly designate one as the PRIMARY IP ADDRESS, which should be an intranet IP used for service access. It is not mandatory to use that same IP address to SSH from the meta node; you can use an SSH tunnel or a jump server with Ansible connection parameters.
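
    For instance, here is a minimal sketch of reaching a node through a different SSH address using standard Ansible connection variables (the jump-host name and port below are hypothetical):

    ```yaml
    pg-test:
      hosts:
        10.10.10.11:                             # PRIMARY IP: node identity & service access
          ansible_host: node-1.jump.example.com  # hypothetical SSH alias behind a jump server
          ansible_port: 22022                    # hypothetical non-default SSH port
    ```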

    The following cluster configuration declares a three-node cluster.
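
    (A sketch consistent with the monitoring labels below; nodename and node_cluster are Pigsty node identity parameters, and the 10.10.10.12 member is inferred from the three-node claim.)

    ```yaml
    pg-test:
      hosts:
        10.10.10.11: { nodename: pg-test-1 }
        10.10.10.12: { nodename: pg-test-2 }
        10.10.10.13: { nodename: pg-test-3 }
      vars:
        node_cluster: pg-test
    ```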

    In the monitoring system, the time-series monitoring data are labeled as follows.

    1. node_load1{cls="pg-test", ins="pg-test-1", ip="10.10.10.11", job="nodes"}
    2. node_load1{cls="pg-test", ins="pg-test-2", ip="10.10.10.12", job="nodes"}
    3. node_load1{cls="pg-test", ins="pg-test-3", ip="10.10.10.13", job="nodes"}
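
    These identity labels make instance- and cluster-level queries straightforward. For example (standard PromQL, shown as an illustration):

    ```promql
    avg(node_load1{cls="pg-test"})   # average 1-minute load across the whole cluster
    node_load1{ins="pg-test-1"}      # 1-minute load of a single instance
    ```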

    Pigsty uses an exclusive deployment policy for PGSQL. This means the node's identity and PGSQL's identity are interchangeable. The pg_hostname parameter is designed to assign the Postgres identity to its underlying node: pg_instance and pg_cluster will be assigned to the node's nodename and node_cluster.
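
    A sketch of how this assignment might be declared (pg_hostname is an existing Pigsty parameter; the single-host layout here is illustrative):

    ```yaml
    pg-test:
      hosts:
        10.10.10.11: { pg_seq: 1, pg_role: primary }
      vars:
        pg_cluster: pg-test
        pg_hostname: true   # borrow PG identity: nodename becomes pg-test-1,
                            # node_cluster becomes pg-test
    ```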

    In addition to the default node services, the following services are available on PGSQL nodes.