Configure Admin Console

    Configuring YugaWare, the YugabyteDB Admin Console, is straightforward. A randomly generated password for the YugaWare config database is pre-filled; make a note of it for future use, or change it to a password of your choice. The storage path, the directory on the YugaWare host where all YugaWare data will be stored, is also pre-filled (by default, /opt/yugabyte). Clicking Save on this page takes you to the Replicated Dashboard.

    For airgapped installations, all the containers powering the YugaWare application are already available with Replicated. For non-airgapped installations, these containers are downloaded from the Quay.io Registry when the Dashboard is first launched. Replicated automatically starts the application as soon as all the container images are available.
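
    Once the application reports as started, a quick sanity check is to list the running containers. This is only a sketch; it assumes docker is on your PATH and that the YugaWare container names carry a "yuga" prefix, as the cleanup command later in this document suggests.

    $ docker ps | grep "yuga"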

    Replicated Dashboard

    Click on “View release history” to see the release history of the YugaWare application.

    After starting the YugaWare application, you must register a new tenant in YugaWare by following the instructions in the section below.

    Go to http://yugaware-host-public-ip/register to register a tenant account. Note that by default YugaWare runs as a single-tenant application.
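
    If the registration page does not load in your browser, it can help to confirm that the application is reachable over HTTP at all. A minimal check, assuming curl is available on the machine you are browsing from:

    $ curl -sI http://yugaware-host-public-ip/register | head -n 1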

    Register

    Logging in

    By default, http://yugaware-host-public-ip redirects to the login page. Log in to the application using the credentials you provided during the Register customer step.

    By clicking the dropdown at the top right, or by going directly to http://yugaware-host-public-ip/profile, you can update the customer profile you provided during the Register customer step.

    Profile

    The next step is to configure one or more cloud providers in YugaWare.

    Backup

    We recommend a weekly machine snapshot and weekly backups of /opt/yugabyte.

    Taking a machine snapshot and backing up this directory before performing an upgrade is also recommended.
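
    As a sketch of what such a backup could look like, assuming a simple tar archive written to a hypothetical /backup directory (adjust the destination to your environment):

    # /backup is a placeholder destination; make sure it exists and has enough space
    $ sudo tar -czf /backup/yugaware-$(date +%F).tar.gz /opt/yugabyte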

    Upgrade

    Upgrades to YugaWare are managed seamlessly in the Replicated UI. Whenever a new YugaWare version is available, the Replicated UI indicates that an upgrade is ready, and you can apply it whenever you wish.

    Upgrades to Replicated are as simple as rerunning the Replicated install command. This will upgrade Replicated components with the latest build.
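
    The exact install command depends on how Replicated was originally installed (online versus airgapped), so reuse the command from your original installation. For a standard online installation it is typically:

    $ curl -sSL https://get.replicated.com/docker | sudo bash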

    Uninstall

    To remove YugaWare, first stop the yugaware application. Replace <appid> with the application ID of yugaware; one way to look it up is shown below.
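
    If you do not have the application ID handy, the Replicated CLI can usually list the installed applications. This assumes the CLI lives at /usr/local/bin/replicated, as in the other commands in this section:

    $ /usr/local/bin/replicated apps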

    $ /usr/local/bin/replicated app <appid> stop

    Remove the yugaware application:

    $ /usr/local/bin/replicated app <appid> rm

    Remove all yugaware container images:

    $ docker images | grep "yuga" | awk '{print $3}' | xargs docker rmi -f

    Delete the mapped storage directory (the directory you configured during the Configure Admin Console step).
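
    A minimal sketch, assuming the default storage path from the configuration step; double-check the path before deleting anything:

    # assumes the default storage path /opt/yugabyte
    $ sudo rm -rf /opt/yugabyte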

    Finally, uninstall Replicated itself by following the instructions in the Replicated documentation.

    Troubleshoot

    If your host has SELinux turned on, docker-engine may not be able to connect with the host. Run the following commands to open the required ports using firewall exceptions.

    sudo firewall-cmd --zone=trusted --add-interface=docker0
    sudo firewall-cmd --zone=public --add-port=80/tcp
    sudo firewall-cmd --zone=public --add-port=443/tcp
    sudo firewall-cmd --zone=public --add-port=5432/tcp
    sudo firewall-cmd --zone=public --add-port=9000/tcp
    sudo firewall-cmd --zone=public --add-port=9090/tcp
    sudo firewall-cmd --zone=public --add-port=32769/tcp
    sudo firewall-cmd --zone=public --add-port=32770/tcp
    sudo firewall-cmd --zone=public --add-port=9874-9879/tcp
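
    Note that firewall-cmd changes made without --permanent apply only to the runtime configuration and are lost after a firewalld reload or reboot. If you want them to persist, one option is to copy the runtime rules into the permanent configuration:

    sudo firewall-cmd --runtime-to-permanent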

    If your YugaWare host is not able to SSH to the data nodes without a password, follow the steps below. First, generate a key pair:

    $ ssh-keygen -t rsa

    Then set up passwordless SSH to the data nodes with private IPs 10.1.13.150, 10.1.13.151, and 10.1.13.152:

    $ for IP in 10.1.13.150 10.1.13.151 10.1.13.152; do
        ssh $IP mkdir -p .ssh;
        cat ~/.ssh/id_rsa.pub | ssh $IP 'cat >> .ssh/authorized_keys';
      done
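
    To confirm that key-based login now works, a quick check is to run a trivial command on each node with BatchMode enabled, which makes ssh fail instead of prompting for a password:

    $ for IP in 10.1.13.150 10.1.13.151 10.1.13.152; do
        ssh -o BatchMode=yes $IP hostname;   # fails instead of prompting if keys are not set up
      done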

    Check the resources available on the data nodes with private IPs 10.1.12.103, 10.1.12.104, and 10.1.12.105. Sample output is shown below, followed by one way to collect these figures.

    10.1.12.103
      CPUs: 72
      Mem: 251G
      Disk: /dev/sda2 160G 13G 148G 8% /

    10.1.12.104
      Mem: 251G
      Disk: /dev/sda2 208G 22G 187G 11% /

    10.1.12.105
      CPUs: 88
      Mem: 251G
      Disk: /dev/sda2 208G 5.1G 203G 3% /
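
    One way to gather these figures, assuming standard coreutils and procps (nproc, free, df) on the data nodes; the output labels will differ slightly from the sample above:

    for IP in 10.1.12.103 10.1.12.104 10.1.12.105; do
      echo $IP;                                        # node being inspected
      ssh $IP 'nproc; free -h | grep Mem; df -h /';    # CPU count, memory, root disk usage
    done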

    Create the /mnt/data0 directory on each data node:

    for IP in 10.1.12.103 10.1.12.104 10.1.12.105; do ssh $IP mkdir -p /mnt/data0; done

    Add firewall exceptions on the data nodes with private IPs 10.1.12.103, 10.1.12.104, and 10.1.12.105.

    for IP in 10.1.12.103 10.1.12.104 10.1.12.105
    do
      ssh $IP firewall-cmd --zone=public --add-port=7000/tcp;
      ssh $IP firewall-cmd --zone=public --add-port=7100/tcp;
      ssh $IP firewall-cmd --zone=public --add-port=9000/tcp;
      ssh $IP firewall-cmd --zone=public --add-port=9100/tcp;
      ssh $IP firewall-cmd --zone=public --add-port=11000/tcp;
      ssh $IP firewall-cmd --zone=public --add-port=12000/tcp;
      ssh $IP firewall-cmd --zone=public --add-port=9300/tcp;
      ssh $IP firewall-cmd --zone=public --add-port=9042/tcp;
    done
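
    As with the YugaWare host above, these rules affect only the runtime firewalld configuration. Assuming you want them to survive a reload or reboot, one option is to persist them on each node:

    for IP in 10.1.12.103 10.1.12.104 10.1.12.105; do
      ssh $IP firewall-cmd --runtime-to-permanent;
    done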