Pulsar SQL Deployment and Configuration
There are several configurations for the Presto Pulsar connector. The properties file that contains these configurations can be found at ${project.root}/conf/presto/catalog/pulsar.properties. The configurations for the connector and their default values are described below.
If you already have an existing Presto cluster, you can copy the Presto Pulsar connector plugin to your existing cluster. You can download the archived plugin package via:
$ wget https://archive.apache.org/dist/pulsar/pulsar-2.4.0/apache-pulsar-2.4.0-bin.tar.gz
Please note that the Getting Started guide shows you how to easily set up a standalone single-node environment to experiment with.
Pulsar SQL is powered by Presto, so many of the deployment configurations for the Pulsar SQL worker are the same as for Presto.
You can use the same CLI args as the Presto launcher:
$ ./bin/pulsar sql-worker --help
Usage: launcher [options] command
Commands: run, start, stop, restart, kill, status
Options:
-h, --help show this help message and exit
-v, --verbose Run verbosely
--etc-dir=DIR Defaults to INSTALL_PATH/etc
--launcher-config=FILE
--node-config=FILE Defaults to ETC_DIR/node.properties
--jvm-config=FILE Defaults to ETC_DIR/jvm.config
--config=FILE Defaults to ETC_DIR/config.properties
--log-levels-file=FILE
Defaults to ETC_DIR/log.properties
--pid-file=FILE Defaults to DATA_DIR/var/run/launcher.pid
--launcher-log-file=FILE
Defaults to DATA_DIR/var/log/launcher.log (only in
daemon mode)
--server-log-file=FILE
Defaults to DATA_DIR/var/log/server.log (only in
daemon mode)
-D NAME=VALUE Set a Java system property
A set of default configs for the cluster is located in ${project.root}/conf/presto
and will be used by default. You can change these files to customize your deployment.
You can also start the worker as a daemon process:
$ ./bin/pulsar sql-worker start
For example, to deploy a Pulsar SQL/Presto cluster on 3 nodes, you can do the following:
First, copy the Pulsar binary distribution to all three nodes.
The first node will run the Presto coordinator. The minimal configuration in ${project.root}/conf/presto/config.properties
can be the following:
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery.uri=<coordinator-url>
Also, modify the pulsar.broker-service-url and pulsar.zookeeper-uri configs in ${project.root}/conf/presto/catalog/pulsar.properties on those nodes accordingly.
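For instance, for a broker running locally, the catalog file might look like the following sketch (the localhost addresses and ports are illustrative assumptions; substitute the broker service URL and ZooKeeper connection string of your own cluster):

```properties
# Illustrative values for a local deployment -- replace with your endpoints
pulsar.broker-service-url=http://localhost:8080
pulsar.zookeeper-uri=localhost:2181
```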
Afterwards, you can start the coordinator by just running:
$ ./bin/pulsar sql-worker run
For the other two nodes, which will only serve as worker nodes, the configuration can be the following:
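A minimal worker configuration in ${project.root}/conf/presto/config.properties could be the following sketch. It mirrors the coordinator example above with coordinator set to false; the port and memory values are assumptions carried over from that example, and <coordinator-url> should point at the coordinator node:

```properties
coordinator=false
http-server.http.port=8080
query.max-memory=50GB
query.max-memory-per-node=1GB
discovery.uri=<coordinator-url>
```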
Also, modify the pulsar.broker-service-url and pulsar.zookeeper-uri configs in ${project.root}/conf/presto/catalog/pulsar.properties accordingly.
You can also start the worker by just running:
$ ./bin/pulsar sql-worker run
You can check the status of your cluster from the SQL CLI. To start the SQL CLI:
$ ./bin/pulsar sql --server <coordinator_url>
You can then run the following command to check the status of your nodes:
presto> SELECT * FROM system.runtime.nodes;
node_id | http_uri | node_version | coordinator | state
---------+-------------------------+--------------+-------------+--------
1 | http://192.168.2.1:8081 | testversion | true | active
3 | http://192.168.2.2:8081 | testversion | false | active
2 | http://192.168.2.3:8081 | testversion | false | active
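Once all nodes report an active state, you can query data stored in Pulsar topics from the same CLI. A hypothetical example, assuming a topic named my-topic exists in the public/default namespace (the topic name is illustrative):

```sql
presto> SELECT * FROM pulsar."public/default"."my-topic" LIMIT 10;
```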