Accumulo Connector

    The Accumulo connector supports reading and writing data from Apache Accumulo. Please read this page thoroughly to understand the capabilities and features of the connector.

    Installing the Iterator Dependency

The Accumulo connector uses custom Accumulo iterators to push the constraints of a SQL predicate clause down to Accumulo for server-side filtering, a technique known as predicate pushdown. In order for the server-side iterators to work, you need to add the iterator jar file to Accumulo's lib/ext directory on each TabletServer node.

    Note that the jar file is compiled for Java 8. If your Accumulo cluster is running Java 7, you'll receive an Unsupported major.minor version 52.0 error in your TabletServer logs when you attempt to create an indexed table. You'll instead need to use the presto-accumulo-iterators jar file that is located at .

    Connector Configuration

    Create etc/catalog/accumulo.properties to mount the accumulo connector as the accumulo catalog, replacing the accumulo.xxx properties as required:

    connector.name=accumulo
    accumulo.instance=xxx
    accumulo.zookeepers=xxx
    accumulo.username=username
    accumulo.password=password

    Configuration Variables

    Unsupported Features

    The following features are not supported:

    • DELETE: Deletion of rows is not yet implemented for the connector.

    To begin working with data, simply use SQL to create a new table in Accumulo. By default, the first column of the table definition is set to the Accumulo row ID. This should be the primary key of your table; keep in mind that any INSERT statement containing the same row ID is effectively an UPDATE as far as Accumulo is concerned, since any previous data in the cell is overwritten. The row ID can be any valid Presto datatype. If the first column is not your primary key, you can set the row ID column using the row_id table property within the WITH clause of your table definition.

    Simply issue a CREATE TABLE statement to create a new Presto/Accumulo table:

    CREATE TABLE myschema.scientists (
      recordkey VARCHAR,
      name VARCHAR,
      age BIGINT,
      birthday DATE
    );

    DESCRIBE myschema.scientists;

      Column   |  Type   | Extra |                      Comment
    -----------+---------+-------+---------------------------------------------------
     recordkey | varchar |       | Accumulo row ID
     name      | varchar |       | Accumulo column name:name. Indexed: false
     age       | bigint  |       | Accumulo column age:age. Indexed: false
     birthday  | date    |       | Accumulo column birthday:birthday. Indexed: false

    This command will create a new Accumulo table with the recordkey column as the Accumulo row ID. The name, age, and birthday columns are mapped to auto-generated column family and qualifier values (which, in practice, are both identical to the Presto column name).

    When creating a table using SQL, you can optionally specify a column_mapping table property. The value of this property is a comma-delimited list of triples, presto column : accumulo column family : accumulo column qualifier, with one triple for every non-row ID column. This sets the mapping of the Presto column name to the corresponding Accumulo column family and column qualifier.
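The triple format can be sketched in a few lines of plain Java (an illustrative parse of the mapping string, not connector code):

```java
public class ColumnMapping {
    public static void main(String[] args) {
        // One triple per non-row ID column: presto column : family : qualifier
        String mapping = "name:metadata:name,age:metadata:age,birthday:metadata:date";
        for (String triple : mapping.split(",")) {
            String[] parts = triple.split(":");
            System.out.printf("Presto column %s -> Accumulo %s:%s%n",
                    parts[0], parts[1], parts[2]);
        }
    }
}
```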

    If you don’t specify the column_mapping table property, then the connector will auto-generate column names (respecting any configured locality groups). Auto-generation of column names is only available for internal tables, so if your table is external you must specify the column_mapping property.

    For a full list of table properties, see .

    For example:

    CREATE TABLE myschema.scientists (
      recordkey VARCHAR,
      name VARCHAR,
      age BIGINT,
      birthday DATE
    )
    WITH (
      column_mapping = 'name:metadata:name,age:metadata:age,birthday:metadata:date'
    );

    DESCRIBE myschema.scientists;

      Column   |  Type   | Extra |                    Comment
    -----------+---------+-------+-----------------------------------------------
     recordkey | varchar |       | Accumulo row ID
     name      | varchar |       | Accumulo column metadata:name. Indexed: false
     age       | bigint  |       | Accumulo column metadata:age. Indexed: false
     birthday  | date    |       | Accumulo column metadata:date. Indexed: false

    You can then issue INSERT statements to put data into Accumulo.

    Note

    While issuing INSERT statements is convenient, this method of loading data into Accumulo is low-throughput. You’ll want to use the Accumulo APIs to write Mutations directly to the tables. See the section on Loading Data for more details.

    INSERT INTO myschema.scientists VALUES
    ('row1', 'Grace Hopper', 109, DATE '1906-12-09' ),
    ('row2', 'Alan Turing', 103, DATE '1912-06-23' );

    SELECT * FROM myschema.scientists;

     recordkey |     name     | age |  birthday
    -----------+--------------+-----+------------
     row1      | Grace Hopper | 109 | 1906-12-09
     row2      | Alan Turing  | 103 | 1912-06-23
    (2 rows)

    As you'd expect, rows inserted into Accumulo via the shell or programmatically will also show up when queried. (The Accumulo shell thinks "-5321" is an option and not a number... so we'll just make TBL a little younger.)

    $ accumulo shell -u root -p secret
    root@default> table myschema.scientists
    root@default myschema.scientists> insert row3 metadata name "Tim Berners-Lee"
    root@default myschema.scientists> insert row3 metadata age 60
    root@default myschema.scientists> insert row3 metadata date 5321

    SELECT * FROM myschema.scientists;

     recordkey |      name       | age |  birthday
    -----------+-----------------+-----+------------
     row1      | Grace Hopper    | 109 | 1906-12-09
     row2      | Alan Turing     | 103 | 1912-06-23
     row3      | Tim Berners-Lee |  60 | 1984-07-27
    (3 rows)

      You can also drop tables using DROP TABLE. This command drops both the metadata and the Accumulo tables. See the External Tables section below for more details on internal and external tables.

      Indexing Columns

      Internally, the connector creates an Accumulo Range and packs it in a split. This split gets passed to a Presto Worker to read the data from the Range via a BatchScanner. When issuing a query that results in a full table scan, each Presto Worker gets a single Range that maps to a single tablet of the table. When issuing a query with a predicate (e.g. a WHERE x = 10 clause), Presto passes the values within the predicate (10) to the connector so it can use this information to scan less data. When the Accumulo row ID is used as part of the predicate clause, this narrows down the Range lookup to quickly retrieve a subset of data from Accumulo.

      But what about the other columns? If you’re frequently querying on non-row ID columns, you should consider using the indexing feature built into the Accumulo connector. This feature can drastically reduce query runtime when selecting a handful of values from the table, and the heavy lifting is done for you when loading data via Presto INSERT statements (though, keep in mind writing data to Accumulo via INSERT does not have high throughput).

      To enable indexing, add the index_columns table property and specify a comma-delimited list of Presto column names you wish to index (we use the string serializer here to make the example easier to read; in practice you should use the default lexicoder serializer).

      CREATE TABLE myschema.scientists (
        recordkey VARCHAR,
        name VARCHAR,
        age BIGINT,
        birthday DATE
      )
      WITH (
        serializer = 'string',
        index_columns = 'name,age,birthday'
      );

      After creating the table, we see there are two additional Accumulo tables that store the index and metrics.

      root@default> tables
      accumulo.metadata
      accumulo.root
      myschema.scientists
      myschema.scientists_idx
      myschema.scientists_idx_metrics
      trace

      After inserting data, we can look at the index table and see there are indexed values for the name, age, and birthday columns. The connector queries this index table to find the row IDs that match a predicate against an indexed column.

      INSERT INTO myschema.scientists VALUES
      ('row1', 'Grace Hopper', 109, DATE '1906-12-09'),
      ('row2', 'Alan Turing', 103, DATE '1912-06-23');

      root@default> scan -t myschema.scientists_idx
      -21011 metadata_date:row2 []
      -23034 metadata_date:row1 []
      103 metadata_age:row2 []
      109 metadata_age:row1 []
      Alan Turing metadata_name:row2 []
      Grace Hopper metadata_name:row1 []

      When issuing a query with a WHERE clause against indexed columns, the connector searches the index table for all row IDs that contain the value within the predicate. These row IDs are bundled into a Presto split as single-value Range objects (the number of row IDs per split is controlled by the value of accumulo.index_rows_per_split) and passed to a Presto worker to be configured in the BatchScanner which scans the data table.

      SELECT * FROM myschema.scientists WHERE age = 109;

       recordkey |     name     | age |  birthday
      -----------+--------------+-----+------------
       row1      | Grace Hopper | 109 | 1906-12-09
      (1 row)
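The bundling of row IDs into splits can be sketched roughly as follows (illustrative Java, not connector code; the chunk size stands in for the accumulo.index_rows_per_split setting):

```java
import java.util.List;

public class SplitBundler {
    public static void main(String[] args) {
        // Row IDs found in the index table for the predicate value
        List<String> rowIds = List.of("row1", "row4", "row7", "row9", "row12");
        int rowsPerSplit = 2; // stands in for accumulo.index_rows_per_split

        // Bundle row IDs into splits; each split becomes a set of
        // single-value Ranges handed to a worker's BatchScanner
        for (int i = 0; i < rowIds.size(); i += rowsPerSplit) {
            List<String> split =
                    rowIds.subList(i, Math.min(i + rowsPerSplit, rowIds.size()));
            System.out.println("split: " + split);
        }
    }
}
```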

      Loading Data

      The Accumulo connector supports loading data via INSERT statements; however, this method tends to be low-throughput and should not be relied on when throughput is a concern. Instead, use the PrestoBatchWriter tool that is provided as part of the presto-accumulo-tools subproject in the presto-accumulo repository.

      The PrestoBatchWriter is a wrapper class for the typical BatchWriter that leverages the Presto/Accumulo metadata to write Mutations to the main data table. In particular, it handles indexing the given mutations on any indexed columns. Usage of the tool is provided in the README of the presto-accumulo-tools subproject.

      External Tables

      By default, the tables created using SQL statements via Presto are internal tables; that is, both the Presto table metadata and the Accumulo tables are managed by Presto. When you create an internal table, the Accumulo table is created as well. You will receive an error if the Accumulo table already exists. When an internal table is dropped via Presto, the Accumulo table (and any index tables) are dropped as well.

      To change this behavior, set the external property to true when issuing the CREATE statement. This will make the table an external table, and a DROP TABLE command will only delete the metadata associated with the table. If the Accumulo tables do not already exist, they will be created by the connector.

      Creating an external table will set any configured locality groups, as well as the iterators on the index and metrics tables (if the table is indexed). In short, the only difference between an internal and an external table is that the connector deletes the Accumulo tables only when an internal table is dropped.

      External tables can be a bit more difficult to work with, as the data must be stored in the format the connector expects. If the data is not stored correctly, then you're gonna have a bad time. Users must provide a column_mapping property when creating the table. This creates the mapping of the Presto column name to the column family/qualifier for the cell of the table. The value of the cell is stored in the Value of the Accumulo key/value pair. By default, this value is expected to be serialized using Accumulo's lexicoder API. If you are storing values as strings, you can specify a different serializer using the serializer property of the table. See the Serializers section for more information.

      Next, we create the Presto external table.

      CREATE TABLE external_table (
        a VARCHAR,
        b BIGINT,
        c DATE
      )
      WITH (
        column_mapping = 'a:md:a,b:md:b,c:md:c',
        external = true,
        index_columns = 'b,c',
        locality_groups = 'foo:b,c'
      );

      After creating the table, usage of the table continues as usual:

      INSERT INTO external_table VALUES
      ('1', 1, DATE '2015-03-06'),
      ('2', 2, DATE '2015-03-07');

      SELECT * FROM external_table;

       a | b |     c
      ---+---+------------
       1 | 1 | 2015-03-06
       2 | 2 | 2015-03-07
      (2 rows)

      DROP TABLE external_table;

      After dropping the table, the table will still exist in Accumulo because it is external.

      root@default> tables
      accumulo.metadata
      accumulo.root
      external_table
      external_table_idx
      external_table_idx_metrics
      trace

      If we want to add a new column to the table, we can create the table again, specifying the new column. Any existing rows in the table will have a value of NULL for the new column. This command will re-configure the Accumulo tables, setting the locality groups and iterator configuration.

      CREATE TABLE external_table (
        a VARCHAR,
        b BIGINT,
        c DATE,
        d INTEGER
      )
      WITH (
        column_mapping = 'a:md:a,b:md:b,c:md:c,d:md:d',
        external = true,
        index_columns = 'b,c,d',
        locality_groups = 'foo:b,c,d'
      );

      SELECT * FROM external_table;

      Table Properties

      Table property usage example:

      CREATE TABLE myschema.scientists (
        recordkey VARCHAR,
        name VARCHAR,
        age BIGINT,
        birthday DATE
      )
      WITH (
        column_mapping = 'name:metadata:name,age:metadata:age,birthday:metadata:date',
        index_columns = 'name,age'
      );

      You can change the default value of a session property by using SET SESSION. Note that session properties are prefixed with the catalog name:

      SET SESSION accumulo.column_filter_optimizations_enabled = false;

      Adding Columns

      Adding a new column to an existing table cannot be done today via ALTER TABLE [table] ADD COLUMN [name] [type] because of the additional metadata required for the columns to work: the column family, the qualifier, and whether the column is indexed.

      Instead, you can use one of the utilities in the presto-accumulo-tools sub-project of the presto-accumulo repository. Documentation and usage can be found in the README.

      Serializers

      The Presto connector for Accumulo has a pluggable serializer framework for handling I/O between Presto and Accumulo. This enables end users to programmatically serialize and deserialize their special data formats within Accumulo, while abstracting away the complexity of the connector itself.

      There are two types of serializers currently available: a string serializer that treats values as Java Strings, and a lexicoder serializer that leverages Accumulo's Lexicoder API to store values. The default serializer is the lexicoder serializer, as it does not require expensive conversion operations back and forth between String objects and the Presto types – the cell's value is encoded as a byte array.

      Additionally, the lexicoder serializer does proper lexicographical ordering of numerical types like BIGINT or TIMESTAMP. This is essential for the connector to properly leverage the secondary index when querying for data.
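The ordering property can be illustrated with a simple sign-bit flip, the core idea behind lexicographic encodings of signed integers (an illustrative sketch, not the actual Lexicoder implementation; the scan output later on this page shows the same \x80... / \x7F... byte patterns for positive and negative values):

```java
public class OrderPreserving {
    // Flipping the sign bit maps a signed long to an unsigned value whose
    // byte-wise (lexicographic) order matches the signed numeric order.
    static long encode(long v) {
        return v ^ Long.MIN_VALUE;
    }

    public static void main(String[] args) {
        long[] values = {-23034L, -21011L, -1L, 0L, 103L, 109L};
        for (int i = 1; i < values.length; i++) {
            // Unsigned comparison of encodings preserves signed order
            if (Long.compareUnsigned(encode(values[i - 1]), encode(values[i])) >= 0) {
                throw new AssertionError("order not preserved");
            }
        }
        System.out.println("lexicographic order matches numeric order");
    }
}
```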

      You can change the default serializer by specifying the serializer table property, using either default (which is lexicoder), string, or lexicoder for the built-in types. You can also provide your own implementation by extending AccumuloRowSerializer, adding it to the Presto CLASSPATH, and specifying the fully-qualified Java class name as the value of the serializer property.

      CREATE TABLE myschema.scientists (
        recordkey VARCHAR,
        name VARCHAR,
        age BIGINT,
        birthday DATE
      )
      WITH (
        column_mapping = 'name:metadata:name,age:metadata:age,birthday:metadata:date',
        serializer = 'default'
      );

      INSERT INTO myschema.scientists VALUES
      ('row1', 'Grace Hopper', 109, DATE '1906-12-09' ),
      ('row2', 'Alan Turing', 103, DATE '1912-06-23' );

      root@default> scan -t myschema.scientists
      row1 metadata:age []    \x08\x80\x00\x00\x00\x00\x00\x00m
      row1 metadata:date []    \x08\x7F\xFF\xFF\xFF\xFF\xFF\xA6\x06
      row1 metadata:name []    Grace Hopper
      row2 metadata:age []    \x08\x80\x00\x00\x00\x00\x00\x00g
      row2 metadata:date []    \x08\x7F\xFF\xFF\xFF\xFF\xFF\xAD\xED
      row2 metadata:name []    Alan Turing
      CREATE TABLE myschema.stringy_scientists (
        recordkey VARCHAR,
        name VARCHAR,
        age BIGINT,
        birthday DATE
      )
      WITH (
        column_mapping = 'name:metadata:name,age:metadata:age,birthday:metadata:date',
        serializer = 'string'
      );

      INSERT INTO myschema.stringy_scientists VALUES
      ('row1', 'Grace Hopper', 109, DATE '1906-12-09' ),
      ('row2', 'Alan Turing', 103, DATE '1912-06-23' );

      root@default> scan -t myschema.stringy_scientists
      row1 metadata:age []    109
      row1 metadata:date []    -23034
      row1 metadata:name []    Grace Hopper
      row2 metadata:age []    103
      row2 metadata:date []    -21011
      row2 metadata:name []    Alan Turing
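The date values in the scan above are the number of days since the Unix epoch, which can be verified with plain java.time (no Accumulo dependency needed):

```java
import java.time.LocalDate;

public class EpochDayCheck {
    public static void main(String[] args) {
        // The string serializer stores DATE values as days since 1970-01-01
        System.out.println(LocalDate.of(1906, 12, 9).toEpochDay());  // prints -23034
        System.out.println(LocalDate.of(1912, 6, 23).toEpochDay()); // prints -21011
    }
}
```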
      CREATE TABLE myschema.custom_scientists (
        recordkey VARCHAR,
        name VARCHAR,
        age BIGINT,
        birthday DATE
      )
      WITH (
        column_mapping = 'name:metadata:name,age:metadata:age,birthday:metadata:date',
        serializer = 'my.serializer.package.MySerializer'
      );

      Metadata Management

      Metadata for the Presto/Accumulo tables is stored in ZooKeeper. You can (and should) issue SQL statements in Presto to create and drop tables; this is the easiest method of creating the metadata required to make the connector work. It is best to not mess with the metadata, but here are the details of how it is stored. Information is power.

      A root node in ZooKeeper holds all the mappings, and the format is as follows:

      /metadata-root/schema/table

      Here, metadata-root is the value of zookeeper.metadata.root in the config file (default is /presto-accumulo), schema is the Presto schema (which is identical to the Accumulo namespace name), and table is the Presto table name (again, identical to the Accumulo table name). The data of the table ZooKeeper node is a serialized AccumuloTable Java object (which resides in the connector code). This object contains the schema (namespace) name, table name, column definitions, the serializer to use for the table, and any additional table properties.
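As a concrete example, with the default metadata root the scientists table from earlier sections would live at the following znode path (a trivial illustration, not connector code):

```java
public class MetadataPath {
    public static void main(String[] args) {
        String metadataRoot = "/presto-accumulo"; // default zookeeper.metadata.root
        String schema = "myschema";               // Presto schema / Accumulo namespace
        String table = "scientists";              // Presto / Accumulo table name
        String znode = metadataRoot + "/" + schema + "/" + table;
        System.out.println(znode); // prints /presto-accumulo/myschema/scientists
    }
}
```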

      If you have a need to programmatically manipulate the ZooKeeper metadata for Accumulo, take a look at com.facebook.presto.accumulo.metadata.ZooKeeperMetadataManager for some Java code to simplify the process.

      Converting Table from Internal to External

      If your table is internal, you can convert it to an external table by deleting the corresponding znode in ZooKeeper, effectively making the table no longer exist as far as Presto is concerned. Then, create the table again using the same DDL, but adding the external = true table property.

      For example:

      1. We're starting with an internal table foo.bar that was created with the below DDL. If you have not previously defined the column_mapping table property (as in this example), be sure to describe the table before deleting the metadata; we'll need the column mappings when creating the external table.

      CREATE TABLE foo.bar (a VARCHAR, b BIGINT, c DATE)
      WITH (
        index_columns = 'b,c'
      );

      DESCRIBE foo.bar;

       Column |  Type   | Extra |              Comment
      --------+---------+-------+-------------------------------------
       a      | varchar |       | Accumulo row ID
       b      | bigint  |       | Accumulo column b:b. Indexed: true
       c      | date    |       | Accumulo column c:c. Indexed: true

      2. Using the ZooKeeper CLI, delete the corresponding znode. Note this uses the default ZooKeeper metadata root of /presto-accumulo.
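A session sketch, assuming ZooKeeper is reachable on localhost with the default metadata root (adjust the server address and znode path for your deployment):

```
$ zkCli.sh -server localhost:2181
[zk: localhost:2181(CONNECTED) 0] delete /presto-accumulo/foo/bar
```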

      3. Re-create the table using the same DDL as before, adding the external = true property. Note that if you had not previously defined the column_mapping, you'll need to add the property to the new DDL (external tables require this property to be set). The column mappings are in the output of the DESCRIBE statement.

      CREATE TABLE foo.bar (
        a VARCHAR,
        b BIGINT,
        c DATE
      )
      WITH (
        column_mapping = 'a:a:a,b:b:b,c:c:c',
        index_columns = 'b,c',
        external = true
      );