TimescaleDB API Reference

add_dimension()

Space partitions: Using space partitions is highly recommended for distributed hypertables to achieve efficient scale-out performance. For regular hypertables that exist only on a single node, additional partitioning can be used for specialized use cases, but it is not recommended for most users.

Space partitions use hashing: Every distinct item is hashed to one of N buckets. Remember that we are already using (flexible) time intervals to manage chunk sizes; the main purpose of space partitioning is to enable parallelization across multiple data nodes (in the case of distributed hypertables) or across multiple disks within the same time interval (in the case of single-node deployments).

In a distributed hypertable, space partitioning enables inserts to be parallelized across data nodes, even while the inserted rows share timestamps from the same time interval, and thus increases the ingest rate. Query performance also benefits from the ability to parallelize queries across nodes, particularly when full or partial aggregations can be “pushed down” to data nodes (e.g., as in the query avg(temperature) FROM conditions GROUP BY hour, location when using location as a space partition). Please see our multi-node documentation for more information.

Parallel I/O can help in two scenarios: (a) two or more concurrent queries should be able to read from different disks in parallel, or (b) a single query should be able to use query parallelization to read from multiple disks in parallel.

Thus, users looking for parallel I/O have two options:

  1. Use a RAID setup across multiple physical disks, and expose a single logical disk to the hypertable (i.e., via a single tablespace).

  2. For each physical disk, add a separate tablespace to the database. TimescaleDB allows you to add multiple tablespaces to a single hypertable (although under the covers, a hypertable’s chunks are spread across the tablespaces associated with that hypertable).

We recommend a RAID setup when possible, as it supports both forms of parallelization described above (i.e., separate queries to separate disks, single query to multiple disks in parallel). The multiple tablespace approach only supports the former. With a RAID setup, no spatial partitioning is required.

That said, when using space partitions, we recommend using 1 space partition per disk.
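For illustration, a minimal sketch pairing two space partitions with two tablespaces, one per disk (it assumes tablespaces disk1 and disk2 have already been created):

  SELECT create_hypertable('conditions', 'time', 'location', 2);
  SELECT attach_tablespace('disk1', 'conditions');
  SELECT attach_tablespace('disk2', 'conditions');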

TimescaleDB does not benefit from a very large number of space partitions (such as the number of unique items you expect in the partition field). A very large number of such partitions leads both to poorer per-partition load balancing (the mapping of items to partitions using hashing), as well as much increased planning latency for some types of queries.

Required Arguments

Name Description
hypertable Hypertable to add the dimension to.
column_name Name of the column to partition by.

Optional Arguments

Name Description
number_partitions Number of hash partitions to use on column_name. Must be > 0.
chunk_time_interval Interval that each chunk covers. Must be > 0.
partitioning_func The function to use for calculating a value’s partition (see create_hypertable instructions).
if_not_exists Set to true to avoid throwing an error if a dimension for the column already exists. A notice is issued instead. Defaults to false.

Returns

Column Description
dimension_id ID of the dimension in the TimescaleDB internal catalog.
schema_name Schema name of the hypertable.
table_name Table name of the hypertable.
column_name Column name of the column to partition by.
created True if the dimension was added, false when if_not_exists is true and no dimension was added.

When executing this function, either number_partitions or chunk_time_interval must be supplied, which dictates whether the dimension uses hash or interval partitioning.

The chunk_time_interval should be specified as follows:

  • If the column to be partitioned is a TIMESTAMP, TIMESTAMPTZ, or DATE, this length should be specified either as an INTERVAL type or an integer value in microseconds.

  • If the column is some other integer type, this length should be an integer that reflects the column’s underlying semantics (e.g., the chunk_time_interval should be given in milliseconds if this column is the number of milliseconds since the UNIX epoch); see the sketch below.
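As a sketch of the integer case, assuming a hypothetical ingest_ms column storing milliseconds since the UNIX epoch, with each chunk covering one day:

  -- 86400000 ms = 1 day, matching the column's millisecond semantics
  SELECT add_dimension('conditions', 'ingest_ms', chunk_time_interval => 86400000);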

WARNING: Supporting more than one additional dimension is currently experimental. For any production environment, we recommend using at most one “space” dimension.

Sample Usage

First convert table conditions to a hypertable with just time partitioning on column time, then add an additional partition key on location with four partitions:

  SELECT create_hypertable('conditions', 'time');
  SELECT add_dimension('conditions', 'location', number_partitions => 4);

Convert table conditions to a hypertable with time partitioning on time and space partitioning (2 partitions) on location, then add two additional dimensions:

  SELECT create_hypertable('conditions', 'time', 'location', 2);
  SELECT add_dimension('conditions', 'time_received', chunk_time_interval => INTERVAL '1 day');
  SELECT add_dimension('conditions', 'device_id', number_partitions => 2);
  SELECT add_dimension('conditions', 'device_id', number_partitions => 2, if_not_exists => true);

Now in a multi-node example for distributed hypertables with a cluster of one access node and two data nodes, configure the access node for access to the two data nodes. Then, convert table conditions to a distributed hypertable with just time partitioning on column time, and finally add a space partitioning dimension on location with two partitions (equal to the number of attached data nodes).

  SELECT add_data_node('dn1', host => 'dn1.example.com');
  SELECT add_data_node('dn2', host => 'dn2.example.com');
  SELECT create_distributed_hypertable('conditions', 'time');
  SELECT add_dimension('conditions', 'location', number_partitions => 2);

attach_data_node()

Attach a data node to a hypertable. The data node should have been previously created using add_data_node.

When a distributed hypertable is created it will by default use all available data nodes for the hypertable, but if a data node is added after a hypertable is created, the data node will not automatically be used by existing distributed hypertables.

If you want a hypertable to use a data node that was created later, you must attach the data node to the hypertable using this function.

Required Arguments

Name Description
node_name Name of data node to attach
hypertable Name of distributed hypertable to attach node to

Optional Arguments

Name Description
if_not_attached Prevents error if the data node is already attached to the hypertable. A notice will be printed that the data node is attached. Defaults to FALSE.
repartition Change the partitioning configuration so that all the attached data nodes are used. Defaults to TRUE.

Returns

Column Description
hypertable_id Hypertable id of the modified hypertable
node_hypertable_id Hypertable id on the remote data node
node_name Name of the attached data node

Sample Usage

Attach a data node dn3 to a distributed hypertable conditions previously created with create_distributed_hypertable.

  SELECT * FROM attach_data_node('dn3', 'conditions');

   hypertable_id | node_hypertable_id | node_name
  ---------------+--------------------+-----------
               5 |                  3 | dn3
  (1 row)

TIP: You must add a data node to your distributed database with add_data_node before attaching it.


add_data_node()

Add a new data node on the access node to be used by distributed hypertables. The data node will automatically be used by distributed hypertables that are created after the data node has been added, while existing distributed hypertables require an additional attach_data_node call.

If the data node already exists, the command will abort with either an error or a notice depending on the value of if_not_exists.

For security purposes, only superusers or users with the necessary privileges can add data nodes (see below for details). When adding a data node, the access node will also try to connect to the data node and therefore needs a way to authenticate with it. TimescaleDB currently supports several such authentication methods for flexibility (including trust, user mappings, password, and certificate methods). Please refer to the documentation on node-to-node authentication for more information.

Unless bootstrap is false, the function will attempt to bootstrap the data node by:

  1. Creating the database given in database that will serve as the new data node.
  2. Loading the TimescaleDB extension in the new database.
  3. Setting metadata to make the data node part of the distributed database.

Note that user roles are not automatically created on the new data node during bootstrapping. The distributed_exec procedure can be used to create additional roles on the data node after it is added.
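For example, a minimal sketch that creates a hypothetical role app_user on the data nodes after they have been added:

  CALL distributed_exec($$ CREATE ROLE app_user WITH LOGIN $$);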

Required Arguments

Name Description
node_name Name for the data node.
host Host name for the remote data node.

Optional Arguments

Name Description
database Database name where remote hypertables will be created. The default is the current database name.
port Port to use on the remote data node. The default is the PostgreSQL port used by the access node on which the function is executed.
if_not_exists Do not fail if the data node already exists. The default is FALSE.
bootstrap Bootstrap the remote data node. The default is TRUE.
password Password for authenticating with the remote data node during bootstrapping or validation. A password only needs to be provided if the data node requires password authentication and a password for the user does not exist in a local password file on the access node. If password authentication is not used, the specified password will be ignored.

Returns

Column Description
node_name Local name to use for the data node
host Host name for the remote data node
port Port for the remote data node
database Database name used on the remote data node
node_created Was the data node created locally
database_created Was the database created on the remote data node
extension_created Was the extension created on the remote data node

Errors

An error will be given if:

  • The function is executed inside a transaction.
  • The function is executed in a database that is already a data node.
  • The data node already exists and if_not_exists is FALSE.
  • The access node cannot connect to the data node due to a network failure or invalid configuration (e.g., wrong port, or there is no way to authenticate the user).
  • bootstrap is FALSE and the database was not previously bootstrapped.

Privileges

To add a data node, you must be a superuser or have the USAGE privilege on the timescaledb_fdw foreign data wrapper. To grant such privileges to a regular user role, do:

  GRANT USAGE ON FOREIGN DATA WRAPPER timescaledb_fdw TO <newrole>;

Note, however, that superuser privileges might still be necessary on the data node in order to bootstrap it, including creating the TimescaleDB extension on the data node unless it is already installed.

Sample usage

Let’s assume that you have an existing hypertable conditions and want to use time as the time partitioning column and location as the space partitioning column. You also want to distribute the chunks of the hypertable on two data nodes dn1.example.com and dn2.example.com:

  SELECT add_data_node('dn1', host => 'dn1.example.com');
  SELECT add_data_node('dn2', host => 'dn2.example.com');
  SELECT create_distributed_hypertable('conditions', 'time', 'location');

If you want to create a distributed database with the two data nodes local to this instance, you can write:

  SELECT add_data_node('dn1', host => 'localhost', database => 'dn1');
  SELECT add_data_node('dn2', host => 'localhost', database => 'dn2');
  SELECT create_distributed_hypertable('conditions', 'time', 'location');

Note that this does not offer any performance advantages over using a regular hypertable, but it can be useful for testing.


attach_tablespace()

Attach a tablespace to a hypertable and use it to store chunks. A tablespace is a directory on the filesystem that allows control over where individual tables and indexes are stored. A common use case is to create a tablespace for a particular storage disk, allowing tables to be stored there. Please review the standard PostgreSQL documentation for more information on tablespaces.

TimescaleDB can manage a set of tablespaces for each hypertable, automatically spreading chunks across the set of tablespaces attached to a hypertable. If a hypertable is hash partitioned, TimescaleDB will try to place chunks that belong to the same partition in the same tablespace. Changing the set of tablespaces attached to a hypertable may also change the placement behavior. A hypertable with no attached tablespaces will have its chunks placed in the database’s default tablespace.

Required Arguments

Name Description
tablespace (NAME) Name of the tablespace to attach.
hypertable (REGCLASS) Hypertable to attach the tablespace to.

Tablespaces need to be created before being attached to a hypertable. Once created, tablespaces can be attached to multiple hypertables simultaneously to share the underlying disk storage. Associating a regular table with a tablespace using the TABLESPACE option to CREATE TABLE, prior to calling create_hypertable, will have the same effect as calling attach_tablespace immediately following create_hypertable.
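As an illustrative sketch of the full sequence (the tablespace name, location, and table definition are assumptions):

  CREATE TABLESPACE disk1 LOCATION '/mnt/disk1/postgres';
  CREATE TABLE conditions (time TIMESTAMPTZ NOT NULL, location TEXT, temperature DOUBLE PRECISION);
  SELECT create_hypertable('conditions', 'time');
  SELECT attach_tablespace('disk1', 'conditions');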

Optional Arguments

Name Description
if_not_attached Set to true to avoid throwing an error if the tablespace is already attached to the table. A notice is issued instead. Defaults to false.

Sample Usage

Attach the tablespace disk1 to the hypertable conditions:

  SELECT attach_tablespace('disk1', 'conditions');
  SELECT attach_tablespace('disk2', 'conditions', if_not_attached => true);

WARNING: The management of tablespaces on hypertables is currently an experimental feature.


create_hypertable()

Creates a TimescaleDB hypertable from a PostgreSQL table (replacing the latter), partitioned on time and with the option to partition on one or more other columns (i.e., space). All actions, such as ALTER TABLE, SELECT, etc., still work on the resulting hypertable.

Required Arguments

Name Description
relation Identifier of table to convert to hypertable.
time_column_name Name of the column containing time values as well as the primary column to partition by.

Optional Arguments

Name Description
partitioning_column Name of an additional column to partition by. If provided, the number_partitions argument must also be provided.
number_partitions Number of hash partitions to use for partitioning_column. Must be > 0.
chunk_time_interval Interval in event time that each chunk covers. Must be > 0. As of TimescaleDB v0.11.0, default is 7 days. For previous versions, default is 1 month.
create_default_indexes Boolean whether to create default indexes on time/partitioning columns. Default is TRUE.
if_not_exists Boolean whether to print warning if table already converted to hypertable or raise exception. Default is FALSE.
partitioning_func The function to use for calculating a value’s partition.
associated_schema_name Name of the schema for internal hypertable tables. Default is “_timescaledb_internal”.
associated_table_prefix Prefix for internal hypertable chunk names. Default is “_hyper”.
migrate_data Set to TRUE to migrate any existing data from the relation table to chunks in the new hypertable. A non-empty table will generate an error without this option. Large tables may take significant time to migrate. Defaults to FALSE.
time_partitioning_func Function to convert incompatible primary time column values to compatible ones. The function must be IMMUTABLE.
replication_factor If set to 1 or greater, will create a distributed hypertable. Default is NULL. When creating a distributed hypertable, consider using create_distributed_hypertable in place of create_hypertable.
data_nodes This is the set of data nodes that will be used for this table if it is distributed. This has no impact on non-distributed hypertables. If no data nodes are specified, a distributed hypertable will use all data nodes known by this instance.

Returns

Column Description
hypertable_id ID of the hypertable in TimescaleDB.
schema_name Schema name of the table converted to hypertable.
table_name Table name of the table converted to hypertable.
created TRUE if the hypertable was created, FALSE when if_not_exists is true and no hypertable was created.

TIP: If you use SELECT * FROM create_hypertable(...) you will get the return value formatted as a table with column headings.

WARNING: The use of the migrate_data argument to convert a non-empty table can lock the table for a significant amount of time, depending on how much data is in the table. It can also run into deadlock if foreign key constraints exist to other tables.

If you would like finer control over index formation and other aspects of your hypertable, follow these migration instructions instead.

When converting a normal SQL table to a hypertable, pay attention to how you handle constraints. A hypertable can contain foreign keys to normal SQL table columns, but the reverse is not allowed. UNIQUE and PRIMARY KEY constraints must include the partitioning key.
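For example, a sketch of a table definition whose primary key includes the time partitioning column (the schema itself is illustrative):

  CREATE TABLE conditions (
    time        TIMESTAMPTZ NOT NULL,
    device_id   INTEGER,
    temperature DOUBLE PRECISION,
    PRIMARY KEY (device_id, time)  -- includes the partitioning key 'time'
  );
  SELECT create_hypertable('conditions', 'time');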

The deadlock is likely to happen when concurrent transactions simultaneously try to insert data into tables that are referenced in the foreign key constraints and into the converting table itself. The deadlock can be prevented by manually obtaining a SHARE ROW EXCLUSIVE lock on the referenced tables before calling create_hypertable in the same transaction; see the PostgreSQL documentation on LOCK for the syntax.
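A sketch of the suggested locking pattern, assuming a hypothetical devices table that conditions references via a foreign key:

  BEGIN;
  -- block concurrent inserts into the referenced table for the duration
  LOCK TABLE devices IN SHARE ROW EXCLUSIVE MODE;
  SELECT create_hypertable('conditions', 'time', migrate_data => TRUE);
  COMMIT;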

Units

The ‘time’ column supports the following data types:

Types
Timestamp (TIMESTAMP, TIMESTAMPTZ)
DATE
Integer (SMALLINT, INT, BIGINT)

TIP: The type flexibility of the ‘time’ column allows the use of non-time-based values as the primary chunk partitioning column, as long as those values can increment.

TIP: For incompatible data types (e.g., jsonb) you can specify a function in the time_partitioning_func argument that extracts a compatible data type.

The units of chunk_time_interval should be set as follows:

  • For time columns having timestamp or DATE types, the chunk_time_interval should be specified either as an interval type or an integral value in microseconds.

  • For integer types, the chunk_time_interval must be set explicitly, as the database does not otherwise understand the semantics of what each integer value represents (a second, millisecond, nanosecond, etc.). So if your time column is the number of milliseconds since the UNIX epoch, and you wish each chunk to cover 1 day, you should specify chunk_time_interval => 86400000.

In case of hash partitioning (i.e., number_partitions is greater than zero), it is possible to optionally specify a custom partitioning function. If no custom partitioning function is specified, the default partitioning function is used. The default partitioning function calls PostgreSQL’s internal hash function for the given type, if one exists. Thus, a custom partitioning function can be used for value types that do not have a native PostgreSQL hash function. A partitioning function should take a single anyelement type argument and return a positive integer hash value. Note that this hash value is not a partition ID, but rather the inserted value’s position in the dimension’s key space, which is then divided across the partitions.
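For illustration, a minimal sketch of such a function (hashing the value's text form via PostgreSQL's hashtext is an assumption; any deterministic function returning a positive integer works). It could then be passed as partitioning_func => 'location_hash', as in the sample further below:

  CREATE FUNCTION location_hash(val ANYELEMENT) RETURNS INTEGER
  LANGUAGE SQL IMMUTABLE AS
  -- mask the sign bit so the result is always a positive position in the key space
  $$ SELECT hashtext(val::text) & 2147483647 $$;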

TIP: The time column in create_hypertable must be defined as NOT NULL. If this is not already specified on table creation, create_hypertable will automatically add this constraint on the table when it is executed.

Sample Usage

Convert table conditions to a hypertable with just time partitioning on column time:

  SELECT create_hypertable('conditions', 'time');

Convert table conditions to a hypertable, setting chunk_time_interval to 24 hours.

  SELECT create_hypertable('conditions', 'time', chunk_time_interval => 86400000000);
  SELECT create_hypertable('conditions', 'time', chunk_time_interval => INTERVAL '1 day');

Convert table conditions to a hypertable with time partitioning on time and space partitioning (4 partitions) on location:

  SELECT create_hypertable('conditions', 'time', 'location', 4);

The same as above, but using a custom partitioning function:

  SELECT create_hypertable('conditions', 'time', 'location', 4, partitioning_func => 'location_hash');

Convert table conditions to a hypertable. Do not raise a warning if conditions is already a hypertable:

  SELECT create_hypertable('conditions', 'time', if_not_exists => TRUE);

Time partition table measurements on a composite column type report using a time partitioning function. This requires an immutable function that can convert the column value into a supported column value:

  CREATE TYPE report AS (reported timestamp with time zone, contents jsonb);
  CREATE FUNCTION report_reported(report)
  RETURNS timestamptz
  LANGUAGE SQL
  IMMUTABLE AS
  'SELECT $1.reported';
  SELECT create_hypertable('measurements', 'report', time_partitioning_func => 'report_reported');

Time partition table events on a column event of type jsonb, which has a top-level key started containing an ISO 8601 formatted timestamp:

  CREATE FUNCTION event_started(jsonb)
  RETURNS timestamptz
  LANGUAGE SQL
  IMMUTABLE AS
  $func$SELECT ($1->>'started')::timestamptz$func$;
  SELECT create_hypertable('events', 'event', time_partitioning_func => 'event_started');

Best Practices

One of the most common questions users of TimescaleDB have revolves around configuring chunk_time_interval.

Time intervals: The current release of TimescaleDB enables both manual and automated adaptation of its time intervals. With manually-set intervals, users should specify a chunk_time_interval when creating their hypertable (the default value is 1 week). The interval used for new chunks can be changed by calling set_chunk_time_interval.

The key property in choosing the time interval is that the chunk (including indexes) belonging to the most recent interval (or chunks, if using space partitions) fits into memory. As such, we typically recommend setting the interval so that these chunk(s) comprise no more than 25% of main memory.

To determine this, you roughly need to understand your data rate. If you are writing roughly 2GB of data per day and have 64GB of memory, setting the time interval to a week would be good. If you are writing 10GB per day on the same machine, setting the time interval to a day would be appropriate. This interval would also hold if data is loaded more in batches, e.g., you bulk load 70GB of data per week, with data corresponding to records from throughout the week.

While it’s generally safer to make chunks smaller rather than too large, setting intervals too small can lead to many chunks, which corresponds to increased planning latency for some types of queries.

TIP: One caveat is that the total chunk size is actually dependent on both the underlying data size and any indexes, so some care might be taken if you make heavy use of expensive index types (e.g., some PostGIS geospatial indexes). During testing, you might check your total chunk sizes via the chunks_detailed_size function.
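Such a check might look like this (assuming the conditions hypertable from the earlier samples):

  SELECT * FROM chunks_detailed_size('conditions');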

Space partitions: In most cases, we advise users not to use space partitions. However, if you create a distributed hypertable, space partitioning is important; see create_distributed_hypertable. The rare cases in which space partitions may be useful for non-distributed hypertables are described in the add_dimension section.


create_distributed_hypertable()

Creates a TimescaleDB hypertable distributed across a multinode environment. Use this function in place of create_hypertable when creating distributed hypertables.

Required Arguments

Name Description
relation Identifier of table to convert to hypertable.
time_column_name Name of the column containing time values as well as the primary column to partition by.

Optional Arguments

Name Description
partitioning_column Name of an additional column to partition by.
number_partitions Number of hash partitions to use for partitioning_column. Must be > 0. Default is the number of data_nodes.
associated_schema_name Name of the schema for internal hypertable tables. Default is “_timescaledb_internal”.
associated_table_prefix Prefix for internal hypertable chunk names. Default is “_hyper”.
chunk_time_interval Interval in event time that each chunk covers. Must be > 0. As of TimescaleDB v0.11.0, default is 7 days, unless adaptive chunking (DEPRECATED) is enabled, in which case the interval starts at 1 day. For previous versions, default is 1 month.
create_default_indexes Boolean whether to create default indexes on time/partitioning columns. Default is TRUE.
if_not_exists Boolean whether to print warning if table already converted to hypertable or raise exception. Default is FALSE.
partitioning_func The function to use for calculating a value’s partition.
migrate_data Set to TRUE to migrate any existing data from therelation table to chunks in the new hypertable. A non-empty table will generate an error without this option. Large tables may take significant time to migrate. Default is FALSE.
time_partitioning_func Function to convert incompatible primary time column values to compatible ones. The function must be IMMUTABLE.
replication_factor The number of data nodes to which the same data is written. This is done by creating chunk copies on this number of data nodes. Must be >= 1; default is 1. Read the best practices below before changing the default.
data_nodes The set of data nodes used for the distributed hypertable. If not present, defaults to all data nodes known by the access node (the node on which the distributed hypertable is created).

Returns

Column Description
hypertable_id ID of the hypertable in TimescaleDB.
schema_name Schema name of the table converted to hypertable.
table_name Table name of the table converted to hypertable.
created TRUE if the hypertable was created, FALSE when if_not_exists is TRUE and no hypertable was created.

Sample Usage

Create a table conditions which will be partitioned across data nodes by the ‘location’ column. Note that the number of space partitions is automatically equal to the number of data nodes assigned to this hypertable (all configured data nodes in this case, as data_nodes is not specified).

  SELECT create_distributed_hypertable('conditions', 'time', 'location');

Create a table conditions using a specific set of data nodes.

  SELECT create_distributed_hypertable('conditions', 'time', 'location',
      data_nodes => '{ "data_node_1", "data_node_2", "data_node_4", "data_node_7" }');

Best Practices

Space partitions: As opposed to the normal create_hypertable best practices, space partitions are highly recommended for distributed hypertables. Incoming data will be divided among data nodes based upon the space partition (the first one, if multiple space partitions have been defined). If there is no space partition, all the data for each time slice will be written to a single data node.

Time intervals: Follow the same guidelines for setting the chunk_time_interval as with create_hypertable, bearing in mind that the calculation needs to be based on the memory capacity of the data nodes. One additional thing to consider, assuming space partitioning is being used, is that the hypertable will be evenly distributed across the data nodes, allowing a larger time interval.

For example, assume you are ingesting 10 GB of data per day and you have five data nodes, each with 64 GB of memory. If this is the only table being served by these data nodes, then you should use a time interval of 1 week ((7 × 10 GB) / (5 × 64 GB) ≈ 22% of main memory used for the most recent chunks).

If space partitioning is not being used, the chunk_time_interval should be the same as the non-distributed case, as all of the incoming data will be handled by a single node.

Replication factor: The hypertable’s replication_factor defines to how many data nodes a newly created chunk will be replicated. That is, a chunk with a replication_factor of three will exist on three separate data nodes, and rows written to that chunk will be inserted (as part of a two-phase commit protocol) into all three chunk copies. For chunks replicated more than once, if a data node fails or is removed, no data will be lost, and writes can continue to succeed on the remaining chunk copies. However, the chunks present on the lost data node will now be under-replicated. Currently, it is not possible to restore under-replicated chunks, although this limitation will be removed in a future release. To avoid such inconsistency, we do not yet recommend using replication_factor > 1, and instead rely on physical replication of each data node if such fault-tolerance is required.


CREATE INDEX (Transaction Per Chunk)

  CREATE INDEX ... WITH (timescaledb.transaction_per_chunk, ...);

This option extends CREATE INDEX with the ability to use a separate transaction for each chunk it creates an index on, instead of using a single transaction for the entire hypertable. This allows INSERTs and other operations to be performed concurrently during most of the duration of the CREATE INDEX command. While the index is being created on an individual chunk, it functions as if a regular CREATE INDEX were called on that chunk; other chunks are completely un-blocked.

TIP: This version of CREATE INDEX can be used as an alternative to CREATE INDEX CONCURRENTLY, which is not currently supported on hypertables.

WARNING: If the operation fails partway through, indexes may not be created on all hypertable chunks. If this occurs, the index on the root table of the hypertable will be marked as invalid (this can be seen by running \d+ on the hypertable). The index will still work, and will be created on new chunks, but if you wish to ensure all chunks have a copy of the index, drop and recreate it.

Sample Usage

Anonymous index

  CREATE INDEX ON conditions(time, device_id) WITH (timescaledb.transaction_per_chunk);

Other index methods

  CREATE INDEX ON conditions(time, location) USING brin
  WITH (timescaledb.transaction_per_chunk);

delete_data_node()

This function will remove the data node locally. This will not affect the remote database in any way; it just updates the local index over all existing data nodes.

The data node will be detached from all hypertables that are using it if permissions and data integrity requirements are satisfied. For more information, see detach_data_node.

Errors

An error will be generated if the data node cannot be detached from all attached hypertables.

Required Arguments

Name Description
node_name (NAME) Name of the data node.

Optional Arguments

Name Description
if_exists (BOOLEAN) Prevent error if the data node does not exist. Defaults to false.
force (BOOLEAN) Force removal of data nodes from hypertables unless that would result in data loss. Defaults to false.
repartition (BOOLEAN) Make the number of space partitions equal to the new number of data nodes (if such partitioning exists). This ensures that the remaining data nodes are used evenly. Defaults to true.

Returns

A boolean indicating if the operation was successful or not.

Sample usage

To delete a data node named dn1:

  SELECT delete_data_node('dn1');

detach_data_node()

Detach a data node from one hypertable or from all hypertables.

Reasons for detaching a data node:

  • A data node should no longer be used by a hypertable and needs to be removed from all hypertables that use it
  • You want to have fewer data nodes for a distributed hypertable to partition across

Required Arguments

Name Description
node_name (NAME) Name of data node to detach from the distributed hypertable

Optional Arguments

Name Description
hypertable (REGCLASS) Name of the distributed hypertable where the data node should be detached. If NULL, the data node will be detached from all hypertables.
if_attached (BOOLEAN) Prevent error if the data node is not attached. Defaults to false.
force (BOOLEAN) Force detach of the data node even if that means that the replication factor is reduced below what was set. Note that it will never be allowed to reduce the replication factor below 1 since that would cause data loss.
repartition (BOOLEAN) Make the number of space partitions equal to the new number of data nodes (if such partitioning exists). This ensures that the remaining data nodes are used evenly. Defaults to true.

Returns

The number of hypertables the data node was detached from.

Errors

Detaching a node is not permitted:

  • If it would result in data loss for the hypertable due to the data node containing chunks that are not replicated on other data nodes
  • If it would result in under-replicated chunks for the distributed hypertable (without the force argument)

TIP: Replication is currently experimental, and not a supported feature.

Detaching a data node is never possible if it would mean data loss for the hypertable. Nor is it possible to detach a data node, unless forced, if that would leave the distributed hypertable with under-replicated chunks.

The only safe way to detach a data node is to first safely delete any data on it or replicate it to another data node.

Sample Usage

Detach data node dn3 from conditions:

  SELECT detach_data_node('dn3', 'conditions');

detach_tablespace()

Detach a tablespace from one or more hypertables. This only means that new chunks will not be placed on the detached tablespace. This is useful, for instance, when a tablespace is running low on disk space and one would like to prevent new chunks from being created in the tablespace. The detached tablespace itself and any existing chunks with data on it will remain unchanged and will continue to work as before, including being available for queries. Note that newly inserted data rows may still be inserted into an existing chunk on the detached tablespace since existing data is not cleared from a detached tablespace. A detached tablespace can be reattached if desired to once again be considered for chunk placement.

Required Arguments

Name Description
tablespace (NAME) Tablespace to detach.

When giving only the tablespace name as argument, the given tablespace will be detached from all hypertables that the current role has the appropriate permissions for. Therefore, without proper permissions, the tablespace may still receive new chunks after this command is issued.

Optional Arguments

Name Description
hypertable (REGCLASS) Hypertable to detach the tablespace from.
if_attached (BOOLEAN) Set to true to avoid throwing an error if the tablespace is not attached to the given table. A notice is issued instead. Defaults to false.

When specifying a specific hypertable, the tablespace will only be detached from the given hypertable and thus may remain attached to other hypertables.

Sample Usage

Detach the tablespace disk1 from the hypertable conditions:

  SELECT detach_tablespace('disk1', 'conditions');
  SELECT detach_tablespace('disk2', 'conditions', if_attached => true);

Detach the tablespace disk1 from all hypertables that the current user has permissions for:

  SELECT detach_tablespace('disk1');

detach_tablespaces()

Detach all tablespaces from a hypertable. After issuing this command on a hypertable, it will no longer have any tablespaces attached to it. New chunks will instead be placed in the database’s default tablespace.

Required Arguments

Name Description
hypertable (REGCLASS) Hypertable to detach tablespaces from.

Sample Usage

Detach all tablespaces from the hypertableconditions:

  SELECT detach_tablespaces('conditions');

distributed_exec()

This procedure is used on an access node to execute a SQL command across the data nodes of a distributed database. For instance, one use case is to create the roles and permissions needed in a distributed database.

The procedure can run distributed commands transactionally, so a command is executed either everywhere or nowhere. However, not all SQL commands can run in a transaction. This can be toggled with the argument transactional. Note that if the execution is not transactional, a failure on one of the data nodes requires manually dealing with any resulting inconsistency.

Note that the command isnot executed on the access node itself and it is not possible to chain multiple commands together in one call.

Required Arguments

Name Description
query The command to execute on data nodes.

Optional Arguments

Name Description
node_list An array of data nodes where the command should be executed. Defaults to all data nodes if not specified.
transactional Specifies whether the execution of the statement should be transactional. Defaults to TRUE.

Sample Usage

Create the role testrole across all data nodes in a distributed database:

  CALL distributed_exec($$ CREATE USER testrole WITH LOGIN $$);

Create the role testrole on two specific data nodes:

  CALL distributed_exec($$ CREATE USER testrole WITH LOGIN $$, node_list => '{ "dn1", "dn2" }');

Create a new database on all data nodes; this must run non-transactionally since CREATE DATABASE cannot run inside a transaction:

  CALL distributed_exec('CREATE DATABASE dist_database', transactional => FALSE);

drop_chunks()

Removes data chunks whose time range falls completely before (or after) a specified time. Shows a list of the chunks that were dropped, in the same style as the show_chunks function.

Chunks are constrained by a start and end time and the start time is always before the end time. A chunk is dropped if its end time is older than the older_than timestamp or, if newer_than is given, its start time is newer than the newer_than timestamp.

Note that, because chunks are removed if and only if their time range falls fully before (or after) the specified timestamp, the remaining data may still contain timestamps that are before (or after) the specified one.

Required Arguments

Name Description
relation Hypertable or continuous aggregate from which to drop chunks.
older_than Specification of cut-off point where any full chunks older than this timestamp should be removed.

Optional Arguments

Name Description
newer_than Specification of cut-off point where any full chunks newer than this timestamp should be removed.
verbose (BOOLEAN) Setting to true will display messages about the progress of the drop_chunks command. Defaults to false.

The older_than and newer_than parameters can be specified in two ways:

  • interval type: The cut-off point is computed as now() - older_than and similarly now() - newer_than. An error will be returned if an INTERVAL is supplied and the time column is not one of a TIMESTAMP, TIMESTAMPTZ, or DATE.

  • timestamp, date, or integer type: The cut-off point is explicitly given as a TIMESTAMP / TIMESTAMPTZ / DATE or as a SMALLINT / INT / BIGINT. The choice of timestamp or integer must follow the type of the hypertable’s time column.

WARNING: When using just an interval type, the function assumes that you are removing things in the past. If you want to remove data in the future (i.e., erroneous entries), use a timestamp.

When both arguments are used, the function returns the intersection of the resulting two ranges. For example, specifying newer_than => 4 months and older_than => 3 months will drop all full chunks that are between 3 and 4 months old. Similarly, specifying newer_than => '2017-01-01' and older_than => '2017-02-01' will drop all full chunks between ‘2017-01-01’ and ‘2017-02-01’. Specifying parameters that do not result in an overlapping intersection between two ranges will result in an error.

Sample Usage

Drop all chunks from hypertable conditions older than 3 months:

  SELECT drop_chunks('conditions', INTERVAL '3 months');

Example output:

  drop_chunks
  ----------------------------------------
  _timescaledb_internal._hyper_3_5_chunk
  _timescaledb_internal._hyper_3_6_chunk
  _timescaledb_internal._hyper_3_7_chunk
  _timescaledb_internal._hyper_3_8_chunk
  _timescaledb_internal._hyper_3_9_chunk
  (5 rows)

Drop all chunks more than 3 months in the future from hypertable conditions. This is useful for correcting data ingested with incorrect clocks:

  SELECT drop_chunks('conditions', newer_than => now() + interval '3 months');

Drop all chunks from hypertable conditions before 2017:

  SELECT drop_chunks('conditions', '2017-01-01'::date);

Drop all chunks from hypertable conditions before 2017, where the time column is given in milliseconds from the UNIX epoch:

  SELECT drop_chunks('conditions', 1483228800000);

Drop all chunks older than 3 months ago and newer than 4 months ago from hypertable conditions:

  SELECT drop_chunks('conditions', older_than => interval '3 months', newer_than => interval '4 months');

set_chunk_time_interval()

Sets the chunk_time_interval on a hypertable. The new interval is used when new chunks are created but the time intervals on existing chunks are not affected.

Required Arguments

Name Description
hypertable (REGCLASS) Hypertable to update interval for.
chunk_time_interval Interval in event time that each new chunk covers. Must be > 0.

Optional Arguments

Name Description
dimension_name The name of the time dimension to set the interval for. Only used when the hypertable has multiple time dimensions.

The valid types for the chunk_time_interval depend on the type of the hypertable time column:

  • TIMESTAMP, TIMESTAMPTZ, DATE: The specified chunk_time_interval should be given either as an INTERVAL type (INTERVAL '1 day') or as an integer or bigint value (representing some number of microseconds).

  • INTEGER: The specified chunk_time_interval should be an integer (smallint, int, bigint) value and represent the underlying semantics of the hypertable’s time column, e.g., given in milliseconds if the time column is expressed in milliseconds (see create_hypertable instructions).

Sample Usage

For a TIMESTAMP column, set chunk_time_interval to 24 hours.

  SELECT set_chunk_time_interval('conditions', INTERVAL '24 hours');
  SELECT set_chunk_time_interval('conditions', 86400000000);

For a time column expressed as the number of milliseconds since the UNIX epoch, set chunk_time_interval to 24 hours.

  SELECT set_chunk_time_interval('conditions', 86400000);

set_number_partitions()

Sets the number of partitions (slices) of a space dimension on a hypertable. The new partitioning only affects new chunks.

Required Arguments

Name Description
hypertable (REGCLASS) Hypertable to update the number of partitions for.
number_partitions The new number of partitions for the dimension. Must be greater than 0 and less than 32,768.

Optional Arguments

Name Description
dimension_name The name of the space dimension to set the number of partitions for.

The dimension_name needs to be explicitly specified only if the hypertable has more than one space dimension; omitting it in that case will throw an error.

Sample Usage

For a table with a single space dimension:

  SELECT set_number_partitions('conditions', 2);

For a table with more than one space dimension:

  SELECT set_number_partitions('conditions', 2, 'device_id');

set_integer_now_func()

This function is only relevant for hypertables with integer (as opposed to TIMESTAMP/TIMESTAMPTZ/DATE) time values. For such hypertables, it sets a function that returns the now() value (current time) in the units of the time column. This is necessary for running some policies on integer-based tables. In particular, many policies only apply to chunks of a certain age and a function that returns the current time is necessary to determine the age of a chunk.

Required Arguments

Name Description
main_table (REGCLASS) Hypertable to set the integer now function for.
integer_now_func (REGPROC) A function that returns the current time value in the same units as the time column.

Optional Arguments

Name Description
replace_if_exists (BOOLEAN) Whether to override the function if one is already set. Defaults to false.

Sample Usage

To set the integer now function for a hypertable with a time column in unix time (number of seconds since the unix epoch, UTC):

  CREATE OR REPLACE FUNCTION unix_now() RETURNS BIGINT LANGUAGE SQL STABLE AS $$ SELECT extract(epoch FROM now())::BIGINT $$;
  SELECT set_integer_now_func('test_table_bigint', 'unix_now');

set_replication_factor()

Sets the replication factor of a distributed hypertable to the given value. Changing the replication factor does not affect the number of replicas for existing chunks. Chunks created after changing the replication factor will be replicated in accordance with the new value. If the replication factor cannot be satisfied because the number of attached data nodes is less than the new replication factor, the command aborts with an error.

If existing chunks have fewer replicas than the new value of the replication factor, the function prints a warning.

Required Arguments

Name Description
hypertable (REGCLASS) Distributed hypertable to update the replication factor for.
replication_factor (INTEGER) The new value of the replication factor. Must be greater than 0, and smaller than or equal to the number of attached data nodes.

Errors

An error will be given if:

  • hypertable is not a distributed hypertable.
  • replication_factor is less than 1, which cannot be set on a distributed hypertable.
  • replication_factor is bigger than the number of attached data nodes.

If a bigger replication factor is desired, it is necessary to attach more data nodes by using attach_data_node.

Sample Usage

Update the replication factor for a distributed hypertable to 2:

  SELECT set_replication_factor('conditions', 2);

Example of the warning if any existing chunk of the distributed hypertable has less than 2 replicas:

  WARNING: hypertable "conditions" is under-replicated
  DETAIL: Some chunks have less than 2 replicas.

Example of providing too big of a replication factor for a hypertable with 2 attached data nodes:

  SELECT set_replication_factor('conditions', 3);
  ERROR: too big replication factor for hypertable "conditions"
  DETAIL: The hypertable has 2 data nodes attached, while the replication factor is 3.
  HINT: Decrease the replication factor or attach more data nodes to the hypertable.

show_chunks()

Get list of chunks associated with hypertables.

Optional Arguments

The function accepts the following arguments, which have the same semantics as in the drop_chunks function.

Name Description
relation Hypertable or continuous aggregate from which to select chunks. If not supplied, all chunks are shown.
older_than Specification of cut-off point where any full chunks older than this timestamp should be shown.
newer_than Specification of cut-off point where any full chunks newer than this timestamp should be shown.

The older_than and newer_than parameters can be specified in two ways:

  • interval type: The cut-off point is computed as now() - older_than and similarly now() - newer_than. An error will be returned if an INTERVAL is supplied and the time column is not one of a TIMESTAMP, TIMESTAMPTZ, or DATE.

  • timestamp, date, or integer type: The cut-off point is explicitly given as a TIMESTAMP / TIMESTAMPTZ / DATE or as a SMALLINT / INT / BIGINT. The choice of timestamp or integer must follow the type of the hypertable’s time column.

When both arguments are used, the function returns the intersection of the resulting two ranges. For example, specifying newer_than => 4 months and older_than => 3 months will show all full chunks that are between 3 and 4 months old. Similarly, specifying newer_than => '2017-01-01' and older_than => '2017-02-01' will show all full chunks between ‘2017-01-01’ and ‘2017-02-01’. Specifying parameters that do not result in an overlapping intersection between two ranges will result in an error.

Sample Usage

Get a list of all chunks (returns zero rows if there are no hypertables):

  SELECT show_chunks();

The expected output:

  show_chunks
  ---------------------------------------
  _timescaledb_internal._hyper_1_10_chunk
  _timescaledb_internal._hyper_1_11_chunk
  _timescaledb_internal._hyper_1_12_chunk
  _timescaledb_internal._hyper_1_13_chunk
  _timescaledb_internal._hyper_1_14_chunk
  _timescaledb_internal._hyper_1_15_chunk
  _timescaledb_internal._hyper_1_16_chunk
  _timescaledb_internal._hyper_1_17_chunk
  _timescaledb_internal._hyper_1_18_chunk

Get list of all chunks associated with a table:

  SELECT show_chunks('conditions');

Get all chunks older than 3 months:

  SELECT show_chunks(older_than => INTERVAL '3 months');

Get all chunks more than 3 months in the future. This is useful for showing data ingested with incorrect clocks:

  SELECT show_chunks(newer_than => now() + INTERVAL '3 months');

Get all chunks from hypertable conditions older than 3 months:

  SELECT show_chunks('conditions', older_than => INTERVAL '3 months');

Get all chunks from hypertable conditions before 2017:

  SELECT show_chunks('conditions', older_than => DATE '2017-01-01');

Get all chunks newer than 3 months:

  SELECT show_chunks(newer_than => INTERVAL '3 months');

Get all chunks older than 3 months and newer than 4 months:

  SELECT show_chunks(older_than => INTERVAL '3 months', newer_than => INTERVAL '4 months');

reorder_chunk() Community

Reorder a single chunk’s heap to follow the order of an index. This function acts similarly to the PostgreSQL CLUSTER command; however, it uses lower lock levels so that, unlike with the CLUSTER command, the chunk and hypertable are able to be read for most of the process. It does use a bit more disk space during the operation.

This command can be particularly useful when data is often queried in an order different from that in which it was originally inserted. For example, data is commonly inserted into a hypertable in loose time order (e.g., many devices concurrently sending their current state), but one might typically query the hypertable about a specific device. In such cases, reordering a chunk using an index on (device_id, time) can lead to significant performance improvement for these types of queries.

One can call this function directly on individual chunks of a hypertable, but using add_reorder_policy is often much more convenient.
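For instance, a sketch of the policy-based alternative, using the same index as the sample below:

  SELECT add_reorder_policy('conditions', 'conditions_device_id_time_idx');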

Required Arguments

Name Description
chunk (REGCLASS) Name of the chunk to reorder.

Optional Arguments

Name Description
index (REGCLASS) The name of the index (on either the hypertable or chunk) to order by.

Returns

This function returns void.

Sample Usage

  SELECT reorder_chunk('_timescaledb_internal._hyper_1_10_chunk', 'conditions_device_id_time_idx');

This runs a reorder on the _timescaledb_internal._hyper_1_10_chunk chunk using the conditions_device_id_time_idx index.


move_chunk() Community

TimescaleDB allows users to move data (and indexes) to alternative tablespaces, giving them the ability to move data to more cost-effective storage as it ages. This function acts like the combination of the PostgreSQL CLUSTER command and ALTER TABLE ... SET TABLESPACE.

Unlike these PostgreSQL commands, however, the move_chunk function employs lower lock levels so that the chunk and hypertable are able to be read for most of the process. This comes at a cost of slightly higher disk usage during the operation. For a more detailed discussion of this capability, please see the Data Tiering documentation.

Required Arguments

Name Description
chunk (REGCLASS) Name of chunk to be moved.
destination_tablespace (Name) Target tablespace for chunk you are moving.
index_destination_tablespace (Name) Target tablespace for index associated with the chunk you are moving.

Optional Arguments

Name Description
reorder_index (REGCLASS) The name of the index (on either the hypertable or chunk) to order by.
verbose (BOOLEAN) Setting to true will display messages about the progress of the move_chunk command. Defaults to false.

Sample Usage

  SELECT move_chunk(
    chunk => '_timescaledb_internal._hyper_1_4_chunk',
    destination_tablespace => 'tablespace_2',
    index_destination_tablespace => 'tablespace_3',
    reorder_index => 'conditions_device_id_time_idx',
    verbose => TRUE
  );

Compression Community

We highly recommend reading the blog post and tutorial about compression before trying to set it up for the first time.

Setting up compression on TimescaleDB requires users to first enable compression on a hypertable and then set up a policy for when to compress chunks.

Advanced usage of compression allows users to compress chunks manually, instead of automatically as they age.

Restrictions

The current version does not support altering or inserting data into compressed chunks. The data can be queried without any modifications; however, if you need to backfill or update data in a compressed chunk you will need to decompress the chunk(s) first.

Associated commands

ALTER TABLE (Compression) Community

The ALTER TABLE statement is used to turn on compression and set compression options.

The syntax is:

  ALTER TABLE <table_name> SET (timescaledb.compress,
    timescaledb.compress_orderby = '<column_name> [ASC | DESC] [ NULLS { FIRST | LAST } ] [, ...]',
    timescaledb.compress_segmentby = '<column_name> [, ...]'
  );

Required Options

Name Description
timescaledb.compress Boolean to enable compression

Other Options

Name Description
timescaledb.compress_orderby Order used by compression, specified in the same way as the ORDER BY clause in a SELECT query. The default is the descending order of the hypertable’s time column.
timescaledb.compress_segmentby Column list on which to key the compressed segments. An identifier representing the source of the data such as device_id or tags_id is usually a good candidate. The default is no segment by columns.

Parameters

Name Description
table_name Name of the hypertable that will support compression
column_name Name of the column used to order by and/or segment by

Sample Usage

Configure a hypertable that ingests device data to use compression.

  ALTER TABLE metrics SET (timescaledb.compress, timescaledb.compress_orderby = 'time DESC', timescaledb.compress_segmentby = 'device_id');

add_compression_policy() Community

Allows you to set a policy by which the system will compress a chunk automatically in the background after it reaches a given age.

Note that compression policies can only be created on hypertables that already have compression enabled, e.g., via the ALTER TABLE command to set timescaledb.compress and other configuration parameters.

Required Arguments

Name Description
hypertable (REGCLASS) Name of the hypertable
compress_after (INTERVAL or INTEGER) The age after which the policy job will compress chunks

The compress_after parameter should be specified differently depending on the type of the time column of the hypertable:

  • For hypertables with TIMESTAMP, TIMESTAMPTZ, and DATE time columns: the time interval should be an INTERVAL type.
  • For hypertables with integer-based timestamps: the time interval should be an integer type (this requires the integer_now_func to be set).

Optional Arguments

Name Description
if_not_exists (BOOLEAN) Setting to true will cause the command to fail with a warning instead of an error if a compression policy already exists on the hypertable. Defaults to false.

Sample Usage

Add a policy to compress chunks older than 60 days on the ‘cpu’ hypertable.

  SELECT add_compression_policy('cpu', INTERVAL '60d');

Add a compression policy to a hypertable with an integer-based time column:

  SELECT add_compression_policy('table_with_bigint_time', BIGINT '600000');

remove_compression_policy() Community

If you need to stop policy-based compression, remove the compression policy. To restart policy-based compression you will need to re-add the policy.

Required Arguments

Name Description
hypertable (REGCLASS) Name of the hypertable the policy should be removed from.

Optional Arguments

Name Description
if_exists (BOOLEAN) Setting to true will cause the command to fail with a notice instead of an error if a compression policy does not exist on the hypertable. Defaults to false.

Sample Usage

Remove the compression policy from the ‘cpu’ table:

  SELECT remove_compression_policy('cpu');

compress_chunk() Community

The compress_chunk function is used to compress a specific chunk. This is most often used instead of the add_compression_policy function, when a user wants more control over the scheduling of compression. For most users, we suggest using the policy framework instead.

TIP: You can get a list of chunks belonging to a hypertable using the show_chunks function.

Required Arguments

Name Description
chunk_name (REGCLASS) Name of the chunk to be compressed

Optional Arguments

Name Description
if_not_compressed (BOOLEAN) Setting to true will skip chunks that are already compressed. Defaults to false.

Sample Usage

Compress a single chunk.

  SELECT compress_chunk('_timescaledb_internal._hyper_1_2_chunk');

decompress_chunk() Community

If you need to modify or add data to a chunk that has already been compressed, you will need to decompress the chunk first. This is especially useful for backfilling old data.

TIP: Prior to decompressing chunks for the purpose of data backfill or updating, you should first stop any compression policy that is active on the hypertable you plan to perform this operation on. Once the update and/or backfill is complete, simply turn the policy back on and the system will recompress your chunks.
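A sketch of that workflow, assuming a policy and chunk like those in the surrounding samples:

  SELECT remove_compression_policy('conditions');
  SELECT decompress_chunk('_timescaledb_internal._hyper_2_2_chunk');
  -- ... backfill or update rows here ...
  SELECT add_compression_policy('conditions', INTERVAL '60d');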

Required Arguments

Name Description
chunk_name (REGCLASS) Name of the chunk to be decompressed.

Optional Arguments

Name Description
if_compressed (BOOLEAN) Setting to true will skip chunks that are not compressed. Defaults to false.

Sample Usage

Decompress a single chunk.

  SELECT decompress_chunk('_timescaledb_internal._hyper_2_2_chunk');

Continuous Aggregates Community

TimescaleDB gives users the ability to automatically recompute aggregates at predefined intervals and materialize the results. This is suitable for frequently used queries. For a more detailed discussion of this capability, please see the continuous aggregates documentation.

CREATE MATERIALIZED VIEW (Continuous Aggregate) Community

The CREATE MATERIALIZED VIEW statement is used to create continuous aggregates.

The syntax is:

  CREATE MATERIALIZED VIEW <view_name> [ ( column_name [, ...] ) ]
  WITH ( timescaledb.continuous [, timescaledb.<option> = <value> ] )
  AS
  <select_query>
  [WITH [NO] DATA]

<select_query> is of the form:

  SELECT <grouping_exprs>, <aggregate_functions>
  FROM <hypertable>
  [WHERE ... ]
  GROUP BY time_bucket( <const_value>, <partition_col_of_hypertable> ),
    [ <optional grouping exprs> ]
  [HAVING ...]

Note that continuous aggregates have some limitations on what types of queries they can support, described at more length below. For example, the FROM clause must reference only one hypertable, i.e., no joins, CTEs, views, or subqueries are supported. The GROUP BY clause must include a time bucket on the hypertable's time column, and all aggregates must be parallelizable.

Parameters

Name Description
<view_name> Name (optionally schema-qualified) of continuous aggregate view to be created.
<column_name> Optional list of names to be used for columns of the view. If not given, the column names are deduced from the query.
WITH clause This clause specifies options for the continuous aggregate view.
<select_query> A SELECT query that uses the specified syntax.

Required WITH clause options

Name Description Type Default
timescaledb.continuous If timescaledb.continuous is not specified, then this is a regular PostgreSQL materialized view. BOOLEAN

Optional WITH clause options

Name Description Type Default
timescaledb.materialized_only Return only materialized data when querying the continuous aggregate view. See more in the section on real-time aggregates. BOOLEAN false
timescaledb.create_group_indexes Create indexes on the materialization table for the group by columns (specified by the GROUP BY clause of the SELECT query). BOOLEAN Indexes are created by default for every group by expression + time_bucket expression pair.

Notes

  • The view will be automatically refreshed (via a refresh policy, see add_continuous_aggregate_policy) unless WITH NO DATA is given (WITH DATA is the default); an example using WITH NO DATA appears at the end of the sample usage below.
  • The SELECT query should be of the form specified in the syntax above, which is discussed in the following items.
  • Only a single hypertable can be specified in the FROM clause of the SELECT query. Joins, additional tables, views, and subqueries are not supported.
  • The hypertable used in the SELECT may not have row-level-security policies enabled.
  • The GROUP BY clause must include a time_bucket expression. The expression must use the time dimension column of the hypertable.
  • time_bucket_gapfill is not allowed in continuous aggregates, but may be run in a SELECT from the continuous aggregate view.
  • In general, aggregates which can be parallelized are allowed in the view definition; this includes most aggregates distributed with PostgreSQL. Aggregates with ORDER BY, DISTINCT, and FILTER clauses are not permitted.
  • All functions and their arguments included in SELECT, GROUP BY, and HAVING clauses must be immutable.
  • The view is not allowed to be a security barrier view.
  • Window functions cannot be used in conjunction with continuous aggregates.

Sample Usage

Create a continuous aggregate view.

  CREATE MATERIALIZED VIEW continuous_aggregate_view( timec, minl, sumt, sumh )
  WITH (timescaledb.continuous) AS
  SELECT time_bucket('1day', timec), min(location), sum(temperature), sum(humidity)
  FROM conditions
  GROUP BY time_bucket('1day', timec);

Add additional continuous aggregates on top of the same raw hypertable (each view needs a distinct name).

  CREATE MATERIALIZED VIEW continuous_aggregate_view_30day( timec, minl, sumt, sumh )
  WITH (timescaledb.continuous) AS
  SELECT time_bucket('30day', timec), min(location), sum(temperature), sum(humidity)
  FROM conditions
  GROUP BY time_bucket('30day', timec);

  CREATE MATERIALIZED VIEW continuous_aggregate_view_1h( timec, minl, sumt, sumh )
  WITH (timescaledb.continuous) AS
  SELECT time_bucket('1h', timec), min(location), sum(temperature), sum(humidity)
  FROM conditions
  GROUP BY time_bucket('1h', timec);
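Create a continuous aggregate without materializing any data at creation time, then populate it manually (a sketch; conditions_daily is an assumed name, and refresh_continuous_aggregate is documented later in this reference):

  CREATE MATERIALIZED VIEW conditions_daily( day, avgt )
  WITH (timescaledb.continuous) AS
  SELECT time_bucket('1day', timec), avg(temperature)
  FROM conditions
  GROUP BY time_bucket('1day', timec)
  WITH NO DATA;
  CALL refresh_continuous_aggregate('conditions_daily', NULL, NULL);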

ALTER MATERIALIZED VIEW (Continuous Aggregate) Community

The ALTER MATERIALIZED VIEW statement can be used to modify some of the WITH clause options for the continuous aggregate view.

  ALTER MATERIALIZED VIEW <view_name> SET ( timescaledb.<option> = <value> [, ... ] )

Parameters

Name Description
<view_name> Name (optionally schema-qualified) of the continuous aggregate view to be altered.

Sample Usage

To disablereal-time aggregates for a continuous aggregate:

  ALTER MATERIALIZED VIEW contagg_view SET (timescaledb.materialized_only = true);

The only option that can currently be modified with ALTER MATERIALIZED VIEW is materialized_only. The other options, continuous and create_group_indexes, can only be set when creating the continuous aggregate.


DROP MATERIALIZED VIEW (Continuous Aggregate) Community

Continuous aggregate views can be dropped using theDROP MATERIALIZED VIEW statement.

This statement deletes the continuous aggregate and all its internal objects. To also delete other dependent objects, such as a view defined on the continuous aggregate, add theCASCADE option. Dropping a continuous aggregate does not affect the data in the underlying hypertable from which the continuous aggregate is derived.

  DROP MATERIALIZED VIEW <view_name>;

Parameters

Name Description
<view_name> Name (optionally schema-qualified) of the continuous aggregate view to be dropped.

Sample Usage

Drop an existing continuous aggregate.

  DROP MATERIALIZED VIEW contagg_view;

refresh_continuous_aggregate() Community

Refresh all buckets of a continuous aggregate between two points of time.

The function expects the parameter values to have the same time type as used in the continuous aggregate's time bucket expression (e.g., if the time bucket is specified in timestamptz, then the start and end times supplied should also be timestamptz).

Required Arguments

Name Description
continuous_aggregate (REGCLASS) The continuous aggregate to refresh.
window_start Start of the window to refresh; has to be before window_end. NULL is equivalent to MIN(timestamp) of the hypertable.
window_end End of the window to refresh; has to be after window_start. NULL is equivalent to MAX(timestamp) of the hypertable.

Sample Usage

Refresh the continuous aggregate conditions between 2020-01-01 and 2020-02-01 (exclusive).

  CALL refresh_continuous_aggregate('conditions', '2020-01-01', '2020-02-01');

Automation policies Community

TimescaleDB includes an automation framework for allowing background tasks to run inside the database, controllable by user-supplied policies. These tasks currently include capabilities around data retention and data reordering for improving query performance.

The following functions allow the administrator to create/remove/alter policies that schedule administrative actions to take place on a hypertable. The actions are meant to implement data retention or perform tasks that will improve query performance on older chunks. Each policy is assigned a scheduled job which will be run in the background to enforce it.

Information about jobs created by policies can be viewed by queryingtimescaledb_information.jobs andtimescaledb_information.job_stats.

add_continuous_aggregate_policy() Community

Create a policy that automatically refreshes a continuous aggregate.

Required Arguments

Name Description
continuous_aggregate (REGCLASS) The continuous aggregate to add the policy for.
start_offset (INTERVAL or integer) Start of the refresh window as an interval relative to the time when the policy is executed
end_offset (INTERVAL or integer) End of the refresh window as an interval relative to the time when the policy is executed
schedule_interval (INTERVAL) Interval between refresh executions in wall-clock time.

The start_offset should be greater than end_offset. The start_offset and end_offset parameters should be specified differently depending on the type of the time column of the hypertable:

  • For hypertables with TIMESTAMP, TIMESTAMPTZ, and DATE time columns: the offset should be an INTERVAL type.
  • For hypertables with integer-based timestamps: the offset should be an integer type (see the sketch after the sample usage below).

Optional Arguments

Name Description
if_not_exists (BOOLEAN) Set to true to avoid throwing an error if the continuous aggregate policy already exists. A notice is issued instead. Defaults to false.

Returns

Column Description
job_id (INTEGER) TimescaleDB background job id created to implement this policy

Sample Usage

Add a policy that refreshes the last month once an hour, excluding the latest hour from the aggregate (for performance reasons, it is recommended to exclude buckets that still see lots of writes):

  SELECT add_continuous_aggregate_policy('conditions_summary',
    start_offset => INTERVAL '1 month',
    end_offset => INTERVAL '1 hour',
    schedule_interval => INTERVAL '1 hour');
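For an integer-based time column, the offsets are plain integers in the units of that column. A hedged sketch, assuming a continuous aggregate summary_int built on an integer-time hypertable with its integer_now_func set:

  SELECT add_continuous_aggregate_policy('summary_int',
    start_offset => 600000,
    end_offset => 6000,
    schedule_interval => INTERVAL '1 hour');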

add_job() Community

Register an action to be scheduled by our automation framework. Please read the documentation on user-defined actions for more details, including multiple example actions.

Required Arguments

Name Description
proc (REGPROC) Name of the function or procedure to register as job
schedule_interval (INTERVAL) Interval between executions of this job

Optional Arguments

Name Description
config (JSONB) Job-specific configuration (this will be passed to the function when executed)
initial_start (TIMESTAMPTZ) Time of first execution of job
scheduled (BOOLEAN) Set toFALSE to exclude this job from scheduling. Defaults toTRUE.

Returns

Column Description
job_id (INTEGER) TimescaleDB background job id

Sample Usage

  CREATE OR REPLACE PROCEDURE user_defined_action(job_id int, config jsonb) LANGUAGE PLPGSQL AS
  $$
  BEGIN
    RAISE NOTICE 'Executing action % with config %', job_id, config;
  END
  $$;
  SELECT add_job('user_defined_action','1h');

Register the procedure user_defined_action to be run every hour.

delete_job() Community

Delete a job registered with the automation framework. This works for user-defined actions as well as policies.

If the job is currently running, the process will be terminated.

Required Arguments

Name Description
job_id (INTEGER) TimescaleDB background job id

Sample Usage

  SELECT delete_job(1000);

Delete the job with the job id 1000.

run_job() Community

Run a previously registered job in the current session. This works for user-defined actions as well as policies. Since run_job is implemented as a stored procedure, it cannot be executed inside a SELECT query but has to be executed with CALL.

TIP: Any background worker job can be run in the foreground when executed with run_job. This can be useful for debugging problems when combined with an increased log level.

Required Arguments

Name Description
job_id (INTEGER) TimescaleDB background job id

Sample Usage

  SET client_min_messages TO DEBUG1;
  CALL run_job(1000);

Set the log level shown to the client to DEBUG1 and run the job with the job id 1000.

remove_continuous_aggregate_policy() Community

Remove the refresh policy for a continuous aggregate.

Required Arguments

Name Description
continuous_aggregate (REGCLASS) Name of the continuous aggregate the policy should be removed from

Sample Usage

  SELECT remove_continuous_aggregate_policy('cpu_view');

add_retention_policy() Community

Create a policy to drop chunks older than a given interval of a particular hypertable or continuous aggregate on a schedule in the background (see drop_chunks). This implements a data retention policy and will remove data on a schedule. Only one retention policy may exist per hypertable.

Required Arguments

Name Description
relation (REGCLASS) Name of the hypertable or continuous aggregate to create the policy for.
drop_after (INTERVAL or INTEGER) Chunks fully older than this interval when the policy is run will be dropped

Thedrop_after parameter should be specified differently depending on the type of the time column of the hypertable:

  • For hypertables with TIMESTAMP, TIMESTAMPTZ, and DATE time columns: the time interval should be an INTERVAL type.
  • For hypertables with integer-based timestamps: the time interval should be an integer type (this requires the integer_now_func to be set).

Optional Arguments

Name Description
if_not_exists (BOOLEAN) Set to true to avoid throwing an error if a retention policy already exists. A notice is issued instead. Defaults to false.

Returns

Column Description
job_id (INTEGER) TimescaleDB background job id created to implement this policy

Sample Usage

Create a data retention policy to discard chunks older than 6 months:

  SELECT add_retention_policy('conditions', INTERVAL '6 months');

Create a data retention policy with an integer-based time column:

  SELECT add_retention_policy('conditions', BIGINT '600000');

remove_retention_policy() Community

Remove a policy to drop chunks of a particular hypertable.

Required Arguments

Name Description
relation (REGCLASS) Name of the hypertable or continuous aggregate from which to remove the policy

Optional Arguments

Name Description
if_exists (BOOLEAN) Set to true to avoid throwing an error if the policy does not exist. Defaults to false.

Sample Usage

  SELECT remove_retention_policy('conditions');

removes the existing data retention policy for the conditions table.


add_reorder_policy() Community

Create a policy to reorder chunks on a given hypertable index in the background. (See reorder_chunk.) Only one reorder policy may exist per hypertable. Only chunks that are the 3rd from the most recent will be reordered to avoid reordering chunks that are still being inserted into.

TIP: Once a chunk has been reordered by the background worker it will not be reordered again. So if one were to insert significant amounts of data into older chunks that have already been reordered, it might be necessary to manually re-run the reorder_chunk function on older chunks (a sketch follows), or to drop and re-create the policy if many older chunks have been affected.
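A sketch of such a manual re-run (the chunk name is an illustrative assumption; show_chunks can list the real ones):

  SELECT reorder_chunk('_timescaledb_internal._hyper_1_2_chunk',
    'conditions_device_id_time_idx');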

Required Arguments

Name Description
hypertable (REGCLASS) Hypertable to create the policy for.
index_name (NAME) Existing index by which to order rows on disk.

Optional Arguments

Name Description
if_not_exists (BOOLEAN) Set to true to avoid throwing an error if the reorder_policy already exists. A notice is issued instead. Defaults to false.

Returns

Column Description
job_id (INTEGER) TimescaleDB background job id created to implement this policy

Sample Usage

  SELECT add_reorder_policy('conditions', 'conditions_device_id_time_idx');

creates a policy to reorder completed chunks by the existing (device_id, time) index. (See reorder_chunk.)


remove_reorder_policy() Community

Remove a policy to reorder a particular hypertable.

Required Arguments

Name Description
hypertable (REGCLASS) Name of the hypertable from which to remove the policy.

Optional Arguments

Name Description
if_exists (BOOLEAN) Set to true to avoid throwing an error if the policy does not exist. Defaults to false.
Sample Usage

  SELECT remove_reorder_policy('conditions', if_exists => true);

removes the existing reorder policy for the conditions table if it exists.


alter_job() Community

Actions scheduled via TimescaleDB's automation framework run periodically in a background worker. You can change the schedule of their execution using alter_job. To alter an existing job, you must refer to it by job_id. The job_id which executes a given action and its current schedule can be found either in the timescaledb_information.jobs view, which lists information about every scheduled action, or in timescaledb_information.job_stats. The job_stats view additionally contains information about when each job was last run and other useful statistics for deciding what the new schedule should be.

Required Arguments

Name Description
job_id (INTEGER) the id of the policy job being modified

Optional Arguments

Name Description
schedule_interval (INTERVAL) The interval at which the job runs
max_runtime (INTERVAL) The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped
max_retries (INTEGER) The number of times the job will be retried should it fail
retry_period (INTERVAL) The amount of time the scheduler will wait between retries of the job on failure
scheduled (BOOLEAN) Set toFALSE to exclude this job from being run as background job.
config (JSONB) Job-specific configuration (this will be passed to the function when executed)
next_start (TIMESTAMPTZ) The next time at which to run the job. The job can be paused by setting this value to ‘infinity’ (and restarted with a value of now()).
if_exists (BOOLEAN) Set to true to avoid throwing an error if the job does not exist, a notice will be issued instead. Defaults to false.

Returns

Column Description
job_id (INTEGER) the id of the job being modified
schedule_interval (INTERVAL) The interval at which the job runs
max_runtime (INTERVAL) The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped
max_retries (INTEGER) The number of times the job will be retried should it fail
retry_period (INTERVAL) The amount of time the scheduler will wait between retries of the job on failure
scheduled (BOOLEAN) True if this job will be executed by the TimescaleDB scheduler.
config (JSONB) Job-specific configuration (this will be passed to the function when executed)
next_start (TIMESTAMPTZ) The next time at which to run the job.

Sample Usage

  SELECT alter_job(1000, schedule_interval => INTERVAL '2 days');

Reschedules the job with id 1000 so that it runs every two days.

  SELECT alter_job(job_id, scheduled => false)
  FROM timescaledb_information.jobs
  WHERE proc_name = 'policy_compression' AND hypertable_name = 'conditions';

Disables scheduling of the compression policy on the hypertable conditions.

  SELECT alter_job(1015, next_start => '2020-03-15 09:00:00.0+00');

Reschedules continuous aggregate job 1015 so that the next execution of the job starts at the specified time (9:00:00 am on March 15, 2020).
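As noted in the optional arguments above, a job can be paused by pushing next_start to 'infinity' and resumed by setting it back to now():

  SELECT alter_job(1015, next_start => 'infinity');
  SELECT alter_job(1015, next_start => now());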


Analytics

first()

The first aggregate allows you to get the value of one column as ordered by another. For example, first(temperature, time) will return the earliest temperature value based on time within an aggregate group.

Required Arguments

Name Description
value The value to return (anyelement)
time The timestamp to use for comparison (TIMESTAMP/TIMESTAMPTZ or integer type)

Sample Usage

Get the earliest temperature by device_id:

  SELECT device_id, first(temp, time)
  FROM metrics
  GROUP BY device_id;

WARNING: The last and first commands do not use indexes, and instead perform a sequential scan through their groups. They are primarily used for ordered selection within a GROUP BY aggregate, and not as an alternative to an ORDER BY time DESC LIMIT 1 clause to find the latest value (which will use indexes).


histogram()

The histogram() function represents the distribution of a set of values as an array of equal-width buckets. It partitions the dataset into a specified number of buckets (nbuckets) ranging from the inputted min and max values.

The return value is an array containing nbuckets+2 buckets, with the middle nbuckets bins for values in the stated range, the first bucket at the head of the array for values under the lower min bound, and the last bucket for values greater than or equal to the max bound. Each bucket is inclusive on its lower bound, and exclusive on its upper bound. Therefore, values equal to the min are included in the bucket starting with min, but values equal to the max are in the last bucket.

Required Arguments

Name Description
value A set of values to partition into a histogram
min The histogram’s lower bound used in bucketing (inclusive)
max The histogram’s upper bound used in bucketing (exclusive)
nbuckets The integer value for the number of histogram buckets (partitions)

Sample Usage

A simple bucketing of each device's battery levels from the readings dataset.
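The original query is not shown here; a sketch along the following lines would produce output of this shape (the 20–60 bounds and 5 buckets are assumptions inferred from the 7-element arrays below):

  SELECT device_id, histogram(battery_level, 20, 60, 5)
  FROM readings
  GROUP BY device_id
  LIMIT 10;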

The expected output:

  device_id | histogram
  ------------+------------------------------
  demo000000 | {0,0,0,7,215,206,572}
  demo000001 | {0,12,173,112,99,145,459}
  demo000002 | {0,0,187,167,68,229,349}
  demo000003 | {197,209,127,221,106,112,28}
  demo000004 | {0,0,0,0,0,39,961}
  demo000005 | {12,225,171,122,233,80,157}
  demo000006 | {0,78,176,170,8,40,528}
  demo000007 | {0,0,0,126,239,245,390}
  demo000008 | {0,0,311,345,116,228,0}
  demo000009 | {295,92,105,50,8,8,442}

interpolate() Community

The interpolate function does linear interpolation for missing values. It can only be used in an aggregation query with time_bucket_gapfill. The interpolate function call cannot be nested inside other function calls.

Required Arguments

Name Description
value The value to interpolate (int2/int4/int8/float4/float8)

Optional Arguments

Name Description
prev The lookup expression for values before the gapfill time range (record)
next The lookup expression for values after the gapfill time range (record)

Because the interpolation function relies on having values before and after each bucketed period to compute the interpolated value, it might not have enough data to calculate the interpolation for the first and last time bucket if those buckets do not otherwise contain valid values. For example, the interpolation would require looking before this first time bucket period, yet the query's outer time predicate WHERE time > … normally restricts the function to only evaluate values within this time range. Thus, the prev and next expressions tell the function how to look for values outside of the range specified by the time predicate. These expressions will only be evaluated when no suitable value is returned by the outer query (i.e., the first and/or last bucket in the queried time range is empty). The returned record for prev and next needs to be a time, value tuple. The datatype of time needs to be the same as the time datatype in the time_bucket_gapfill call. The datatype of value needs to be the same as the value datatype of the interpolate call.

Sample Usage

Get the temperature every day for each device over the last week interpolating for missing readings:

  SELECT
    time_bucket_gapfill('1 day', time, now() - INTERVAL '1 week', now()) AS day,
    device_id,
    avg(temperature) AS value,
    interpolate(avg(temperature))
  FROM metrics
  WHERE time > now() - INTERVAL '1 week'
  GROUP BY day, device_id
  ORDER BY day;

  day                     | device_id | value | interpolate
  ------------------------+-----------+-------+-------------
  2019-01-10 01:00:00+01  | 1         |       |
  2019-01-11 01:00:00+01  | 1         | 5.0   | 5.0
  2019-01-12 01:00:00+01  | 1         |       | 6.0
  2019-01-13 01:00:00+01  | 1         | 7.0   | 7.0
  2019-01-14 01:00:00+01  | 1         |       | 7.5
  2019-01-15 01:00:00+01  | 1         | 8.0   | 8.0
  2019-01-16 01:00:00+01  | 1         | 9.0   | 9.0
  (7 rows)

Get the average temperature every day for each device over the last 7 days interpolating for missing readings with lookup queries for values before and after the gapfill time range:

  SELECT
    time_bucket_gapfill('1 day', time, now() - INTERVAL '1 week', now()) AS day,
    device_id,
    avg(temperature) AS value,
    interpolate(avg(temperature),
      (SELECT (time,temperature) FROM metrics m2 WHERE m2.time < now() - INTERVAL '1 week' AND m.device_id = m2.device_id ORDER BY time DESC LIMIT 1),
      (SELECT (time,temperature) FROM metrics m2 WHERE m2.time > now() AND m.device_id = m2.device_id ORDER BY time LIMIT 1)
    ) AS interpolate
  FROM metrics m
  WHERE time > now() - INTERVAL '1 week'
  GROUP BY day, device_id
  ORDER BY day;

  day                     | device_id | value | interpolate
  ------------------------+-----------+-------+-------------
  2019-01-10 01:00:00+01  | 1         |       | 3.0
  2019-01-11 01:00:00+01  | 1         | 5.0   | 5.0
  2019-01-12 01:00:00+01  | 1         |       | 6.0
  2019-01-13 01:00:00+01  | 1         | 7.0   | 7.0
  2019-01-14 01:00:00+01  | 1         |       | 7.5
  2019-01-15 01:00:00+01  | 1         | 8.0   | 8.0
  2019-01-16 01:00:00+01  | 1         | 9.0   | 9.0
  (7 rows)

last()

The last aggregate allows you to get the value of one column as ordered by another. For example, last(temperature, time) will return the latest temperature value based on time within an aggregate group.

Required Arguments

Name Description
value The value to return (anyelement)
time The timestamp to use for comparison (TIMESTAMP/TIMESTAMPTZ or integer type)

Sample Usage

Get the temperature every 5 minutes for each device over the past day:

  SELECT device_id, time_bucket('5 minutes', time) AS interval,
    last(temp, time)
  FROM metrics
  WHERE time > now() - INTERVAL '1 day'
  GROUP BY device_id, interval
  ORDER BY interval DESC;

WARNING: The last and first commands do not use indexes, and instead perform a sequential scan through their groups. They are primarily used for ordered selection within a GROUP BY aggregate, and not as an alternative to an ORDER BY time DESC LIMIT 1 clause to find the latest value (which will use indexes).


locf() Community

The locf function (last observation carried forward) allows you to carry the last seen value in an aggregation group forward. It can only be used in an aggregation query with time_bucket_gapfill. The locf function call cannot be nested inside other function calls.

Required Arguments

Name Description
value The value to carry forward (anyelement)

Optional Arguments

Name Description
prev The lookup expression for values before gapfill start (anyelement)
treat_null_as_missing Ignore NULL values in locf and only carry non-NULL values forward

Because the locf function relies on having values before each bucketed period to carry forward, it might not have enough data to fill in a value for the first bucket if it does not contain a value. For example, the function would need to look before this first time bucket period, yet the query's outer time predicate WHERE time > … normally restricts the function to only evaluate values within this time range. Thus, the prev expression tells the function how to look for values outside of the range specified by the time predicate. The prev expression will only be evaluated when no previous value is returned by the outer query (i.e., the first bucket in the queried time range is empty).

Sample Usage

Get the average temperature every day for each device over the last 7 days carrying forward the last value for missing readings:

  SELECT
    time_bucket_gapfill('1 day', time, now() - INTERVAL '1 week', now()) AS day,
    device_id,
    avg(temperature) AS value,
    locf(avg(temperature))
  FROM metrics
  WHERE time > now() - INTERVAL '1 week'
  GROUP BY day, device_id
  ORDER BY day;

  day                     | device_id | value | locf
  ------------------------+-----------+-------+------
  2019-01-10 01:00:00+01  | 1         |       |
  2019-01-11 01:00:00+01  | 1         | 5.0   | 5.0
  2019-01-12 01:00:00+01  | 1         |       | 5.0
  2019-01-13 01:00:00+01  | 1         | 7.0   | 7.0
  2019-01-14 01:00:00+01  | 1         |       | 7.0
  2019-01-15 01:00:00+01  | 1         | 8.0   | 8.0
  2019-01-16 01:00:00+01  | 1         | 9.0   | 9.0
  (7 rows)

Get the average temperature every day for each device over the last 7 days, carrying forward the last value for missing readings with an out-of-bounds lookup:

  SELECT
    time_bucket_gapfill('1 day', time, now() - INTERVAL '1 week', now()) AS day,
    device_id,
    avg(temperature) AS value,
    locf(
      avg(temperature),
      (SELECT temperature FROM metrics m2 WHERE m2.time < now() - INTERVAL '2 week' AND m.device_id = m2.device_id ORDER BY time DESC LIMIT 1)
    )
  FROM metrics m
  WHERE time > now() - INTERVAL '1 week'
  GROUP BY day, device_id
  ORDER BY day;

  day                     | device_id | value | locf
  ------------------------+-----------+-------+------
  2019-01-10 01:00:00+01  | 1         |       | 1.0
  2019-01-11 01:00:00+01  | 1         | 5.0   | 5.0
  2019-01-12 01:00:00+01  | 1         |       | 5.0
  2019-01-13 01:00:00+01  | 1         | 7.0   | 7.0
  2019-01-14 01:00:00+01  | 1         |       | 7.0
  2019-01-15 01:00:00+01  | 1         | 8.0   | 8.0
  2019-01-16 01:00:00+01  | 1         | 9.0   | 9.0
  (7 rows)

time_bucket()

This is a more powerful version of the standard PostgreSQLdate_trunc function. It allows for arbitrary time intervals instead of the second, minute, hour, etc. provided bydate_trunc. The return value is the bucket’s start time. Below is necessary information for using it effectively.

TIP: TIMESTAMPTZ arguments are bucketed by the time at UTC. So the alignment of buckets is on UTC time. One consequence of this is that daily buckets are aligned to midnight UTC, not local time.

If the user wants buckets aligned by local time, the TIMESTAMPTZ input should be cast to TIMESTAMP (such a cast converts the value to local time) before being passed to time_bucket (see example below). Note that along daylight savings time boundaries the amount of data aggregated into a bucket after such a cast is irregular: for example, if the bucket_width is 2 hours, the number of UTC hours bucketed by local time on daylight savings time boundaries can be either 3 hours or 1 hour.

Required Arguments

Name Description
bucket_width A PostgreSQL time interval for how long each bucket is (interval)
time The timestamp to bucket (timestamp/timestamptz/date)

Optional Arguments

Name Description
offset The time interval to offset all buckets by (interval)
origin Buckets are aligned relative to this timestamp (timestamp/timestamptz/date)

Required Arguments (for integer time columns)

Name Description
bucket_width The bucket width (integer)
time The timestamp to bucket (integer)

Optional Arguments

Name Description
offset The amount to offset all buckets by (integer)

Sample Usage

Simple 5-minute averaging:

  SELECT time_bucket('5 minutes', time) AS five_min, avg(cpu)
  FROM metrics
  GROUP BY five_min
  ORDER BY five_min DESC LIMIT 10;

To report the middle of the bucket, instead of the left edge:

  SELECT time_bucket('5 minutes', time) + '2.5 minutes'
    AS five_min, avg(cpu)
  FROM metrics
  GROUP BY five_min
  ORDER BY five_min DESC LIMIT 10;

For rounding, move the alignment so that the middle of the bucket is at the 5 minute mark (and report the middle of the bucket):

  SELECT time_bucket('5 minutes', time, '-2.5 minutes') + '2.5 minutes'
    AS five_min, avg(cpu)
  FROM metrics
  GROUP BY five_min
  ORDER BY five_min DESC LIMIT 10;

To shift the alignment of the buckets you can use the origin parameter (passed as a timestamp, timestamptz, or date type). In this example, we shift the start of the week to a Sunday (the default is a Monday).

  SELECT time_bucket('1 week', timetz, TIMESTAMPTZ '2017-12-31')
    AS one_week, avg(cpu)
  FROM metrics
  WHERE time > TIMESTAMPTZ '2017-12-01' AND time < TIMESTAMPTZ '2018-01-03'
  GROUP BY one_week
  ORDER BY one_week DESC LIMIT 10;

The value of the origin parameter we used in this example was 2017-12-31, a Sunday within the period being analyzed. However, the origin provided to the function can be before, during, or after the data being analyzed. All buckets are calculated relative to this origin. So, in this example, any Sunday could have been used. Note that because time < TIMESTAMPTZ '2018-01-03' in this example, the last bucket would have only 4 days of data.

Bucketing a TIMESTAMPTZ at local time instead of UTC (see note above):

  SELECT time_bucket(INTERVAL '2 hours', timetz::TIMESTAMP)
    AS two_hours, avg(cpu)
  FROM metrics
  GROUP BY two_hours
  ORDER BY two_hours DESC LIMIT 10;

Note that the above cast to TIMESTAMP converts the time to local time according to the server’s timezone setting.

WARNING:For users upgrading from a version before 1.0.0, please note that the default origin was moved from 2000-01-01 (Saturday) to 2000-01-03 (Monday) between versions 0.12.1 and 1.0.0. This change was made to make time_bucket compliant with the ISO standard for Monday as the start of a week. This should only affect multi-day calls to time_bucket. The old behavior can be reproduced by passing 2000-01-01 as the origin parameter to time_bucket.
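A sketch of reproducing the pre-1.0.0 behavior by passing the old origin explicitly:

  SELECT time_bucket('1 week', time, TIMESTAMPTZ '2000-01-01') AS week, avg(cpu)
  FROM metrics
  GROUP BY week;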


time_bucket_gapfill() Community

The time_bucket_gapfill function works similarly to time_bucket but also activates gap filling for the interval between start and finish. It can only be used with an aggregation query. Values outside of start and finish will pass through, but no gap filling will be done outside of the specified range.

Starting with version 1.3.0, start and finish are optional arguments and will be inferred from the WHERE clause if not supplied as arguments.

The time_bucket_gapfill call must be a top-level expression in a query or subquery, as shown in the above examples. You cannot, for example, do something like round(time_bucket_gapfill(...)) or cast the result of the gapfill call (unless done in a subquery where the outer query does the type cast, as sketched below).
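A sketch of the subquery pattern just mentioned, where the outer query casts the gapfill result:

  SELECT day::date, value
  FROM (
    SELECT time_bucket_gapfill('1 day', time) AS day, avg(value) AS value
    FROM metrics
    WHERE time > now() - INTERVAL '1 week' AND time < now()
    GROUP BY day
  ) AS g;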

Required Arguments

Name Description
bucket_width A PostgreSQL time interval for how long each bucket is (interval)
time The timestamp to bucket (timestamp/timestamptz/date)

Optional Arguments

Name Description
start The start of the gapfill period (timestamp/timestamptz/date)
finish The end of the gapfill period (timestamp/timestamptz/date)

Note that start and finish values, whether explicitly provided or derived from the WHERE clause, need to be simple expressions that can be evaluated to constants at query planning time. For example, such expressions can contain constants or calls to now(), but cannot reference columns of a table. A sketch follows.
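  -- Valid: constants and now() can be evaluated at planning time
  SELECT time_bucket_gapfill('1 day', time, now() - INTERVAL '1 week', now()) AS day,
    avg(value)
  FROM metrics
  WHERE time > now() - INTERVAL '1 week' AND time < now()
  GROUP BY day;
  -- Invalid: start/finish cannot reference a table column
  -- (e.g. a hypothetical created_at column would be rejected)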

Required Arguments (for integer time columns)

Name Description
bucket_width The integer interval for how long each bucket is (int2/int4/int8)
time The timestamp to bucket (int2/int4/int8)

Optional Arguments

Name Description
start The start of the gapfill period (int2/int4/int8)
finish The end of the gapfill period (int2/int4/int8)


Sample Usage

Get the metric value every day over the last 7 days:

  SELECT
    time_bucket_gapfill('1 day', time) AS day,
    device_id,
    avg(value) AS value
  FROM metrics
  WHERE time > now() - INTERVAL '1 week' AND time < now()
  GROUP BY day, device_id
  ORDER BY day;

  day                     | device_id | value
  ------------------------+-----------+-------
  2019-01-10 01:00:00+01  | 1         |
  2019-01-11 01:00:00+01  | 1         | 5.0
  2019-01-12 01:00:00+01  | 1         |
  2019-01-13 01:00:00+01  | 1         | 7.0
  2019-01-14 01:00:00+01  | 1         |
  2019-01-15 01:00:00+01  | 1         | 8.0
  2019-01-16 01:00:00+01  | 1         | 9.0
  (7 rows)

Get the metric value every day over the last 7 days, carrying forward the previously seen value if none is available in an interval:

  SELECT
    time_bucket_gapfill('1 day', time) AS day,
    device_id,
    avg(value) AS value,
    locf(avg(value))
  FROM metrics
  WHERE time > now() - INTERVAL '1 week' AND time < now()
  GROUP BY day, device_id
  ORDER BY day;

  day                     | device_id | value | locf
  ------------------------+-----------+-------+------
  2019-01-10 01:00:00+01  | 1         |       |
  2019-01-11 01:00:00+01  | 1         | 5.0   | 5.0
  2019-01-12 01:00:00+01  | 1         |       | 5.0
  2019-01-13 01:00:00+01  | 1         | 7.0   | 7.0
  2019-01-14 01:00:00+01  | 1         |       | 7.0
  2019-01-15 01:00:00+01  | 1         | 8.0   | 8.0
  2019-01-16 01:00:00+01  | 1         | 9.0   | 9.0
Get the metric value every day over the last 7 days interpolating missing values:

  SELECT
    time_bucket_gapfill('1 day', time) AS day,
    device_id,
    avg(value) AS value,
    interpolate(avg(value))
  FROM metrics
  WHERE time > now() - INTERVAL '1 week' AND time < now()
  GROUP BY day, device_id
  ORDER BY day;

  day                     | device_id | value | interpolate
  ------------------------+-----------+-------+-------------
  2019-01-10 01:00:00+01  | 1         |       |
  2019-01-11 01:00:00+01  | 1         | 5.0   | 5.0
  2019-01-12 01:00:00+01  | 1         |       | 6.0
  2019-01-13 01:00:00+01  | 1         | 7.0   | 7.0
  2019-01-14 01:00:00+01  | 1         |       | 7.5
  2019-01-15 01:00:00+01  | 1         | 8.0   | 8.0
  2019-01-16 01:00:00+01  | 1         | 9.0   | 9.0
Utilities/Statistics

timescaledb_information.data_nodes

Get information on data nodes. This view is specific to running TimescaleDB in a multi-node setup.

Available Columns

Name Description
node_name Data node name.
owner OID of the user who added the data node.
options Options used when creating the data node.

Sample Usage

Get metadata related to data nodes.

  SELECT * FROM timescaledb_information.data_nodes;
  node_name | owner    | options
  ----------+----------+------------------------------------------
  dn1       | postgres | {host=localhost,port=15431,dbname=test}
  dn2       | postgres | {host=localhost,port=15432,dbname=test}
  (2 rows)

timescaledb_information.hypertables

Get metadata information about hypertables.

Available Columns

Name Description
hypertable_schema (NAME) Schema name of the hypertable
hypertable_name (NAME) Table name of the hypertable
owner (NAME) Owner of the hypertable
num_dimensions (SMALLINT) Number of dimensions
num_chunks (BIGINT) Number of chunks
compression_enabled (BOOLEAN) Is compression enabled on the hypertable?
is_distributed (BOOLEAN) Is the hypertable distributed?
replication_factor (SMALLINT) Replication factor for a distributed hypertable
data_nodes (NAME[]) Nodes on which hypertable is distributed
tablespaces (NAME[]) Tablespaces attached to the hypertable

Sample Usage

Get information about a hypertable.

  CREATE TABLE dist_table(time timestamptz, device int, temp float);
  SELECT create_distributed_hypertable('dist_table', 'time', 'device', replication_factor => 2);
  SELECT * FROM timescaledb_information.hypertables
  WHERE hypertable_name = 'dist_table';
  -[ RECORD 1 ]-------+-----------
  hypertable_schema | public
  hypertable_name | dist_table
  owner | postgres
  num_dimensions | 2
  num_chunks | 3
  compression_enabled | f
  is_distributed | t
  data_nodes | {node_1, node_2}
  tablespaces |

timescaledb_information.dimensions

Get metadata about the dimensions of hypertables, returning one row of metadata for each dimension of a hypertable. For a time-and-space-partitioned hypertable, for example, two rows of metadata will be returned for the hypertable.

A time-based dimension column has either an integer datatype (bigint, integer, smallint) or a time related datatype (timestamptz, timestamp, date). Thetime_interval column is defined for hypertables that use time datatypes. Alternatively, for hypertables that use integer datatypes, theinteger_interval andinteger_now_func columns are defined.

For space-based dimensions, metadata is returned that specifies their number of partitions in num_partitions. The time_interval and integer_interval columns are not applicable for space-based dimensions.

Available Columns

Name Description
hypertable_schema (NAME) Schema name of the hypertable
hypertable_name (NAME) Table name of the hypertable
dimension_number (BIGINT) Dimension number of the hypertable, starting from 1
column_name (NAME) Name of the column used to create this dimension
column_type (REGTYPE) Type of the column used to create this dimension
dimension_type (TEXT) Is this time based or space based dimension?
time_interval (INTERVAL) Time interval for primary dimension if the column type is based on Postgres time datatypes
integer_interval (BIGINT) Integer interval for primary dimension if the column type is an integer datatype
integer_now_func (NAME) integer_now function for primary dimension if the column type is integer based datatype
num_partitions (SMALLINT) Number of partitions for the dimension

Sample Usage

Get information about the dimensions of hypertables.

  -- Create a time and space partitioned hypertable
  CREATE TABLE dist_table(time timestamptz, device int, temp float);
  SELECT create_hypertable('dist_table', 'time', 'device', chunk_time_interval=> INTERVAL '7 days', number_partitions=>3);
  SELECT * from timescaledb_information.dimensions
  ORDER BY hypertable_name, dimension_number;
  -[ RECORD 1 ]-----+-------------------------
  hypertable_schema | public
  hypertable_name | dist_table
  dimension_number | 1
  column_name | time
  column_type | timestamp with time zone
  dimension_type | Time
  time_interval | 7 days
  integer_interval |
  integer_now_func |
  num_partitions |
  -[ RECORD 2 ]-----+-------------------------
  hypertable_schema | public
  hypertable_name | dist_table
  dimension_number | 2
  column_name | device
  column_type | integer
  dimension_type | Space
  time_interval |
  integer_interval |
  integer_now_func |
  num_partitions | 3

Get information about the dimensions of a hypertable that has two time-based dimensions.

  CREATE TABLE hyper_2dim (a_col date, b_col timestamp, c_col integer);
  SELECT table_name from create_hypertable('hyper_2dim', 'a_col');
  SELECT add_dimension('hyper_2dim', 'b_col', chunk_time_interval=> '7 days');
  SELECT * FROM timescaledb_information.dimensions WHERE hypertable_name = 'hyper_2dim';
  -[ RECORD 1 ]-----+----------------------------
  hypertable_schema | public
  hypertable_name | hyper_2dim
  dimension_number | 1
  column_name | a_col
  column_type | date
  dimension_type | Time
  time_interval | 7 days
  integer_interval |
  integer_now_func |
  num_partitions |
  -[ RECORD 2 ]-----+----------------------------
  hypertable_schema | public
  hypertable_name | hyper_2dim
  dimension_number | 2
  column_name | b_col
  column_type | timestamp without time zone
  dimension_type | Time
  time_interval | 7 days
  integer_interval |
  integer_now_func |
  num_partitions |

timescaledb_information.chunks

Get metadata about the chunks of hypertables.

This view shows metadata for the chunk’s primary time-based dimension. For information about a hypertable’s secondary dimensions, the dimensions view should be used instead.

If the chunk’s primary dimension is of a time datatype, range_start and range_end are set. Otherwise, if the primary dimension type is integer based, range_start_integer and range_end_integer are set.

Available Columns

Name Description
hypertable_schema (NAME) Schema name of the hypertable
hypertable_name (NAME) Table name of the hypertable
chunk_schema (NAME) Schema name of the chunk
chunk_name (NAME) Name of the chunk
primary_dimension (NAME) Name of the column that is the primary dimension
primary_dimension_type (REGTYPE) Type of the column that is the primary dimension
range_start (TIMESTAMP WITH TIME ZONE) Start of the range for the chunk’s dimension
range_end (TIMESTAMP WITH TIME ZONE) End of the range for the chunk’s dimension
range_start_integer (BIGINT) Start of the range for the chunk’s dimension, if the dimension type is integer based
range_end_integer (BIGINT) End of the range for the chunk’s dimension, if the dimension type is integer based
is_compressed (BOOLEAN) Is the data in the chunk compressed? NULL for distributed chunks. Use the chunk_compression_stats() function to get compression status for distributed chunks.
chunk_tablespace (NAME) Tablespace used by the chunk
data_nodes (NAME[]) Nodes on which the chunk is replicated. This is applicable only to chunks for distributed hypertables

Sample Usage

Get information about the chunks of a hypertable.

  CREATE TABLESPACE tablespace1 LOCATION '/usr/local/pgsql/data1';
  CREATE TABLE hyper_int (a_col integer, b_col integer, c integer);
  SELECT table_name from create_hypertable('hyper_int', 'a_col', chunk_time_interval=> 10);
  CREATE OR REPLACE FUNCTION integer_now_hyper_int() returns int LANGUAGE SQL STABLE as $$ SELECT coalesce(max(a_col), 0) FROM hyper_int $$;
  SELECT set_integer_now_func('hyper_int', 'integer_now_hyper_int');
  INSERT INTO hyper_int SELECT generate_series(1,5,1), 10, 50;
  SELECT attach_tablespace('tablespace1', 'hyper_int');
  INSERT INTO hyper_int VALUES( 25 , 14 , 20), ( 25, 15, 20), (25, 16, 20);
  SELECT * FROM timescaledb_information.chunks WHERE hypertable_name = 'hyper_int';
  -[ RECORD 1 ]----------+----------------------
  hypertable_schema | public
  hypertable_name | hyper_int
  chunk_schema | _timescaledb_internal
  chunk_name | _hyper_7_10_chunk
  primary_dimension | a_col
  primary_dimension_type | integer
  range_start |
  range_end |
  range_start_integer | 0
  range_end_integer | 10
  is_compressed | f
  chunk_tablespace |
  data_nodes |
  -[ RECORD 2 ]----------+----------------------
  hypertable_schema | public
  hypertable_name | hyper_int
  chunk_schema | _timescaledb_internal
  chunk_name | _hyper_7_11_chunk
  primary_dimension | a_col
  primary_dimension_type | integer
  range_start |
  range_end |
  range_start_integer | 20
  range_end_integer | 30
  is_compressed | f
  chunk_tablespace | tablespace1
  data_nodes |


timescaledb_information.continuous_aggregates

Get metadata and settings information for continuous aggregates.

Available Columns

Name Description
hypertable_schema (NAME) Schema of the hypertable from the continuous aggregate view
hypertable_name (NAME) Name of the hypertable from the continuous aggregate view
view_schema (NAME) Schema for continuous aggregate view
view_name (NAME) User supplied name for continuous aggregate view
view_owner (NAME) Owner of the continuous aggregate view
materialized_only (BOOLEAN) Return only materialized data when querying the continuous aggregate view.
materialization_hypertable_schema (NAME) Schema of the underlying materialization table
materialization_hypertable_name (NAME) Name of the underlying materialization table
view_definition (TEXT) SELECT query for continuous aggregate view

Sample Usage

  SELECT * FROM timescaledb_information.continuous_aggregates;
  -[ RECORD 1 ]---------------------+-------------------------------------------------
  hypertable_schema | public
  hypertable_name | foo
  view_schema | public
  view_name | contagg_view
  view_owner | postgres
  materialized_only | f
  materialization_hypertable_schema | _timescaledb_internal
  materialization_hypertable_name | _materialized_hypertable_2
  view_definition | SELECT foo.a,                                  +
                  |   COUNT(foo.b) AS countb                       +
                  | FROM foo                                       +
                  | GROUP BY (time_bucket('1 day', foo.a)), foo.a;

timescaledb_information.compression_settings

Get information about compression-related settings for hypertables. Each row of the view provides information about individual orderby and segmentby columns used by compression.

Available Columns

Name Description
hypertable_schema (NAME) Schema name of the hypertable
hypertable_name (NAME) Table name of the hypertable
attname (NAME) Name of the column used in the compression settings
segmentby_column_index (SMALLINT) Position of attname in the compress_segmentby list
orderby_column_index (SMALLINT) Position of attname in the compress_orderby list
orderby_asc (BOOLEAN) True if this is used for order by ASC, False for order by DESC
orderby_nullsfirst (BOOLEAN) True if nulls are ordered first for this column, False if nulls are ordered last

Sample Usage

  CREATE TABLE hypertab (a_col integer, b_col integer, c_col integer, d_col integer, e_col integer);
  SELECT table_name FROM create_hypertable('hypertab', 'a_col');
  ALTER TABLE hypertab SET (timescaledb.compress, timescaledb.compress_segmentby = 'a_col,b_col',
    timescaledb.compress_orderby = 'c_col desc, d_col asc nulls last');
  SELECT * FROM timescaledb_information.compression_settings WHERE hypertable_name = 'hypertab';

  hypertable_schema | hypertable_name | attname | segmentby_column_index | orderby_column_index | orderby_asc | orderby_nullsfirst
  -------------------+-----------------+---------+------------------------+----------------------+-------------+--------------------
  public            | hypertab        | a_col   |                      1 |                      |             |
  public            | hypertab        | b_col   |                      2 |                      |             |
  public            | hypertab        | c_col   |                        |                    1 | f           | t
  public            | hypertab        | d_col   |                        |                    2 | t           | f
  (4 rows)

timescaledb_information.jobs

Shows information about all jobs registered with the automation framework.

Available Columns

Name Description
job_id (INTEGER) The id of the background job
application_name (NAME) Name of the policy or user defined action
schedule_interval (INTERVAL) The interval at which the job runs
max_runtime (INTERVAL) The maximum amount of time the job will be allowed to run by the background worker scheduler before it is stopped
max_retries (INTEGER) The number of times the job will be retried should it fail
retry_period (INTERVAL) The amount of time the scheduler will wait between retries of the job on failure
proc_schema (NAME) Schema name of the function or procedure executed by the job
proc_name (NAME) Name of the function or procedure executed by the job
owner (NAME) Owner of the job
scheduled (BOOLEAN) True if this job will be executed by the TimescaleDB scheduler
config (JSONB) Job-specific configuration (this will be passed to the function when executed)
next_start (TIMESTAMP WITH TIME ZONE) The next time at which to run the job
hypertable_schema (NAME) Schema name of the hypertable. NULL, if this is a user defined action.
hypertable_name (NAME) Table name of the hypertable. NULL, if this is a user defined action.

Sample Usage

Get information about jobs.

  SELECT * FROM timescaledb_information.jobs;
  -- This shows a job associated with the refresh policy for continuous aggregates
  job_id | 1001
  application_name | Refresh Continuous Aggregate Policy [1001]
  schedule_interval | 01:00:00
  max_runtime | 00:00:00
  max_retries | -1
  retry_period | 01:00:00
  proc_schema | _timescaledb_internal
  proc_name | policy_refresh_continuous_aggregate
  owner | postgres
  scheduled | t
  config | {"start_offset": "20 days", "end_offset": "10 days", "mat_hypertable_id": 2}
  next_start | 2020-10-02 12:38:07.014042-04
  hypertable_schema | _timescaledb_internal
  hypertable_name | _materialized_hypertable_2

Find all jobs related to compression policies.

  SELECT * FROM timescaledb_information.jobs where application_name like 'Compression%';
  -[ RECORD 1 ]-----+--------------------------------------------------
  job_id | 1002
  application_name | Compression Policy [1002]
  schedule_interval | 15 days 12:00:00
  max_runtime | 00:00:00
  max_retries | -1
  retry_period | 01:00:00
  proc_schema | _timescaledb_internal
  proc_name | policy_compression
  owner | postgres
  scheduled | t
  config | {"hypertable_id": 3, "compress_after": "60 days"}
  next_start | 2020-10-18 01:31:40.493764-04
  hypertable_schema | public
  hypertable_name | conditions

Find jobs that are executed by user defined actions.

  SELECT * FROM timescaledb_information.jobs where application_name like 'User-Define%';
  -[ RECORD 1 ]-----+------------------------------
  job_id | 1003
  application_name | User-Defined Action [1003]
  schedule_interval | 01:00:00
  max_runtime | 00:00:00
  max_retries | -1
  retry_period | 00:05:00
  proc_schema | public
  proc_name | custom_aggregation_func
  owner | postgres
  scheduled | t
  config | {"type": "function"}
  next_start | 2020-10-02 14:45:33.339885-04
  hypertable_schema |
  hypertable_name |
  -[ RECORD 2 ]-----+------------------------------
  job_id | 1004
  application_name | User-Defined Action [1004]
  schedule_interval | 01:00:00
  max_runtime | 00:00:00
  max_retries | -1
  retry_period | 00:05:00
  proc_schema | public
  proc_name | custom_retention_func
  owner | postgres
  scheduled | t
  config | {"type": "function"}
  next_start | 2020-10-02 14:45:33.353733-04
  hypertable_schema |
  hypertable_name |

timescaledb_information.job_stats

Shows information and statistics about jobs run by the automation framework. This includes jobs set up for user-defined actions and jobs run by policies created to manage data retention, continuous aggregates, compression, and other automation policies. The statistics include information useful for administering jobs and determining whether they ought to be rescheduled, such as when and whether the background job used to implement the policy succeeded, and when it is scheduled to run next.

Available Columns

Name Description
hypertable_schema (NAME) Schema name of the hypertable
hypertable_name (NAME) Table name of the hypertable
job_id (INTEGER) The id of the background job created to implement the policy
last_run_started_at (TIMESTAMP WITH TIME ZONE) Start time of the last job
last_successful_finish (TIMESTAMP WITH TIME ZONE) Time when the job completed successfully
last_run_status (TEXT) Whether the last run succeeded or failed
job_status (TEXT) Status of the job. Valid values are ‘Running’, ‘Scheduled’ and ‘Paused’
last_run_duration (INTERVAL) Duration of last run of the job
next_scheduled_run (TIMESTAMP WITH TIME ZONE) Start time of the next run
total_runs (BIGINT) The total number of runs of this job
total_successes (BIGINT) The total number of times this job succeeded
total_failures (BIGINT) The total number of times this job failed

Sample Usage

Get job success/failure information for a specific hypertable.

  SELECT job_id, total_runs, total_failures, total_successes
  FROM timescaledb_information.job_stats
  WHERE hypertable_name = 'test_table';
  job_id | total_runs | total_failures | total_successes
  --------+------------+----------------+-----------------
  1001   | 1          | 0              | 1
  1004   | 1          | 0              | 1
  (2 rows)

Get information about continuous aggregate policy-related statistics.

  SELECT js.* FROM
    timescaledb_information.job_stats js, timescaledb_information.continuous_aggregates cagg
  WHERE cagg.view_name = 'max_mat_view_timestamp'
    AND cagg.materialization_hypertable_name = js.hypertable_name;
  -[ RECORD 1 ]----------+------------------------------
  hypertable_schema | _timescaledb_internal
  hypertable_name | _materialized_hypertable_2
  job_id | 1001
  last_run_started_at | 2020-10-02 09:38:06.871953-04
  last_successful_finish | 2020-10-02 09:38:06.932675-04
  last_run_status | Success
  job_status | Scheduled
  last_run_duration | 00:00:00.060722
  next_scheduled_run | 2020-10-02 10:38:06.932675-04
  total_runs | 1
  total_successes | 1
  total_failures | 0

get_telemetry_report()

If background telemetry is enabled, returns the string sent to our servers. If telemetry is not enabled, outputs an INFO message affirming telemetry is disabled and returns a NULL report.

Optional Arguments

Name Description
always_display_report Set to true to always view the report, even if telemetry is disabled

Sample Usage

If telemetry is enabled, view the telemetry report.

  SELECT get_telemetry_report();

If telemetry is disabled, view the telemetry report locally.

  SELECT get_telemetry_report(always_display_report := true);

approximate_row_count()

Get the approximate row count for a hypertable, distributed hypertable, or regular PostgreSQL table based on catalog estimates. This function supports tables with nested inheritance and declarative partitioning.

The accuracy of approximate_row_count depends on the database having up-to-date statistics about the table or hypertable, which are updated by VACUUM, ANALYZE, and a few DDL commands. If you have auto-vacuum configured on your table or hypertable, or changes to the table are relatively infrequent, you might not need to explicitly ANALYZE your table as shown below. Otherwise, if your table statistics are too out-of-date, running this command will update your statistics and yield more accurate approximation results.

Required Arguments

Name Description
relation Hypertable or regular PostgreSQL table to get row count for.

Sample Usage

Get the approximate row count for a single hypertable.

  ANALYZE conditions;
  SELECT * FROM approximate_row_count('conditions');

The expected output:

  approximate_row_count
  ----------------------
  240000

hypertable_compression_stats() Community

Get statistics related to hypertable compression. All sizes are in bytes.

Required Arguments

Name Description
hypertable (REGCLASS) Hypertable to show stats for.

Returns

Column Description
total_chunks (BIGINT) The number of chunks used by the hypertable
number_compressed_chunks (INTEGER) The number of chunks used by the hypertable that are currently compressed
before_compression_table_bytes (BIGINT) Size of the heap before compression (NULL if currently uncompressed)
before_compression_index_bytes (BIGINT) Size of all the indexes before compression (NULL if currently uncompressed)
before_compression_toast_bytes (BIGINT) Size of the TOAST table before compression (NULL if currently uncompressed)
before_compression_total_bytes (BIGINT) Size of the entire table (table+indexes+toast) before compression (NULL if currently uncompressed)
after_compression_table_bytes (BIGINT) Size of the heap after compression (NULL if currently uncompressed)
after_compression_index_bytes (BIGINT) Size of all the indexes after compression (NULL if currently uncompressed)
after_compression_toast_bytes (BIGINT) Size of the TOAST table after compression (NULL if currently uncompressed)
after_compression_total_bytes (BIGINT) Size of the entire table (table+indexes+toast) after compression (NULL if currently uncompressed)
node_name (NAME) Node on which the hypertable is located; applicable only to distributed hypertables

Sample Usage

  SELECT * FROM hypertable_compression_stats('conditions');

  -[ RECORD 1 ]------------------+------
  total_chunks                   | 4
  number_compressed_chunks       | 1
  before_compression_table_bytes | 8192
  before_compression_index_bytes | 32768
  before_compression_toast_bytes | 0
  before_compression_total_bytes | 40960
  after_compression_table_bytes  | 8192
  after_compression_index_bytes  | 32768
  after_compression_toast_bytes  | 8192
  after_compression_total_bytes  | 49152
  node_name                      |

Use pg_size_pretty to get the output in a more human-friendly format.

  SELECT pg_size_pretty(after_compression_total_bytes) AS total
  FROM hypertable_compression_stats('conditions');

  -[ RECORD 1 ]--+------
  total          | 48 kB
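The before/after totals can also be combined into a rough compression ratio. A minimal sketch; NULLIF guards against division by zero when no chunk is compressed yet:

  -- bytes before compression divided by bytes after compression
  SELECT before_compression_total_bytes::numeric
         / NULLIF(after_compression_total_bytes, 0) AS compression_ratio
  FROM hypertable_compression_stats('conditions');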

chunk_compression_stats() Community

Get chunk specific statistics related to hypertable compression. All sizes are in bytes.

Required Arguments

Name Description
hypertable (REGCLASS) Name of the hypertable

Returns

Column Description
chunk_schema (NAME) Schema name of the chunk
chunk_name (NAME) Name of the chunk
compression_status (TEXT) Current compression status of the chunk
before_compression_table_bytes (BIGINT) Size of the heap before compression (NULL if currently uncompressed)
before_compression_index_bytes (BIGINT) Size of all the indexes before compression (NULL if currently uncompressed)
before_compression_toast_bytes (BIGINT) Size of the TOAST table before compression (NULL if currently uncompressed)
before_compression_total_bytes (BIGINT) Size of the entire chunk table (table+indexes+toast) before compression (NULL if currently uncompressed)
after_compression_table_bytes (BIGINT) Size of the heap after compression (NULL if currently uncompressed)
after_compression_index_bytes (BIGINT) Size of all the indexes after compression (NULL if currently uncompressed)
after_compression_toast_bytes (BIGINT) Size of the TOAST table after compression (NULL if currently uncompressed)
after_compression_total_bytes (BIGINT) Size of the entire chunk table (table+indexes+toast) after compression (NULL if currently uncompressed)
node_name (NAME) Node on which the chunk is located; applicable only to distributed hypertables

Sample Usage

  SELECT * FROM chunk_compression_stats('conditions')
  ORDER BY chunk_name LIMIT 2;

  -[ RECORD 1 ]------------------+----------------------
  chunk_schema                   | _timescaledb_internal
  chunk_name                     | _hyper_1_1_chunk
  compression_status             | Uncompressed
  before_compression_table_bytes |
  before_compression_index_bytes |
  before_compression_toast_bytes |
  before_compression_total_bytes |
  after_compression_table_bytes  |
  after_compression_index_bytes  |
  after_compression_toast_bytes  |
  after_compression_total_bytes  |
  node_name                      |
  -[ RECORD 2 ]------------------+----------------------
  chunk_schema                   | _timescaledb_internal
  chunk_name                     | _hyper_1_2_chunk
  compression_status             | Compressed
  before_compression_table_bytes | 8192
  before_compression_index_bytes | 32768
  before_compression_toast_bytes | 0
  before_compression_total_bytes | 40960
  after_compression_table_bytes  | 8192
  after_compression_index_bytes  | 32768
  after_compression_toast_bytes  | 8192
  after_compression_total_bytes  | 49152
  node_name                      |

Use pg_size_pretty to get the output in a more human-friendly format.

  SELECT pg_size_pretty(after_compression_total_bytes) AS total
  FROM chunk_compression_stats('conditions')
  WHERE compression_status = 'Compressed';

  -[ RECORD 1 ]--+------
  total          | 48 kB
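The per-chunk statistics can also be aggregated. For instance, a minimal sketch that sums the space saved across all compressed chunks of a hypertable:

  -- total bytes saved by compression, pretty-printed
  SELECT pg_size_pretty(sum(before_compression_total_bytes
                            - after_compression_total_bytes)) AS total_saved
  FROM chunk_compression_stats('conditions')
  WHERE compression_status = 'Compressed';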

hypertable_detailed_size()

Get the size of a hypertable, like pg_relation_size(hypertable), returning size information for the table itself, any indexes on the table, any TOAST tables, and the total size of all. All sizes are reported in bytes. If this is a distributed hypertable, the function returns size information as a separate row per node.

Required Arguments

Name Description
hypertable (REGCLASS) Hypertable to show detailed size of.

Returns

Column Description
table_bytes (BIGINT) Disk space used by main_table (like pg_relation_size(main_table))
index_bytes (BIGINT) Disk space used by indexes
toast_bytes (BIGINT) Disk space used by TOAST tables
total_bytes (BIGINT) Total disk space used by the specified table, including all indexes and TOAST data
node_name (NAME) Node for which size is reported, applicable only to distributed hypertables

Sample Usage

Get size information for a hypertable.

  -- disttable is a distributed hypertable
  SELECT * FROM hypertable_detailed_size('disttable') ORDER BY node_name;

   table_bytes | index_bytes | toast_bytes | total_bytes | node_name
  -------------+-------------+-------------+-------------+-------------
         16384 |       32768 |           0 |       49152 | data_node_1
          8192 |       16384 |           0 |       24576 | data_node_2
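As with the compression statistics, the byte counts can be wrapped in pg_size_pretty for readability. A minimal sketch:

  -- per-node totals in human-readable form
  SELECT node_name, pg_size_pretty(total_bytes) AS total
  FROM hypertable_detailed_size('disttable')
  ORDER BY node_name;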

chunks_detailed_size()

Get size information about the chunks belonging to a hypertable, returning size information for each chunk table itself, any indexes on the chunk, any toast tables, and the total size associated with the chunk. All sizes are reported in bytes.

If this is a distributed hypertable, the function returns size information as a separate row per node.

Additional metadata associated with a chunk can be accessed via the timescaledb_information.chunks view.

Required Arguments

Name Description
hypertable (REGCLASS) Name of the hypertable

Returns

Column Description
chunk_schema (NAME) Schema name of the chunk
chunk_name (NAME) Name of the chunk
table_bytes (BIGINT) Disk space used by the chunk table
index_bytes (BIGINT) Disk space used by indexes
toast_bytes (BIGINT) Disk space of toast tables
total_bytes (BIGINT) Total disk space used by the chunk, including all indexes and TOAST data
node_name (NAME) Node for which size is reported, applicable only to distributed hypertables

Sample Usage

  SELECT * FROM chunks_detailed_size('dist_table')
  ORDER BY chunk_name, node_name;

  chunk_schema          | chunk_name            | table_bytes | index_bytes | toast_bytes | total_bytes | node_name
  ----------------------+-----------------------+-------------+-------------+-------------+-------------+-----------
  _timescaledb_internal | _dist_hyper_1_1_chunk |        8192 |       32768 |           0 |       40960 | db_node1
  _timescaledb_internal | _dist_hyper_1_2_chunk |        8192 |       32768 |           0 |       40960 | db_node2
  _timescaledb_internal | _dist_hyper_1_3_chunk |        8192 |       32768 |           0 |       40960 | db_node3
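This output is handy for finding outliers. For example, a minimal sketch that lists the three largest chunks:

  -- three largest chunks by total size
  SELECT chunk_name, pg_size_pretty(total_bytes) AS size
  FROM chunks_detailed_size('dist_table')
  ORDER BY total_bytes DESC
  LIMIT 3;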

hypertable_size()

Get the total size of a hypertable, that is, the sum of the size of the table itself, any indexes on the table, and any TOAST tables. The size is reported in bytes. This is equivalent to summing the total_bytes column from the output of the hypertable_detailed_size function.

Required Arguments

Name Description
hypertable (REGCLASS) Hypertable to show size of.

Returns

(BIGINT) Total disk space used by the specified table, including all indexes and TOAST data

Sample Usage

Get size information for a hypertable.

  SELECT hypertable_size('devices');

   hypertable_size
  -----------------
             73728
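The scalar result combines naturally with pg_size_pretty:

  SELECT pg_size_pretty(hypertable_size('devices'));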

hypertable_index_size()

Get size of an index on a hypertable. The size is reported in bytes.

Required Arguments

Name Description
index_name (REGCLASS) Name of the index on a hypertable to show the size of.

Returns

(BIGINT) Returns disk space used by the index.

Sample Usage

Get size of a specific index on a hypertable.

  \d test_table
                       Table "public.test_table"
   Column |           Type           | Collation | Nullable | Default
  --------+--------------------------+-----------+----------+---------
   time   | timestamp with time zone |           | not null |
   device | integer                  |           |          |
   volume | integer                  |           |          |
  Indexes:
      "second_index" btree ("time")
      "test_table_time_idx" btree ("time" DESC)
      "third_index" btree ("time")

  SELECT hypertable_index_size('second_index');

   hypertable_index_size
  -----------------------
                  163840

  SELECT pg_size_pretty(hypertable_index_size('second_index'));

   pg_size_pretty
  ----------------
   160 kB

show_tablespaces()

Show the tablespaces attached to a hypertable.

Required Arguments

Name Description
hypertable (REGCLASS) Hypertable to show attached tablespaces for.

Sample Usage

  SELECT * FROM show_tablespaces('conditions');

   show_tablespaces
  ------------------
   disk1
   disk2
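Tablespaces are attached to a hypertable with attach_tablespace, documented elsewhere in this reference. A minimal sketch, assuming a tablespace named disk1 has already been created:

  -- attach an existing tablespace to the hypertable
  SELECT attach_tablespace('disk1', 'conditions');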

timescaledb_pre_restore()

Perform the proper operations to allow restoring of the database via pg_restore to commence. Specifically, this sets the timescaledb.restoring GUC to on and stops any background workers that might otherwise interfere; the workers remain stopped until timescaledb_post_restore is run following the restore. See the backup/restore docs for more information.

WARNING: Using this function when doing an upgrade could cause issues in TimescaleDB versions before 1.7.1.

WARNING: After running SELECT timescaledb_pre_restore() you must run timescaledb_post_restore() before using the database normally.

Sample Usage

  SELECT timescaledb_pre_restore();

timescaledb_post_restore()

Perform the proper operations after a database restore has completed. Specifically, this resets the timescaledb.restoring GUC and restarts any background workers. See the backup/restore docs for more information.

Sample Usage

  SELECT timescaledb_post_restore();
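Taken together, a typical restore sequence looks like the following sketch; the database name tsdb and the dump file tsdb.bak are placeholders:

  # halt background workers and set timescaledb.restoring to on
  psql -d tsdb -c "SELECT timescaledb_pre_restore();"
  # restore from a custom-format dump
  pg_restore -Fc -d tsdb tsdb.bak
  # restart background workers and reset timescaledb.restoring
  psql -d tsdb -c "SELECT timescaledb_post_restore();"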

timescaledb.license

View or set currently used TimescaleDB license.

It is possible to limit access to features by license by changing the timescaledb.license setting in the server configuration file or on the server command line. For instance, by setting timescaledb.license to apache, it is only possible to use TimescaleDB features that are implemented under the Apache 2 license. The default value is timescale, however, which allows access to all features.

Sample Usage

View current license.

  SHOW timescaledb.license;

   timescaledb.license
  ---------------------
   timescale

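To restrict the server to Apache-2-licensed features, the setting can be changed in the configuration file. A minimal sketch; depending on your TimescaleDB version, the change may only take effect at server start:

  # postgresql.conf
  timescaledb.license = 'apache'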
Dump TimescaleDB meta data

To help when asking for support and reporting bugs, TimescaleDB includes a SQL script that outputs metadata from the internal TimescaleDB tables as well as version information. The script is available in the source distribution in scripts/ but can also be downloaded separately. To use it, run:

  psql [your connect flags] -d your_timescale_db < dump_meta_data.sql > dumpfile.txt

and then inspect dumpfile.txt before sending it together with a bug report or support question.