In this tutorial, you will learn how to create tables, populate them with procedurally generated sample data, and run simple queries against that data.

As an example, we will look at hypothetical temperature readings from a variety of sensors.

:::info

All commands are run through the Web Console, accessible by default at http://localhost:9000.

You can also run the same SQL via the Postgres endpoint or the REST API.

:::

The first step is to create tables. One will contain the metadata of our sensors, the other will contain the readings from these sensors.

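A minimal sketch of the sensors table, assuming only the three columns populated by the `INSERT` further down (`ID`, `make`, and `city`); the column types here are an assumption:

```questdb-sql
-- Sensor metadata table; column types are assumptions
CREATE TABLE sensors (ID LONG, make STRING, city STRING);
```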
For more information about this statement, please refer to the CREATE TABLE reference documentation.

Let’s populate our sensors table with procedurally-generated data:

```questdb-sql
INSERT INTO sensors
SELECT
    x ID, -- increasing integer
    rnd_str('Eberle', 'Honeywell', 'Omron', 'United Automation', 'RS Pro') make,
    rnd_str('New York', 'Miami', 'Boston', 'Chicago', 'San Francisco') city
FROM long_sequence(10000) x;
```

For more information about this statement, please refer to the INSERT reference documentation. For details on the functions used here, please refer to the random generator and row generator function pages.

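To get a feel for these generator functions on their own, a small standalone query can be run; the values will differ on each run since they are random:

```questdb-sql
-- Pair an increasing integer with a randomly picked make
SELECT x, rnd_str('Eberle', 'Honeywell', 'Omron') make
FROM long_sequence(5);
```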
Our sensors table now contains 10,000 sensors with randomly generated makes and cities.

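To inspect a few of these rows, any simple selection works; for example (the `LIMIT` clause is only there to keep the output small):

```questdb-sql
-- Peek at the first few generated sensors
SELECT * FROM sensors LIMIT 5;
```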
Let’s now create some sensor readings. In this case, we will generate the table and the data at the same time:

```questdb-sql
CREATE TABLE readings
AS(
    SELECT
        x ID,
        timestamp_sequence(to_timestamp('2019-10-17T00:00:00', 'yyyy-MM-ddTHH:mm:ss'), rnd_long(1,10,2) * 100000L) ts,
        rnd_double(0)*8 + 15 temp, -- temperature column (assumed form) yielding values between 15 and 23, as seen in the sample output below
        rnd_long(0, 10000, 0) sensorId
    FROM long_sequence(10000000) x)
TIMESTAMP(ts)
PARTITION BY MONTH;
```

While creating this table, we did the following:

- `TIMESTAMP(ts)` elected `ts` as the designated timestamp. This enables time-based partitioning and efficient queries over time intervals (see the sketch after this list).
- `PARTITION BY MONTH` created a monthly partition strategy. Our data will be partitioned into monthly files on disk.

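Because `ts` is the designated timestamp, filters on time intervals can skip entire partitions. A rough sketch, assuming QuestDB's timestamp `IN` interval syntax:

```questdb-sql
-- Count readings from a single day; only the October 2019 partition needs to be read
SELECT count() FROM readings
WHERE ts IN '2019-10-17';
```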
The generated data will look like the below:

| ID | ts | temp | sensorId |
| --- | --- | --- | --- |
| 1 | 2019-10-17T00:00:00.000000Z | 19.37373911 | 9160 |
| 2 | 2019-10-17T00:00:00.600000Z | 21.91184617 | 9671 |
| 3 | 2019-10-17T00:00:01.400000Z | 16.58367834 | 8731 |
| 4 | 2019-10-17T00:00:01.500000Z | 16.69308815 | 3447 |
| 5 | 2019-10-17T00:00:01.600000Z | 19.67991569 | 7985 |

Let’s also select the count of records from readings:

```questdb-sql
SELECT count() FROM readings;
```

and the average reading:

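A minimal sketch of that query, assuming the `average` alias shown in the result below:

```questdb-sql
-- Average temperature across all readings
SELECT avg(temp) average FROM readings;
```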
| average |
| --- |
| 18.997 |

We can now leverage our sensors table to get more interesting data, for example the maximum temperature recorded in each city:

```questdb-sql
SELECT city, max(temp)
FROM readings
JOIN(
    SELECT ID sensId, city
    FROM sensors)
ON readings.sensorId = sensId;
```

Results should look like the data below:

| city | max |
| --- | --- |
| Boston | 22.99999233 |
| New York | 22.99999631 |
| Miami | 22.99999673 |
| San Francisco | 22.99999531 |
| Chicago | 22.9999988 |

Let's now combine the join with `SAMPLE BY` to get the hourly average temperature for Omron sensors located in Miami:

```questdb-sql
SELECT ts, city, make, avg(temp)
FROM readings
JOIN (
    SELECT ID sensId, city, make
    FROM sensors
    WHERE city='Miami' AND make='Omron')
ON readings.sensorId = sensId
SAMPLE BY 1h;
```

Results should look like the data below:

For more information about these statements, please refer to the SAMPLE BY and JOIN pages.