Downsample data with InfluxDB
This article walks through creating a continuous-query-like task that downsamples data by aggregating it within windows of time, then storing the aggregated values in a new bucket.
To perform a downsampling task, you need the following:

- A “source” bucket: the bucket from which data is queried.
- A “destination” bucket: a separate bucket where aggregated, downsampled data is stored.
- Some type of aggregation: to downsample data, it must be aggregated in some way. The aggregation method depends on your use case, but common examples include mean, median, top, and bottom. View Flux’s aggregate functions for more information and ideas.
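For example, a query along the following lines aggregates data into windows using `mean()`; other aggregate functions such as `median()` could be substituted in the same position. The bucket name here is a placeholder.

```js
from(bucket: "example-bucket")
    |> range(start: -1d)
    |> filter(fn: (r) => r._measurement == "mem")
    // Aggregate each series into 1 hour windows using the mean
    |> aggregateWindow(every: 1h, fn: mean)
```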
The example task below demonstrates a very basic form of data downsampling. It does the following:

- Defines a variable that represents all data from the last 2 weeks in the `mem` measurement of the source bucket.
- Uses `aggregateWindow()` to window the data into 1 hour intervals and calculate the average of each interval.
- Stores the aggregated data in the destination bucket under the `my-org` organization.
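A minimal task script along these lines might look like the following sketch. The task name, run frequency (`every: 1w`), and bucket names (`example-source-bucket`, `example-destination-bucket`) are placeholder assumptions; substitute values for your own setup.

```js
// Task options: placeholder name, runs once a week
option task = {name: "downsample-mem-example", every: 1w}

// Query all data from the last 2 weeks in the "mem" measurement
// of the source bucket (placeholder name)
data = from(bucket: "example-source-bucket")
    |> range(start: -2w)
    |> filter(fn: (r) => r._measurement == "mem")

data
    // Window the data into 1 hour intervals and average each interval
    |> aggregateWindow(every: 1h, fn: mean)
    // Store the aggregated data in the destination bucket (placeholder name)
    |> to(bucket: "example-destination-bucket", org: "my-org")
```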
Again, this is a very basic example, but it should provide you with a foundation to build more complex downsampling tasks.
- If running a task against a bucket with a finite retention policy, do not schedule tasks to run too closely to the end of the retention policy. Always provide a “cushion” for downsampling tasks to complete before the data is dropped by the retention policy.
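As a sketch of that “cushion”, assume a hypothetical 48 hour retention period on the source bucket. A task that runs every 6 hours and queries only the previous 6 hours stays well inside the retention window; the bucket names remain placeholders.

```js
// Placeholder task name; runs every 6 hours
option task = {name: "downsample-mem-6h", every: 6h}

from(bucket: "example-source-bucket")
    // Query only the previous 6 hours. With a hypothetical 48h retention
    // period, the oldest queried data is far from being dropped, leaving
    // a wide cushion for the task to complete.
    |> range(start: -task.every)
    |> filter(fn: (r) => r._measurement == "mem")
    |> aggregateWindow(every: 1h, fn: mean)
    |> to(bucket: "example-destination-bucket", org: "my-org")
```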