Segment metadata queries return per-segment information about:

  • Number of rows stored inside the segment
  • Interval the segment covers
  • Estimated total segment byte size as if it were stored in a ‘flat format’ (e.g. a CSV file)
  • Segment id
  • Whether the segment is rolled up
  • Detailed per column information such as:
    • type
    • cardinality
    • min/max values
    • presence of null values
    • estimated ‘flat format’ byte size
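
An example segment metadata query (the datasource name and interval below are illustrative):

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "sample_datasource",
  "intervals": ["2013-01-01/2014-01-01"]
}
```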

There are several main parts to a segment metadata query: the required queryType and dataSource properties, plus the optional intervals, toInclude, merge, analysisTypes, and lenientAggregatorMerge properties described in the sections below.

The format of the result is:
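An abbreviated, illustrative response is sketched below (all identifiers and values are made up; when merge is enabled there is a single merged entry rather than one entry per segment):

```json
[ {
  "id" : "some_segment_id",
  "intervals" : [ "2013-05-13T00:00:00.000Z/2013-05-14T00:00:00.000Z" ],
  "columns" : {
    "__time" : { "type" : "LONG", "hasMultipleValues" : false, "size" : 407240380, "cardinality" : null, "errorMessage" : null },
    "dim1" : { "type" : "STRING", "hasMultipleValues" : false, "size" : 100000, "cardinality" : 1944, "errorMessage" : null },
    "metric1" : { "type" : "FLOAT", "hasMultipleValues" : false, "size" : 100000, "cardinality" : null, "errorMessage" : null }
  },
  "aggregators" : {
    "metric1" : { "type" : "longSum", "name" : "metric1", "fieldName" : "metric1" }
  },
  "queryGranularity" : { "type" : "none" },
  "rollup" : true,
  "size" : 300000,
  "numRows" : 5000000
} ]
```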

Dimension columns will have type STRING, FLOAT, DOUBLE, or LONG. Metric columns will have type FLOAT, DOUBLE, or LONG, or the name of the underlying complex type, such as hyperUnique in the case of a COMPLEX metric. The timestamp column will have type LONG.

If the errorMessage field is non-null, you should not trust the other fields in the response. Their contents are undefined.

Only columns which are dictionary encoded (i.e., have type STRING) will have a cardinality. The rest of the columns (timestamp and metric columns) will report cardinality as null.

intervals

If an interval is not specified, the query will use a default interval that spans a configurable period before the end time of the most recent segment. The length of this default time period is set in the Broker configuration via: druid.query.segmentMetadata.defaultHistory
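To override the default, pass intervals explicitly in the query (the range below is illustrative):

```json
"intervals": ["2013-01-01/2014-01-01"]
```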

toInclude

There are 3 types of toInclude objects.

All

The grammar is as follows:
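```json
"toInclude": { "type": "all" }
```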

None

The grammar is as follows:
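```json
"toInclude": { "type": "none" }
```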

List

The grammar is as follows:
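```json
"toInclude": { "type": "list", "columns": ["dim1", "dim2"] }
```

where columns lists the column names to include; the names above are illustrative.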

analysisTypes

This is a list of properties that determines the amount of information returned about the columns, i.e. the analyses to be performed on the columns.

By default, the “cardinality”, “interval”, and “minmax” types will be used. If a property is not needed, omitting it from this list will result in a more efficient query.

The default analysis types can be set in the Broker configuration via: druid.query.segmentMetadata.defaultAnalysisTypes
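As an illustrative sketch, a query that requests only the cardinality and size analyses, merged across segments (datasource name and interval are illustrative):

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "sample_datasource",
  "intervals": ["2013-01-01/2014-01-01"],
  "merge": true,
  "analysisTypes": ["cardinality", "size"]
}
```

The types of column analyses are described below.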

cardinality

  • cardinality in the result will return the estimated floor of cardinality for each column. Only relevant for dimension columns.

Druid examines the size of string column dictionaries to compute the cardinality value. There is one dictionary per column per segment. If merge is off (false), this reports the cardinality of each column of each segment individually. If merge is on (true), this reports the highest cardinality encountered for a particular column across all relevant segments.

minmax

  • Estimated min/max values for each column. Only reported for string columns.

size

  • size in the result will contain the estimated total byte size as if the data were stored in text format. This is not the actual storage size of the column in Druid. If you want the actual storage size in bytes of a segment, here are some pointers:

  • To get the storage size in bytes of an entire segment, check the size field in the sys.segments table. This is the size of the memory-mappable content. (See the example query after this list.)

  • To get the storage size in bytes of a particular column in a particular segment, unpack the segment and look at the meta.smoosh file inside the archive. The difference between the third and fourth columns is the size in bytes. Currently, there is no API for retrieving this information.
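For example, a sketch of such a lookup through Druid's SQL HTTP API (POST to /druid/v2/sql/ on the Broker; the datasource name is illustrative):

```json
{
  "query": "SELECT \"segment_id\", \"size\" FROM sys.segments WHERE \"datasource\" = 'sample_datasource'"
}
```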
interval

  • intervals in the result will contain the list of intervals associated with the queried segments.

timestampSpec

  • timestampSpec in the result will contain the timestampSpec of the data stored in segments. This can be null if the timestampSpec of the segments was unknown or unmergeable (if merging is enabled).

queryGranularity

  • queryGranularity in the result will contain the query granularity of the data stored in segments. This can be null if the query granularity of the segments was unknown or unmergeable (if merging is enabled).

aggregators

  • aggregators in the result will contain the list of aggregators usable for querying metric columns. This may be null if the aggregators are unknown or unmergeable (if merging is enabled).

  • Merging can be strict or lenient. See lenientAggregatorMerge below for details.

  • The form of the result is a map of column name to aggregator.
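For example, a merged result might take this shape (column and aggregator names are illustrative):

```json
"aggregators" : {
  "metric1" : { "type" : "longSum", "name" : "metric1", "fieldName" : "metric1" },
  "metric2" : { "type" : "doubleSum", "name" : "metric2", "fieldName" : "metric2" }
}
```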

rollup

  • rollup in the result is true/false/null.
  • When merging is enabled, if some segments are rolled up and others are not, the result is null.

lenientAggregatorMerge

Conflicts between aggregator metadata across segments can occur if some segments have unknown aggregators, or if two segments use incompatible aggregators for the same column (e.g. longSum changed to doubleSum).

Aggregators can be merged strictly (the default) or leniently. With strict merging, if there are any segments with unknown aggregators, or any conflicts of any kind, the merged aggregators list will be null. With lenient merging, segments with unknown aggregators will be ignored, and conflicts between aggregators will only null out the aggregator for that particular column.

In particular, with lenient merging, it is possible for an individual column’s aggregator to be null. This will not occur with strict merging.
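A query opting into lenient merging might look like this sketch (datasource name is illustrative; lenientAggregatorMerge is only meaningful when the aggregators analysis is requested and merge is enabled):

```json
{
  "queryType": "segmentMetadata",
  "dataSource": "sample_datasource",
  "merge": true,
  "analysisTypes": ["aggregators"],
  "lenientAggregatorMerge": true
}
```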