13.35. Release 0.208

    Warning: This release has the potential for data loss in the Hive connector when writing bucketed sorted tables.

General Changes

    • Fix an issue with memory accounting that would lead to garbage collection pauses and out of memory exceptions.
    • Fix an issue that produces incorrect results when push_aggregation_through_join is enabled (#10724).
    • Make the cluster out of memory killer more resilient to memory accounting leaks. Previously, memory accounting leaks on the workers could effectively disable the out of memory killer.
    • Improve planning time for queries over tables with high column count.
    • Add a limit on the number of stages in a query. The default is 100 and can be changed with the query.max-stage-count configuration property and the query_max_stage_count session property (see the example below).
    • Add spooky_hash_v2_32() and spooky_hash_v2_64() functions.
    • Add a cluster memory leak detector that logs queries that have possibly accounted for memory usage incorrectly on workers. This is a tool for debugging internal errors.
    • Add support for correlated subqueries requiring coercions.
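
    For example, a query that exceeds the stage limit can still be run by raising the limit for a single session, either with SET SESSION query_max_stage_count from any client or, as a minimal sketch, with the CLI's --session option (the value 150 and the server URL are placeholders):

        # Start the CLI with a higher stage limit for this session only
        presto --server localhost:8080 --session query_max_stage_count=150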

CLI Changes

    • Fix creation of the history file when it does not exist.
    • Add the PRESTO_HISTORY_FILE environment variable to override the location of the history file (see the example below).
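
    For example, assuming the CLI executable is installed as presto, the history location can be overridden before starting the client (the file name and server URL are placeholders):

        # Keep command history in a dedicated file instead of the default location
        export PRESTO_HISTORY_FILE="$HOME/.presto_history_prod"
        presto --server localhost:8080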

Hive Changes

    • Remove size limit for writing bucketed sorted tables.
    • Support writer scaling for Parquet.
    • Improve stripe size estimation for the optimized ORC writer. This reduces the number of cases where tiny ORC stripes will be written.
    • Provide the actual size of CHAR, VARCHAR, and VARBINARY columns to the cost based optimizer.

Thrift Connector Changes

    • Include error message from remote server in query failure message.