In the first phase, the optimizer ignores the physical distribution of the data and generates the optimal execution plan purely through local relational optimization. After the local plan is generated, the optimizer checks whether the query accesses more than one partition, or whether it accesses only a single local partition but the user has forced parallel query execution with a hint.
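
For the second case, a minimal sketch; table t1 is hypothetical, and the parallel hint is the same one used in the examples later in this section:

    -- t1 is assumed to be a local, single-partition table. Even though the
    -- query touches only one partition, the parallel hint forces the
    -- optimizer into the second phase so that a parallel (PX) plan is built.
    explain select /*+ parallel(4) */ * from t1\G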

In the second phase, the distributed plan is generated. Based on the execution plan tree, exchange nodes are inserted wherever the data needs to be redistributed, turning the original local plan tree into a distributed plan.

Generating the distributed plan is therefore the process of finding the right places in the original plan tree to insert exchange operators. While traversing the plan tree top-down, the optimizer decides whether an exchange operator is needed based on how each operator processes data and on the data partitioning of its input operators.

When t2 is a partitioned table, a pair of exchange operators can be inserted above the table scan, so that the table scan and the exchange out are packaged into a single job that can be executed in parallel.
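
A minimal sketch of this setup; the definition of t2 is an assumption (hash partitioning on v1 with 4 partitions is inferred from partitions(p[0-3]) in the plans below):

    -- Hypothetical definition of t2, consistent with the plans in this section.
    create table t2 (v1 int, v2 int) partition by hash(v1) partitions 4;

    -- Scanning the partitioned table in parallel: the table scan and its
    -- paired exchange out are packaged into one job executed by PX workers.
    explain select /*+ parallel(2) */ * from t2\G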

Single-input pushdown operators

Single-input pushdown operators mainly include aggregation, sort, group by and limit. Except for limit, each of these operators has an operation key. If the operation key matches the data distribution of the input, aggregation can be done in a single phase, i.e. partition-wise aggregation. If the operation key does not match the data distribution of the input, a two-phase aggregation is required and the aggregation operator has to be pushed down.

One-phase aggregation is shown in the following example:

    ======================================================
    |ID|OPERATOR               |NAME    |EST. ROWS|COST  |
    ------------------------------------------------------
    |0 |PX COORDINATOR         |        |101      |357302|
    |1 | EXCHANGE OUT DISTR    |:EX10000|101      |357297|
    |2 |  PX PARTITION ITERATOR|        |101      |357297|
    |3 |   MERGE GROUP BY      |        |101      |357297|
    |4 |    TABLE SCAN         |t2      |400000   |247403|
    ======================================================
    Outputs & filters:
    -------------------------------------
      0 - output([T_FUN_SUM(t2.v1)]), filter(nil)
      1 - output([T_FUN_SUM(t2.v1)]), filter(nil), dop=1
      2 - output([T_FUN_SUM(t2.v1)]), filter(nil)
      3 - output([T_FUN_SUM(t2.v1)]), filter(nil),
          group([t2.v1]), agg_func([T_FUN_SUM(t2.v1)])
      4 - output([t2.v1]), filter(nil),
          access([t2.v1]), partitions(p[0-3])
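
A query consistent with the plan above groups on the (assumed) partition key v1, so the GROUP BY is evaluated entirely inside each partition. For contrast, the second statement below is a hypothetical example of a grouping key that does not match the partitioning, which would force the two-phase (pushed-down) form:

    -- Grouping key matches the assumed partition key v1 of t2:
    -- partition-wise (one-phase) aggregation, as in the plan above.
    explain select sum(v1) from t2 group by v1\G

    -- Hypothetical contrast: grouping on v2 does not match the partitioning,
    -- so a partial GROUP BY would be pushed down below an exchange and the
    -- final aggregation performed after redistribution (two-phase aggregation).
    explain select sum(v1) from t2 group by v2\G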

Binary-input operators

Binary-input operators mainly concern the join operator. For a join, the distributed plan and the data redistribution method are generated mainly based on rules, and there are three main approaches.

• partition-wise join
  When both tables are partitioned tables with the same partitioning method and the same physical distribution, and the join condition is on the partition key, the join can be performed partition by partition (a hypothetical sketch of the table definitions assumed by these examples follows after this list). For example:
    explain select * from t2, t3 where t2.v1 = t3.v1\G
    Query Plan:
    ===========================================================
    |ID|OPERATOR               |NAME    |EST. ROWS |COST      |
    -----------------------------------------------------------
    |0 |PX COORDINATOR         |        |1568160000|1227554264|
    |1 | EXCHANGE OUT DISTR    |:EX10000|1568160000|930670004 |
    |2 |  PX PARTITION ITERATOR|        |1568160000|930670004 |
    |3 |   MERGE JOIN          |        |1568160000|930670004 |
    |4 |    TABLE SCAN         |t2      |400000    |256226    |
    |5 |    TABLE SCAN         |t3      |400000    |256226    |
    ===========================================================
    Outputs & filters:
    -------------------------------------
      0 - output([t2.v1], [t2.v2], [t3.v1], [t3.v2]), filter(nil)
      2 - output([t2.v1], [t2.v2], [t3.v1], [t3.v2]), filter(nil)
      3 - output([t2.v1], [t2.v2], [t3.v1], [t3.v2]), filter(nil),
          equal_conds([t2.v1 = t3.v1]), other_conds(nil)
      4 - output([t2.v1], [t2.v2]), filter(nil),
          access([t2.v1], [t2.v2]), partitions(p[0-3])
      5 - output([t3.v1], [t3.v2]), filter(nil),
          access([t3.v1], [t3.v2]), partitions(p[0-3])

• partial partition-wise join
  When one of the two tables is a partitioned table and the other is not, or when both are partitioned tables but the join key matches the partition key of only one of them, the data of the other table is redistributed according to the partition distribution of that partitioned table (a hypothetical query of this shape is included in the sketch after this list).

• data redistribution
  When the join key is unrelated to the partition keys of both tables, a rule-based calculation decides whether to redistribute the data by broadcast or by hash-hash. In the first example below, the small table t4 is broadcast; in the second, the pq_distribute hint forces hash-hash redistribution:

    explain select /*+ parallel(2) */ * from t4, t2 where t2.v2 = t4.v2\G
    Query Plan:
    =================================================================
    |ID|OPERATOR                          |NAME    |EST. ROWS|COST  |
    -----------------------------------------------------------------
    |0 |PX COORDINATOR                    |        |11880    |396863|
    |1 | EXCHANGE OUT DISTR               |:EX10001|11880    |394614|
    |2 |  HASH JOIN                       |        |11880    |394614|
    |3 |   EXCHANGE IN DISTR              |        |3        |37    |
    |4 |    EXCHANGE OUT DISTR (BROADCAST)|:EX10000|3        |37    |
    |5 |     PX BLOCK ITERATOR            |        |3        |37    |
    |6 |      TABLE SCAN                  |t4      |3        |37    |
    |7 |   PX PARTITION ITERATOR          |        |400000   |256226|
    |8 |    TABLE SCAN                    |t2      |400000   |256226|
    =================================================================
    Outputs & filters:
    -------------------------------------
      0 - output([t4.v1], [t4.v2], [t2.v1], [t2.v2]), filter(nil)
      1 - output([t4.v1], [t4.v2], [t2.v1], [t2.v2]), filter(nil), dop=2
      2 - output([t4.v1], [t4.v2], [t2.v1], [t2.v2]), filter(nil),
          equal_conds([t2.v2 = t4.v2]), other_conds(nil)
      3 - output([t4.v1], [t4.v2]), filter(nil)
      4 - output([t4.v1], [t4.v2]), filter(nil), dop=2
      5 - output([t4.v1], [t4.v2]), filter(nil)
      6 - output([t4.v1], [t4.v2]), filter(nil),
          access([t4.v1], [t4.v2]), partitions(p[0-2])
      8 - output([t2.v1], [t2.v2]), filter(nil),
          access([t2.v1], [t2.v2]), partitions(p[0-3])

    explain select /*+ pq_distribute(t2 hash hash) parallel(2) */ * from t4, t2 where t2.v2 = t4.v2\G
    *************************** 1. row ***************************
    Query Plan:
    ============================================================
    |ID|OPERATOR                     |NAME    |EST. ROWS|COST  |
    ------------------------------------------------------------
    |0 |PX COORDINATOR               |        |11880    |434727|
    |1 | EXCHANGE OUT DISTR          |:EX10002|11880    |432478|
    |2 |  HASH JOIN                  |        |11880    |432478|
    |3 |   EXCHANGE IN DISTR         |        |3        |37    |
    |4 |    EXCHANGE OUT DISTR (HASH)|:EX10000|3        |37    |
    |5 |     PX BLOCK ITERATOR       |        |3        |37    |
    |6 |      TABLE SCAN             |t4      |3        |37    |
    |7 |   EXCHANGE IN DISTR         |        |400000   |294090|
    |8 |    EXCHANGE OUT DISTR (HASH)|:EX10001|400000   |256226|
    |9 |     PX PARTITION ITERATOR   |        |400000   |256226|
    |10|      TABLE SCAN             |t2      |400000   |256226|
    ============================================================
    Outputs & filters:
    -------------------------------------
      0 - output([t4.v1], [t4.v2], [t2.v1], [t2.v2]), filter(nil)
      1 - output([t4.v1], [t4.v2], [t2.v1], [t2.v2]), filter(nil), dop=2
      2 - output([t4.v1], [t4.v2], [t2.v1], [t2.v2]), filter(nil),
          equal_conds([t2.v2 = t4.v2]), other_conds(nil)
      3 - output([t4.v1], [t4.v2]), filter(nil)
      4 - (#keys=1, [t4.v2]), output([t4.v1], [t4.v2]), filter(nil), dop=2
      5 - output([t4.v1], [t4.v2]), filter(nil)
      6 - output([t4.v1], [t4.v2]), filter(nil),
          access([t4.v1], [t4.v2]), partitions(p[0-2])
      7 - output([t2.v1], [t2.v2]), filter(nil)
      8 - (#keys=1, [t2.v2]), output([t2.v1], [t2.v2]), filter(nil), dop=2
      9 - output([t2.v1], [t2.v2]), filter(nil)
      10 - output([t2.v1], [t2.v2]), filter(nil),
           access([t2.v1], [t2.v2]), partitions(p[0-3])
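
The table definitions behind these join examples are not shown in the plan output. The following is a minimal, hypothetical sketch consistent with it (partition counts are inferred from partitions(p[0-3]) and partitions(p[0-2]); column types are assumptions), together with a query shape, using a hypothetical table t5, that would lead to the partial partition-wise join described above:

    -- t3: partitioned the same way as the hypothetical t2 sketched earlier,
    -- so the join on t2.v1 = t3.v1 can run as a partition-wise join.
    create table t3 (v1 int, v2 int) partition by hash(v1) partitions 4;

    -- t4: its partitioning is unrelated to the join condition t2.v2 = t4.v2,
    -- so joining it with t2 needs broadcast or hash-hash redistribution.
    create table t4 (v1 int, v2 int) partition by hash(v1) partitions 3;

    -- Hypothetical partial partition-wise join: t5 is a non-partitioned table
    -- and the join key matches t2's partition key, so t5's rows are
    -- redistributed to follow t2's partitioning before the join.
    create table t5 (v1 int, v2 int);
    explain select /*+ parallel(2) */ * from t2, t5 where t2.v1 = t5.v1\G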