== Physical Plan ==
AdaptiveSparkPlan (9)
+- == Final Plan ==
   * HashAggregate (5)
   +- ShuffleQueryStage (4), Statistics(sizeInBytes=16.0 B, rowCount=1)
      +- Exchange (3)
         +- * HashAggregate (2)
            +- Scan csv (1)
+- == Initial Plan ==
   HashAggregate (8)
   +- Exchange (7)
      +- HashAggregate (6)
         +- Scan csv (1)
(1) Scan csv
Output: []
Batched: false
Location: InMemoryFileIndex [file:/data/input/depot/csv/execution/empty.csv]
ReadSchema: struct<>

(2) HashAggregate [codegen id : 1]
Input: []
Keys: []
Functions [1]: [partial_count(1)]
Aggregate Attributes [1]: [count#65662L]
Results [1]: [count#65663L]

(3) Exchange
Input [1]: [count#65663L]
Arguments: SinglePartition, ENSURE_REQUIREMENTS, [plan_id=12608]

(4) ShuffleQueryStage
Output [1]: [count#65663L]
Arguments: 0

(5) HashAggregate [codegen id : 2]
Input [1]: [count#65663L]
Keys: []
Functions [1]: [count(1)]
Aggregate Attributes [1]: [count(1)#65659L]
Results [1]: [count(1)#65659L AS count#65660L]

(6) HashAggregate
Input: []
Keys: []
Functions [1]: [partial_count(1)]
Aggregate Attributes [1]: [count#65662L]
Results [1]: [count#65663L]

(7) Exchange
Input [1]: [count#65663L]
Arguments: SinglePartition, ENSURE_REQUIREMENTS, [plan_id=12600]

(8) HashAggregate
Input [1]: [count#65663L]
Keys: []
Functions [1]: [count(1)]
Aggregate Attributes [1]: [count(1)#65659L]
Results [1]: [count(1)#65659L AS count#65660L]

(9) AdaptiveSparkPlan
Output [1]: [count#65660L]
Arguments: isFinalPlan=true
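The paired HashAggregates in this plan implement Spark's standard two-phase count: each partition computes a `partial_count(1)`, the Exchange shuffles all partial results into a single partition, and a final `count(1)` sums them. A pure-Python sketch of that pattern (illustrative only, not Spark code; the function name is ours):

```python
def two_phase_count(partitions):
    """Model Spark's partial/final count aggregation.

    Mirrors the plan above: HashAggregate partial_count per
    partition, Exchange to a single partition, then a final
    HashAggregate that sums the partial counts.
    """
    # (2)/(6) partial_count(1): each partition counts its own rows
    partial_counts = [sum(1 for _ in part) for part in partitions]
    # (3)/(7) Exchange SinglePartition: gather partials in one place
    gathered = list(partial_counts)
    # (5)/(8) final count(1): sum the partial counts
    return sum(gathered)

# An empty scan (like empty.csv) yields zero rows in every partition.
print(two_phase_count([[], []]))       # 0
print(two_phase_count([[1, 2], [3]]))  # 3
```

Splitting the aggregate this way keeps the shuffle tiny: only one 8-byte partial count per partition crosses the Exchange, which is why the ShuffleQueryStage statistics above report a single row and 16.0 B.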