== Physical Plan ==
AdaptiveSparkPlan (9)
+- == Final Plan ==
   * HashAggregate (5)
   +- ShuffleQueryStage (4), Statistics(sizeInBytes=16.0 B, rowCount=1)
      +- Exchange (3)
         +- * HashAggregate (2)
            +- Scan csv (1)
+- == Initial Plan ==
   HashAggregate (8)
   +- Exchange (7)
      +- HashAggregate (6)
         +- Scan csv (1)

(1) Scan csv
Output: []
Batched: false
Location: InMemoryFileIndex [file:/data/input/depot/csv/execution/empty.csv]
ReadSchema: struct<>

(2) HashAggregate [codegen id : 1]
Input: []
Keys: []
Functions [1]: [partial_count(1)]
Aggregate Attributes [1]: [count#72375L]
Results [1]: [count#72376L]

(3) Exchange
Input [1]: [count#72376L]
Arguments: SinglePartition, ENSURE_REQUIREMENTS, [plan_id=12924]

(4) ShuffleQueryStage
Output [1]: [count#72376L]
Arguments: 0

(5) HashAggregate [codegen id : 2]
Input [1]: [count#72376L]
Keys: []
Functions [1]: [count(1)]
Aggregate Attributes [1]: [count(1)#72372L]
Results [1]: [count(1)#72372L AS count#72373L]

(6) HashAggregate
Input: []
Keys: []
Functions [1]: [partial_count(1)]
Aggregate Attributes [1]: [count#72375L]
Results [1]: [count#72376L]

(7) Exchange
Input [1]: [count#72376L]
Arguments: SinglePartition, ENSURE_REQUIREMENTS, [plan_id=12916]

(8) HashAggregate
Input [1]: [count#72376L]
Keys: []
Functions [1]: [count(1)]
Aggregate Attributes [1]: [count(1)#72372L]
Results [1]: [count(1)#72372L AS count#72373L]

(9) AdaptiveSparkPlan
Output [1]: [count#72373L]
Arguments: isFinalPlan=true