Status: Closed
Labels: bug
Description
Describe the bug
If the input partition count is 0 for GpuOptimizeWriteExchangeExec, it can throw an ArithmeticException, as shown below. This stack trace is from 25.08.
Caused by: java.lang.ArithmeticException: / by zero
at org.apache.spark.sql.rapids.delta.GpuOptimizeWriteExchangeExec.actualNumPartitions$lzycompute(GpuOptimizeWriteExchangeExec.scala:116)
at org.apache.spark.sql.rapids.delta.GpuOptimizeWriteExchangeExec.actualNumPartitions(GpuOptimizeWriteExchangeExec.scala:113)
at org.apache.spark.sql.rapids.delta.GpuOptimizeWriteExchangeExec.actualPartitioning$lzycompute(GpuOptimizeWriteExchangeExec.scala:126)
at org.apache.spark.sql.rapids.delta.GpuOptimizeWriteExchangeExec.actualPartitioning(GpuOptimizeWriteExchangeExec.scala:123)
at org.apache.spark.sql.rapids.delta.GpuOptimizeWriteExchangeExec.shuffleDependency$lzycompute(GpuOptimizeWriteExchangeExec.scala:134)
at org.apache.spark.sql.rapids.delta.GpuOptimizeWriteExchangeExec.shuffleDependency(GpuOptimizeWriteExchangeExec.scala:130)
at org.apache.spark.sql.rapids.delta.GpuOptimizeWriteExchangeExec.internalDoExecuteColumnar(GpuOptimizeWriteExchangeExec.scala:158)
...
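A minimal sketch of the likely failure and fix, in Python for illustration. It assumes the `/ by zero` site divides a total size by the input partition count when computing the actual partition count; the function and parameter names here are hypothetical, not the plugin's actual API. Clamping the divisor to at least 1 avoids the exception when a filter has removed every row and the input ends up with 0 partitions:

```python
def average_partition_size(total_size_bytes: int, num_input_partitions: int) -> int:
    """Hypothetical sketch of the division that fails.

    Without the guard, num_input_partitions == 0 (an empty input after a
    filter drops every row) raises the equivalent of Java's
    ArithmeticException: / by zero. Clamping the divisor to >= 1 makes an
    empty input yield a size of 0 instead of failing.
    """
    # Guard: an empty input is treated as a single empty partition.
    return total_size_bytes // max(1, num_input_partitions)
```

With the guard, an empty write proceeds (producing no data files) rather than aborting the whole job.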
Steps/Code to reproduce bug
You can run some simple code that filters out every row before writing:

df.filter("col == non_exist_val").write.option("optimizeWrite", "true").format("delta").save(path)

Expected behavior
The operation should finish successfully.