apache-spark

Are ClosedByInterruptException exceptions expected when Spark speculation kills tasks?

Asked on 2020-04-07 10:11:34

I'm looking into enabling Spark speculation on a Spark Structured Streaming application. When speculation kills tasks, Spark logs a large number of ClosedByInterruptException exceptions, most of them thrown from inside the org.apache.spark.storage.DiskBlockObjectWriter.revertPartialWritesAndClose method.

Are these exceptions safe to ignore? I don't see them when speculation is turned off. I'm using Spark 2.4.3.
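For context, speculative execution in Spark is controlled by the spark.speculation family of settings. Below is a minimal sketch of enabling it on a SparkSession; the threshold values shown are Spark's documented defaults and are illustrative only, not the asker's actual configuration.

    import org.apache.spark.sql.SparkSession

    // Minimal sketch of enabling speculative execution; the values below are
    // Spark's documented defaults and are shown for illustration only.
    val spark = SparkSession.builder()
      .appName("speculation-example")                  // hypothetical app name
      .config("spark.speculation", "true")             // enable speculative execution
      .config("spark.speculation.interval", "100ms")   // how often to check for slow tasks
      .config("spark.speculation.multiplier", "1.5")   // a task is "slow" if > 1.5x the median duration
      .config("spark.speculation.quantile", "0.75")    // fraction of tasks that must finish before checking
      .getOrCreate()

The same settings can equivalently be passed as --conf flags to spark-submit.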

Example Exception:

2019-07-02 03:38:07,195 [Executor task launch worker for task 667] ERROR org.apache.spark.storage.DiskBlockObjectWriter - Uncaught exception while reverting partial writes to file /data/vol/nodemanager/usercache/spark_user/appcache/application_1556810695108_1045638/blockmgr-54340b28-723b-46a3-b58a-c8598d75e4a2/3f/temp_shuffle_763d619b-26c8-4a0d-bc99-6d4661b42eba
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.FileChannelImpl.truncate(FileChannelImpl.java:370)
    at org.apache.spark.storage.DiskBlockObjectWriter$$anonfun$revertPartialWritesAndClose$2.apply$mcV$sp(DiskBlockObjectWriter.scala:218)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1369)
    at org.apache.spark.storage.DiskBlockObjectWriter.revertPartialWritesAndClose(DiskBlockObjectWriter.scala:214)
    at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.stop(BypassMergeSortShuffleWriter.java:237)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:105)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Asked by arunpandianp
Answered by cringineer, 2020-01-31 21:45

I suppose this is the answer: the issue is fixed in Spark 3.0.

https://issues.apache.org/jira/browse/SPARK-28340