
Description
After launching a Spark job and killing the launched executor pods one by one until no executors were left, the driver did not seem to recognize that the executors had terminated. Instead of reporting a fatal failure or making progress on the job by requesting replacement executor pods, the driver appeared to be stuck once all executors were killed.
In addition, the driver logs do not seem to give the whole picture when the driver encounters executor pod failures. (A sketch of what driver-side detection of pod terminations could look like is included after the logs below.)
Spark on Kubernetes, upon killing around 5 executor pods:
└─[1] <> cat /tmp/driver-log | grep -i lost
2017-02-15 13:44:36 ERROR TaskSchedulerImpl:70 - Lost an executor 1 (already removed): Executor heartbeat timed out after 148604 ms
Spark on YARN, upon killing 4 executors (on YARN, Spark requests new executors when they terminate unexpectedly):
17/02/15 05:54:52 INFO DAGScheduler: Executor lost: 1 (epoch 0)
17/02/15 05:55:46 INFO DAGScheduler: Executor lost: 3 (epoch 0)
17/02/15 05:56:01 INFO DAGScheduler: Executor lost: 4 (epoch 0)
17/02/15 05:57:05 INFO DAGScheduler: Executor lost: 5 (epoch 0)
17/02/15 05:57:17 INFO DAGScheduler: Executor lost: 6 (epoch 0)
17/02/15 05:57:34 INFO DAGScheduler: Executor lost: 7 (epoch 0)
17/02/15 05:57:47 INFO DAGScheduler: Executor lost: 2 (epoch 0)
17/02/15 05:57:52 INFO DAGScheduler: Executor lost: 9 (epoch 0)
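For reference, below is a minimal sketch of what driver-side detection of executor pod terminations could look like: a watch on executor pods via the fabric8 Kubernetes client that reacts when a pod is deleted or reaches a terminal phase. This is only an illustration, not the actual scheduler-backend code; the label selector (`spark-role=executor`), the namespace, and the `onExecutorLost` callback are assumptions, and the watch API shown assumes a fabric8 client of roughly the era of this issue.

```scala
import io.fabric8.kubernetes.api.model.Pod
import io.fabric8.kubernetes.client.{DefaultKubernetesClient, KubernetesClientException, Watcher}

object ExecutorPodWatchSketch {

  // Hypothetical callback: in a real scheduler backend this would mark the
  // executor as lost and request a replacement pod.
  def onExecutorLost(podName: String, reason: String): Unit = {
    println(s"Executor pod $podName lost: $reason")
  }

  def main(args: Array[String]): Unit = {
    val client = new DefaultKubernetesClient()

    // Watch all pods carrying an (assumed) executor label in the driver's namespace.
    client.pods()
      .inNamespace("default")
      .withLabel("spark-role", "executor")
      .watch(new Watcher[Pod] {

        override def eventReceived(action: Watcher.Action, pod: Pod): Unit = {
          val phase = Option(pod.getStatus).flatMap(s => Option(s.getPhase)).getOrElse("Unknown")
          action match {
            case Watcher.Action.DELETED =>
              onExecutorLost(pod.getMetadata.getName, "pod deleted")
            case Watcher.Action.MODIFIED if phase == "Failed" || phase == "Succeeded" =>
              onExecutorLost(pod.getMetadata.getName, s"pod reached terminal phase $phase")
            case _ => // ADDED and other MODIFIED events are not terminations
          }
        }

        override def onClose(cause: KubernetesClientException): Unit = {
          // If the watch itself dies, the driver would need to re-establish it
          // (or fall back to polling), otherwise pod terminations go unnoticed.
          println(s"Executor pod watch closed: $cause")
        }
      })
  }
}
```

With something along these lines, killing an executor pod would surface promptly as a lost-executor event on the driver, rather than only via a heartbeat timeout as in the Kubernetes log above.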