
Commit 6949a9c

Sital Kedia authored and Marcelo Vanzin committed
[SPARK-21834] Incorrect executor request in case of dynamic allocation
## What changes were proposed in this pull request?

The killExecutor API currently does not allow killing an executor without also updating the total number of executors needed. When dynamic allocation is turned on and the allocator tries to kill an executor, the scheduler reduces the total number of executors needed (see https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala#L635), which is incorrect because the allocator already takes care of setting the required number of executors itself.

## How was this patch tested?

Ran a job on the cluster and made sure the executor request is correct.

Author: Sital Kedia <[email protected]>

Closes apache#19081 from sitalkedia/skedia/oss_fix_executor_allocation.
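To make the failure mode concrete, here is a minimal, self-contained sketch (not Spark code) of the interaction described above. The `SimplifiedClient` below is a hypothetical stand-in for the real ExecutorAllocationClient: its `killExecutors` deliberately lowers the target the way the scheduler backend does, so the caller has to re-assert its own target via `requestTotalExecutors` right after the kill, which is the pattern this patch applies.

```scala
// A minimal sketch of the interaction this patch fixes, using a hypothetical,
// simplified stand-in for Spark's ExecutorAllocationClient. Method names mirror
// the real API, but the bookkeeping here is illustrative only.
object ExecutorTargetSketch {

  // Simplified client: killing executors also lowers the scheduler's target,
  // which is the behavior the allocation manager must compensate for.
  class SimplifiedClient(var targetExecutors: Int, var liveExecutors: Set[String]) {

    def killExecutors(ids: Seq[String]): Seq[String] = {
      val killed = ids.filter(liveExecutors.contains)
      liveExecutors --= killed
      // The scheduler backend reduces its target when executors are killed.
      targetExecutors -= killed.size
      killed
    }

    def requestTotalExecutors(numExecutors: Int): Boolean = {
      targetExecutors = numExecutors
      true
    }
  }

  def main(args: Array[String]): Unit = {
    val client = new SimplifiedClient(targetExecutors = 4, liveExecutors = Set("1", "2", "3", "4"))
    val desiredTarget = 4 // what the allocation manager computed it needs

    // The allocation manager kills an idle executor...
    client.killExecutors(Seq("3"))
    // ...and the scheduler's target has silently dropped to 3.
    println(s"target after kill: ${client.targetExecutors}")

    // The fix: re-assert the desired target right after the kill.
    client.requestTotalExecutors(desiredTarget)
    println(s"target after requestTotalExecutors: ${client.targetExecutors}")
  }
}
```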
1 parent 235d283 commit 6949a9c

File tree: 1 file changed, +3 -0 lines changed


core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala

Lines changed: 3 additions & 0 deletions
@@ -446,6 +446,9 @@ private[spark] class ExecutorAllocationManager(
     } else {
       client.killExecutors(executorIdsToBeRemoved)
     }
+    // [SPARK-21834] killExecutors api reduces the target number of executors.
+    // So we need to update the target with desired value.
+    client.requestTotalExecutors(numExecutorsTarget, localityAwareTasks, hostToLocalTaskCount)
     // reset the newExecutorTotal to the existing number of executors
     newExecutorTotal = numExistingExecutors
     if (testing || executorsRemoved.nonEmpty) {
