Fix the distributed CPU training error when validation data can't be evenly divided #399
Issue #, if available:
tt: V933345598
Description of changes:
Here's the documentation on the input data requirements for distributed CPU training:
https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html#Instance-XGBoost-distributed-training-cpu
We already have the logic to exclude instances from training when they don't have training data.
For example, if there are 5 files of training data but the customer launched 6 instances, the one instance without training data is excluded from the distributed training job.
This change adds the same logic for validation data when the customer has set a validation channel. The error occurs when some instances have validation data and others don't, which crashes the eval metric calculation and the MapReduce process across all instances. With this change, that failing scenario is handled.
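To illustrate the idea, here is a minimal sketch (the helper name and the host-to-directory mapping are hypothetical, not the actual repository code): hosts whose channel directory is empty are dropped before the distributed job starts, and the same check is now applied to the validation channel as well as the train channel.

```python
import os


def hosts_with_channel_data(all_hosts, channel_dir_by_host):
    """Keep only the hosts whose channel directory actually contains files.

    Hypothetical helper for illustration; names and structure are assumptions.
    """
    active_hosts = []
    for host in all_hosts:
        channel_dir = channel_dir_by_host.get(host)
        # A host with no files for this channel cannot participate in the
        # synchronized metric/MapReduce step, so it is excluded up front.
        if channel_dir and os.path.isdir(channel_dir) and os.listdir(channel_dir):
            active_hosts.append(host)
    return active_hosts


# Existing behavior: filter hosts on the train channel only.
# train_hosts = hosts_with_channel_data(all_hosts, train_dir_by_host)
#
# This change applies the same filtering to the validation channel (when the
# customer configured one), so a host that has training data but no validation
# shard is excluded instead of crashing the eval metric aggregation.
# active_hosts = hosts_with_channel_data(train_hosts, validation_dir_by_host)
```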
Testing:
Tested with the customer's notebook.
All references can be found in tt: V933345598
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.