Improve best feasible objective #3011
Conversation
If reviewers are happy with these changes, I will also implement them for the non-log version, as well as investigate the multi-objective acqfs to see if similar changes are needed. I will also write tests in this PR, once the ideas in this PR have been approved.
    tau_relu: float = TAU_RELU,
    marginalize_dim: int | None = None,
    incremental: bool = True,
    infeasible_obj: Tensor | float | None = None,
Should this also be added to qNoisyExpectedImprovement?
Yes - if the changes look good to whoever reviews, then I will also apply them to qNEI, as well as investigating q(Log)NEHVI too. Just wanted to get confirmation that the changes were good before making them elsewhere :)
Thanks. At a high level I second @esantorella's point about using qLogProbabilityOfFeasibility when nothing is feasible.
But it also makes sense to improve the behavior of (q)(Log)(N)EI if the user doesn't do any pre-processing. For this, the changes here overall seem quite reasonable to me. Do you have some results that show the effect of this?
cc @SebastianAment re LogEI and @dme65 who has been thinking about similar issues in the case of largely infeasible data.
        dim=0,
    )

    if lb.ndim - 1 < posterior.mean.ndim:
why this change?
We add an extra dimension in the line above, by stacking `mean - 6 * std` and `mean + 6 * std`. This then changes the check below. Maybe it would be better for me to be explicit above, taking the minimum of `mean ± 6 * std` instead of stacking, and then leaving this line unchanged?
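The two alternatives discussed above can be sketched in isolation. The tensors below are stand-ins for the posterior mean and standard deviation (names assumed for illustration; this is not the actual BoTorch source):

```python
import torch

# Stand-ins for posterior.mean and the posterior standard deviation.
mean = torch.randn(4, 3)
std = torch.rand(4, 3)

# Alternative 1 (current PR): stack mean - 6*std and mean + 6*std along a new
# leading dim, then reduce. The stacked tensor has one extra dimension, which
# is why the downstream check compares `lb.ndim - 1` against the mean's ndim.
stacked = torch.stack([mean - 6 * std, mean + 6 * std], dim=0)
lb_stacked = stacked.amin(dim=0)

# Alternative 2 (suggested): take the elementwise minimum explicitly. No extra
# dimension is introduced, so the original ndim check would stay unchanged.
lb_explicit = torch.minimum(mean - 6 * std, mean + 6 * std)

# Since std >= 0, both reduce to mean - 6*std and agree elementwise.
```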
Co-authored-by: Max Balandat <[email protected]>
Hi @Balandat, thanks for the review! I don't have any results showing the effect yet. Would you be interested in seeing the effect of these changes on
@TobyBoyne ideally both - it seems that the behavior in this setting is degenerate enough that I'd expect to see pretty clear effects on the downstream BO tasks even without having to run many replications. And thanks a lot for your effort on improving this!
Motivation
See #3009.
Have you read the Contributing Guidelines on pull requests?
Yes
Test Plan
After feedback on the initial draft of the PR, I will implement tests similar to the problem structure in #3009, confirming that the modified code behaves as desired.
Related PRs
None