# Overhaul picking benchmarks #17033
## Conversation
Also rename the groups so that they don't repeat `ray_mesh_intersection` twice, since that is the module name.
This reduces duplication and allows us to easily configure new and existing variants of the benchmarks.
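Since the module path already carries the `ray_mesh_intersection` name, a naming macro can derive the prefix instead of hard-coding it. Here is a minimal sketch of what such a macro could look like; the actual `bench!` macro from bevyengine#16647 may differ in details:

```rust
/// Sketch of a module-path-prefixed naming macro. Inside a module named
/// `ray_mesh_intersection`, `bench!("cull_intersect")` expands to a string
/// ending in "ray_mesh_intersection::cull_intersect", so the group name no
/// longer has to repeat the module name by hand.
macro_rules! bench {
    ($name:literal) => {
        // `concat!` eagerly expands the built-in `module_path!` macro.
        concat!(module_path!(), "::", $name)
    };
}

fn main() {
    // At the crate root this prints e.g. "crate_name::cull_intersect".
    println!("{}", bench!("cull_intersect"));
}
```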
cc @aevyrie, since you're the person who I know is most familiar with picking, and most likely to catch any mistakes in the documentation! cc @mockersf, since you wrote the original version of these benchmarks in aevyrie/bevy_mod_raycast#17. No pressure if neither of you want to review, just thought you might be interested!
@BD103 what do you think still needs to be done before we merge this? Looks like the CI failure is just
I also wanted to add one more test case to verify that the ray does / doesn't intersect the mesh when |
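For context, this is a minimal, self-contained sketch of the kind of hit/miss assertion such a test case could make. It uses the standard Möller–Trumbore ray/triangle test as an illustration; it is not Bevy's actual `ray_mesh_intersection()`, which also handles whole meshes and backface culling:

```rust
/// Möller–Trumbore ray/triangle intersection test.
fn ray_hits_triangle(origin: [f32; 3], dir: [f32; 3], tri: [[f32; 3]; 3]) -> bool {
    let sub = |a: [f32; 3], b: [f32; 3]| [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
    let cross = |a: [f32; 3], b: [f32; 3]| {
        [
            a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0],
        ]
    };
    let dot = |a: [f32; 3], b: [f32; 3]| a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

    let (e1, e2) = (sub(tri[1], tri[0]), sub(tri[2], tri[0]));
    let pvec = cross(dir, e2);
    let det = dot(e1, pvec);
    if det.abs() < 1e-7 {
        return false; // Ray is parallel to the triangle plane.
    }
    let tvec = sub(origin, tri[0]);
    let u = dot(tvec, pvec) / det;
    let qvec = cross(tvec, e1);
    let v = dot(dir, qvec) / det;
    let t = dot(e2, qvec) / det;
    // Barycentric coordinates inside the triangle, hit in front of the origin.
    (0.0..=1.0).contains(&u) && v >= 0.0 && u + v <= 1.0 && t >= 0.0
}

#[test]
fn ray_intersection_matches_scenario() {
    let tri = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]];
    // A ray pointing down through the triangle hits it...
    assert!(ray_hits_triangle([0.25, 0.25, 1.0], [0.0, 0.0, -1.0], tri));
    // ...while one offset past the hypotenuse misses.
    assert!(!ray_hits_triangle([2.0, 2.0, 1.0], [0.0, 0.0, -1.0], tri));
}
```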
# Objective

- Part of bevyengine#16647.
- This PR goes through our `ray_cast::ray_mesh_intersection()` benchmarks and overhauls them with more comments and better extensibility. The code is also a lot less duplicated!

## Solution

- Create a `Benchmarks` enum that describes all of the different kinds of scenarios we want to benchmark (see the sketch after this section).
- Merge all of our existing benchmark functions into a single one, `bench()`, which sets up the scenarios all at once.
- Add comments to `mesh_creation()` and `ptoxznorm()`, and move some lines around to be a bit clearer.
- Make the benchmarks use the new `bench!` macro, as part of bevyengine#16647.
- Rename many functions and benchmarks to be clearer.
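To make that structure concrete, here is a hypothetical sketch of the enum-driven pattern described above. The scenario names are taken from the report screenshots below, but the timed body is a stand-in; the real benchmarks call `ray_mesh_intersection()` on generated meshes:

```rust
use criterion::{criterion_group, criterion_main, Criterion};
use std::hint::black_box;

/// Hypothetical scenario enum: the real variants in Bevy's picking
/// benchmarks cover more cases (e.g. culling on/off, hit/miss).
#[derive(Clone, Copy)]
enum Benchmarks {
    /// The ray intersects the mesh with backface culling enabled.
    CullIntersect,
    /// The ray misses the mesh entirely.
    NoIntersect,
}

impl Benchmarks {
    /// Mesh sizes shared by every scenario.
    const VERTEX_COUNTS: [usize; 3] = [100, 1_000, 10_000];

    /// The Criterion group name for this scenario.
    fn name(self) -> &'static str {
        match self {
            Self::CullIntersect => "cull_intersect",
            Self::NoIntersect => "no_intersect",
        }
    }
}

/// One entry point registers every scenario/size combination, replacing
/// a separate hand-written function per scenario.
fn bench(c: &mut Criterion) {
    for scenario in [Benchmarks::CullIntersect, Benchmarks::NoIntersect] {
        let mut group = c.benchmark_group(scenario.name());
        for vertex_count in Benchmarks::VERTEX_COUNTS {
            group.bench_function(format!("{vertex_count}_vertices"), |b| {
                // Stand-in workload; the real benchmark performs a
                // ray/mesh intersection against a generated mesh here.
                b.iter(|| black_box(vertex_count).count_ones());
            });
        }
        group.finish();
    }
}

criterion_group!(benches, bench);
criterion_main!(benches);
```

Adding a new scenario then only requires a new enum variant and a match arm, rather than another copy of the benchmark function.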
## For reviewers

I split this PR up into several easier-to-digest commits. You might find it easier to review by looking through each commit instead of the complete file changes.

None of my changes actually modify the behavior of the benchmarks; they still track the exact same test cases. There shouldn't be significant changes in benchmark performance before and after this PR.

## Testing

- List all picking benchmarks: `cargo bench -p benches --bench picking -- --list`
- Run the benchmarks once in debug mode: `cargo test -p benches --bench picking`
- Run the benchmarks and analyze their performance: `cargo bench -p benches --bench picking`
  - Check out the generated HTML report in `./target/criterion/report/index.html` once you're done!

---

## Showcase

List of all picking benchmarks, after having been renamed:

<img width="524" alt="image" src="https://github.com/user-attachments/assets/a1b53daf-4a8b-4c45-a25a-c6306c7175d1" />

Example report for `picking::ray_mesh_intersection::cull_intersect/100_vertices`:

<img width="992" alt="image" src="https://github.com/user-attachments/assets/a1aaf53f-ce21-4bef-89c4-b982bb158f5d" />
This reverts the benchmarks back to how they were in bevyengine#17033.