
Conversation

BD103 (Member) commented Dec 30, 2024

Objective

  • Part of #16647.
  • This PR goes through our `ray_cast::ray_mesh_intersection()` benchmarks and overhauls them with more comments and better extensibility. The code is also a lot less duplicated!

Solution

  • Create a `Benchmarks` enum that describes all of the different kinds of scenarios we want to benchmark.
  • Merge all of our existing benchmark functions into a single one, `bench()`, which sets up the scenarios all at once. (A rough sketch of the resulting structure follows this list.)
  • Add comments to `mesh_creation()` and `ptoxznorm()`, and move some lines around to be a bit clearer.
  • Make the benchmarks use the new `bench!` macro, as part of #16647 (Prefix benchmark names with module path).
  • Rename many functions and benchmarks to be clearer.
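
To make the new structure easier to picture, here is a rough sketch of what a `Benchmarks` enum plus a single `bench()` entry point could look like. This is not the actual diff from this PR: the variant names, the mesh sizes, and the stand-in workload inside `b.iter()` are assumptions for illustration only.

```rust
use criterion::Criterion;

/// The ray/mesh scenarios covered by the benchmark.
/// (Variant names are illustrative, not necessarily the ones used in the PR.)
#[derive(Clone, Copy)]
enum Benchmarks {
    /// The ray intersects the mesh with backface culling enabled.
    CullIntersect,
    /// The ray intersects the mesh without backface culling.
    NoCullIntersect,
    /// The ray misses the mesh entirely.
    NoIntersect,
}

impl Benchmarks {
    /// Mesh sizes that every scenario is measured against.
    /// The concrete values here are an assumption.
    const SIZES: [u32; 3] = [10, 100, 1_000];

    /// The scenario's name as it appears in the Criterion report.
    fn name(self) -> &'static str {
        match self {
            Self::CullIntersect => "cull_intersect",
            Self::NoCullIntersect => "no_cull_intersect",
            Self::NoIntersect => "no_intersect",
        }
    }
}

/// A single benchmark function that registers every scenario at once,
/// replacing the previous one-function-per-scenario layout.
pub fn bench(c: &mut Criterion) {
    for scenario in [
        Benchmarks::CullIntersect,
        Benchmarks::NoCullIntersect,
        Benchmarks::NoIntersect,
    ] {
        // The real code presumably builds this name with the `bench!` macro;
        // the module path is spelled out literally here to keep the sketch
        // self-contained.
        let mut group = c.benchmark_group(format!(
            "picking::ray_mesh_intersection::{}",
            scenario.name()
        ));

        for size in Benchmarks::SIZES {
            group.bench_function(format!("{size}_vertices"), |b| {
                // The actual benchmark builds a mesh (via `mesh_creation()`)
                // and times `ray_mesh_intersection()`; a trivial stand-in
                // workload keeps this sketch runnable on its own.
                b.iter(|| core::hint::black_box(size) * 2);
            });
        }

        group.finish();
    }
}
```

Collapsing the scenarios into one enum also means adding a new case later is a one-variant change rather than a whole new benchmark function.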

For reviewers

I split this PR into several smaller, easier-to-digest commits. You might find it easier to review by looking through each commit instead of the complete file changes.

None of my changes actually modify the behavior of the benchmarks; they still track the exact same test cases. There should not be any significant change in benchmark performance before and after this PR.

Testing

List all picking benchmarks: `cargo bench -p benches --bench picking -- --list`

Run the benchmarks once in debug mode: `cargo test -p benches --bench picking`

Run the benchmarks and analyze their performance: `cargo bench -p benches --bench picking`

  • Check out the generated HTML report in `./target/criterion/report/index.html` once you're done! (See below for a sketch of how the `picking` bench target is assumed to be wired up.)
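
For context on where the `picking` bench target and the module-path prefixes in those names come from, here is a hedged sketch of the wiring. The file path and the exact expansion of the `bench!` macro from #16647 are assumptions, not code taken from this PR.

```rust
// benches/benches/picking/main.rs (path is an assumption): the Criterion
// entry point that `cargo bench -p benches --bench picking` builds and runs.
use criterion::{criterion_group, criterion_main};

mod ray_mesh_intersection;

criterion_group!(benches, ray_mesh_intersection::bench);
criterion_main!(benches);
```

And a plausible shape for a name-prefixing macro in the spirit of `bench!`, which would yield group names like `picking::ray_mesh_intersection::cull_intersect`:

```rust
// Prepend the caller's module path to a benchmark name at compile time.
// In a shared `benches` library crate this would also need `#[macro_export]`.
macro_rules! bench {
    ($name:literal) => {
        concat!(module_path!(), "::", $name)
    };
}
```

Criterion then appends the parameter, giving full IDs like `picking::ray_mesh_intersection::cull_intersect/100_vertices`, as shown in the showcase below.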

Showcase

List of all picking benchmarks, after having been renamed:

[screenshot: https://github.com/user-attachments/assets/a1b53daf-4a8b-4c45-a25a-c6306c7175d1]

Example report for `picking::ray_mesh_intersection::cull_intersect/100_vertices`:

[screenshot: https://github.com/user-attachments/assets/a1aaf53f-ce21-4bef-89c4-b982bb158f5d]

BD103 added the C-Code-Quality, C-Benchmarks, D-Modest, S-Needs-Review, and A-Picking labels Dec 30, 2024
BD103 (Member, Author) commented Dec 30, 2024

cc @aevyrie, since you're the person I know of who is most familiar with picking, and the most likely to catch any mistakes in the documentation!

cc @mockersf, since you wrote the original version of these benchmarks in aevyrie/bevy_mod_raycast#17.

No pressure if neither of you wants to review; I just thought you might be interested!

BenjaminBrienen added the S-Ready-For-Final-Review label and removed the S-Needs-Review label Dec 30, 2024
BD103 added the S-Waiting-on-Author label and removed the S-Ready-For-Final-Review label Dec 30, 2024
alice-i-cecile (Member) commented:

@BD103 what do you think still needs to be done before we merge this? Looks like the CI failure is just `cargo fmt`.

BD103 added the S-Ready-For-Final-Review label and removed the S-Waiting-on-Author label Dec 30, 2024
BD103 (Member, Author) commented Dec 30, 2024

> @BD103 what do you think still needs to be done before we merge this? Looks like the CI failure is just `cargo fmt`.

I had also wanted to add one more test case verifying that the ray does / doesn't intersect the mesh when `cargo test --bench picking` is run, but it's ready now!

alice-i-cecile added this pull request to the merge queue Dec 30, 2024
Merged via the queue into bevyengine:main with commit c890cc9 Dec 30, 2024
29 checks passed
ecoskey pushed a commit to ecoskey/bevy that referenced this pull request Jan 6, 2025
tbillington added a commit to tbillington/bevy that referenced this pull request Jan 10, 2025
mrchantey pushed a commit to mrchantey/bevy that referenced this pull request Feb 4, 2025
BD103 deleted the picking-benches branch February 11, 2025
BD103 added a commit to BD103/bevy that referenced this pull request Mar 10, 2025
This reverts the benchmarks back to how they were in bevyengine#17033.