Give a friendlier failure when lacking knitr #1267


Merged
merged 4 commits into from
Jul 28, 2025

Conversation

MichaelChirico
Contributor

Closes #1266

Contributor

This is how benchmark results would change (along with a 95% confidence interval in relative change) if b7ad523 is merged into main:

  • ✔️cache_applying: 28.5ms -> 29.4ms [-0.28%, +6.68%]
  • ✔️cache_recording: 433ms -> 435ms [-0.67%, +1.38%]
  • ✔️without_cache: 1.02s -> 1.02s [-1.86%, +0.35%]

Further explanation regarding interpretation and methodology can be found in the documentation.

Collaborator

@lorenzwalthert left a comment


LGTM, thanks.

Contributor

This is how benchmark results would change (along with a 95% confidence interval in relative change) if 2f971a8 is merged into main:

  • ❗🐌cache_applying: 26.9ms -> 38.4ms [+38.73%, +46.91%]
  • ❗🐌cache_recording: 413ms -> 425ms [+1.42%, +4.55%]
  • ❗🐌without_cache: 929ms -> 939ms [+0.51%, +1.75%]

Further explanation regarding interpretation and methodology can be found in the documentation.

@lorenzwalthert
Collaborator

Just weird that this seems to slow things down…

@lorenzwalthert
Collaborator

lorenzwalthert commented Jul 24, 2025

I triggered touchstone again to see if the problem persists.

Contributor

This is how benchmark results would change (along with a 95% confidence interval in relative change) if 2f971a8 is merged into main:

  • ❗🐌cache_applying: 26.9ms -> 39.9ms [+45.76%, +50.52%]
  • ❗🐌cache_recording: 403ms -> 414ms [+2.12%, +3.44%]
  • ✔️without_cache: 1000ms -> 1s [-1.15%, +1.55%]

Further explanation regarding interpretation and methodology can be found in the documentation.

@lorenzwalthert
Collaborator

Hmm... I think the speed test might actually be correct. Can we do the {knitr} test once at a higher level, instead of on every call to get_knitr_pattern()? That seems like quite a lot of calls.
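The suggestion could look roughly like the sketch below. This is a hypothetical illustration, not styler's actual code: the function name `assert_knitr_installed()` and the error wording are made up; only `requireNamespace()` is a real base R call.

```r
# Sketch: verify {knitr} is available once, up front, instead of inside
# every get_knitr_pattern() call. All names here are hypothetical.
assert_knitr_installed <- function() {
  if (!requireNamespace("knitr", quietly = TRUE)) {
    stop(
      "Styling R Markdown or Sweave files requires the {knitr} package. ",
      "Please install it, e.g. with install.packages(\"knitr\").",
      call. = FALSE
    )
  }
}
```

Called once before chunk processing begins, the `requireNamespace()` lookup no longer runs per chunk, which is presumably where the cache_applying regression came from.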

Collaborator

@lorenzwalthert left a comment


Sorry, I just realised we should not merge this PR as is.

@lorenzwalthert
Collaborator

Also, @MichaelChirico, it would be great to fix this in the next 5 days, as I am planning to submit a new version of {styler} to CRAN.

@MichaelChirico
Contributor Author

I'm not too familiar with the package internals, so I only spent a few minutes trying to pin down the right place to move the check. I guess this spot is good because it avoids the iterated calls here:

purrr::map2(starts, ends, finalize_raw_chunks,
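In other words, the check can fail early, before the per-chunk loop runs. The sketch below illustrates that placement; the surrounding code is hypothetical and simplified (the real arguments to `finalize_raw_chunks` are elided above), and the error message is invented.

```r
# Sketch of the placement: fail with a friendly message once, before the
# per-chunk loop, so the {knitr} check is not repeated for every chunk.
# The surrounding code and message are hypothetical.
if (!requireNamespace("knitr", quietly = TRUE)) {
  stop("The {knitr} package is required to style this file.", call. = FALSE)
}
chunks <- purrr::map2(starts, ends, finalize_raw_chunks)
```

This keeps the friendlier failure from #1266 while moving its cost out of the hot path that the touchstone benchmarks flagged.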

Contributor

This is how benchmark results would change (along with a 95% confidence interval in relative change) if 377d573 is merged into main:

  • ❗🐌cache_applying: 26.9ms -> 28.2ms [+2.65%, +6.73%]
  • ✔️cache_recording: 403ms -> 405ms [-0.43%, +1.36%]
  • ✔️without_cache: 949ms -> 946ms [-0.81%, +0.33%]

Further explanation regarding interpretation and methodology can be found in the documentation.

@lorenzwalthert
Collaborator

lorenzwalthert commented Jul 28, 2025

Much better, thanks @MichaelChirico 🥳. I think we can tolerate the +2% for cache_applying, since it's ultra fast anyway.

@lorenzwalthert lorenzwalthert merged commit d5190a8 into r-lib:main Jul 28, 2025
17 checks passed
Successfully merging this pull request may close these issues.

Styler crashes
3 participants