[torch.compile] Make inductor partition rules respect splitting_ops #25691 #25845
Merged: ProExpertProg merged 111 commits into vllm-project:main from baonudesifeizhai:feature/dynamic-inductor-partition-rules on Oct 10, 2025
Conversation
Contributor
- Add `_user_specified_splitting_ops` field to store user configuration
- Modify `set_splitting_ops_for_inductor_graph_partition` to respect user settings
- Add debug logging to track `splitting_ops` handling
- Addresses issue vllm-project#25691 - partial implementation for dynamic partitioning

This change preserves user-specified `splitting_ops` when `use_inductor_graph_partition=True`, laying groundwork for future PyTorch 2.9+ `register_should_partition_rule` integration.
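As a rough illustration of this first commit, here is a minimal sketch of the preservation logic. `PartitionConfig` is a hypothetical stand-in for vLLM's actual `CompilationConfig`; only the field and method names quoted above come from the PR.

```python
# Minimal sketch (not vLLM's actual CompilationConfig): with inductor graph
# partition enabled, user-provided splitting_ops are stashed in
# _user_specified_splitting_ops instead of being cleared.
from dataclasses import dataclass, field


@dataclass
class PartitionConfig:
    use_inductor_graph_partition: bool = False
    splitting_ops: list[str] = field(default_factory=list)
    _user_specified_splitting_ops: list[str] = field(default_factory=list)

    def set_splitting_ops_for_inductor_graph_partition(self) -> None:
        if self.use_inductor_graph_partition and self.splitting_ops:
            # Previously these were dropped; now they are preserved so that
            # dynamic partition rules can be registered for them later.
            self._user_specified_splitting_ops = list(self.splitting_ops)
```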
- Add `_setup_dynamic_partition_rules()` method
- Implement `register_should_partition_rule` integration
- Support both attention ops and user-specified `splitting_ops`
- Add comprehensive debug logging for partition decisions
- Graceful fallback if the PyTorch API is not available

This completes the implementation for issue vllm-project#25691.
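The second commit wires the preserved ops into PyTorch's graph partitioner. Below is a hedged sketch of that registration flow: the import path of `register_should_partition_rule` and the callback signature are assumptions (the PR description only states the API is called as `register_should_partition_rule(op_overload, partition_function)`), and alias resolution (e.g. "flash_attention") is omitted.

```python
import logging

import torch

logger = logging.getLogger(__name__)


def setup_dynamic_partition_rules(op_names: list[str]) -> None:
    """Register inductor partition rules for fully qualified op names
    such as "aten.bmm.default" (a sketch, not vLLM's actual code)."""
    try:
        # Assumed import location; it may differ across PyTorch 2.9+ builds.
        from torch._inductor.scheduler import register_should_partition_rule
    except ImportError:
        # Graceful fallback on older PyTorch: keep static splitting_ops behavior.
        logger.debug("register_should_partition_rule unavailable; "
                     "falling back to static splitting_ops")
        return

    def always_partition(*args, **kwargs) -> bool:
        # Assumed callback shape: return True to split the graph at this op.
        return True

    for name in op_names:
        parts = name.split(".")
        if len(parts) != 3:
            # Aliases like "flash_attention" need extra resolution (not shown).
            logger.debug("Skipping non-fully-qualified op name: %s", name)
            continue
        namespace, op, overload = parts
        # Resolve the string to a torch._ops.OpOverload object.
        op_overload = getattr(getattr(getattr(torch.ops, namespace), op), overload)
        register_should_partition_rule(op_overload, always_partition)
        logger.debug("Registered partition rule for %s", name)
```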
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: baonudesifeizhai <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Signed-off-by: baonudesifeizhai <[email protected]>
huydhn added a commit to pytorch/pytorch that referenced this pull request on Oct 11, 2025
Signed-off-by: Huy Do <[email protected]>
huydhn added a commit to huydhn/pytorch that referenced this pull request on Oct 12, 2025
Signed-off-by: Huy Do <[email protected]>
Dhruvilbhatt pushed a commit to Dhruvilbhatt/vllm that referenced this pull request on Oct 14, 2025
…llm-project#25691 (vllm-project#25845)

Signed-off-by: baonudesifeizhai <[email protected]>
Signed-off-by: baonudesifeizhai <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Signed-off-by: Dhruvil Bhatt <[email protected]>
bbartels pushed a commit to bbartels/vllm that referenced this pull request on Oct 16, 2025
…llm-project#25691 (vllm-project#25845)

Signed-off-by: baonudesifeizhai <[email protected]>
Signed-off-by: baonudesifeizhai <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Signed-off-by: bbartels <[email protected]>
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request on Oct 20, 2025
…llm-project#25691 (vllm-project#25845)

Signed-off-by: baonudesifeizhai <[email protected]>
Signed-off-by: baonudesifeizhai <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
alhridoy pushed a commit to alhridoy/vllm that referenced this pull request on Oct 24, 2025
…llm-project#25691 (vllm-project#25845)

Signed-off-by: baonudesifeizhai <[email protected]>
Signed-off-by: baonudesifeizhai <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
wangxiyuan pushed a commit to vllm-project/vllm-ascend that referenced this pull request on Oct 24, 2025
### What this PR does / why we need it?
This is step 1 of refactoring the code to adapt to vLLM main; this PR is aligned with vllm-project/vllm@17c540a.
1. Refactor deepseek to the latest code arch as of vllm-project/vllm@17c540a
2. Bunches of fixes due to vllm changes
   - Fix `AscendScheduler` `__post_init__`, caused by vllm-project/vllm#25075
   - Fix `AscendScheduler` init got an unexpected arg `block_size`, caused by vllm-project/vllm#26296
   - Fix `KVCacheManager` `get_num_common_prefix_blocks` arg, caused by vllm-project/vllm#23485
   - Fix `MLAAttention` import, caused by vllm-project/vllm#25103
   - Fix `SharedFusedMoE` import, caused by vllm-project/vllm#26145
   - Fix `LazyLoader` import, caused by vllm-project/vllm#27022
   - Fix `vllm.utils.swap_dict_values` import, caused by vllm-project/vllm#26990
   - Fix `Backend` enum import, caused by vllm-project/vllm#25893
   - Fix `CompilationLevel` renaming to `CompilationMode` issue introduced by vllm-project/vllm#26355
   - Fix fused_moe ops, caused by vllm-project/vllm#24097
   - Fix bert model because of `inputs_embeds`, caused by vllm-project/vllm#25922
   - Fix MRope because of `get_input_positions_tensor` to `get_mrope_input_positions`, caused by vllm-project/vllm#24172
   - Fix `splitting_ops` changes introduced by vllm-project/vllm#25845
   - Fix multi-modality changes introduced by vllm-project/vllm#16229
   - Fix lora bias dropping issue introduced by vllm-project/vllm#25807
   - Fix structured output break introduced by vllm-project/vllm#26737

### Does this PR introduce _any_ user-facing change?

### How was this patch tested?
CI passed with existing tests.
- vLLM version: v0.11.0rc3
- vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.0

Signed-off-by: MengqingCao <[email protected]>
Signed-off-by: Icey <[email protected]>
Co-authored-by: Icey <[email protected]>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request on Oct 24, 2025
…llm-project#25691 (vllm-project#25845)

Signed-off-by: baonudesifeizhai <[email protected]>
Signed-off-by: baonudesifeizhai <[email protected]>
Co-authored-by: Luka Govedič <[email protected]>
Signed-off-by: xuebwang-amd <[email protected]>
Labels
- llama: Related to Llama models
- performance: Performance-related issues
- ready: ONLY add when PR is ready to merge/full CI is needed
- torch.compile
Key Changes
- `splitting_ops`: When `use_inductor_graph_partition=True`, user-provided `splitting_ops` are now preserved in `_user_specified_splitting_ops` instead of being cleared
- New `_setup_dynamic_partition_rules()` uses PyTorch 2.9+'s `register_should_partition_rule` API to register custom partition points for the configured `splitting_ops`

Technical Implementation
- Resolves `splitting_ops` (including aliases like "flash_attention") to `torch._ops.OpOverload` objects
- Calls `register_should_partition_rule(op_overload, partition_function)` for each resolved operation

Test Plan
Unit Tests
- Added `test_splitting_ops_dynamic()` to verify `splitting_ops` preservation behavior

Integration Tests
- Launch the server with a custom `splitting_ops` configuration:

  python -m vllm.entrypoints.openai.api_server \
    --model Qwen/Qwen2.5-7B-Instruct \
    --compilation-config '{"use_inductor_graph_partition": true, "splitting_ops": ["flash_attention", "addmm", "aten.bmm.default"]}' \
    --host 0.0.0.0 --port 8000
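For a quick offline check, the same configuration can presumably be passed to the Python entry point; this sketch assumes `LLM` accepts the same keys as the `--compilation-config` flag shown above.

```python
from vllm import LLM, SamplingParams

# Offline counterpart of the server command above (assumed to accept the same
# compilation-config keys as the CLI flag).
llm = LLM(
    model="Qwen/Qwen2.5-7B-Instruct",
    compilation_config={
        "use_inductor_graph_partition": True,
        "splitting_ops": ["flash_attention", "addmm", "aten.bmm.default"],
    },
)
print(llm.generate(["Hello"], SamplingParams(max_tokens=16))[0].outputs[0].text)
```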
Essential Elements of an Effective PR Description Checklist

- `supported_models.md` and `examples` for a new model