@bnellnm bnellnm commented Oct 31, 2024

Add an inductor pass to rewrite and fuse collective communication ops with GEMMs.

See #9883 for version that includes llama hacks.

TODO:

  • Find a workaround for the infinite recursion in torch._inductor.ir.ExternKernel.__str__ (pytorch/pytorch#139501).
  • Does not work with graph splitting, because the first/last subgraphs need special treatment for residual splitting. This could be worked around if we decide not to split the residual and instead compute on junk data.
  • Try to support the non-custom RMS norm (nice to have).
  • Does not work with the new inductor caching mechanism.
  • Fix the deadlock in benchmark serving mode.
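Conceptually, the pass walks the compiled graph looking for a GEMM immediately followed by a collective and replaces the pair with one fused kernel that overlaps compute and communication. The toy sketch below illustrates that rewrite shape on a flat list of op names; the op names and the helper are invented for illustration and are not the actual inductor pass.

```python
# Toy illustration of the rewrite (not the real inductor pass): fuse a GEMM
# that is immediately followed by an all-reduce into a single fused op, the
# way a Flux gemm+comm kernel would replace the pair.
def fuse_gemm_collectives(ops):
    fused = []
    i = 0
    while i < len(ops):
        if i + 1 < len(ops) and ops[i] == "mm" and ops[i + 1] == "all_reduce":
            # One kernel overlapping the matmul with the collective.
            fused.append("fused_gemm_all_reduce")
            i += 2
        else:
            fused.append(ops[i])
            i += 1
    return fused

graph = ["rms_norm", "mm", "all_reduce", "add", "mm", "all_reduce"]
print(fuse_gemm_collectives(graph))
# → ['rms_norm', 'fused_gemm_all_reduce', 'add', 'fused_gemm_all_reduce']
```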

cc @tlrmchlsmth , @ProExpertProg , @SageMoore , @youkaichao

Requires a special config to run:

import torch
from vllm import LLM
from vllm.config import CompilationConfig

config = CompilationConfig(
    level=3,
    custom_ops=["+rms_norm"],
    splitting_ops=[],
)
config.pass_config.enable_collective_fusion = True

llm = LLM(model=model,
          enforce_eager=eager,
          tensor_parallel_size=tp_size,
          disable_custom_all_reduce=not custom_ar,
          dtype=torch.float16,
          max_num_batched_tokens=2048,
          compilation_config=config)

Some benchmark results:

model = meta-llama/Llama-3.1-70B-Instruct
tp_size = 4
chunked prefill size = 2048
batch_size = 1
input_len=2048
output_len=1
Eager mode + torch.compile

Avg latency: 0.16625802051508798 seconds
10% percentile latency: 0.16468927392270416 seconds
25% percentile latency: 0.16511811560485512 seconds
50% percentile latency: 0.16571794101037085 seconds
75% percentile latency: 0.16671031567966565 seconds
90% percentile latency: 0.1675790420267731 seconds
99% percentile latency: 0.17226817809045325 seconds

Eager mode + torch.compile + flux

Avg latency: 0.1583265809295699 seconds
10% percentile latency: 0.15630255101714283 seconds
25% percentile latency: 0.15688058221712708 seconds
50% percentile latency: 0.15789097198285162 seconds
75% percentile latency: 0.15932484721997753 seconds
90% percentile latency: 0.16147575441282241 seconds
99% percentile latency: 0.16223905643215403 seconds

cudagraphs + torch.compile

Avg latency: 0.17894838895183057 seconds
10% percentile latency: 0.17591054290533065 seconds
25% percentile latency: 0.176349236513488 seconds
50% percentile latency: 0.17722250788938254 seconds
75% percentile latency: 0.17862555047031492 seconds
90% percentile latency: 0.18074012212455273 seconds
99% percentile latency: 0.2171030258946121 seconds

cudagraphs + torch.compile + flux

Avg latency: 0.17262270329520107 seconds
10% percentile latency: 0.17164990142919123 seconds
25% percentile latency: 0.17196793673792854 seconds
50% percentile latency: 0.1724927049363032 seconds
75% percentile latency: 0.1730666920193471 seconds
90% percentile latency: 0.17406681017018855 seconds
99% percentile latency: 0.1758251654729247 seconds
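From the average latencies reported above, the relative improvement from the flux fusion works out to roughly 4.8% in eager mode and 3.5% with cudagraphs:

```python
# Relative latency reduction implied by the average latencies above.
eager_base, eager_flux = 0.16625802051508798, 0.1583265809295699
cg_base, cg_flux = 0.17894838895183057, 0.17262270329520107

def pct_reduction(base, fused):
    return 100.0 * (base - fused) / base

print(f"eager:      {pct_reduction(eager_base, eager_flux):.1f}% lower avg latency")
print(f"cudagraphs: {pct_reduction(cg_base, cg_flux):.1f}% lower avg latency")
# → eager:      4.8% lower avg latency
# → cudagraphs: 3.5% lower avg latency
```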

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs will not trigger a full CI run by default. Instead, only the fastcheck CI will run, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@mergify

mergify bot commented Oct 31, 2024

This pull request has merge conflicts that must be resolved before it can be
merged. @bnellnm please rebase it. https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork


@tlrmchlsmth tlrmchlsmth left a comment


looking forward to this one!

@bnellnm bnellnm force-pushed the collective-fusion branch 2 times, most recently from 0a1f637 to 1c9d79c Compare November 8, 2024 23:36
@mergify mergify bot removed the needs-rebase label Nov 8, 2024
@bnellnm bnellnm marked this pull request as ready for review November 9, 2024 23:10

device_group = group.device_group
rank = group.rank_in_group

if use_flux:

Could we maybe use a better abstraction than if statements based on use_flux?
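One common way to address this review comment is to put each backend behind a shared interface and select it once, so the rest of the pass never branches on `use_flux`. The sketch below is illustrative only; the class and method names are invented, not the PR's actual code.

```python
# A strategy-style sketch for hiding the use_flux branching behind one
# interface. Names here are hypothetical, for illustration only.
from abc import ABC, abstractmethod

class GemmCollectiveBackend(ABC):
    @abstractmethod
    def fused_gemm_rs(self, a, b):
        """GEMM followed by reduce-scatter."""

class FluxBackend(GemmCollectiveBackend):
    def fused_gemm_rs(self, a, b):
        # Single fused kernel overlapping compute and communication.
        return f"flux_gemm_rs({a}, {b})"

class FallbackBackend(GemmCollectiveBackend):
    def fused_gemm_rs(self, a, b):
        # Unfused fallback: matmul, then a separate collective.
        return f"reduce_scatter(mm({a}, {b}))"

def get_backend(use_flux: bool) -> GemmCollectiveBackend:
    # The only place use_flux is consulted.
    return FluxBackend() if use_flux else FallbackBackend()

backend = get_backend(use_flux=True)
print(backend.fused_gemm_rs("x", "w"))  # → flux_gemm_rs(x, w)
```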

Comment on lines +402 to +506
fused_node = graph.call_function(fused_gemm_func, kwargs=kwargs)

graph.inserting_after(fused_node)
result_node_new = graph.call_function(operator.getitem, (fused_node, 0))
residual_node_new = graph.call_function(operator.getitem, (fused_node, 1))
my_residual_node_new = graph.call_function(operator.getitem, (fused_node, 2))

I think multi-output match has a utility that emits a function and tuple accessors.

Comment on lines +412 to +509
res_replacements.append(residual_node_new)
my_res_replacements.append(my_residual_node_new)

Any reason we save all of the residuals instead of just the previous one?

Comment on lines +388 to +484
if gemm_1 is None or gemm_2 is None:
raise ValueError("Missing 'val' in gemm weights meta data")

Wouldn't it be simpler to just do meta["val"]?
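The reviewer's point, in miniature: indexing directly raises a `KeyError` that already names the missing key, so the explicit `None` check plus a hand-written `ValueError` adds little. (Toy dict below; not the PR's data.)

```python
# Direct indexing fails loudly on its own when the key is absent.
meta = {"other": 1}

try:
    gemm_1 = meta["val"]
except KeyError as e:
    print(f"missing key: {e}")  # → missing key: 'val'
```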

bnellnm added 21 commits January 7, 2025 17:19
Signed-off-by: Bill Nell <[email protected]>
@github-actions

This pull request has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this pull request should remain open. Thank you!

@github-actions github-actions bot added the stale Over 90 days of inactivity label Apr 18, 2025
@github-actions

This pull request has been automatically closed due to inactivity. Please feel free to reopen if you intend to continue working on it. Thank you!

@github-actions github-actions bot closed this May 18, 2025

Labels: frontend, needs-rebase, stale (Over 90 days of inactivity)

3 participants