Conversation

@annanyapr
Contributor

I have refactored `_attention_prefill_ragged` to allow the v dimension to differ from the q/k dimension. This can be used for MLA attention in DeepSeek models.
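
Not part of the PR itself, but a minimal NumPy sketch of the attention math this refactor enables (the actual change lives in the TIR kernel `_attention_prefill_ragged`). The function name and head dimensions below are hypothetical, chosen only to illustrate that scores are computed in the q/k head dimension while the output inherits the v head dimension, as in DeepSeek-style MLA:

```python
import numpy as np

def attention_diff_v_dim(q, k, v):
    # q: [seq_q, num_heads, d_qk], k: [seq_kv, num_heads, d_qk],
    # v: [seq_kv, num_heads, d_v]  ->  returns [seq_q, num_heads, d_v]
    d_qk = q.shape[-1]
    # Attention scores use the q/k head dimension.
    scores = np.einsum("qhd,khd->hqk", q, k) / np.sqrt(d_qk)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)
    # The output shape follows the v head dimension, which may differ from d_qk.
    return np.einsum("hqk,khd->qhd", probs, v)

# Hypothetical sizes: q/k head dim 192, v head dim 128.
q = np.random.randn(4, 8, 192)
k = np.random.randn(6, 8, 192)
v = np.random.randn(6, 8, 128)
out = attention_diff_v_dim(q, k, v)
assert out.shape == (4, 8, 128)
```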

@annanyapr
Contributor Author

@MasterJH5574 can you take a look?

@annanyapr changed the title from "Refactored code to allow for different v dimension from q/k dimension" to "Added support for normal MLA kernel" Feb 17, 2025
@annanyapr
Contributor Author

@MasterJH5574 TVM seems to be building correctly, and tvm/tests/python/relax/test_runtime_builtin_paged_attention_kv_cache_tir.py seems to be working fine.

Contributor

@MasterJH5574 left a comment


LGTM, thanks! We are good to go after CI passes.

@MasterJH5574 merged commit 6d92f2a into apache:main Feb 20, 2025
10 checks passed
ShiboXing pushed a commit to ShiboXing/tvm that referenced this pull request Aug 10, 2025
* Refactored code to allow for different v dimension from q/k dimension

* Made a small fix after the rebase

* Made changes to the runtime to support normal kernel

* Fixed a compilation issue

* Fix lint

---------

Co-authored-by: Ruihang Lai <[email protected]>
