[Refactor] Remove Unused Naive Moe Kernels #21125

Closed

Conversation

@yewentao256 (Contributor) commented Jul 17, 2025

Purpose

Fixed #21124

Signed-off-by: yewentao256 <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of these by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

mergify bot added the performance (Performance-related issues) label Jul 17, 2025
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request is a refactoring effort to remove unused "naive" Mixture-of-Experts (MoE) kernels. The changes are clean and involve the deletion of C++ kernels, Python wrappers, benchmarks, and tests related to the old moe_permute and moe_unpermute operations. This cleanup improves the maintainability of the codebase. The changes appear to be correct and self-contained. I have reviewed the removals and found no issues.
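For context, here is a minimal PyTorch sketch of what a naive MoE permute/unpermute pair does; the function names, signatures, and shapes below are illustrative assumptions for exposition, not the removed kernels' actual API. The point of the permutation is that each expert's GEMM can then run over a contiguous block of rows.

```python
# Illustrative sketch only: the removed vLLM kernels are C++/CUDA; these
# names and signatures are assumptions, not the real API.
import torch

def moe_permute(hidden_states: torch.Tensor, topk_ids: torch.Tensor):
    """Group tokens by their assigned expert so each expert sees a
    contiguous slice of rows (what a naive permute kernel achieves)."""
    num_tokens, topk = topk_ids.shape
    # Flatten (token, expert-slot) pairs and sort by expert id.
    flat_expert_ids = topk_ids.flatten()
    sort_order = torch.argsort(flat_expert_ids, stable=True)
    # Each sorted position maps back to an original token row.
    src_rows = sort_order // topk
    permuted = hidden_states[src_rows]
    return permuted, sort_order

def moe_unpermute(permuted_out: torch.Tensor, sort_order: torch.Tensor,
                  topk_weights: torch.Tensor) -> torch.Tensor:
    """Scatter expert outputs back to original token order and combine
    the top-k expert results with their routing weights."""
    num_tokens, topk = topk_weights.shape
    hidden = permuted_out.shape[-1]
    restored = torch.empty_like(permuted_out)
    # Row j of permuted_out came from flat position sort_order[j].
    restored[sort_order] = permuted_out
    restored = restored.view(num_tokens, topk, hidden)
    return (restored * topk_weights.unsqueeze(-1)).sum(dim=1)
```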


mergify bot commented Jul 21, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @yewentao256.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify bot added the needs-rebase label Jul 21, 2025
mergify bot removed the needs-rebase label Jul 21, 2025
@yewentao256 (Contributor, Author) commented

This seems to have gone stagnant; could you take a look? Thanks! @WoosukKwon

@tlrmchlsmth (Collaborator) left a comment


I think we're going to integrate them here https://github.com/vllm-project/vllm/pull/17934/files

@yewentao256 (Contributor, Author) commented

@tlrmchlsmth Thanks for letting me know.
It seems that PR is the same as #20903, @varun-sundar-rabindranath?
Do you know which one is faster?

@varun-sundar-rabindranath (Contributor) commented

Hi @yewentao256 - I believe there is value in having the CUDA permute/unpermute kernels.

I experimented with both the Triton kernels (introduced in #20903) and the CUDA kernels (#20982), and decided to merge the Triton kernels due to slightly better time to first token (TTFT).
Arguments for keeping the CUDA kernels:

  • I think there will be regimes where the CUDA kernels are faster.
  • In the context of DeepGemmExperts, the CUDA kernels are currently slower because they do some unnecessary extra work; eliminating that would make them faster.
  • The CUDA kernels would probably be useful for CUTLASS grouped GEMM data preparation (permute/unpermute).

The goal is to have a PermuteUnpermute abstract class with both Triton and CUDA implementations; that way we can add dispatch logic that picks an implementation based on the number of tokens, along the lines of the sketch below.
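A minimal sketch of what that design could look like (every class, method, and threshold here is a hypothetical illustration, not vLLM's actual API):

```python
# Hypothetical sketch of the proposed design; none of these names exist in
# vLLM, and the dispatch threshold is a placeholder, not a measured value.
from abc import ABC, abstractmethod

import torch


class PermuteUnpermute(ABC):
    """Abstract interface for MoE token permute/unpermute backends."""

    @abstractmethod
    def permute(self, hidden_states: torch.Tensor,
                topk_ids: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        """Group tokens by expert; return (permuted_states, sort_order)."""

    @abstractmethod
    def unpermute(self, permuted_out: torch.Tensor, sort_order: torch.Tensor,
                  topk_weights: torch.Tensor) -> torch.Tensor:
        """Restore original token order and combine top-k expert outputs."""


class TritonPermuteUnpermute(PermuteUnpermute):
    """Would wrap the Triton kernels introduced in #20903."""

    def permute(self, hidden_states, topk_ids):
        raise NotImplementedError("would call the Triton permute kernel")

    def unpermute(self, permuted_out, sort_order, topk_weights):
        raise NotImplementedError("would call the Triton unpermute kernel")


class CudaPermuteUnpermute(PermuteUnpermute):
    """Would wrap the CUDA kernels from #20982."""

    def permute(self, hidden_states, topk_ids):
        raise NotImplementedError("would call the CUDA permute kernel")

    def unpermute(self, permuted_out, sort_order, topk_weights):
        raise NotImplementedError("would call the CUDA unpermute kernel")


def select_permute_backend(num_tokens: int,
                           token_threshold: int = 1024) -> PermuteUnpermute:
    """Dispatch on batch size; the threshold would come from benchmarks,
    and which backend wins in which regime is the open question above."""
    if num_tokens <= token_threshold:
        return CudaPermuteUnpermute()
    return TritonPermuteUnpermute()
```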

@yewentao256 (Contributor, Author) left a comment


@varun-sundar-rabindranath
Makes sense, thanks for the context!

@yewentao256 (Contributor, Author) commented

Closing this PR as it is no longer needed.

Labels
performance Performance-related issues
Development

Successfully merging this pull request may close these issues:

[Feature]: Remove Unused Moe Permute / Un-permute
3 participants