
Elastic Expert Parallel Initial Support #20775


Merged: 35 commits merged into vllm-project:main on Jul 19, 2025

Conversation

Collaborator

@ruisearch42 ruisearch42 commented Jul 10, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

This corresponds to Milestone 1 of #20323.

Co-authored with @libertyeagle

Supported functionality:

  • Retained engine-core state teardown & reinitialization
    • Distributed environment
    • Distributed communicators
    • Model structure & weights: including EPLB weight reshuffle
  • Scale up: new engine-core startup
    • KV cache initialization: use available GPU memory information from existing engine-core to skip expensive profiling
  • Scale down: unneeded engine-core shutdown
  • Control plane
    • API server endpoint (see the sketch after this list)
    • DP engine-core scheduling: e.g., collective operations (from retained and new engine-cores) need to happen at the same time
    • Traffic handling with a simple strategy of draining and dropping during scaling
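
As a concrete illustration of the control plane, here is a minimal sketch of how an external controller might drive the new scaling endpoint. The /scale path and the new_data_parallel_size field follow names used elsewhere in this PR discussion, but the exact request schema shown here is an assumption rather than the final API.

import requests

# Hedged sketch: drive the elastic-EP scaling endpoint from outside vLLM.
# The "/scale" path and the "new_data_parallel_size" field are assumptions
# based on this PR discussion, not a copy of the final request schema.
BASE_URL = "http://localhost:8006"  # example address; adjust to your deployment

def scale(new_data_parallel_size: int) -> None:
    # During the transition the server drains and drops traffic,
    # so callers should expect 503s until scaling completes.
    resp = requests.post(
        f"{BASE_URL}/scale",
        json={"new_data_parallel_size": new_data_parallel_size},
        timeout=300,  # scaling currently takes tens of seconds
    )
    resp.raise_for_status()

if __name__ == "__main__":
    # e.g. walk the DP size 4 -> 5 -> 6 -> 5 -> 4, as in the test result below
    for dp_size in (5, 6, 5, 4):
        scale(dp_size)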

TODO for this PR:

  • More testing with repeated scale up/down
  • Address FIXME
  • Minor refactors and cleanups
    • e.g., remove/move/cleanup scripts in experimental or examples directory

Follow-ups after this PR

Test Plan

Test with PPLX kernel and DeepSeek-V2-Lite

Test Result

Can alternate scale up and down multiple times (e.g., scale from 4->5->6->7->8->7->6->5->4), and drain/drop traffic

(Optional) Documentation Update


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the deepseek (Related to DeepSeek models), frontend, and v1 labels Jul 10, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @ruisearch42, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces initial support for Elastic Expert Parallelism (EEP) in vLLM, enabling dynamic scaling of data parallel (DP) engine cores, particularly beneficial for Mixture-of-Experts (MoE) models. The changes encompass re-initializing distributed environments, managing KV cache states across scaling events, and orchestrating expert weight reshuffling to adapt to the new parallel configurations. A new API endpoint is added to trigger these scaling operations, with mechanisms to handle in-flight requests during transitions.

Highlights

  • Dynamic Data Parallel Scaling: Introduces core functionality to dynamically scale up and down the number of data parallel (DP) engine cores, enabling flexible resource allocation for vLLM deployments.
  • Elastic Expert Parallelism (EPLB) Integration: Implements logic to rebalance and reshuffle expert weights across the new set of available GPUs during scaling operations, ensuring efficient utilization for Mixture-of-Experts (MoE) models.
  • Distributed Environment Re-initialization: Adds mechanisms to gracefully tear down and re-initialize PyTorch distributed process groups and related communication states across engine cores during dynamic scaling events.
  • KV Cache State Management: Enables newly added engine cores to initialize their KV cache based on the available memory information from existing engine cores, optimizing startup time during scale-up.
  • API for Scaling Control: Exposes a new /scale API endpoint on the vLLM server, allowing external systems to programmatically trigger scale-up or scale-down operations.
  • Traffic Handling During Scaling: Incorporates a traffic draining and dropping strategy during scaling transitions to minimize disruption to in-flight requests, ensuring service continuity.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request introduces a major new feature: elastic expert parallelism. This is a substantial change that touches many parts of the vLLM engine, from the low-level distributed communication and worker management to the high-level API server. The implementation seems well-thought-out, with a multi-phase approach to scaling to handle the complexities of a distributed system. The use of a scaling flag to gracefully handle traffic during scaling is a good design choice.

My review focuses on a few key areas:

  • Correctness: I found a type hint mismatch that should be fixed. I also pointed out a commented-out assertion that might hide potential issues.
  • Maintainability & Robustness: I've suggested improvements for a magic number and a custom communication protocol to make the code more robust and easier to maintain.

Overall, this is a great step towards elastic inference in vLLM. The changes are complex, and I appreciate the effort that went into this.


mergify bot commented Jul 10, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @ruisearch42.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 10, 2025
@ruisearch42
Collaborator Author

Hi @abmfy, could you help review the EPLB part? Thanks!

@ruisearch42 ruisearch42 removed the ready ONLY add when PR is ready to merge/full CI is needed label Jul 17, 2025
@ruisearch42 ruisearch42 added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 17, 2025
PORT=8006
DATA_PARALLEL_SIZE=4
REDUNDANT_EXPERTS=0
MODEL_NAME="/models/models--deepseek-ai--DeepSeek-V2-Lite/snapshots/604d5664dddd88a0433dbae533b7fe9472482de0"
Collaborator

I've been recommending Qwen/Qwen3-30B-A3B-FP8 as a small example EP+DP model. It's very strong, plus it's a good stand-in for DeepSeek models since they use the same quantization scheme.

Collaborator Author

Thanks for the suggestion.

In the initial EEP support we assume the presence of EPLB, which is not supported in Qwen3 right now. So I guess we still need DeepSeek-V2-Lite for now.

Collaborator

FYI @mnicely - this PR is important for autoscaling large-scale distributed MoE inference. It would be great to upstream any changes necessary for changing the world_size.


Thanks for the ping. I'll bring it to the team.

Comment on lines +38 to +45
wget https://developer.download.nvidia.com/compute/redist/nvshmem/3.2.5/source/nvshmem_src_3.2.5-1.txz
tar -xvf nvshmem_src_3.2.5-1.txz -C nvshmem_src --strip-components=1
pushd nvshmem_src
wget https://github.com/deepseek-ai/DeepEP/raw/main/third-party/nvshmem.patch
git init
git apply -vvv nvshmem.patch
git apply --reject --whitespace=fix ../../eep_nvshmem.patch
else
Collaborator

Could you upgrade to 3.3.9, since it has the performance improvements from the DeepEP patch? (BTW please double check performance as well, if you have the bandwidth to do so)

deepseek-ai/DeepEP#267 (comment)

Collaborator Author

Thanks. Can we do it as a follow-up?

Right now this initial PR only supports PPLX, and version 3.2.5-1 is consistent with the current DeepEP installation script.

The DeepEP nvshmem.patch is applied now for a few reasons: 1) we will support DeepEP eventually; 2) it is consistent with the current DeepEP installation script; 3) it removes the need for GDRCOPY; without the patch, the nvshmem compilation fails.


Currently, we only need our nvshmem patch that clears out all global communication states during nvshmem_finalize so we can create a new communication group with a new set of participant GPUs.

Collaborator

We need to get nvshmem + DeepEP built into the vLLM image.

Collaborator Author

Thanks, can we do it as a follow-up?

Collaborator

Yeah, that was just a side note, not something for this PR.

Comment on lines +1272 to +1279
class ScalingMiddleware:
"""
Middleware that checks if the model is currently scaling and
returns a 503 Service Unavailable response if it is.

This middleware applies to all HTTP requests and prevents
processing when the model is in a scaling state.
"""
Collaborator

How long does this take typically? Would it be better to allow requests to queue?

Also we should add an API to return whether the vLLM instance is currently unavailable due to autoscaling, so that external routers can take this into account.

Collaborator Author

Added an is_scaling_elastic_ep API.

Right now scaling up 4->5 takes ~55 seconds and scaling down 5->4 takes ~40 seconds. At this stage we are using a simple strategy of dropping traffic, since this interruption time is expected to be minimized when we optimize in Milestone 2. Maybe it's better to revisit at that stage?

I think the idea is good, though. Were you thinking about buffering requests at the API server or at the scheduler?
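
(As an illustration of how an external router might use the new API: the sketch below assumes it is exposed as a GET /is_scaling_elastic_ep endpoint returning a JSON boolean; both the path and the response shape are assumptions, not the PR's actual interface.)

import requests

def instance_is_scaling(base_url: str) -> bool:
    # Assumed endpoint and response shape; treat unreachable instances
    # as unavailable as well.
    try:
        resp = requests.get(f"{base_url}/is_scaling_elastic_ep", timeout=1)
        resp.raise_for_status()
        return bool(resp.json())
    except requests.RequestException:
        return True

def pick_backend(backends: list[str]) -> str | None:
    # Route only to instances that are not mid-scale.
    for url in backends:
        if not instance_is_scaling(url):
            return url
    return None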

Collaborator

Buffering requests in the API server seems more natural, but I haven't thought about it too hard.

Any idea how far you'll be able to optimize it?

Collaborator Author

Had some ideas to reduce this to a few seconds; this requires changes to the communicator reinit, cudagraph, etc. Will work on it next.

The ideal target would be very minimal or 0. Will experiment to see how far these techniques can help us.

@ruisearch42 ruisearch42 removed the ready ONLY add when PR is ready to merge/full CI is needed label Jul 18, 2025

mergify bot commented Jul 18, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @ruisearch42.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 18, 2025
@ruisearch42 ruisearch42 added the ready ONLY add when PR is ready to merge/full CI is needed label Jul 18, 2025
@mergify mergify bot removed the needs-rebase label Jul 18, 2025
Comment on lines 650 to 655
if new_data_parallel_size > old_data_parallel_size:
await self.engine_core.scale_up_elastic_ep(
new_data_parallel_size)
else:
await self.engine_core.scale_down_elastic_ep(
new_data_parallel_size)
Collaborator

Why have separate scale_up vs scale_down calls?


We have different logic for scale up vs. scale down in the backend.
For scale up: allocate new GPUs -> start new workers -> reinit comm -> reshard experts.
For scale down: reshard experts -> shut down workers -> reinit comm.
We can definitely consolidate the frontend API into a unified one, with only the interaction between EngineCore/workers and CoreClient keeping separate logic.
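
(A rough sketch of the unified front-end entry point discussed here, reusing the method names from the snippet above; the unified method name and the config lookup for the current DP size are assumptions.)

async def scale_elastic_ep(self, new_data_parallel_size: int) -> None:
    # Hypothetical unified dispatch; attribute paths are assumptions.
    old_data_parallel_size = \
        self.vllm_config.parallel_config.data_parallel_size
    if new_data_parallel_size == old_data_parallel_size:
        return  # nothing to do
    if new_data_parallel_size > old_data_parallel_size:
        # Scale up: allocate new GPUs -> start new workers
        #           -> reinit comm -> reshard experts
        await self.engine_core.scale_up_elastic_ep(new_data_parallel_size)
    else:
        # Scale down: reshard experts -> shut down workers -> reinit comm
        await self.engine_core.scale_down_elastic_ep(new_data_parallel_size)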

Collaborator Author

@ruisearch42 ruisearch42 Jul 18, 2025

I think it makes sense and is cleaner to have a single API for CoreClient. I've updated the code. We can later refine the implementations.

@simon-mo simon-mo merged commit 2179372 into vllm-project:main Jul 19, 2025
75 of 78 checks passed
hj-mistral pushed a commit to hj-mistral/vllm that referenced this pull request Jul 19, 2025

Labels
deepseek (Related to DeepSeek models), documentation (Improvements or additions to documentation), frontend, ready (ONLY add when PR is ready to merge/full CI is needed), v1