
[Model] Pooling models default to using chunked prefill & prefix caching if supported. #20930


Open · wants to merge 37 commits into base: main

Conversation

@noooop (Contributor) commented Jul 14, 2025

TL;DR

  • Pooling models that use LAST pooling and causal attention now default to using chunked prefill & prefix caching, e.g.
    • SequenceClassification: tomaarsen/Qwen3-Reranker-0.6B-seq-cls
    • Embedding: Qwen/Qwen3-Embedding-0.6B (embedding models using chunked prefill & prefix caching? Weird? But why not!) A usage sketch follows this list.
  • The following types of models do not support chunked prefill & prefix caching:
    • BERT-like bidirectional models (non-causal attention), e.g. Embedding: intfloat/e5-small; SequenceClassification: papluca/xlm-roberta-base-language-detection
    • Models that use the LLM2Vec method to convert causal attention into bidirectional attention, e.g. Embedding: Alibaba-NLP/gte-Qwen2-1.5B-instruct
    • Causal attention models that do not use LAST pooling, e.g. MEAN pooling. There are currently no such examples among the models supported by vLLM.
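
As a usage sketch (not part of this PR's diff): serving one of the LAST-pooling models above with chunked prefill and prefix caching explicitly enabled could look like the following. It assumes a recent vLLM build where task="embed" and LLM.embed() are available; with this PR these two flags should simply become the default for such models.

```python
from vllm import LLM

# Hedged sketch: the flags are passed explicitly only to make the intent visible;
# with this PR they should be enabled by default for LAST-pooling causal models.
llm = LLM(
    model="Qwen/Qwen3-Embedding-0.6B",
    task="embed",
    enable_chunked_prefill=True,
    enable_prefix_caching=True,
)

outputs = llm.embed(["vLLM is a high-throughput inference engine."])
print(len(outputs[0].outputs.embedding))  # embedding dimension
```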

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

Purpose

```python
ModelForPooling = _create_pooling_model_cls(
    cls,
    default_pooling_type=PoolingType.LAST,
    default_normalize=False,
    default_softmax=True,
)
```

This piece of code may not affect the main process…

refer to #20012

  • Set default_pooling_type in vllm.config.

  • Keep decoder-only SequenceClassification models supporting automatic prefix caching.

  • The pooler_config in the startup log now shows the correct pooling_type instead of None, which helps with debugging.

  • LLM.encode() and the pooling server use ALL pooling, which does not support chunked prefill, e.g. jason9693/Qwen2.5-1.5B-apeach. (An illustrative sketch of the pooling-type resolution idea follows this list.)
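
For illustration only, a minimal, hypothetical sketch of the idea behind a per-model default pooling type and the chunked-prefill eligibility check. The names PoolingType, resolve_pooling_type, and supports_chunked_prefill below are stand-ins, not vLLM's actual internal API:

```python
from enum import Enum
from typing import Optional


class PoolingType(Enum):
    LAST = "LAST"
    CLS = "CLS"
    MEAN = "MEAN"
    ALL = "ALL"
    STEP = "STEP"


class MyPoolingModel:
    # Per-class default, e.g. LAST for decoder-only embedding/classification models.
    default_pooling_type = PoolingType.LAST


def resolve_pooling_type(model_cls, user_override: Optional[PoolingType] = None) -> PoolingType:
    # A user override (e.g. via override_pooler_config) wins; otherwise fall back
    # to the model's declared default instead of leaving it as None.
    return user_override or getattr(model_cls, "default_pooling_type", PoolingType.LAST)


def supports_chunked_prefill(pooling_type: PoolingType, is_causal: bool) -> bool:
    # Only causal-attention models with LAST pooling can safely chunk the prefill,
    # because the final token's hidden state does not depend on how the prompt is split.
    return is_causal and pooling_type == PoolingType.LAST
```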

cc @DarkLight1337 @maxdebayser

Test Plan

pytest -s -vvv tests/test_config.py::test_default_pooling_type
pytest -s -vvv tests/models/language/pooling/test_auto_prefix_cache_support.py
pytest -s -vvv tests/entrypoints/llm/test_classify.py::test_encode_api
pytest -s -vvv tests/entrypoints/openai/test_classification.py::test_pooling

Add a test for default_pooling_type in tests/models/language/pooling/mteb_utils.py, and double-check that all implementations use the correct default_pooling_type. (A hedged sketch of such a check appears below.)
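
A minimal sketch of what such a check could look like; the attribute path llm.llm_engine.model_config.pooler_config is an assumption about vLLM internals rather than a confirmed public API:

```python
# Hypothetical helper (assumed attribute path): after building the engine, verify
# that the resolved pooling type matches the value expected for the model under test.
def check_default_pooling_type(llm, expected: str = "LAST") -> None:
    pooler_config = llm.llm_engine.model_config.pooler_config  # assumed internal path
    assert pooler_config is not None
    assert pooler_config.pooling_type == expected
```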

Test Result

passed

(Optional) Documentation Update

Fix #20894
Fix #19950

The implicit conversion part was split out into #21103.


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the llama Related to Llama models label Jul 14, 2025
@gemini-code-assist gemini-code-assist bot left a comment

Summary of Changes

Hello @noooop, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves how vLLM handles Hugging Face ForSequenceClassification models. It introduces an automatic conversion mechanism that allows any ForCausalLM model to be used for sequence classification without requiring explicit registration or custom wrapper classes. This change streamlines model integration, centralizes classification task detection, and enhances the flexibility of vLLM's model loading and execution pipeline, while also addressing a reported issue with the TRANSFORMERS implementation for these models.

Highlights

  • Automatic ForSequenceClassification Support: I've implemented a new mechanism to automatically convert ForCausalLM models into ForSequenceClassification models by dynamically applying an adapter. This removes the need for explicit registration and custom wrapper classes for each specific model type, streamlining model integration (a usage sketch follows this list).
  • Centralized Classification Task Logic: I've introduced a new _is_classify_task method and updated the task resolution in vllm/config.py. This provides a more robust way to identify and handle classification tasks, ensuring they are correctly routed to the 'pooling' runner.
  • Refactored Model Registry: The model registry has been cleaned up by removing explicit entries for automatically convertible ForSequenceClassification models. The registry's ability to inspect and normalize these architectures on the fly has been enhanced, making it more flexible.
  • Improved ScoreModel Bias Handling: The _ScoreModel adapter now correctly respects the score_bias configuration from the Hugging Face model config. This allows classification heads to have a bias if specified by the original model.
  • Enhanced Testing Coverage: The test suite for model initialization (test_initialization.py) now includes automatically converted ForSequenceClassification models, ensuring the new automatic support mechanism functions as expected.
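
To make the described behavior concrete, a hedged usage sketch (not taken from this PR's diff) of running a decoder-only sequence-classification checkpoint through the pooling runner; it assumes a vLLM build where task="classify" and LLM.classify() are available:

```python
from vllm import LLM

# Hedged sketch: the checkpoint runs as a classifier and, per this PR,
# resolves to LAST pooling by default.
llm = LLM(model="tomaarsen/Qwen3-Reranker-0.6B-seq-cls", task="classify")

outputs = llm.classify(["vLLM makes serving pooling models easy."])
print(outputs[0].outputs.probs)  # class probabilities from the classification head
```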

@mergify mergify bot added new-model Requests to new models qwen Related to Qwen models labels Jul 14, 2025
@gemini-code-assist gemini-code-assist bot left a comment

Code Review

This pull request refactors the model loading and registration logic to automatically support ForSequenceClassification models, which is a great improvement. The changes involve removing hardcoded model registrations and introducing dynamic conversion logic.

My review has identified a critical issue in the new conversion logic in vllm/model_executor/model_loader/utils.py that could lead to incorrect model loading. I've also pointed out a few medium-severity issues related to code clarity, maintainability, and a typo. Addressing these points will improve the robustness and readability of the new implementation.

@vrdn-23 (Contributor) commented Jul 14, 2025

This looks awesome @noooop!
Question: Would this extend only to currently supported models, or would it negate the need for separate PRs for non-LLM models (like #20215, for example)?

@noooop (Contributor, Author) commented Jul 15, 2025

This looks awesome @noooop! Question: Would this extend only to currently supported models, or would it negate the need for separate PRs for non-LLM models (like #20215, for example)?

DebertaV2ForSequenceClassification uses a classifier head, while this PR uses score, so it is not supported. Perhaps the title was not well chosen; in fact, this PR only implements a small amount of functionality.

@noooop noooop changed the title [Model] Automatically support all ForSequenceClassification models [Model] Re-add the implicit conversion feature for as_seq_cls_model Jul 15, 2025
@maxdebayser maxdebayser left a comment

Nice, this is going in the right direction


mergify bot commented Jul 15, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @noooop.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 15, 2025
@noooop noooop changed the title [Model] Re-add the implicit conversion feature for as_seq_cls_model [Model] Auto retrieve default_pooling_type Jul 17, 2025
@noooop noooop closed this Jul 21, 2025
@noooop noooop force-pushed the auto_conversion branch from e0303da to 8188196 Compare July 21, 2025 03:22
@noooop noooop reopened this Jul 21, 2025
@mergify mergify bot removed the needs-rebase label Jul 21, 2025
@noooop noooop changed the title [Model] Auto retrieve default_pooling_type [Model] Auto resolve default_pooling_type Jul 21, 2025
@noooop noooop marked this pull request as ready for review July 21, 2025 06:57
noooop added 2 commits August 8, 2025 18:40
Signed-off-by: wang.yuqi <[email protected]>
Signed-off-by: wang.yuqi <[email protected]>
noooop added 3 commits August 8, 2025 18:42
Signed-off-by: wang.yuqi <[email protected]>
Signed-off-by: wang.yuqi <[email protected]>
Signed-off-by: wang.yuqi <[email protected]>
noooop added 4 commits August 8, 2025 18:51
Signed-off-by: wang.yuqi <[email protected]>
Signed-off-by: wang.yuqi <[email protected]>
Signed-off-by: wang.yuqi <[email protected]>
Signed-off-by: wang.yuqi <[email protected]>
@DarkLight1337 (Member)

Let's see if tests pass

@noooop (Contributor, Author) commented Aug 8, 2025

Thanks for reviewing

@maxdebayser maxdebayser left a comment

I'm still a bit ambivalent about the default pooling type. If the model config doesn't come with a definition, shouldn't the user set this value?

@noooop (Contributor, Author) commented Aug 9, 2025

I'm still a bit ambivalent about the default pooling type. If the model config doesn't come with a definition, shouldn't the user set this value?

  • For embedding models, most ship a sentence-transformers config with the correct pooling type, and users can also modify it via override_pooler_config (a hedged usage sketch follows this list).
  • For classification models, the default pooling type is needed to distinguish between LAST pooling and CLS pooling, which is also the main usage scenario of this PR.
  • For reward models, which use ALL pooling or STEP pooling, we should prevent them from using chunked prefill & prefix caching; that scenario is also covered by this PR.
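
For reference, a minimal sketch of overriding the pooling type from user code. It assumes PoolerConfig is importable from vllm.config and that LLM accepts override_pooler_config, which matches recent vLLM releases but should be checked against the installed version:

```python
from vllm import LLM
from vllm.config import PoolerConfig

# Hedged sketch: force MEAN pooling for an embedding model whose checkpoint or
# sentence-transformers config would otherwise resolve to a different pooling type.
llm = LLM(
    model="Qwen/Qwen3-Embedding-0.6B",
    task="embed",
    override_pooler_config=PoolerConfig(pooling_type="MEAN"),
)
```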

@maxdebayser (Contributor)

Ok, I see the point for classification and reward models. Thanks for the explanation.

@noooop noooop changed the title [Model] Auto resolve default_pooling_type & Optimize prefix caching enable verify logic. [Model] Pooling models default to using chunked prefill & prefix caching if possible. Aug 10, 2025
@noooop noooop changed the title [Model] Pooling models default to using chunked prefill & prefix caching if possible. [Model] Pooling models default to using chunked prefill & prefix caching if they are supported. Aug 10, 2025
@noooop noooop changed the title [Model] Pooling models default to using chunked prefill & prefix caching if they are supported. [Model] Pooling models default to using chunked prefill & prefix caching if supported. Aug 10, 2025
Labels
frontend · llama (Related to Llama models) · new-model (Requests to new models) · qwen (Related to Qwen models) · v1
4 participants