[Feature] Add command tool parser for Command-A model #20633
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
Summary of Changes
Hello @gjgjos, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates a new CommandToolParser to enable seamless function calling for the CohereLabs/c4ai-command-a-03-2025 model. It ensures that tool calls, which follow a specific delimited format, are correctly identified and extracted from both synchronous and streaming model outputs, thereby expanding the system's compatibility with a new family of models.
Highlights

- New Tool Parser: Introduced a new CommandToolParser specifically designed to handle tool calls for the CohereLabs/c4ai-command-a-03-2025 model.
- Tool Call Format Support: The parser is configured to recognize and extract tool call information enclosed within <|START_ACTION|> and <|END_ACTION|> delimiters, as used by the Command-A model.
- Synchronous and Streaming Parsing: The CommandToolParser implements logic for both synchronous (batch) and streaming extraction of tool calls, ensuring compatibility with various response types.
- Robust JSON Handling: It leverages partial_json_parser to gracefully handle incomplete or potentially malformed JSON payloads that may occur during streaming responses, improving parsing robustness.
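To make the delimiter-based extraction concrete, here is a minimal illustrative sketch — not the PR's actual implementation. The START/END strings match the Command-A format described above; the function name and the payload shape (a JSON list with tool_name/parameters keys) are assumptions for illustration only:

```python
import json

# Illustrative sketch (not the PR's code) of pulling the tool-call payload
# out of a Command-A completion delimited by action tokens.
START = "<|START_ACTION|>"
END = "<|END_ACTION|>"

def extract_tool_calls(output: str):
    """Return (prefix_text, parsed_tool_calls), or (output, None) if no
    complete action block is present."""
    start = output.find(START)
    end = output.find(END)
    if start == -1 or end == -1:
        return output, None           # no complete <|START|>...<|END|> block
    prefix = output[:start]           # free-form text before the block
    payload = output[start + len(START):end]
    return prefix, json.loads(payload)
```

In the streaming path, the payload between the delimiters may still be incomplete, which is where a tolerant parser such as partial_json_parser comes in instead of plain json.loads.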
Code Review
This PR introduces a CommandToolParser for Cohere's command model tool-use format, covering both synchronous and streaming modes. A critical bug in the streaming logic leading to incorrect tool call indexing was identified and fixed, along with suggestions for improving correctness and performance.
# Streaming state
self.prev_tool_call_arr: list[dict] = []
self.streamed_args_for_tool: list[str] = []
self.current_tool_id: int = -1
# Case 2: Starting a new block
if cur_start > prev_start:
    self.current_tool_id += 1
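The count-based case dispatch this diff hunk belongs to can be sketched as follows. This is illustrative only; the function name and the "inside/outside" labels are assumptions, while cur_start, cur_end, and prev_start mirror the variables in the diff:

```python
# Illustrative sketch of delimiter-count-driven streaming cases.
# cur_start / cur_end: occurrences of the start/end action tokens seen so far;
# prev_start: occurrences of the start token as of the previous call.
def classify_delta(prev_start: int, cur_start: int, cur_end: int) -> str:
    if cur_start > prev_start:
        return "new_block"      # Case 2: a new action block just opened
    if cur_start > cur_end:
        return "inside_block"   # an opened block has not been closed yet
    return "outside_block"      # every opened block has been closed
```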
    return ExtractedToolCallInformation(tools_called=True,
                                        tool_calls=tool_calls,
                                        content=prefix or None)
except Exception:
self.prev_tool_call_arr: list[dict] = []
self.streamed_args_for_tool: list[str] = []
self.current_tool_id: int = -1
self.current_tool_name_sent: bool = False
prev_start = previous_token_ids.count(self.tool_call_start_token_id)
cur_start = current_token_ids.count(self.tool_call_start_token_id)
cur_end = current_token_ids.count(self.tool_call_end_token_id)
The count() method is called on previous_token_ids and current_token_ids in each invocation of this streaming method. Since these lists can grow large, this is inefficient: it re-scans the entire list every time. Consider maintaining the counts as part of the parser's state and updating them incrementally with delta_token_ids to improve performance.
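The incremental-count suggestion can be sketched like this. The class is hypothetical; delta_token_ids and the token-id parameters echo the names used in the review comment:

```python
# Hypothetical sketch: maintain running delimiter counts so each streaming
# call costs O(len(delta)) instead of re-scanning all tokens seen so far.
class StreamingCountState:
    def __init__(self, start_token_id: int, end_token_id: int) -> None:
        self.start_token_id = start_token_id
        self.end_token_id = end_token_id
        self.start_count = 0  # occurrences of the start token seen so far
        self.end_count = 0    # occurrences of the end token seen so far

    def update(self, delta_token_ids: list[int]) -> tuple[int, int]:
        """Fold the newly generated tokens into the running counts."""
        self.start_count += delta_token_ids.count(self.start_token_id)
        self.end_count += delta_token_ids.count(self.end_token_id)
        return self.start_count, self.end_count
```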
Signed-off-by: Ubuntu <[email protected]>
Signed-off-by: Doil Kim <[email protected]>
Force-pushed 8145cea to 2b12b68
Signed-off-by: Doil Kim <[email protected]> Co-authored-by: 김종곤 <[email protected]>
Force-pushed 2b12b68 to 84c6f34
good!
This pull request has merge conflicts that must be resolved before it can be merged.
Signed-off-by: Doil Kim <[email protected]>
Purpose

This PR adds a new tool parser module named CommandToolParser to support the command tool calling format used by the [CohereLabs/c4ai-command-a-03-2025](https://huggingface.co/CohereLabs/c4ai-command-a-03-2025) model.

The parser is designed to extract tool call information from model outputs that follow the <|START_ACTION|> ... <|END_ACTION|> format, parsing both synchronous and streaming responses. It leverages partial_json_parser to gracefully handle incomplete or malformed JSON in streaming scenarios.

Test Plan
Serve the model using vLLM with the following configuration:

vllm serve CohereLabs/c4ai-command-a-03-2025 \
  --enable-auto-tool-choice \
  --tool-call-parser command
Use the OpenAI-compatible API interface to send tool-calling requests:
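The original request example did not survive this page. As a hedged illustration, a minimal tool-calling payload for the OpenAI-compatible /v1/chat/completions endpoint might look like the following; the get_weather tool schema and its city parameter are assumptions for illustration, not from the PR:

```python
import json

# Hypothetical payload for POST http://localhost:8000/v1/chat/completions
# against the vLLM server started above; the tool schema is illustrative.
def build_tool_call_request(model: str, user_message: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }],
        "tool_choice": "auto",
    }

payload = build_tool_call_request(
    "CohereLabs/c4ai-command-a-03-2025", "What's the weather in Seoul?")
print(json.dumps(payload, indent=2))
```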
Test Result

The model successfully produced a tool call, accompanied by a reasoning message. This confirms that the command parser correctly handles tool extraction and reasoning content from the model output.

(Optional) Documentation Update
- Document the CommandToolParser class, explaining the parsing logic.
- Document --tool-call-parser command as a valid option for serving command family models.