
[Feature] Add command tool parser for Command-A model #20633


Closed
gjgjos wants to merge 4 commits from the feat/command-tool-parser branch

Conversation


@gjgjos gjgjos commented Jul 8, 2025

Purpose

This PR adds a new tool parser module named CommandToolParser to support the command tool calling format used by the [CohereLabs/c4ai-command-a-03-2025](https://huggingface.co/CohereLabs/c4ai-command-a-03-2025) model.

The parser is designed to extract tool call information from model outputs that follow the <|START_ACTION|> ... <|END_ACTION|> format, parsing both synchronous and streaming responses. It leverages partial_json_parser to handle incomplete or malformed JSON in streaming scenarios gracefully.
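As a rough illustration of the synchronous extraction path (the function and payload shape below are assumptions for the sketch, not the PR's actual code), the parser can locate the delimited action block and decode its JSON payload:

```python
import json
import re

# Hypothetical sketch; the real CommandToolParser in this PR also handles
# streaming deltas and partial JSON. Assumed payload shape: a JSON list of
# {"tool_name": ..., "parameters": {...}} objects inside the action block.
ACTION_RE = re.compile(r"<\|START_ACTION\|>(.*?)<\|END_ACTION\|>", re.DOTALL)

def extract_tool_calls(model_output: str):
    """Split a Command-A style output into (content_prefix, tool_calls)."""
    match = ACTION_RE.search(model_output)
    if match is None:
        return model_output, []
    prefix = model_output[:match.start()].strip()
    calls = json.loads(match.group(1))
    return prefix, calls

prefix, calls = extract_tool_calls(
    "Checking the weather.<|START_ACTION|>"
    '[{"tool_name": "get_current_weather",'
    ' "parameters": {"city": "Dallas", "state": "TX", "unit": "fahrenheit"}}]'
    "<|END_ACTION|>"
)
```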

Test Plan

  1. Serve the model using vLLM with the following configuration:

    vllm serve CohereLabs/c4ai-command-a-03-2025 \
        --enable-auto-tool-choice \
        --tool-call-parser command
  2. Use the OpenAI-compatible API interface to send tool-calling requests:

    from openai import OpenAI
    
    client = OpenAI(
        api_key="EMPTY",
        base_url="http://localhost:8000/v1",
    )
    
    model = client.models.list().data[0].id
    
    tools = [{
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "e.g. 'San Francisco'"},
                    "state": {"type": "string", "description": "e.g. 'CA'"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
                },
                "required": ["city", "state", "unit"]
            }
        }
    }]
    
    messages = [
        {"role": "user", "content": "Can you tell me what the temperature will be in Dallas, in fahrenheit?"}
    ]
    
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        tools=tools,
        temperature=0.3,
        stream=False
    )
    
    print(response)

Test Result

The model successfully produced a tool call:

{
  "tool_calls": [
    {
      "id": "chatcmpl-tool-...",
      "type": "function",
      "function": {
        "name": "get_current_weather",
        "arguments": "{\"city\": \"Dallas\", \"state\": \"TX\", \"unit\": \"fahrenheit\"}"
      }
    }
  ]
}

Accompanied by a reasoning message:

<|START_THINKING|>I will use the 'get_current_weather' tool to find out the temperature in Dallas, Texas.<|END_THINKING|>

This confirms that the command parser correctly handles tool extraction and reasoning content from the model output.
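The streaming side can be sketched with stdlib only (the PR itself uses partial_json_parser so argument deltas can be emitted before the block closes; the buffering function below is an illustrative simplification that also ignores content before the start token):

```python
import json

START, END = "<|START_ACTION|>", "<|END_ACTION|>"

def stream_actions(chunks):
    """Buffer streamed delta text; yield parsed tool calls once a block closes.

    Works even when the start/end tokens are split across chunk boundaries,
    since matching is done on the accumulated buffer, not on each delta.
    """
    buf, in_action = "", False
    for delta in chunks:
        buf += delta
        if not in_action and START in buf:
            buf = buf.split(START, 1)[1]
            in_action = True
        if in_action and END in buf:
            payload, buf = buf.split(END, 1)
            in_action = False
            yield from json.loads(payload)

# Simulated deltas with the special tokens split across chunks.
chunks = [
    '<|START_ACT',
    'ION|>[{"tool_name": "get_cur',
    'rent_weather", "parameters": {"city": "Dallas"}}]<|END_',
    'ACTION|>',
]
calls = list(stream_actions(chunks))
```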


(Optional) Documentation Update

  • Added inline documentation within the CommandToolParser class explaining the parsing logic.
  • If needed, future docs can describe --tool-call-parser command as a valid option for serving command family models.

@gjgjos gjgjos requested a review from aarnphm as a code owner July 8, 2025 14:51

github-actions bot commented Jul 8, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small but essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @gjgjos, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request integrates a new CommandToolParser to enable seamless function calling for the CohereLabs/c4ai-command-a-03-2025 model. It ensures that tool calls, which follow a specific delimited format, are correctly identified and extracted from both synchronous and streaming model outputs, thereby expanding the system's compatibility with a new family of models.

Highlights

  • New Tool Parser: Introduced a new CommandToolParser specifically designed to handle tool calls for the CohereLabs/c4ai-command-a-03-2025 model.
  • Tool Call Format Support: The parser is configured to recognize and extract tool call information enclosed within <|START_ACTION|> and <|END_ACTION|> delimiters, as used by the Command-A model.
  • Synchronous and Streaming Parsing: The CommandToolParser implements logic for both synchronous (batch) and streaming extraction of tool calls, ensuring compatibility with various response types.
  • Robust JSON Handling: It leverages partial_json_parser to gracefully handle incomplete or potentially malformed JSON payloads that may occur during streaming responses, improving parsing robustness.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This PR introduces a CommandToolParser for Cohere's command model tool-use format, covering both synchronous and streaming modes. The review identifies a critical bug in the streaming logic that leads to incorrect tool call indexing, along with suggestions for improving correctness and performance.

# Streaming state
self.prev_tool_call_arr: list[dict] = []
self.streamed_args_for_tool: list[str] = []
self.current_tool_id: int = -1

critical

current_tool_id should be initialized to 0 instead of -1. This is part of a fix for a bug that causes non-contiguous tool call indices in streaming mode.

Suggested change
self.current_tool_id: int = -1
self.current_tool_id: int = 0


# Case 2: Starting a new block
if cur_start > prev_start:
self.current_tool_id += 1

critical

Incrementing self.current_tool_id here when a new action block starts (<|START_ACTION|>) is incorrect and causes non-contiguous tool call indices. The index should only be incremented after a complete tool call has been parsed. This line should be removed.
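In other words (a hypothetical sketch of the reviewer's point, not code from the PR), the counter should start at 0 and advance only after a completed block has been emitted, which keeps the reported indices contiguous:

```python
# Assumed minimal state machine for assigning streaming tool-call indices.
class IndexState:
    def __init__(self):
        self.current_tool_id = 0   # first tool call gets index 0

    def on_block_complete(self):
        """Return the index for the block just finished, then advance."""
        emitted = self.current_tool_id
        self.current_tool_id += 1  # advance only after a finished parse
        return emitted

state = IndexState()
indices = [state.on_block_complete() for _ in range(3)]
```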

return ExtractedToolCallInformation(tools_called=True,
                                    tool_calls=tool_calls,
                                    content=prefix or None)
except Exception:

high

Catching a broad Exception can hide unexpected errors and make debugging more difficult. It's better to catch more specific exceptions that you expect to handle, such as json.JSONDecodeError and ValueError.

Suggested change
except Exception:
except (json.JSONDecodeError, ValueError):

Comment on lines +32 to +35
self.prev_tool_call_arr: list[dict] = []
self.streamed_args_for_tool: list[str] = []
self.current_tool_id: int = -1
self.current_tool_name_sent: bool = False

medium

The instance variables prev_tool_call_arr, streamed_args_for_tool, and current_tool_name_sent are initialized but appear to be unused within the class. To improve code clarity and maintainability, they should be removed.

Comment on lines +98 to +100
prev_start = previous_token_ids.count(self.tool_call_start_token_id)
cur_start = current_token_ids.count(self.tool_call_start_token_id)
cur_end = current_token_ids.count(self.tool_call_end_token_id)

medium

The count() method is called on previous_token_ids and current_token_ids in each invocation of this streaming method. Since these lists can grow large, this is inefficient as it re-scans the entire list every time. Consider maintaining the counts as part of the parser's state and updating them incrementally with delta_token_ids to improve performance.
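A sketch of the suggested optimization (names and token ids below are placeholders, not vLLM's actual values): keep running counts in parser state and bump them from the delta tokens only, instead of rescanning the full sequence on every call:

```python
START_ID, END_ID = 255019, 255020  # placeholder token ids, not the real vocab

class TokenCounter:
    """Incrementally tracks how many start/end action tokens have been seen."""
    def __init__(self):
        self.start_count = 0
        self.end_count = 0

    def update(self, delta_token_ids):
        # O(len(delta)) per call instead of O(len(current_token_ids)).
        self.start_count += sum(1 for t in delta_token_ids if t == START_ID)
        self.end_count += sum(1 for t in delta_token_ids if t == END_ID)

counter = TokenCounter()
for delta in ([1, START_ID, 7], [8, 9], [END_ID]):
    counter.update(delta)
```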

Signed-off-by: Ubuntu <[email protected]>
Signed-off-by: Doil Kim <[email protected]>
@gjgjos gjgjos force-pushed the feat/command-tool-parser branch 2 times, most recently from 8145cea to 2b12b68 Compare July 9, 2025 01:55
Signed-off-by: Doil Kim <[email protected]>
Co-authored-by: 김종곤 <[email protected]>
@gjgjos gjgjos force-pushed the feat/command-tool-parser branch from 2b12b68 to 84c6f34 Compare July 9, 2025 01:59
@Deepfocused

good!


mergify bot commented Jul 11, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @gjgjos.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jul 11, 2025
@mergify mergify bot removed the needs-rebase label Jul 11, 2025
@gjgjos gjgjos closed this Jul 11, 2025