Model client streaming from the selector of SelectorGroupChat #6145
Comments
This is an interesting issue. Would it be possible to provide a more specific reproduction example, including which model/client configuration triggers this error? I'd love to help investigate further.
@SongChiYoung, many thanks for helping. Here is my code:
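(The original snippet was not preserved in this thread. Below is a hedged reconstruction of what such a reproduction could look like, assuming QwQ served through Alibaba's OpenAI-compatible endpoint; the model name, endpoint URL, and agent roles are placeholders, not the reporter's actual code.)

```python
# Hedged reconstruction -- not the reporter's original script.
import asyncio

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.conditions import TextMentionTermination
from autogen_agentchat.teams import SelectorGroupChat
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient


async def main() -> None:
    model_client = OpenAIChatCompletionClient(
        model="qwq-32b",  # assumed: a QwQ model that only accepts stream=True
        base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
        api_key="YOUR_API_KEY",
        model_info={
            "vision": False,
            "function_calling": True,
            "json_output": False,
            "family": "unknown",
        },
    )
    writer = AssistantAgent(
        "writer",
        model_client=model_client,
        model_client_stream=True,  # agent-level streaming works fine
    )
    critic = AssistantAgent(
        "critic",
        model_client=model_client,
        model_client_stream=True,
    )
    # The selector itself calls model_client.create() (non-streaming),
    # which is where a stream-only model fails.
    team = SelectorGroupChat(
        [writer, critic],
        model_client=model_client,
        termination_condition=TextMentionTermination("TERMINATE"),
    )
    task = input("Enter a task: ")
    await Console(team.run_stream(task=task))


asyncio.run(main())
```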
You can save this in a file, say, test_flow.py, run python test_flow.py, and then type something after the prompt pops up. The model is QwQ from Alibaba (their most powerful model). The error message indicates that this model only supports stream = True.
Just sharing a thought from an architectural perspective: rather than adding ad-hoc fixes or modifying SelectorGroupChat specifically for this, I'm considering whether it would make more sense, once PR #6063 is merged, to handle this kind of use case by configuring the model client itself.

The reason I hesitate to embed special logic for QwQ or similar models into GroupChat (or any group structures) is that future use cases or new types of GroupChats may again require exposing stream or other model-specific flags, which could become hard to maintain.

Curious to hear thoughts from maintainers on this!
By the way, the current restriction on stream = True is on the model side: OpenAIChatCompletionClient does not allow setting stream = True.
Thanks for the clarification! Based on the error message, it sounds like the model itself only accepts streaming requests. That's why I think one possible path forward is to handle this at the model level (e.g. via model config / registry), so the correct stream setting is applied automatically without the group chat needing to know about it. This is aligned with the goal of PR #6063.
That said, this is just my opinion; I believe the maintainers' judgment here is the most important.
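(To make the model-level idea concrete, here is a minimal sketch, not an existing AutoGen API: a hypothetical wrapper whose create() is satisfied by draining create_stream(), so stream-only models would still work with callers that expect a non-streaming call. The class name and approach are assumptions.)

```python
# Hypothetical sketch only: create_stream() yields str chunks and ends with
# a final CreateResult, so create() can be emulated by draining the stream.
from autogen_core.models import CreateResult
from autogen_ext.models.openai import OpenAIChatCompletionClient


class StreamOnlyCompatibleClient(OpenAIChatCompletionClient):
    async def create(self, messages, **kwargs) -> CreateResult:
        result = None
        async for item in self.create_stream(messages, **kwargs):
            result = item  # str chunks, then the final CreateResult
        assert isinstance(result, CreateResult)
        return result
```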
Is the constraint on stream only a temporary one for QwQ, or is it permanent? I think we can enable streaming for SelectorGroupChat's built-in selector by introducing an option in SelectorGroupChat, e.g., model_client_stream, so the model client will be used in streaming mode. As the next step, we can enable streaming of orchestration events through run_stream so the streaming output will be visible to consumers of run_stream. @yingjiewei, are you interested in submitting a PR for this? Just focus on adding the model_client_stream option for now.
Fixed via #6145: SelectorGroupChat now supports streaming mode for select_speaker.
Feature Request
We can enable streaming for SelectorGroupChat's built-in selector by introducing an option in SelectorGroupChat, e.g., model_client_stream, so the model client will be used in streaming mode: it will use create_stream rather than create (see the sketch below).
As the next step, we can enable streaming of orchestration events through run_stream so the streaming output will be visible to consumers of run_stream. Issue here: #6161
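For illustration, usage might look like this, with the option name as proposed in this issue (the merged API may have settled on a different name, so treat this as a sketch):

```python
# Sketch: option name as proposed in this issue; verify against the merged API.
team = SelectorGroupChat(
    [writer, critic],            # participants as defined earlier
    model_client=model_client,   # a streaming-capable model client
    model_client_stream=True,    # selector would call create_stream() instead of create()
)
```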
--- Below is the original bug report ---
What happened?
Describe the bug
Some LLM models only support stream = True. The assistant agent supports this well via model_client_stream = True, but OpenAIChatCompletionClient does not allow passing stream = True to it. Therefore, it is effectively impossible to use LLM models that only support stream = True.
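For context, here is how the two calls differ on the ChatCompletionClient interface; model_client stands for any configured client such as OpenAIChatCompletionClient:

```python
from autogen_core.models import CreateResult, UserMessage


async def demo(model_client) -> None:
    messages = [UserMessage(content="Hello", source="user")]

    # Non-streaming path: one request, one response. This is the call that
    # stream-only models reject.
    result = await model_client.create(messages)
    print(result.content)

    # Streaming path: yields str chunks, then a final CreateResult.
    async for item in model_client.create_stream(messages):
        if isinstance(item, CreateResult):
            print("\nfinal usage:", item.usage)
        else:
            print(item, end="")
```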
To Reproduce
See the test_flow.py script shared in the comments above.
Which package was the bug in?
Python AgentChat (autogen-agentchat>=0.4.0)
AutoGen library version.
Python dev (main branch)