Add `model_context` to `SelectorGroupChat` for enhanced speaker selection #6330
Conversation
…ker selection (microsoft#6301) Signed-off-by: Abhijeetsingh Meena <[email protected]>
Hi @ekzhu, I've made some changes to use messages from the `model_context`. Would really appreciate any feedback on the approach — also curious which context class you'd prefer as the default.
...n/packages/autogen-agentchat/src/autogen_agentchat/teams/_group_chat/_selector_group_chat.py
@Ethan0456 I realized that #6350 may be doing a similar thing to this PR, but from a message-thread point of view. Let's pause this PR for now and see if we can address the context-size problem using #6350 first.
As an AutoGen user who has been eagerly looking forward to this PR, I wanted to share my thoughts in detail. It's a bit long, but I hope it's clear. I would appreciate any feedback after reading.

**Community Need**

Based on ongoing community feedback, I believe there is a clear need for internal message summarization and management functionality.

**Personal Use Case**

That said, I'm sharing my perspective here not as a contributor, but as a user who practically needs this functionality.

**Limitations of #6350**

While #6350 does address a similar issue, its TTL cutoff approach simply limits the number of messages. This doesn't quite meet the need for summarizing or selectively preserving internal messages.

**Why model_context Works Better for Me**

The `model_context`-based approach proposed in this PR better fits this need.

**Concern About Expanding #6350 Scope**

If #6350 were to expand beyond TTL cutoff into more complex message preservation or summarization, it might blur the responsibility between simple message cleanup and full history management. This could make the purpose of each mechanism less clear.

**Conclusion**

Therefore, I personally see #6350 as a clean and focused solution for trimming unnecessary messages, and I'm very supportive of that contribution moving forward. However, this PR enables more precise conversation flow control through internal message summarization and history context management, and it's something I was also looking forward to seeing merged. I believe the two are not in conflict—they solve different problems and can complement each other well.

**Additional Note**

AutoGen's `model_context` structure is already designed to allow users to customize message management without requiring external extensions.
Hi @ekzhu, @SongChiYoung,

**A Hypothetical Example**

For example, a (hypothetical) workflow—similar to what @SongChiYoung described—may not be achievable with the current design proposed in PR #6350. That's not to say the PR isn't useful, but rather that it targets a different problem space.

Another workflow:

**Hypothesis-Driven Agent Collaboration**

Scenario: You're orchestrating a team of LLM agents, each responsible for a different stage of scientific reasoning—such as hypothesis generation, experiment design, result analysis, and reflection.

This enables goal-aware, context-rich memory selection, compared to a more straightforward time-based truncation approach, like the one proposed in PR #6350.

Would love to hear your thoughts on this!
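To make the hypothesis-driven scenario concrete, here is a minimal sketch of a goal-aware context. It is a hypothetical stand-in, not part of AutoGen: the class name `GoalAwareContext` and the keyword filter are placeholders for whatever relevance scoring a real workflow would plug into a custom `ChatCompletionContext`.

```python
import asyncio


class GoalAwareContext:
    """Hypothetical context that keeps goal-relevant messages
    instead of truncating by recency (keyword match is a stand-in
    for a real relevance score)."""

    def __init__(self, goal_keywords):
        self._goal_keywords = [k.lower() for k in goal_keywords]
        self._messages = []

    async def add_message(self, message):
        # Messages are plain strings in this sketch.
        self._messages.append(message)

    async def get_messages(self):
        # Select goal-relevant messages rather than the most recent N.
        return [
            m for m in self._messages
            if any(k in m.lower() for k in self._goal_keywords)
        ]


async def demo():
    ctx = GoalAwareContext(["hypothesis"])
    await ctx.add_message("New hypothesis: X causes Y")
    await ctx.add_message("unrelated chatter")
    return await ctx.get_messages()


if __name__ == "__main__":
    print(asyncio.run(demo()))
```

The point of the sketch is only the shape: selection logic lives in the context, so the selector prompt sees a curated history rather than a time-truncated one.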
@Ethan0456 @SongChiYoung Great points. Let's resume work here. There are many complaints about SelectorGroupChat; we can try to improve it here.
- Added `update_message_thread` method in `BaseGroupChatManager` to manage message thread updates.
- Replaced direct `_message_thread` modifications with calls to this method.
- Overrode `update_message_thread` in `SelectorGroupChat` to also update the `model_context`.

Signed-off-by: Abhijeetsingh Meena <[email protected]>
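The hook-and-override shape this commit describes can be sketched roughly as follows. These are stand-in classes, not the actual AutoGen source; only the structure mirrors the commit message, and `ListContextStub` is a placeholder for a real `ChatCompletionContext`.

```python
import asyncio


class ListContextStub:
    """Stand-in for a ChatCompletionContext that just stores messages."""

    def __init__(self):
        self.messages = []

    async def add_message(self, message):
        self.messages.append(message)


class BaseGroupChatManagerSketch:
    """Base manager: all thread mutations go through one hook."""

    def __init__(self):
        self._message_thread = []

    async def update_message_thread(self, messages):
        # Single choke point replacing direct _message_thread edits.
        self._message_thread.extend(messages)


class SelectorGroupChatManagerSketch(BaseGroupChatManagerSketch):
    """Selector manager: override the hook to also feed model_context."""

    def __init__(self, model_context):
        super().__init__()
        self._model_context = model_context

    async def update_message_thread(self, messages):
        await super().update_message_thread(messages)
        # Keep the model context in sync with the thread.
        for m in messages:
            await self._model_context.add_message(m)
```

Routing every mutation through one overridable method is what lets the subclass keep the context and the thread consistent without scattering sync logic.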
...ckages/autogen-agentchat/src/autogen_agentchat/teams/_group_chat/_base_group_chat_manager.py
Let's add some unit tests to show that the model context is being managed; validate it using `ReplayChatCompletionClient`, which records the calls.
Hi @ekzhu, I've updated the code based on your suggestion and added a unit test to validate the selector group chat with model context. Please let me know if you have any additional suggestions for improvement.
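The kind of test suggested above can be sketched with a replay-style client. `ReplayClientStub` below is a hypothetical stand-in for `ReplayChatCompletionClient`: it returns scripted completions and records every prompt it receives, so a test can assert that the selector prompt contained exactly the messages the model context retained.

```python
import asyncio


class ReplayClientStub:
    """Stand-in replay client: scripted responses, recorded calls."""

    def __init__(self, scripted_responses):
        self._responses = iter(scripted_responses)
        self.recorded_prompts = []  # every prompt passed to create()

    async def create(self, messages):
        # Record the exact prompt, then return the next scripted reply.
        self.recorded_prompts.append(list(messages))
        return next(self._responses)


async def demo():
    client = ReplayClientStub(["agent_b"])
    # In a real test this prompt would be built by the selector from
    # the model_context; here it is just a placeholder transcript.
    selected = await client.create(["transcript line 1"])
    return selected, client.recorded_prompts
```

A test then asserts on `recorded_prompts` to verify the context trimming or summarization actually shaped what the model saw.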
Can you resolve the merge conflict with the main branch?
Signed-off-by: Abhijeetsingh Meena <[email protected]>
Hi @ekzhu, I’ve completed the following updates:
Please let me know if there's anything else that needs to be addressed.
Codecov Report

Additional details and impacted files:

@@            Coverage Diff             @@
##             main    #6330      +/-   ##
==========================================
- Coverage   78.57%   78.56%   -0.02%
==========================================
  Files         225      225
  Lines       16525    16549      +24
==========================================
+ Hits        12984    13001      +17
- Misses       3541     3548       +7
@ekzhu @SongChiYoung @Ethan0456 I am a contributor to #6350. While resolving the merge conflict in #6350, I found that this PR has broken my PR. 🤣 As discussed with @ekzhu in #6169, my goal is to reduce the load placed on the external database by cutting the message_thread's length. Let me think more about how #6350 can be improved. If you have any good feedback, it is always welcome. Thanks.
Why are these changes needed?

This PR enhances the `SelectorGroupChat` class by introducing a new `model_context` parameter to support more context-aware speaker selection.

Changes

- Added a `model_context: ChatCompletionContext | None` parameter to `SelectorGroupChat`.
- Defaults to `UnboundedChatCompletionContext` when `None` is provided, like `AssistantAgent`.
- Updated `_select_speaker` to prepend context messages from `model_context` to the main thread history in `construct_message_history`.

Related issue number

Closes #6301, enabling the group chat manager to utilize `model_context` for richer, more informed speaker selection decisions.

Checks
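The changes listed above can be sketched in miniature. This is an illustrative stand-in, not the actual AutoGen source: `UnboundedContextSketch` and `SelectorGroupChatSketch` are hypothetical classes that only mirror the fallback-to-unbounded default and the "prepend context messages" step described in the PR.

```python
import asyncio


class UnboundedContextSketch:
    """Stand-in for UnboundedChatCompletionContext: keeps everything."""

    def __init__(self):
        self._messages = []

    async def add_message(self, message):
        self._messages.append(message)

    async def get_messages(self):
        return list(self._messages)


class SelectorGroupChatSketch:
    """Mimics the two behaviours described in the Changes list."""

    def __init__(self, model_context=None):
        # Default to an unbounded context when none is provided,
        # mirroring AssistantAgent's behaviour.
        self._model_context = (
            model_context if model_context is not None
            else UnboundedContextSketch()
        )

    async def construct_message_history(self, thread):
        # Prepend the model-context messages to the thread history
        # used for speaker selection.
        context_messages = await self._model_context.get_messages()
        return context_messages + thread


async def demo():
    chat = SelectorGroupChatSketch()
    await chat._model_context.add_message("summary: agents agreed on plan")
    return await chat.construct_message_history(["latest user message"])
```

A caller could pass any context implementation (buffered, summarizing, goal-aware) in place of the unbounded default, which is the flexibility the PR is after.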