Add model_context to SelectorGroupChat for enhanced speaker selection #6330


Merged

Conversation

Ethan0456
Contributor

@Ethan0456 Ethan0456 commented Apr 17, 2025

Why are these changes needed?

This PR enhances the SelectorGroupChat class by introducing a new model_context parameter to support more context-aware speaker selection.

Changes

  • Added a model_context: ChatCompletionContext | None parameter to SelectorGroupChat.
  • Defaults to UnboundedChatCompletionContext when None is provided, mirroring AssistantAgent's behavior.
  • Updated _select_speaker to prepend context messages from model_context to the main thread history.
  • Refactored history construction into a helper method construct_message_history.
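The changes above can be sketched with a minimal, self-contained stand-in (this is not the actual autogen implementation; `SelectorSketch` and the simplified context class are hypothetical illustrations of the described pattern):

```python
# Hypothetical sketch of the PR's pattern: fall back to an unbounded
# context when model_context is None, and have a helper prepend the
# context's messages to the current thread history.

class UnboundedChatCompletionContext:
    """Stand-in context: keeps every message it is given."""
    def __init__(self):
        self._messages = []

    def add_message(self, message):
        self._messages.append(message)

    def get_messages(self):
        return list(self._messages)


class SelectorSketch:
    def __init__(self, model_context=None):
        # Default to an unbounded context when none is provided,
        # mirroring AssistantAgent.
        self._model_context = model_context or UnboundedChatCompletionContext()

    def construct_message_history(self, thread):
        # Context messages come first, then the current thread.
        return self._model_context.get_messages() + list(thread)


selector = SelectorSketch()
selector._model_context.add_message("earlier summary")
history = selector.construct_message_history(["user: latest question"])
```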

Related issue number

Closes Issue #6301, enabling the group chat manager to utilize model_context for richer, more informed speaker selection decisions.

Checks

@Ethan0456 Ethan0456 marked this pull request as ready for review April 17, 2025 17:48
@Ethan0456
Contributor Author

Ethan0456 commented Apr 21, 2025

Hi @ekzhu,

I’ve made some changes to use messages from model_context for speaker selection. For now, BufferedChatCompletionContext with a buffer size of 5 is set as the default for testing.

Would really appreciate any feedback on the approach — also curious which context class you'd prefer as the default.

@ekzhu
Collaborator

ekzhu commented Apr 22, 2025

@Ethan0456 I realized that #6350 may be doing something similar to this PR, but from the message-thread point of view. Let's pause this PR for now and see if we can address the context-size problem using #6350 first.

@SongChiYoung
Contributor

@ekzhu @Ethan0456

As an AutoGen user who has been eagerly looking forward to this PR, I wanted to share my thoughts in detail. It's a bit long, but I hope it's clear. I would appreciate any feedback after reading.

Community Need

Based on ongoing community feedback, I believe there is a clear need for internal message summarization and management functionality within SelectorGroupChat. This has been raised repeatedly in Discord, Discussions (especially #6347), and even in Help channels with similar requests.

Personal Use Case

That said, I’m sharing my perspective here not as a contributor, but as a user who practically needs this functionality.

Limitations of #6350

While #6350 does address a similar issue, its TTL cutoff approach simply limits the number of messages. This doesn’t quite meet the need for summarizing or selectively preserving internal messages.

Specifically, in the case of SelectorGroupChat, TTL cutoff could potentially remove critical messages, including the initial user request, which raises a concern that the selector might lose context and misidentify the next agent. I am concerned that TTL alone may not address this effectively.

Why model_context Works Better for Me

The model_context-based approach proposed in this PR, particularly using HeadAndTailChatCompletionContext, allows for reliably preserving both the initial and most recent messages. This ensures that SelectorGroupChat can always reference the original user intent when choosing the next speaker, which is essential for the use cases I face. Achieving this kind of context preservation through a simple TTL mechanism seems difficult.
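The head-and-tail behavior described above can be sketched in a few lines (a simplified stand-in for what `HeadAndTailChatCompletionContext` is said to do, not its actual implementation):

```python
# Sketch of head-and-tail trimming: keep the first `head` and last
# `tail` messages, so the initial user request always survives even
# as the conversation grows.

def head_and_tail(messages, head=1, tail=3):
    if len(messages) <= head + tail:
        return list(messages)
    return messages[:head] + messages[-tail:]


msgs = [f"m{i}" for i in range(10)]  # m0 is the initial user request
trimmed = head_and_tail(msgs, head=1, tail=3)
# m0 is preserved alongside the three most recent messages
```

A pure TTL cutoff, by contrast, would eventually evict m0 along with everything else.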

Concern About Expanding #6350 Scope

If #6350 were to expand beyond TTL cutoff into more complex message preservation or summarization, it might blur the responsibility between simple message cleanup and full history management. This could make the purpose of each mechanism less clear.

Conclusion

Therefore, I personally see #6350 as a clean and focused solution for trimming unnecessary messages, and I’m very supportive of that contribution moving forward. However, this PR enables more precise conversation flow control through internal message summarization and history context management, and it’s something I was also looking forward to seeing merged.

I believe the two are not in conflict—they solve different problems and can complement each other well.


Additional Note

AutoGen’s model_context structure is already designed to allow users to customize message management without requiring external extensions. That said, tools like the community extension autogen-contextplus (which I contributed to) or future model_context improvements could make history management within SelectorGroupChat even more flexible and powerful.

@Ethan0456
Contributor Author

Ethan0456 commented Apr 22, 2025

Hi @ekzhu, @SongChiYoung,

I also believe that model_context offers more flexibility in this scenario, particularly when it comes to controlling the tokens and the structure of message history used for speaker selection.

A Hypothetical Example

For example, a (hypothetical) workflow—similar to what @SongChiYoung described—could involve maintaining a list of "user query" -> ["task result" or "unsuccessful attempt + reflection"] entries inside the model_context. This kind of structured memory can help influence speaker selection in a more intentional and context-sensitive way, rather than just relying on the most recent n messages.

This approach may not be achievable with the current design proposed in PR #6350—not to say that the PR isn't useful, but rather that it targets a different problem space.
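The structured-memory idea above could look roughly like this (all names here are hypothetical, invented purely to illustrate the "user query" -> outcomes mapping):

```python
# Hypothetical structured memory: each user query maps to its task
# results or failed attempts with reflections, instead of a flat
# last-n transcript.

from dataclasses import dataclass, field


@dataclass
class QueryRecord:
    query: str
    outcomes: list = field(default_factory=list)  # results or "attempt + reflection"


class StructuredContext:
    def __init__(self):
        self.records = []

    def add_query(self, query):
        self.records.append(QueryRecord(query))

    def add_outcome(self, outcome):
        self.records[-1].outcomes.append(outcome)

    def to_messages(self):
        # Flatten into selector-visible messages on demand.
        out = []
        for r in self.records:
            out.append(f"user: {r.query}")
            out.extend(r.outcomes)
        return out


ctx = StructuredContext()
ctx.add_query("Find X")
ctx.add_outcome("attempt 1 failed: timeout; reflection: retry with cache")
```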

Another workflow where model_context could be especially beneficial is the following:

Hypothesis-Driven Agent Collaboration

Scenario: You're orchestrating a team of LLM agents, each responsible for a different stage of scientific reasoning—such as hypothesis generation, experiment design, result analysis, and reflection.

Why not traditional last_n_messages?
In such a setup, relying solely on the most recent messages can omit critical information, like earlier hypotheses or failed experiments, which might be essential for driving the next step of reasoning.

How does model_context help?
Instead of a linear transcript, model_context can maintain a structured list of "hypothesis" -> "attempt" -> "failure reason" triples. This richer form of context allows the SelectorGroupChat to select agents like ReflectionAgent to evaluate past attempts holistically and make informed decisions.

This enables goal-aware, context-rich memory selection, compared to a more straightforward time-based truncation approach, like the one proposed in PR #6350.
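A toy version of the routing logic in this scenario might look like the following (hypothetical agent names and threshold; a sketch of the idea, not a real selector function):

```python
# Sketch: route to a ReflectionAgent once failed attempts accumulate,
# using structured (hypothesis, attempt, failure_reason) triples
# rather than the last n messages.

def select_next_agent(triples, failure_threshold=2):
    """triples: list of (hypothesis, attempt, failure_reason-or-None)."""
    failures = [t for t in triples if t[2] is not None]
    if len(failures) >= failure_threshold:
        return "ReflectionAgent"  # evaluate past attempts holistically
    return "ExperimentAgent"      # keep testing the current hypothesis


triples = [
    ("H1", "trial A", "wrong dosage"),
    ("H1", "trial B", "confounded sample"),
]
```

With a last-n window, those early failures could already have scrolled out of view by the time reflection becomes useful.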

Would love to hear your thoughts on this!

@ekzhu
Collaborator

ekzhu commented Apr 22, 2025

@Ethan0456 @SongChiYoung Great points made. Let's resume work here.

There are many complaints about SelectorGroupChat; we can try to improve it here.

- Added `update_message_thread` method in `BaseGroupChatManager` to manage message thread updates.
- Replaced direct `_message_thread` modifications with calls to this method.
- Overrode `update_message_thread` in `SelectorGroupChat` to also update the `model_context`.

Signed-off-by: Abhijeetsingh Meena <[email protected]>
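The refactor in the commit message above can be sketched as follows (simplified stand-ins, not the actual autogen classes):

```python
# Sketch: the base manager funnels all thread updates through
# update_message_thread, and the selector's override also keeps the
# model context in sync.

class BaseGroupChatManagerSketch:
    def __init__(self):
        self._message_thread = []

    def update_message_thread(self, messages):
        # Single choke point for thread updates, replacing direct
        # _message_thread modifications.
        self._message_thread.extend(messages)


class SelectorGroupChatManagerSketch(BaseGroupChatManagerSketch):
    def __init__(self, model_context):
        super().__init__()
        self._model_context = model_context

    def update_message_thread(self, messages):
        super().update_message_thread(messages)
        for m in messages:  # mirror every update into the model context
            self._model_context.append(m)


ctx = []
mgr = SelectorGroupChatManagerSketch(ctx)
mgr.update_message_thread(["hello", "world"])
```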
Collaborator

@ekzhu ekzhu left a comment


Let's add some unit tests to show that the model context is being managed; validate it using ReplayChatCompletionClient, which records the calls.
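The testing idea suggested here can be sketched with a minimal replay client (the real ReplayChatCompletionClient lives in the autogen extensions; this stand-in only illustrates the record-and-assert pattern):

```python
# Sketch: a replay client returns canned responses and records every
# call, so a test can assert on the exact messages the selector sent.

class ReplayClientSketch:
    def __init__(self, responses):
        self._responses = iter(responses)
        self.calls = []  # records each message list it was called with

    def create(self, messages):
        self.calls.append(list(messages))
        return next(self._responses)


client = ReplayClientSketch(["agent_b"])
reply = client.create(["context summary", "user: task"])
# A unit test can now assert that the trimmed context reached the client.
```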

@Ethan0456
Contributor Author

Hi @ekzhu,

I've updated the code based on your suggestion and added a unit test to validate the selector group chat with model context.

Please let me know if you have any additional suggestions for improvement.

Collaborator

@ekzhu ekzhu left a comment


Can you resolve the merge conflict with the main branch?

@Ethan0456
Contributor Author

Hi @ekzhu,

I’ve completed the following updates:

  • Refactored message history construction to use LLMMessage - 2580acf
  • Updated the API documentation to include the model_context parameter and added an example - 26f6de0
  • Added a unit test to validate model_context usage in SelectorGroupChat - 63212ef
  • Integrated model_context into SelectorGroupChatConfig - 57dbeaa
  • Resolved merge conflicts - 6213476

Please let me know if there’s anything else that needs to be addressed.


codecov bot commented May 6, 2025

Codecov Report

Attention: Patch coverage is 95.12195% with 2 lines in your changes missing coverage. Please review.

Project coverage is 78.56%. Comparing base (085ff3d) to head (e849089).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
...gentchat/teams/_group_chat/_selector_group_chat.py 93.93% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6330      +/-   ##
==========================================
- Coverage   78.57%   78.56%   -0.02%     
==========================================
  Files         225      225              
  Lines       16525    16549      +24     
==========================================
+ Hits        12984    13001      +17     
- Misses       3541     3548       +7     
Flag Coverage Δ
unittests 78.56% <95.12%> (-0.02%) ⬇️


@ekzhu ekzhu merged commit 2864fbf into microsoft:main May 6, 2025
63 checks passed
@withsmilo
Contributor

@ekzhu @SongChiYoung @Ethan0456
It feels like I’ve discovered your passionate discussion a bit too late.

I am a contributor to #6350. While resolving the merge conflict in #6350, I found that this PR has broken mine. 🤣

As discussed with @ekzhu in #6169, my goal is to reduce the load on the external database by cutting the message_thread's length. Let me think more about how #6350 can be improved. If you have any feedback, it is always welcome. Thanks.
