FEAT: SelectorGroupChat could using stream inner select_prompt #6286
Conversation
Can we address this first? #6161. Otherwise the
I’ll take a look at #6161. And, just change

```python
response = await self._model_client.create(messages=select_speaker_messages)
```

to

```python
if self._streaming:
    message: CreateResult | str = ""
    async for _message in self._model_client.create_stream(messages=select_speaker_messages):
        message = _message
    if isinstance(message, CreateResult):
        response = message
    else:
        raise ValueError("Model failed to select a speaker.")
else:
    response = await self._model_client.create(messages=select_speaker_messages)
```

So I don't think there would be any problem solving that issue. BTW, what kind of help do you expect for that issue? Do you want me to take the lead on it?
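The core of the snippet above is the drain-the-stream pattern: iterate every chunk and keep only the last item, which should be the aggregated `CreateResult`. A minimal sketch of that pattern in isolation, using stand-ins for the real autogen types (`CreateResult` and `StubStreamClient` here are hypothetical, not the library's classes):

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator, Union


@dataclass
class CreateResult:
    """Stand-in for autogen's CreateResult (the final aggregated response)."""
    content: str


class StubStreamClient:
    """Hypothetical stub client: yields text chunks, then a final CreateResult."""

    async def create_stream(self, messages: list) -> AsyncIterator[Union[str, CreateResult]]:
        for chunk in ["age", "nt_", "b"]:
            yield chunk  # intermediate token chunks
        yield CreateResult(content="agent_b")  # last item carries the full result


async def drain_stream(client: StubStreamClient, messages: list) -> CreateResult:
    # Iterate the whole stream, keeping only the most recent item;
    # a well-behaved stream ends with the aggregated CreateResult.
    last: Union[CreateResult, str] = ""
    async for item in client.create_stream(messages=messages):
        last = item
    if isinstance(last, CreateResult):
        return last
    raise ValueError("Model failed to select a speaker.")


result = asyncio.run(drain_stream(StubStreamClient(), []))
print(result.content)  # agent_b
```

If a misbehaving stream never yields a `CreateResult`, the `isinstance` check turns the silent failure into an explicit `ValueError`, which is why the snippet validates the final item rather than returning it blindly.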
Yes, please address #6161 and then this one.
@ekzhu I resolved the merge conflict.
Codecov Report

Attention: Patch coverage is

```
@@           Coverage Diff            @@
##             main    #6286      +/-   ##
==========================================
+ Coverage   77.93%   77.97%   +0.03%
==========================================
  Files         214      214
  Lines       15356    15377      +21
==========================================
+ Hits        11968    11990      +22
+ Misses       3388     3387       -1
```
@SongChiYoung generally, I think it is more important to emit the inner
@ekzhu Please check this PR.
Added a few fixes to pull it across the finish line.
…e0424 * upstream/main:
- Remove `name` field from OpenAI Assistant Message (microsoft#6388)
- Introduce workbench (microsoft#6340)
- TEST/change gpt4, gpt4o serise to gpt4.1nano (microsoft#6375)
- update website version (microsoft#6364)
- Add self-debugging loop to `CodeExecutionAgent` (microsoft#6306)
- Fix: deserialize model_context in AssistantAgent and SocietyOfMindAgent and CodeExecutorAgent (microsoft#6337)
- Add azure ai agent (microsoft#6191)
- Avoid re-registering a message type already registered (microsoft#6354)
- Added support for exposing GPUs to docker code executor (microsoft#6339)
- fix: ollama fails when tools use optional args (microsoft#6343)
- Add an example using autogen-core and FastAPI to create streaming responses (microsoft#6335)
- FEAT: SelectorGroupChat could using stream inner select_prompt (microsoft#6286)
- Add experimental notice to canvas (microsoft#6349)
- DOC: add extentions - autogen-oaiapi and autogen-contextplus (microsoft#6338)
- fix: ensure serialized messages are passed to LLMStreamStartEvent (microsoft#6344)
- Generalize Continuous SystemMessage merging via model_info["multiple_system_messages"] instead of `startswith("gemini-")` (microsoft#6345)
- Agentchat canvas (microsoft#6215)

Signed-off-by: Peter Jausovec <[email protected]>
Why are these changes needed?

This PR updates `SelectorGroupChat` to support streaming mode for `select_speaker`. It introduces a `streaming` argument: when set to `True`, `select_speaker` uses `create_stream()` instead of `create()`.

Additional context

Some models (e.g., QwQ) only work properly in streaming mode. To support them, the prompt selection step in `SelectorGroupChat` must also run with `streaming=True`.

Related issue number
Closes #6145
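The dispatch the description talks about (use `create_stream()` when `streaming=True`, `create()` otherwise) can be sketched with a stub client; `StubClient` and this simplified `select_speaker` are hypothetical stand-ins, not the real autogen API:

```python
import asyncio
from dataclasses import dataclass
from typing import AsyncIterator, Union


@dataclass
class CreateResult:
    """Stand-in for autogen's CreateResult type."""
    content: str


class StubClient:
    """Hypothetical model client supporting both call styles."""

    async def create(self, messages: list) -> CreateResult:
        # Non-streaming: return the full response in one call.
        return CreateResult(content="writer")

    async def create_stream(self, messages: list) -> AsyncIterator[Union[str, CreateResult]]:
        # Streaming: yield chunks, then the final aggregated result.
        yield "wri"
        yield "ter"
        yield CreateResult(content="writer")


async def select_speaker(client: StubClient, messages: list, streaming: bool) -> str:
    """Pick the next speaker, honoring the streaming flag."""
    if streaming:
        last: Union[CreateResult, str] = ""
        async for item in client.create_stream(messages=messages):
            last = item
        if not isinstance(last, CreateResult):
            raise ValueError("Model failed to select a speaker.")
        response = last
    else:
        response = await client.create(messages=messages)
    return response.content


non_stream = asyncio.run(select_speaker(StubClient(), [], streaming=False))
stream = asyncio.run(select_speaker(StubClient(), [], streaming=True))
print(non_stream, stream)  # writer writer
```

Both paths end with the same `CreateResult`, so downstream selection logic is unchanged; only the transport differs, which is what lets streaming-only models (e.g., QwQ) participate in speaker selection.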
Checks