
FEAT: SelectorGroupChat could using stream inner select_prompt #6286


Conversation

SongChiYoung
Contributor

@SongChiYoung SongChiYoung commented Apr 12, 2025

Why are these changes needed?

This PR updates SelectorGroupChat to support streaming mode for select_speaker.
It introduces a streaming argument: when set to True, select_speaker will use create_stream() instead of create().

Additional context

Some models (e.g., QwQ) only work properly in streaming mode.
To support them, the prompt selection step in SelectorGroupChat must also run with streaming=True.
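To illustrate why such models need the streaming path, here is a self-contained sketch with a hypothetical QwQ-like stub client (the class and method bodies are illustrative stand-ins, not autogen's real model client):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class CreateResult:
    content: str


# Hypothetical stub of a streaming-only model: the one-shot endpoint
# fails, but the streaming endpoint yields chunks then a final result.
class StreamingOnlyClient:
    async def create(self, messages):
        raise RuntimeError("this model only supports streaming")

    async def create_stream(self, messages):
        for chunk in ("agent", "_b"):
            yield chunk
        yield CreateResult(content="agent_b")


async def demo() -> str:
    client = StreamingOnlyClient()
    # The non-streaming path is unusable for this model...
    try:
        await client.create(messages=[])
    except RuntimeError:
        pass
    # ...but consuming the stream and keeping the last item works.
    last = None
    async for item in client.create_stream(messages=[]):
        last = item
    return last.content


selected = asyncio.run(demo())
print(selected)  # -> agent_b
```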

Related issue number

Closes #6145

Checks

@SongChiYoung SongChiYoung changed the title FEAT: select group chat could using stream FEAT: SelectorGroupChat could using stream inner select_prompt Apr 12, 2025
@ekzhu
Collaborator

ekzhu commented Apr 13, 2025

Can we address this first? #6161. Otherwise the streaming option won't actually stream to run_stream.

@SongChiYoung
Contributor Author

SongChiYoung commented Apr 13, 2025

@ekzhu

Can we address this first? #6161. Otherwise the streaming option won't actually stream to run_stream.

I’ll take a look at #6161.
That said, in my case, I confirmed that streaming was working by adding some print statements inside the create_stream() function in the OpenAI client.

And, just change

            response = await self._model_client.create(messages=select_speaker_messages)

to

            if self._streaming:
                # Consume the stream, keeping only the last yielded item;
                # the final item from create_stream() should be a CreateResult.
                message: CreateResult | str = ""
                async for _message in self._model_client.create_stream(messages=select_speaker_messages):
                    message = _message
                if isinstance(message, CreateResult):
                    response = message
                else:
                    raise ValueError("Model failed to select a speaker.")
            else:
                response = await self._model_client.create(messages=select_speaker_messages)

So I don't think solving that issue will cause any problems here, except maybe a merge conflict later.
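The stream-consumption pattern in the snippet above can be sketched with a self-contained stub (the stub client and names here are illustrative, not autogen's actual classes):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class CreateResult:
    content: str


# Hypothetical stub; the real code calls autogen's model client.
class StubModelClient:
    async def create_stream(self, messages):
        # Streaming clients yield partial strings before a final CreateResult.
        for chunk in ("agent", "_a"):
            yield chunk
        yield CreateResult(content="agent_a")


async def select_with_stream(client, messages) -> CreateResult:
    # Same shape as the PR snippet: keep the last streamed item and require
    # that it is the final CreateResult, not a partial string.
    message: CreateResult | str = ""
    async for _message in client.create_stream(messages=messages):
        message = _message
    if not isinstance(message, CreateResult):
        raise ValueError("Model failed to select a speaker.")
    return message


response = asyncio.run(select_with_stream(StubModelClient(), []))
print(response.content)  # -> agent_a
```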

BTW, what kind of help do you expect for that issue?
There’s no code or draft linked yet.

Do you want me to take the lead on it?

@ekzhu
Collaborator

ekzhu commented Apr 15, 2025

Yes, please address #6161 and then this one.

@SongChiYoung
Contributor Author

@ekzhu I resolved the merge conflict.


codecov bot commented Apr 17, 2025

Codecov Report

Attention: Patch coverage is 95.65217% with 1 line in your changes missing coverage. Please review.

Project coverage is 77.97%. Comparing base (71363a3) to head (94a8f9b).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
...utogen-agentchat/src/autogen_agentchat/messages.py 83.33% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #6286      +/-   ##
==========================================
+ Coverage   77.93%   77.97%   +0.03%     
==========================================
  Files         214      214              
  Lines       15356    15377      +21     
==========================================
+ Hits        11968    11990      +22     
+ Misses       3388     3387       -1     
Flag Coverage Δ
unittests 77.97% <95.65%> (+0.03%) ⬆️


@SongChiYoung SongChiYoung force-pushed the feature/model_client_streaming_from_the_selector_of_selectorgroupchat_#6145 branch from ce0ac44 to 29f2c0f on April 17, 2025 at 09:16
@ekzhu
Collaborator

ekzhu commented Apr 18, 2025

@SongChiYoung generally, I think it is more important to emit the inner SelectorEvent activities rather than just streaming them internally.
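As a rough illustration of that direction, the selector can re-yield model chunks as events so the outer run_stream() surfaces them (the SelectorEvent dataclass and generator names here are hypothetical stand-ins, not the merged implementation):

```python
import asyncio
from dataclasses import dataclass


# Hypothetical event type; the real code uses agentchat's event classes.
@dataclass
class SelectorEvent:
    content: str


async def model_chunks():
    # Stand-in for the model client's token stream.
    for chunk in ("agent", "_a"):
        yield chunk


async def select_speaker_stream():
    # Re-yield each model chunk as an event so an outer run_stream() can
    # surface selector activity, then yield the final selection string.
    parts = []
    async for chunk in model_chunks():
        parts.append(chunk)
        yield SelectorEvent(content=chunk)
    yield "".join(parts)


async def collect():
    return [e async for e in select_speaker_stream()]


events = asyncio.run(collect())
chunks = [e.content for e in events if isinstance(e, SelectorEvent)]
print(chunks, events[-1])  # -> ['agent', '_a'] agent_a
```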

@SongChiYoung
Contributor Author

@ekzhu
Thanks for all of your help.
I think all of the issues are now resolved and the changes follow your direction.
I also added a test case for this change.

Please check this PR.

@ekzhu
Collaborator

ekzhu commented Apr 21, 2025

Added a few fixes to pull it across the finish line.

@ekzhu ekzhu merged commit 9b0a0bd into microsoft:main Apr 21, 2025
60 checks passed
peterj added a commit to kagent-dev/autogen that referenced this pull request Apr 24, 2025
…e0424

* upstream/main:
  Remove `name` field from OpenAI Assistant Message (microsoft#6388)
  Introduce workbench (microsoft#6340)
  TEST/change gpt4, gpt4o serise to gpt4.1nano (microsoft#6375)
  update website version (microsoft#6364)
  Add self-debugging loop to `CodeExecutionAgent` (microsoft#6306)
  Fix: deserialize model_context in AssistantAgent and SocietyOfMindAgent and CodeExecutorAgent (microsoft#6337)
  Add azure ai agent (microsoft#6191)
  Avoid re-registering a message type already registered (microsoft#6354)
  Added support for exposing GPUs to docker code executor (microsoft#6339)
  fix: ollama fails when tools use optional args (microsoft#6343)
  Add an example using autogen-core and FastAPI to create streaming responses (microsoft#6335)
  FEAT: SelectorGroupChat could using stream inner select_prompt (microsoft#6286)
  Add experimental notice to canvas (microsoft#6349)
  DOC: add extentions - autogen-oaiapi and autogen-contextplus (microsoft#6338)
  fix: ensure serialized messages are passed to LLMStreamStartEvent (microsoft#6344)
  Generalize Continuous SystemMessage merging via model_info[“multiple_system_messages”] instead of `startswith("gemini-")` (microsoft#6345)
  Agentchat canvas (microsoft#6215)

Signed-off-by: Peter Jausovec <[email protected]>
Successfully merging this pull request may close these issues.

Model client streaming from the selector of SelectorGroupChat