Add MCP cookbook for OpenAI-Gemini compatibility #187


Open · wants to merge 1 commit into main

Conversation

Ramachetan

This PR adds a new cookbook example demonstrating how to create a Chainlit application that works seamlessly with both Google Gemini and OpenAI models using a single codebase.

Changes

Created new folder mcp-openai-gemini with a dual-compatible implementation using the OpenAI SDK
Added comprehensive README.md explaining how to configure for either OpenAI or Gemini models
Included environment setup instructions and configuration examples

Why this matters

This cookbook provides developers with a practical example of how to build MCP Client applications that can switch between LLM providers with minimal code changes, leveraging Google AI Studio's OpenAI SDK compatibility layer.
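As a rough illustration of the provider switching described above, the client setup can be sketched as a small config helper. The function name `provider_config`, the environment-variable names, and the model names here are illustrative assumptions, not the cookbook's actual code; the base URL is Google AI Studio's OpenAI-compatible endpoint, which is worth verifying against the current docs:

```python
import os


def provider_config(provider: str) -> dict:
    """Return kwargs for constructing an OpenAI-SDK client, plus a model name.

    A single codebase can target either provider by swapping only the
    API key, base URL, and model; everything else stays the same.
    """
    if provider == "gemini":
        return {
            "api_key": os.getenv("GEMINI_API_KEY", ""),
            # Google AI Studio's OpenAI-compatible endpoint
            "base_url": "https://generativelanguage.googleapis.com/v1beta/openai/",
            # Illustrative model name
            "model": "gemini-2.0-flash",
        }
    return {
        "api_key": os.getenv("OPENAI_API_KEY", ""),
        "base_url": None,  # None falls back to the OpenAI default endpoint
        "model": "gpt-4o-mini",  # Illustrative model name
    }
```

The returned dict (minus `model`) could then be passed to `openai.AsyncOpenAI(...)`, so the rest of the application never branches on the provider.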

@dkopljar27

It would be great to also add support for AzureOpenAI...
Add the import:

    from openai import AsyncAzureOpenAI

Initialize the Azure client:

    client = AsyncAzureOpenAI(
        api_key=API_KEY,
        api_version=API_VERSION,
        azure_endpoint=ENDPOINT,
    )

Also add a guard for an empty first chunk (e.g. due to content filtering):

    async for chunk in stream_resp:
        # Azure can emit a first chunk with no choices (content filtering)
        if not chunk.choices:
            continue
        delta = chunk.choices[0].delta
        if delta and delta.content:
            await msg.stream_token(delta.content)
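The guard above can also be factored into a small helper so it is easy to unit-test without a live stream. This is a sketch, not code from the PR; `extract_token` is a hypothetical name, and the chunk shape assumed is the OpenAI SDK's chat-completions streaming chunk:

```python
from typing import Optional


def extract_token(chunk) -> Optional[str]:
    """Return the streamed text from a chat-completions chunk, or None.

    Azure OpenAI can emit a first chunk whose `choices` list is empty
    (e.g. content-filter annotations), so guard before indexing, and
    skip deltas that carry no content (role-only or empty deltas).
    """
    if not chunk.choices:
        return None
    delta = chunk.choices[0].delta
    if delta and delta.content:
        return delta.content
    return None
```

The streaming loop then reduces to `if (token := extract_token(chunk)): await msg.stream_token(token)`.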

@Ramachetan
Author

I can work on it, but I don't have access to Azure OpenAI right now, so I can't really test it.
