
support qwen in openai compatible mode #3604

Conversation

allen12921

As the DashScope developer reference says, DashScope provides an OpenAI-compatible interface at https://dashscope.aliyuncs.com/compatible-mode/v1 for many qwen models, but that endpoint does not support the optional 'user' parameter, and litellm always adds it.

Type

🐛 Bug Fix

Changes

Remove the optional 'user' parameter for models whose names start with 'qwen'.

Pre-Submission Checklist (optional but appreciated):

OS Tests (optional but appreciated):

  • Tested on Linux

As https://help.aliyun.com/zh/dashscope/developer-reference/compatibility-of-openai-with-dashscope/?spm=a2c4g.11186623.0.0.5ab04bb0SFbKjC says, DashScope provides an OpenAI-compatible interface for many qwen models, but testing shows it does not support the optional 'user' parameter.
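For reference, the failure can be reproduced against the compatible-mode endpoint with the plain OpenAI SDK. This is a minimal sketch: the API key is a placeholder and `qwen-turbo` is just one of the models the endpoint serves.

```python
from openai import OpenAI

client = OpenAI(
    api_key="sk-...",  # placeholder: your DashScope API key
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)

# Works: no 'user' parameter is sent.
client.chat.completions.create(
    model="qwen-turbo",
    messages=[{"role": "user", "content": "hello"}],
)

# Rejected by DashScope: the optional 'user' parameter is valid for the
# upstream OpenAI API but not supported by the compatible-mode endpoint.
client.chat.completions.create(
    model="qwen-turbo",
    messages=[{"role": "user", "content": "hello"}],
    user="end-user-123",
)
```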
vercel bot commented May 13, 2024

The latest updates on your projects. Learn more about Vercel for Git ↗︎

| Name | Status | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| litellm | ✅ Ready (Inspect) | Visit Preview | 💬 Add feedback | May 13, 2024 3:24am |

```diff
@@ -5849,6 +5849,7 @@ def _map_and_modify_arg(supported_params: dict, provider: str, model: str):
     for k in passed_params.keys():
         if k not in default_params.keys():
             optional_params[k] = passed_params[k]
+    if custom_llm_provider == "openai" and "qwen" in model: optional_params.pop("user", None)
```
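A usage sketch of the path this patch touches (the key is a placeholder): with the `openai/` prefix, `custom_llm_provider` is `"openai"`, the model name contains `"qwen"`, and the added line pops `user` from `optional_params` before the request goes out.

```python
import litellm

response = litellm.completion(
    model="openai/qwen-turbo",  # 'openai/' prefix -> custom_llm_provider == "openai"
    api_base="https://dashscope.aliyuncs.com/compatible-mode/v1",
    api_key="sk-...",           # placeholder: DashScope API key
    messages=[{"role": "user", "content": "hello"}],
    user="end-user-123",        # dropped by this patch before the request is sent
)
```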
Contributor

@allen12921 we have a list of 'openai_compatible_providers', can you add it there instead -

openai_compatible_providers: List = [

And add a working example to docs - e.g. -

**We support ALL Groq models, just set `groq/` as a prefix when sending completion requests**
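For illustration, the suggested alternative might look roughly like this; the `"dashscope"` provider name and list entry are assumptions, not merged litellm code:

```python
from typing import List

# Sketch only: litellm keeps a registry of OpenAI-compatible providers.
# Registering the provider here routes it through the shared OpenAI-compatible
# code path instead of special-casing model names in the parameter-mapping code.
openai_compatible_providers: List = [
    "groq",
    # ... other existing providers ...
    "dashscope",  # hypothetical entry for qwen via compatible-mode
]
```

Callers would then presumably use a `dashscope/` model prefix (e.g. `model="dashscope/qwen-turbo"`), mirroring the `groq/` pattern quoted above.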

@allen12921 allen12921 closed this May 14, 2024