feat: adding support for other LLM providers with litellm #14
base: master
Conversation
Hi @krrishdholakia!
The PR I made was just a thing I saw missing, I'm not actually using litellm myself! (congratulations on the popularity!)
To be completely honest: I just don't see why. I try to avoid extra dependencies whenever possible, and looking at it, I guess it's just not for me right now. Maybe in the future. I wish you the best and continued success! 🚀

P.S. When checking out your profile I found https://github.com/BerriAI/liteLLM-proxy, which looks great! Definitely going to try that out.
Hey @ErikBjare, thanks for the feedback.
What did you not like or want? Feedback here is great!
Interesting - what problem does it solve for you?
Stumbled into a bunch of issues, leading to me creating BerriAI/litellm#1537. In the process, I learned that Azure OpenAI is directly supported by openai-python, so there might not be a need for litellm after all. So I decided to open a separate PR with just the openai v1.0 changes and go from there: #65
Hi @ErikBjare,
Noticed you're calling GPT-4 + local models via llama cpp. Wanted to see if we could help with LiteLLM (https://github.com/BerriAI/litellm).
Replaced the openai chatcompletions call with litellm.completion.
This should enable gptme to support Anthropic, Bedrock, AI21, TogetherAI, Azure, etc.
Curious: I know you've made a PR into LiteLLM before. What was litellm missing to be useful to you? Any feedback here would be helpful.
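The swap described above can be sketched roughly like this (a minimal illustration, assuming `litellm` is installed via `pip install litellm`; the function names here are illustrative, not gptme's actual code):

```python
def make_messages(prompt: str) -> list:
    """Build an OpenAI-style message list, which litellm accepts as-is."""
    return [{"role": "user", "content": prompt}]


def chat(model: str, prompt: str) -> str:
    """Call any provider through litellm's OpenAI-compatible interface."""
    # Deferred import so this sketch loads even without litellm installed.
    from litellm import completion

    # litellm.completion mirrors openai's chat-completions call, routing
    # model names (e.g. "gpt-4", or provider-prefixed names like
    # "azure/<deployment>") to the matching backend.
    response = completion(model=model, messages=make_messages(prompt))
    return response.choices[0].message.content
```

Because the request and response shapes match the OpenAI client, the rest of the calling code can stay unchanged while gaining access to the other providers listed above.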