[FEAT]: Integration of Litellm as model proxy such as Ollama #1154

Closed
flefevre opened this issue Apr 20, 2024 · 6 comments · Fixed by #1424
Labels: enhancement (New feature or request) · feature request · Integration Request (Request for support of a new LLM, Embedder, or Vector database)

@flefevre

What would you like to see?

It would be good to integrate the LiteLLM proxy (https://github.com/BerriAI/litellm) in addition to Ollama, so AnythingLLM could be compatible with multiple API model-serving backends.

I have seen the discussion in #271, but perhaps things have since evolved in the right direction?

@flefevre flefevre added enhancement New feature or request feature request labels Apr 20, 2024
@timothycarambat
Member

Do you mean using LiteLLM as your LLM provider, or adding LiteLLM within AnythingLLM as the main backend? That other issue was the latter, which is not possible because LiteLLM is a Python service and would involve more setup, configuration, and overhead than it is worth to undertake.

Adding it as an LLM provider, where you run LiteLLM on your own and we simply connect to it, is very doable.
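
For context, a self-hosted LiteLLM proxy exposes an OpenAI-compatible API, so connecting to it would look much like connecting to any other OpenAI-style endpoint. A minimal sketch, assuming the proxy is already running on its default port 4000 and serves a model registered under the name `mixtral-8x7b` (the model name, port, and key below are illustrative, not details from this thread):

```python
# Sketch: calling a self-hosted LiteLLM proxy through its OpenAI-compatible API.
# Assumes the proxy was started separately, e.g. `litellm --config config.yaml --port 4000`,
# and that "mixtral-8x7b" is a model name defined in that config (illustrative values).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # LiteLLM proxy endpoint
    api_key="sk-anything",                # any placeholder key works unless proxy auth is configured
)

response = client.chat.completions.create(
    model="mixtral-8x7b",
    messages=[{"role": "user", "content": "Hello from AnythingLLM!"}],
)
print(response.choices[0].message.content)
```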

@timothycarambat timothycarambat added the Integration Request Request for support of a new LLM, Embedder, or Vector database label Apr 21, 2024
@flefevre
Author

flefevre commented Apr 21, 2024 via email

@timothycarambat
Member

Understood completely, and that makes plenty of sense! I just wanted to ensure we were talking about the same feature.

@flefevre
Author

flefevre commented May 1, 2024

Hello,
Implementing the LiteLLM binding in AnythingLLM would help solve problems like #1153, where AnythingLLM is able to connect to vLLM but cannot use the model served by vLLM due to the specifics of each model (in my case, Mixtral 8x7B).

Do you know if this LiteLLM compatibility feature will be added to AnythingLLM in the near future?
Thanks again. François
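
To illustrate the use case above, here is a rough sketch of how LiteLLM can sit in front of a vLLM OpenAI-compatible server and smooth over model-specific differences. The port, model identifier, and key are assumptions for illustration, not details confirmed in this thread:

```python
# Sketch: using the litellm library to call a model served by vLLM's
# OpenAI-compatible server (assumed here to be running on localhost:8000).
# The "openai/" prefix routes the request through LiteLLM's generic
# OpenAI-compatible provider; the model name and port are illustrative.
import litellm

response = litellm.completion(
    model="openai/mistralai/Mixtral-8x7B-Instruct-v0.1",
    api_base="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint
    api_key="none",                       # vLLM does not require a key by default
    messages=[{"role": "user", "content": "Summarize this document."}],
)
print(response.choices[0].message.content)
```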

@timothycarambat
Member

@flefevre It is certainly not out of the question, and it would alleviate many such cases where unsupported LLM runners cannot be used with AnythingLLM while we build out more dedicated support for specific runners over time.

@timothycarambat timothycarambat self-assigned this May 1, 2024
@avinashkurup

avinashkurup commented May 15, 2024

Hi @timothycarambat, may I know if this feature is planned for integration into AnythingLLM? The LiteLLM binding is something we would really like to see in AnythingLLM.

@shatfield4 shatfield4 linked a pull request May 16, 2024 that will close this issue