[FEAT]: Integration of LiteLLM as a model proxy, such as Ollama #1154
Comments
Do you mean using LiteLLM as your LLM provider, or adding LiteLLM within AnythingLLM as the main backend? That other issue was the latter, which is not possible because LiteLLM is a Python service and would require more setup, configuration, and overhead than it is worth to undertake. Adding it as an LLM provider, where you run LiteLLM on your own and we simply connect to it, is very doable.
Hello,
I mean "running LiteLLM on my own" and "simply connecting AnythingLLM" to it.
This is a good pattern, since LiteLLM can proxy model providers such as vLLM, which can serve and multiplex several LLM models.
I have set up a full configuration with Ollama / LiteLLM / vLLM, and I have tested the AnythingLLM server with Ollama: it works like a charm. So I will be able to test AnythingLLM against LiteLLM to serve models from vLLM, and measure the speed increase in AnythingLLM.
Let me know if you want me to put something together.
I am sorry for my English.
François from France
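For context, here is a minimal sketch of what "simply connecting" to a self-hosted LiteLLM proxy could look like. LiteLLM exposes an OpenAI-compatible endpoint, so any OpenAI-style client can talk to it; the base URL, API key, and model alias below are assumptions for illustration, not AnythingLLM's actual configuration.

```python
# Sketch: querying a self-hosted LiteLLM proxy through its OpenAI-compatible API.
# Assumptions: the proxy is reachable at localhost:4000, "sk-master-key" is whatever
# key the proxy was started with, and "vllm-llama3" is a model alias defined in the
# proxy's own config. All three are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000/v1",  # LiteLLM proxy endpoint (assumed)
    api_key="sk-master-key",              # key configured on the proxy (assumed)
)

response = client.chat.completions.create(
    model="vllm-llama3",  # alias the proxy routes to a vLLM backend (assumed)
    messages=[{"role": "user", "content": "Hello from an AnythingLLM-style client"}],
)
print(response.choices[0].message.content)
```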
Understood completely, and that makes plenty of sense! I just wanted to ensure we were talking about the same feature.
Hello, do you know if this AnythingLLM–LiteLLM compatibility feature is planned for the near future?
@flefevre It is certainly not out of the question. It would alleviate many of the cases where unsupported LLM runners cannot be used with AnythingLLM, while we build more dedicated support for specific runners over time.
Hi @timothycarambat, may I know if this feature is planned for integration into AnythingLLM? A LiteLLM binding is something we would very much like to see in AnythingLLM.
What would you like to see?
It could be good to integrate the LiteLLM proxy (https://github.com/BerriAI/litellm) in addition to Ollama, so AnythingLLM could be compatible with multiple API model serving.
I have seen the post #271, but perhaps it has evolved in the right direction?
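As a rough illustration of the "multiple API model serving" idea, here is a hedged sketch using the LiteLLM Python SDK directly: one call signature, multiple backends, routed by the provider prefix in the model string. The endpoints and model names are placeholders, and AnythingLLM would more likely talk to the proxy over HTTP rather than embed the SDK.

```python
# Sketch: LiteLLM multiplexing several backends behind one interface.
# The URLs and model names below are placeholders, not a real deployment.
import litellm

messages = [{"role": "user", "content": "Summarize this document in one sentence."}]

# Route to a local Ollama server (assumed to be on its default port 11434).
ollama_reply = litellm.completion(
    model="ollama/llama3",
    messages=messages,
    api_base="http://localhost:11434",
)

# Route to a vLLM server exposing an OpenAI-compatible endpoint (placeholder URL).
vllm_reply = litellm.completion(
    model="openai/meta-llama/Meta-Llama-3-8B-Instruct",
    messages=messages,
    api_base="http://localhost:8000/v1",
    api_key="not-needed",  # vLLM typically ignores the key; placeholder value
)

print(ollama_reply.choices[0].message.content)
print(vllm_reply.choices[0].message.content)
```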