I'm trying to use mistralrs with LlamaIndex, but it's still quite slow. For my use case, I need millisecond-level latency, and with LlamaIndex the performance is not ideal.
I'm wondering if I can use mistralrs with llm-chain (a Rust-based alternative to LlamaIndex) instead. I'd be willing to do the work to get the two working together, but I'm not sure where to start. Can you let me know how I could integrate them?
To implement the integration, I would recommend looking at how to connect their Executor trait with our Pipeline trait. Mistral.rs is designed around the Engine, which oversees processing, scheduling, and running the Pipeline.
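To make the shape of that connection concrete, here is a minimal sketch of the adapter pattern involved. Note that every trait and type below is a simplified stand-in I invented for illustration — the real `Executor` trait in llm-chain and the real `Pipeline` in mistral.rs have different, richer signatures (async methods, token sampling parameters, error types), so treat this only as a picture of where the glue code would live.

```rust
// Hypothetical stand-in for mistral.rs's Pipeline: given a prompt,
// produce generated text. The real trait is more involved.
trait Pipeline {
    fn run(&self, prompt: &str) -> String;
}

// Hypothetical stand-in for llm-chain's Executor: the interface
// that chain code calls to execute a step.
trait Executor {
    fn execute(&self, input: &str) -> Result<String, String>;
}

// The adapter: wraps any Pipeline so that Executor-driven code
// (i.e. llm-chain) can drive a mistral.rs model.
struct MistralExecutor<P: Pipeline> {
    pipeline: P,
}

impl<P: Pipeline> Executor for MistralExecutor<P> {
    fn execute(&self, input: &str) -> Result<String, String> {
        // Real glue code would also map sampling options and errors here.
        Ok(self.pipeline.run(input))
    }
}

// A trivial mock pipeline so the sketch is runnable end to end.
struct EchoPipeline;
impl Pipeline for EchoPipeline {
    fn run(&self, prompt: &str) -> String {
        format!("echo: {prompt}")
    }
}

fn main() {
    let exec = MistralExecutor { pipeline: EchoPipeline };
    let out = exec.execute("hello").unwrap();
    println!("{out}"); // prints "echo: hello"
}
```

The key design point is that the adapter owns (or borrows a handle to) the mistral.rs side, so the Engine keeps control of scheduling while llm-chain only sees the `Executor` surface.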
Currently, we don't have an embedding API, which we should probably add to implement this. I'm not very familiar with llm-chain so I'm not sure if that is a necessity. If it is, I'll try to add that API, as it has already been on my to-do list for some time.
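For reference, if an embedding API were added, the surface llm-chain-style consumers typically need is small: a way to turn text into a fixed-dimension vector. The sketch below is purely hypothetical — mistral.rs has no such API today, and every name here (`EmbeddingModel`, `embed`, `ToyEmbedder`) is invented; the toy implementation exists only to make the interface concrete.

```rust
// Hypothetical embedding interface: no such API exists in mistral.rs yet.
trait EmbeddingModel {
    fn embed(&self, text: &str) -> Vec<f32>;
}

// Toy implementation that folds bytes into a fixed-size vector,
// standing in for a real model's dense embeddings.
struct ToyEmbedder {
    dim: usize,
}

impl EmbeddingModel for ToyEmbedder {
    fn embed(&self, text: &str) -> Vec<f32> {
        let mut v = vec![0.0f32; self.dim];
        for (i, b) in text.bytes().enumerate() {
            v[i % self.dim] += b as f32 / 255.0;
        }
        v
    }
}

fn main() {
    let e = ToyEmbedder { dim: 8 };
    let v = e.embed("hello");
    println!("{}", v.len()); // prints "8"
}
```

Whether llm-chain strictly requires this depends on which of its features are used; plain prompt chaining would only need the `Executor` side, while retrieval-style workflows would need embeddings.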
I'm going to close this for now as I've got another project I'll be working on for the next few weeks. If I have the time afterward, I'll reopen the issue and work on the solution.