
Support configuring multiple models at a time #1391

Open
lenaxia opened this issue May 17, 2024 · 0 comments
lenaxia (Contributor) commented May 17, 2024

Is your feature request related to a problem? Please describe.
When creating an agent, there is a drop-down to select the LLM model for the agent to use; however, there is currently no way to configure multiple models for MemGPT (at least from what I can find in the docs and what I have been told in Discord).

Describe the solution you'd like
Either: multiple models should be configurable in the config file. If done manually, defining an endpoint for each model would let the user utilize multiple services at any given time.

Or: add discovery of models for a configured backend. For example, LiteLLM Proxy and LocalAI both expose a /models endpoint that publishes all currently available models; this list would populate the drop-down in "Create Agent".
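As a rough illustration of the discovery approach, here is a minimal sketch that pulls model IDs from an OpenAI-compatible /models endpoint. The base URL, function names, and exact response shape are assumptions for illustration, not MemGPT's actual API.

```python
# Hedged sketch: populate a "Create Agent" model drop-down from a
# backend's OpenAI-compatible /models listing (the format LiteLLM
# Proxy and LocalAI both serve). Names and URLs here are hypothetical.
import json
from urllib.request import urlopen


def parse_model_list(payload: dict) -> list[str]:
    """Extract and sort model IDs from an OpenAI-style /models response."""
    return sorted(m["id"] for m in payload.get("data", []))


def discover_models(base_url: str) -> list[str]:
    """Fetch the /models listing from a backend and return its model IDs."""
    with urlopen(f"{base_url.rstrip('/')}/models") as resp:
        return parse_model_list(json.load(resp))


# Example of the response shape such backends publish:
example = {"object": "list", "data": [{"id": "llama-3-8b"}, {"id": "mistral-7b"}]}
print(parse_model_list(example))  # ['llama-3-8b', 'mistral-7b']
```

The UI could call something like `discover_models("http://localhost:4000/v1")` on page load to fill the drop-down, rather than reading a static model list from the config.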

Especially in this case, but also in the first, agents should be flagged in the Agents list if their model is no longer available, and/or a default model could be configured as a fallback should a model become unavailable.

Describe alternatives you've considered
No alternatives exist besides copying and pasting configs.
