I disagree with your point that implementing a closed-source LLM in an open-source project would be the right way to go, and I don't believe it is a requirement for this functionality. If you search around, there are success stories of projects using only 2B-parameter models. The OpenAI blog describes how one of their customers uses GPT-3.5 in production. What we really need is the ability to write any prompt and have any LLM form a response in a controlled manner. That capability already exists, and others are using it.
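As a rough sketch of what "any prompt, any LLM, controlled response" could look like, here is a minimal provider-agnostic interface. All names here (`LLMClient`, `toy_local_model`, etc.) are hypothetical illustrations, not actual OpenDevin code; a real backend would be a local 2B model or an API call instead of the echo stub.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical abstraction: any backend (a local 2B model, GPT-3.5, ...)
# is just a function from prompt text to completion text.
Backend = Callable[[str], str]

@dataclass
class LLMClient:
    backend: Backend
    max_chars: int = 2000  # crude output control; real code would use stop tokens

    def complete(self, prompt: str) -> str:
        # "Controlled manner": cap and clean whatever the backend returns,
        # regardless of which model produced it.
        return self.backend(prompt)[: self.max_chars].strip()

# Stand-in "local model" backend for demonstration only (echoes the prompt).
def toy_local_model(prompt: str) -> str:
    return f"echo: {prompt} "

client = LLMClient(backend=toy_local_model)
print(client.complete("hello"))  # -> "echo: hello"
```

Swapping backends is then a one-line change, which is the point of the argument: the project does not need to hard-wire any single closed-source model.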
How can we improve local model performance?
Problems:
Solutions:
1. Separate prompts for local and API-based models:
2. Create an OpenDevin fine-tuned model:
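Solution 1 above could be sketched as a simple prompt router. The template strings and model-name prefixes below are hypothetical examples, not OpenDevin's actual prompts: the idea is only that small local models tend to need terser, more rigid instructions than large API models.

```python
# Hypothetical templates: short and rigid for small local models,
# richer for large API-based models.
LOCAL_TEMPLATE = "Task: {task}\nAnswer briefly."
API_TEMPLATE = (
    "You are a capable software agent. Think step by step.\n"
    "Task: {task}\nExplain your reasoning, then give the answer."
)

# Assumed naming convention for local backends (illustrative only).
LOCAL_MODEL_PREFIXES = ("ollama/", "local/", "gguf/")

def build_prompt(model: str, task: str) -> str:
    # Route to the local template when the model name looks local.
    if model.startswith(LOCAL_MODEL_PREFIXES):
        return LOCAL_TEMPLATE.format(task=task)
    return API_TEMPLATE.format(task=task)

print(build_prompt("ollama/phi-2", "list files"))
print(build_prompt("gpt-3.5-turbo", "list files"))
```

The same task string then yields a prompt sized to the model, without the agent logic having to know which backend is in use.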
Necessary Improvements: