
Missing OpenAI models #26

Open
johnbrownlow opened this issue Aug 18, 2023 · 19 comments
Labels
enhancement New feature or request

Comments

@johnbrownlow

johnbrownlow commented Aug 18, 2023

Several useful OpenAI models are missing from the types, for example gpt-3.5-turbo-16k, as well as the dated models.

@KyleRobertsAI

Hi @jorge-menjivar, just wondering if there was any update on adding these models?

@Nymbo

Nymbo commented Oct 19, 2023

Hey y'all, this is a real easy patch ~
Just add this block alongside the other models listed in Master/types/ai-models.ts:

```typescript
'gpt-3.5-turbo-16k': {
  id: 'gpt-3.5-turbo-16k',
  maxLength: 48000,
  tokenLimit: 16000,
  requestLimit: 12000,
  vendor: 'OpenAI',
},
```

The model will show up in the dropdown menu :)

@Bortus-AI
Collaborator

> Several useful OpenAI models are missing from the types. For example gpt-3.5-turbo-16k as well as the dated models.

gpt-3.5-turbo-16k has been added.

@sebiweise
Collaborator

@jorge-menjivar I think this issue can be closed

@jorge-menjivar
Owner

@sebiweise I would like to make it possible to have the option to see all models, including the versioned ones, but most importantly models with dynamic names, like fine-tuned models. Maybe as an advanced feature that is disabled by default. This would require some moderate changes to our models structure, though, which is why I'm leaving it for later.

@sebiweise
Collaborator

You mean like a database table just for available models instead of the types integration?

@jorge-menjivar
Owner

Yeah, something like that would do it. The other issue is getting the proper max token length for arbitrary models. I know Ollama, at least, doesn't have a way to do this in their endpoints API, so I will need to investigate and open a PR with them (I asked on Discord and got no response, so I'm assuming there is no way). If we were to do this, there would no longer be a need to implement new models manually, unless something substantial is different about the model.

@sebiweise
Collaborator

sebiweise commented Nov 20, 2023

I just removed the PossibleAiModels types integration and am now returning every possible AiModel, but I don't know how to get the correct information for maxLength, tokenLimit, and requestLimit from the OpenAI API. Any ideas?
https://github.com/sebiweise/unSAGED/tree/feature/aimodel_vendor_update

Ollama isn't working either, but there is no endpoint for now, like you said.

@jorge-menjivar
Owner

We might need to wait for support for this to come from OpenAI, or we might have to come up with a new solution to the problem. For OpenAI models we can parse the model id and infer the base model from it.
Fine-tuned models also include the base model name in the id; they are all named something like ft:gpt-3.5-turbo-0613:xxxxxxxxxxxx. Maybe letting the user manually set the max tokens per model in the UI could be another solution in the worst case.
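A rough sketch of that id-parsing idea in TypeScript (the KNOWN_BASES table, the helper names, and the token limits here are illustrative assumptions, not part of the codebase):

```typescript
// Hypothetical helpers: infer the base model from an OpenAI model id.
// Fine-tuned ids look like "ft:<base-model>:...", so the base model sits
// in the second colon-separated segment; dated ids carry a suffix like
// "-0613" that we strip before looking up a known base.
const KNOWN_BASES: Record<string, number> = {
  // Illustrative token limits only; check the vendor docs for real values.
  'gpt-3.5-turbo': 4096,
  'gpt-3.5-turbo-16k': 16384,
  'gpt-4': 8192,
};

function inferBaseModel(modelId: string): string {
  // Fine-tuned models carry the base model right after the "ft:" prefix.
  const id = modelId.startsWith('ft:') ? modelId.split(':')[1] : modelId;
  // Strip a trailing 4-digit date suffix like "-0613".
  const dateless = id.replace(/-\d{4}$/, '');
  return dateless in KNOWN_BASES ? dateless : id;
}

function tokenLimitFor(modelId: string): number | undefined {
  return KNOWN_BASES[inferBaseModel(modelId)];
}
```

This keeps the fallback cheap: if the id does not match a known base, the caller can still fall back to a user-supplied token window.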

@Bortus-AI Bortus-AI added the enhancement New feature or request label Dec 19, 2023
@sebiweise
Collaborator

@jorge-menjivar What do you think of a new service, hosted by me or us, that has a database of every known AI model and offers a submit form so users can post newly found AI models and/or vendors? Maybe we could create a cron job that automatically gathers new models from known vendors. When models are saved, they could get a status like "draft" so someone can complete the corresponding model settings.
And then we have an endpoint that returns the models including their settings, and also offers some filters so clients can decide which vendors or model types (text/image/...) to get via the API call.
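If that endpoint existed, the client side could be as small as this sketch (the URL shape, query parameters, and record fields are all assumptions about the proposed service, not an existing API):

```typescript
// Hypothetical record shape for the proposed community model index.
interface AiModelRecord {
  id: string;
  vendor: string;
  type: 'text' | 'image';
  tokenLimit: number;
  status: 'draft' | 'published';
}

// Build the filter query string (e.g. "vendor=OpenAI&type=text").
function buildModelQuery(filters: { vendor?: string; type?: string }): string {
  const entries = Object.entries(filters).filter(
    ([, v]) => v !== undefined,
  ) as [string, string][];
  return new URLSearchParams(entries).toString();
}

// Fetch models from the (assumed) /api/models endpoint with optional filters.
async function fetchModels(
  baseUrl: string,
  filters: { vendor?: string; type?: string } = {},
): Promise<AiModelRecord[]> {
  const res = await fetch(`${baseUrl}/api/models?${buildModelQuery(filters)}`);
  if (!res.ok) throw new Error(`Model index returned ${res.status}`);
  return res.json();
}
```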

@jorge-menjivar
Owner

I like this idea. So basically it's a community database/index for models?

@jorge-menjivar
Owner

Also check the pre-release app v0.1.1. I made it so that it detects all models and lets you put the correct token window size in the settings.

@sebiweise
Collaborator

sebiweise commented Dec 20, 2023

Yes, so basically just the types we have now, but in a database that can be contributed to easily. I would create a new service for that so we don't have a problem with another endpoint that isn't available in the desktop app.
I would just try to build a simple version to test: a quick Next.js application with Prisma and a free PlanetScale database, Clerk auth for some admin pages, and a publicly available submit form that creates models that then need to be checked by an admin.

So then you can add another API call to that service (maybe later on including an API key) and get the "possible AI models" including all settings. You would still need to get all the models that the supported vendors in your app offer, but you could get the settings for max_tokens and so on from the other endpoint.

@sebiweise
Collaborator

Just created a basic Next.js app. For now it just displays the data from the PlanetScale db, but I'm currently working on the submit form and a little admin dashboard: https://ai-services-web.vercel.app/

https://ai-services-web.vercel.app/vendor
(screenshot)

https://ai-services-web.vercel.app/model
(screenshot)

The submit forms aren't working at the moment:
https://ai-services-web.vercel.app/form/model
https://ai-services-web.vercel.app/form/vendor

@Bortus-AI
Collaborator

@sebiweise That's really awesome. If Vercel gets too slow to handle this, let me know. I have over a dozen dedicated servers, so I can get you a VPS to run this on at no cost.

@Bortus-AI
Collaborator

I'd be happy to add missing params if you need help.

@sebiweise
Collaborator

Thank you for your help. I think we can just try to use Vercel for now; we will just need to keep an eye on the insights/usage of the endpoint. I think the database will need an upgrade later on, but we will see. The data will be cached, so for now I think that's no problem.
I will finish the database schema and the management dashboard tomorrow so I can invite you to edit/add AI models.

@jorge-menjivar
Owner

@sebiweise Looking great! Do we need to keep the max length and request limit? I removed them from the desktop app because they seemed redundant. Max length is just a guess for most models, and request limit could be set to token_limit - 100.

  • Max length is used for the max length of input text so if you go over a certain number of characters while typing, it will show you a warning. This is of course a guess because we cannot compute ahead of time the number of tokens a certain number of characters will use. The rule of thumb for now for this value has been token_limit * 3 or something like that.
  • Request limit is the max number of tokens to send in the request, so if we use token_limit - 100 for its value, we guarantee that the model can return at least 100 tokens in the response.
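Those two rules of thumb could be encoded directly; this is a sketch of the heuristics described above, not vendor-documented limits:

```typescript
interface DerivedLimits {
  maxLength: number;    // character-count warning threshold for the input box
  requestLimit: number; // max prompt tokens, reserving room for the reply
}

// token_limit * 3 approximates "~3 characters per token" for the input
// warning; token_limit - reservedForReply guarantees the model can still
// return at least `reservedForReply` tokens in the response.
function deriveLimits(tokenLimit: number, reservedForReply = 100): DerivedLimits {
  return {
    maxLength: tokenLimit * 3,
    requestLimit: tokenLimit - reservedForReply,
  };
}
```

With tokenLimit = 16000 this gives maxLength 48000, matching the hand-written gpt-3.5-turbo-16k entry earlier in the thread.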

@sebiweise
Collaborator

sebiweise commented Dec 22, 2023

Yes, I think we will have to take a look at whether we can remove some of the params or whether they are still needed. I just pushed a working example that uses the "AI-Services" service to get the possible AI models and the correct params/limits.
#163

Main implementation (maybe we can change some code in the future to reduce the number of HTTP calls):
https://github.com/jorge-menjivar/unsaged/pull/163/files#diff-b2b8156aafdf2a2697ee5d5d733f9e17a1d4e89734f8b97ab410121180b3baf0
