[Feature]: 🎉 GPT-4o Day 1 Support #3612
Comments
Also accessible in playground: https://platform.openai.com/playground/chat?mode=chat&model=gpt-4o&models=gpt-4o |
It's absolutely out. Why on earth is there a need to add new GPT models to litellm each time OpenAI releases them? Can't it just route gpt.* to OpenAI and, at least in principle, be future-proof? |
There's also the models endpoint, which already lists it:

curl https://api.openai.com/v1/models \
  -H "Authorization: Bearer $OPENAI_API_KEY"

{
  "id": "gpt-4o",
  "object": "model",
  "created": 1715367049,
  "owned_by": "system"
},
{
  "id": "gpt-4o-2024-05-13",
  "object": "model",
  "created": 1715368132,
  "owned_by": "system"
} |
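A small sketch of how the listing above could be checked programmatically. The `has_model` helper is hypothetical (not part of litellm or the OpenAI SDK); it just scans the `data` array shape returned by `/v1/models`, using the two entries from the curl output as sample input.

```python
# Hypothetical helper: check whether a model id appears in the
# /v1/models listing. Fetching the listing itself is left to the caller.

def has_model(models: list, model_id: str) -> bool:
    """Return True if model_id is present in the /v1/models data array."""
    return any(m.get("id") == model_id for m in models)

# Sample entries copied from the curl output above.
models = [
    {"id": "gpt-4o", "object": "model", "created": 1715367049, "owned_by": "system"},
    {"id": "gpt-4o-2024-05-13", "object": "model", "created": 1715368132, "owned_by": "system"},
]

print(has_model(models, "gpt-4o"))  # True
```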
@krrishdholakia - added it already: #3615 (I have it working with OpenAI directly) |
@mmmaia The main reason for adding the models "by hand" is due to their pricing/context windows. You would think OpenAI would create a json spec somewhere with this information so that others could automate. |
This is now live - #3613 Should be available for everybody to use (no updates needed), in the next 5-10 minutes once cache is updated - https://raw.githubusercontent.com/BerriAI/litellm/main/model_prices_and_context_window.json |
On SDK

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-api-key"

response = completion(
    model="gpt-4o",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)

On Proxy
model_list:
- model_name: gpt-4o
litellm_params:
model: gpt-4o
api_key: os.environ/MY_OPENAI_KEY
curl --location 'http://0.0.0.0:4000/chat/completions' \
--header 'Content-Type: application/json' \
--data ' {
"model": "gpt-4o",
"messages": [
{
"role": "user",
"content": "what llm are you"
}
]
}
' See all ways to test on proxy: https://docs.litellm.ai/docs/proxy/quick_start#using-litellm-proxy---curl-request-openai-package-langchain |
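The curl request above posts a plain JSON body to the proxy's /chat/completions endpoint. A minimal sketch of building that same payload in Python (the `chat_payload` helper is hypothetical, shown only to make the request shape explicit):

```python
import json

def chat_payload(model: str, content: str) -> str:
    """Serialize a minimal /chat/completions request body,
    matching the shape of the curl --data example above."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    })

print(chat_payload("gpt-4o", "what llm are you"))
```

The resulting string can be sent to `http://0.0.0.0:4000/chat/completions` with any HTTP client, with `Content-Type: application/json` as in the curl example.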
You don't need to wait for it to be on litellm json map - @emsi you could have just done |
Hey @emsi you do not need to add models to litellm to use new models. Just append the provider to the model name - e.g. |
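Based on the comment above, the idea is that a model name like "openai/gpt-4o" carries its own routing information, so no litellm map update is needed. A small sketch of the "provider/model" prefix convention (the `split_provider` helper is hypothetical, for illustration only):

```python
# Sketch of the provider-prefix convention: "openai/gpt-4o" names both
# the provider to route to and the model id to send it.

def split_provider(model: str):
    """Split 'provider/model' into (provider, model);
    provider is None when no prefix is present."""
    if "/" in model:
        provider, name = model.split("/", 1)
        return provider, name
    return None, model

print(split_provider("openai/gpt-4o"))  # ('openai', 'gpt-4o')
print(split_provider("gpt-4o"))        # (None, 'gpt-4o')
```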
The Feature
Creating parent issue to track gpt-4o support.
It's not out yet - https://platform.openai.com/docs/models/gpt-4-turbo-and-gpt-4
But should be in the next few weeks
Motivation, pitch
Support all models.