
Issue: Cost calculated for trace when using GPT4-Vision is wrong #460

Open
Gr33nLight opened this issue Feb 21, 2024 · 2 comments
Labels
bug Something isn't working

Comments

@Gr33nLight

Issue you'd like to raise.

Hello, I'm running the following configuration:

    const chat = new ChatOpenAI({
      modelName: 'gpt-4-vision-preview',
      streaming: true,
      maxTokens: 1024,
    }).withConfig({ runName: 'VisionChain' });

    const message = new HumanMessage({
      content: [
        {
          type: 'text',
          text: "...",
        },
        {
          type: 'image_url',
          image_url: {
            url: `data:image/jpeg;base64,${base64Data}`,
            detail: 'low',
          },
        },
      ],
    });

I suspect LangSmith is counting the base64-encoded image as if it were a normal text string, instead of treating it as an image for the vision API. A call that actually cost me $0.03 is being reported as $2.20. This can also be observed in the token count, which includes the full base64-encoded string (which is not wrong per se). Display of the submitted image in the prompt is working as expected, so thumbs up for that :)

Suggestion:

The cost should be calculated using the image pricing model of gpt-4-vision-preview, not the standard per-text-token pricing.
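To illustrate the discrepancy, here is a minimal sketch of the two ways the image could be billed. The payload size, the ~4-characters-per-token heuristic, and the per-1K-token rate are all illustrative assumptions, not LangSmith's actual accounting; the flat 85-token charge for a `detail: "low"` image is from OpenAI's vision documentation.

```python
def text_token_estimate(s: str) -> int:
    # Crude heuristic (assumption): roughly 4 characters per token
    # when a base64 string is tokenized as ordinary text.
    return len(s) // 4


def vision_image_tokens(detail: str = "low") -> int:
    # OpenAI bills a detail="low" image at a flat 85 tokens,
    # regardless of how long the base64 payload is.
    return 85


base64_len = 300_000   # assumed size of a typical base64 JPEG payload
rate_per_1k = 0.01     # illustrative input-token rate, not an official price

naive_tokens = text_token_estimate("A" * base64_len)   # base64 counted as text
actual_tokens = vision_image_tokens("low")             # vision API billing

print(f"naive cost:  ${naive_tokens / 1000 * rate_per_1k:.2f}")
print(f"actual cost: ${actual_tokens / 1000 * rate_per_1k:.4f}")
```

With these assumed numbers, the text-token interpretation yields roughly 75,000 tokens for the image alone, versus 85 tokens under vision billing, which is the same order-of-magnitude gap as the $2.20 vs. $0.03 reported above.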

@hinthornw
Collaborator

Ah yes - we have a fix in the pipeline- thank you for flagging!

@jonsoini

I'm seeing a similar issue with Gemini Pro Vision: no cost data is displayed, but the token count for a request with an image is in the millions. Hoping the fix mentioned above covers this as well?

    model = ChatVertexAI(
        model_name="gemini-pro-vision", max_output_tokens=2048, temperature=0.01
    )

    msg = model.invoke(
        [
            HumanMessage(
                content=[
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{img_base64}"},
                    },
                ]
            )
        ]
    )

@hinthornw hinthornw added the bug Something isn't working label Apr 10, 2024