CoT agent token usage is empty #4473

Closed
4 tasks done
sinomoe opened this issue May 17, 2024 · 1 comment · Fixed by #4474
Labels
🐞 bug Something isn't working 🤖 feat:agent Agent related topics

Comments

sinomoe (Contributor) commented May 17, 2024

Self Checks

  • This is only for bug reports; if you would like to ask a question, please head to Discussions.
  • I have searched for existing issues, including closed ones.
  • I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • Please do not modify this template :) and fill in all the required fields.

Dify version

0.6.8

Cloud or Self Hosted

Self Hosted (Source)

Steps to reproduce

Chat with a ReAct agent.

✔️ Expected Behavior

Token usage should not be 0.

❌ Actual Behavior

Token usage is reported as 0.


dosubot bot commented May 17, 2024

It looks like the token usage being reported as 0 in your self-hosted Dify setup is caused by usage_dict['usage'] not being populated with LLMUsage data before the increase_usage function is called. That function accumulates the usage data for the run; if it receives an empty or uninitialized usage object, the reported token counts stay at 0.

To address this, you'll need to ensure that after invoking the LLM and processing its chunks, you capture the LLM usage data and update usage_dict['usage'] accordingly. This involves:

  1. Capturing the LLM usage data post-invocation.
  2. Updating usage_dict['usage'] with this data.
  3. Calling increase_usage with the correctly populated usage_dict.

Here's a simplified sketch of how you might implement these steps (the hard-coded values are placeholders for whatever the LLM actually reports):

# Example for updating LLM usage data; usage_dict is the shared dict the
# agent runner threads through the run.
def process_llm_chunks(chunks, usage_dict):
    # Capture the LLM usage data here; in real code this comes from the
    # result chunks rather than from hard-coded placeholder values.
    example_usage_data = LLMUsage(
        prompt_tokens=100,
        completion_tokens=200,
        prompt_price=0.01,
        completion_price=0.02,
    )

    # Update usage_dict with the captured LLM usage data
    usage_dict['usage'] = example_usage_data

# Ensure this update happens before calling increase_usage
process_llm_chunks(chunks, usage_dict)

# Call increase_usage with the now-populated usage_dict
increase_usage(llm_usage, usage_dict['usage'])

This example is a conceptual guide and needs to be adapted to fit the specifics of how your application captures and processes LLM usage data. The key takeaway is to ensure that usage_dict['usage'] is meaningfully populated before it's used to track usage metrics.


dosubot added the 🐞 bug and 🤖 feat:agent labels on May 17, 2024