It looks like the issue with token usage being reported as 0 in your self-hosted Dify setup might be due to usage_dict['usage'] not being populated with LLMUsage data before increase_usage is called. This function is responsible for accumulating usage data, so if it receives an empty or uninitialized usage object, the reported token counts will not reflect the tokens actually consumed.
To address this, you'll need to ensure that after invoking the LLM and processing its chunks, you capture the LLM usage data and update usage_dict['usage'] accordingly. This involves:
Capturing the LLM usage data post-invocation.
Updating usage_dict['usage'] with this data.
Calling increase_usage with the correctly populated usage_dict.
Here's a simplified approach to how you might implement these steps:
```python
# Example for updating LLM usage data
def process_llm_chunks(chunks):
    # Capture LLM usage data here
    example_usage_data = LLMUsage(
        prompt_tokens=100,
        completion_tokens=200,
        prompt_price=0.01,
        completion_price=0.02,
    )
    # Update usage_dict with the LLM usage data
    usage_dict['usage'] = example_usage_data

# Ensure this update happens before calling increase_usage
process_llm_chunks(chunks)

# Call increase_usage with the updated usage_dict
increase_usage(llm_usage, usage_dict['usage'])
```
This example is a conceptual guide and needs to be adapted to fit the specifics of how your application captures and processes LLM usage data. The key takeaway is to ensure that usage_dict['usage'] is meaningfully populated before it's used to track usage metrics.
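To make the accumulation step concrete, here is a minimal, self-contained sketch of what an accumulator like increase_usage could do. The LLMUsage dataclass and its field names here are assumptions modeled on the constructor call above, not Dify's actual model; adapt them to the real class in your codebase.

```python
from dataclasses import dataclass


# Hypothetical stand-in for Dify's LLMUsage model; field names are
# assumptions based on the example above, not the real implementation.
@dataclass
class LLMUsage:
    prompt_tokens: int = 0
    completion_tokens: int = 0
    prompt_price: float = 0.0
    completion_price: float = 0.0


def increase_usage(total: LLMUsage, delta: LLMUsage) -> None:
    # Accumulate one invocation's usage into the running total.
    total.prompt_tokens += delta.prompt_tokens
    total.completion_tokens += delta.completion_tokens
    total.prompt_price += delta.prompt_price
    total.completion_price += delta.completion_price


total = LLMUsage()
increase_usage(total, LLMUsage(prompt_tokens=100, completion_tokens=200,
                               prompt_price=0.01, completion_price=0.02))
# If the delta passed in is an empty/default LLMUsage (the bug described
# above), every field of `total` stays at 0.
```

This illustrates why the reported usage stays at 0: the accumulator only ever sees whatever was placed into usage_dict['usage'], so an unpopulated value silently produces zero totals.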
About Dosu
This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.
Self Checks
Dify version
0.6.8
Cloud or Self Hosted
Self Hosted (Source)
Steps to reproduce
Chat with a ReAct agent.
✔️ Expected Behavior
Token usage should be reported with the actual token counts (not 0).
❌ Actual Behavior
Token usage is reported as 0.