We're working with a couple of long-context-window prompts where knowing the token counts of the prompt components before making the request is useful.

However, testing our workflows with Claude 3 has been tricky because we have no way to get these estimates until after we've made the request, which costs us API credits. It would be extremely useful if the tokenizer added support for the new series of models.
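In the meantime, a rough character-based heuristic can give a ballpark estimate before sending a request. This is a sketch, not the actual Claude 3 tokenizer: the ~4 characters-per-token ratio is a common rule of thumb for English text and can be noticeably off for code, non-English text, or unusual formatting.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English prose.
    # This is only an approximation; the real Claude 3 tokenizer
    # may produce significantly different counts.
    return max(1, len(text) // 4)

prompt = "Summarize the following document in three bullet points."
print(estimate_tokens(prompt))
```

A real tokenizer for the Claude 3 models would of course be far more accurate, which is exactly why this feature request matters for budgeting long-context prompts.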