All these would be really useful, I think. You'd want:
- Expected cost for a particular pipeline, either calculated directly or by pointing to an API or online cost calculator for the particular LLMs involved. With multiple LLMs in play this gets more complicated, which is why you'd also want:
- Expected tokens for a particular pipeline, split by LLM / task, so you can make the calculation manually yourself if you want.
- A `max_budget` option as a param, which would either stop the run when it hits that estimated cost, or refuse to run the pipeline and suggest ways to downscale the amount of work the pipeline does in order to stay within budget.
`expected_time` is maybe interesting, but probably hard to do. I agree it'd be pretty useful if you could get a ballpark figure, though.
You might even consider abstracting some of this out into an online calculator tool: you supply the number of tokens in a raw data file and, for a typical task, get a rough estimate of both time and cost under typical scenarios (e.g. gpt-3.5 or claude-haiku). Just from a marketing standpoint, having a HF Space that does this might be attractive.
I think this would all be really beneficial for the tool. Right now you can only make very loose guesses based on runs over a small subset of the data and then try to extrapolate out.
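The estimates described above could be sketched roughly as follows. This is a minimal illustration, not an existing API: the pricing figures, `PRICES` table, and function names (`estimate_task_cost`, `estimate_pipeline_cost`) are all hypothetical, and real per-record token counts would have to come from a tokenizer pass over a data sample.

```python
# Illustrative per-1K-token prices (input, output) in USD.
# These figures are placeholders, not current provider pricing.
PRICES = {
    "gpt-3.5-turbo": (0.0005, 0.0015),
    "claude-haiku": (0.00025, 0.00125),
}

def estimate_task_cost(model, n_records, in_tokens_per_record, out_tokens_per_record):
    """Estimate the cost of one task: records x tokens x price-per-token."""
    price_in, price_out = PRICES[model]
    total_in = n_records * in_tokens_per_record
    total_out = n_records * out_tokens_per_record
    return (total_in / 1000) * price_in + (total_out / 1000) * price_out

def estimate_pipeline_cost(tasks, max_budget=None):
    """Sum per-task estimates; refuse to proceed if over max_budget.

    `tasks` maps a task name to (model, n_records, in_tok, out_tok).
    Returns (total, per-task breakdown) so the split by LLM/task is visible.
    """
    breakdown = {name: estimate_task_cost(*spec) for name, spec in tasks.items()}
    total = sum(breakdown.values())
    if max_budget is not None and total > max_budget:
        raise RuntimeError(
            f"Estimated cost ${total:.2f} exceeds budget ${max_budget:.2f}; "
            "consider sampling fewer records or using a cheaper model."
        )
    return total, breakdown
```

For example, `estimate_pipeline_cost({"generate": ("gpt-3.5-turbo", 10_000, 500, 200)})` would price 5M input and 2M output tokens using the placeholder rates above; the per-task breakdown is what lets you do the split-by-LLM calculation manually.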
gabrielmbmb changed the title from "Add a way to predict or estimate expected tokens or expected costs" to "[FEATURE] Add a way to predict or estimate expected tokens or expected costs" on Apr 22, 2024.
Is your feature request related to a problem? Please describe.
Generation that relies on external APIs can be expensive.
Describe the solution you'd like
Something like a pipeline method `expected_costs` or `expected_tokens`.
Describe alternatives you've considered
Wait and see.
Additional context
Perhaps something like `expected_time` would also be interesting?