Hello again, not an issue but rather a question.
Azure is gating access to gpt-3.5-turbo-0125 and -1106 behind a requirement to have purchased Provisioned Throughput Units (PTUs). This means that for pay-as-you-go deployments, only the models considered legacy (0613, 0301) are supported. Similar constraints are coming, if they aren't already in place, for Bedrock model access.
Is the provisioned throughput model a supported use case out of the box (OOTB) in LangSmith?
Suggestion:
No response
Are you asking whether it's supported in the playground, or whether we do cost analytics geared specifically toward PTU setups? Or something else?
Based on the title, I'm assuming you're referring to cost estimation: our current logic is fairly naive (it's customizable, but always per-token). If you're using PTUs and there's an additional feature set that would be useful, we'd love to make LangSmith work better for that.
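For context, the "naive, always per-token" cost logic mentioned above can be sketched roughly like this. This is a hypothetical illustration, not LangSmith's actual implementation; the model name and prices are made-up example values, and it shows why PTU billing (a flat charge for reserved capacity) doesn't fit a per-token price table:

```python
# Illustrative sketch of naive per-token cost estimation.
# Prices and model names below are example values, NOT real pricing.

# (prompt_price, completion_price) in USD per 1K tokens
PRICE_PER_1K = {
    "gpt-3.5-turbo-0613": (0.0015, 0.002),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate cost by multiplying token counts by per-token rates.

    A PTU deployment is billed as flat reserved capacity, so a lookup
    table like this can't represent it without extra configuration.
    """
    prompt_price, completion_price = PRICE_PER_1K[model]
    return (prompt_tokens / 1000) * prompt_price + (completion_tokens / 1000) * completion_price
```

Under a PTU setup, the marginal per-token cost is effectively zero once capacity is reserved, so a per-token table would need either custom (e.g. amortized) rates or a different billing model entirely.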