I want to get the logprobs from a vLLM endpoint for the prompt + answer, in order to evaluate the LLM on a selective task. How can I do that?
```bash
curl --location URL/v1/chat/completions \
  --header "Content-Type: application/json" \
  --data '{
    "model": "model_name",
    "echo": true,
    "messages": [
      { "role": "user", "content": "hello" }
    ],
    "logprobs": true,
    "top_logprobs": 1
  }'
```
I am using this request, but I only get the logprobs of the answer, not of the prompt. Can anyone help, please?
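If running vLLM offline (rather than through the HTTP endpoint) is an option for the evaluation, one possible route is `SamplingParams(prompt_logprobs=...)`, which returns per-token log-probs for the prompt in addition to the generated answer. A minimal sketch, assuming a placeholder model name and prompt:

```python
# Sketch: offline vLLM inference returning log-probs for both the prompt
# tokens and the generated tokens. "model_name" and the prompt are placeholders.
from vllm import LLM, SamplingParams

llm = LLM(model="model_name")

params = SamplingParams(
    max_tokens=64,
    logprobs=1,         # top-1 log-prob for every generated (answer) token
    prompt_logprobs=1,  # top-1 log-prob for every prompt token as well
)

outputs = llm.generate(["hello"], params)

for out in outputs:
    # Per-token log-probs of the prompt (the first entry is None, since the
    # first token has no preceding context).
    print(out.prompt_logprobs)
    # Per-token log-probs of the generated answer.
    print(out.outputs[0].logprobs)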
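For the OpenAI-compatible server itself, it may be worth checking whether the running vLLM version accepts `prompt_logprobs` as an extra request parameter (e.g. via `extra_body` in the OpenAI Python client), or whether the `/v1/completions` route honours `echo` together with `logprobs`; support for these has varied across versions, so treat them as things to try rather than guaranteed behaviour. Also note that `echo` is not part of the standard chat-completions request, so it may simply be ignored in the curl request above, which would explain why only the answer's logprobs come back.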