
Add prompt_feedback/Safety filter info to VertexAI Gemini Response #218

Closed

kardiff18 opened this issue May 13, 2024 · 2 comments

Comments

@kardiff18

Right now, a user receives an empty string of text if a safety filter blocks the response, rather than information about the safety filters themselves (such as the probability). It would be great to have the safety filter dictionary returned as part of prompt feedback, similar to the non-VertexAI Gemini implementation:

```python
llm_output = {"prompt_feedback": proto.Message.to_dict(response.prompt_feedback)}
```
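For context, on the consumer side that feedback then shows up on the `LLMResult`. A minimal sketch of what that looks like with the non-Vertex integration (the model name is just an example):

```python
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

llm = ChatGoogleGenerativeAI(model="gemini-pro")

# generate() returns an LLMResult; the line quoted above is what
# puts the prompt_feedback dict onto llm_output.
result = llm.generate([[HumanMessage(content="...")]])
print(result.llm_output.get("prompt_feedback"))
```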

@lkuligin
Collaborator

It's part of the generation_info, or am I missing anything?
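i.e., something like this minimal sketch should surface it (the `safety_ratings` / `is_blocked` key names here are assumptions based on what the langchain-google-vertexai integration populated at the time):

```python
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")

result = llm.generate([[HumanMessage(content="...")]])
# Per-candidate metadata, including the safety data, lands here.
info = result.generations[0][0].generation_info
print(info.get("safety_ratings"))
print(info.get("is_blocked"))
```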

@kardiff18
Author

That's fair... I guess I was hoping for the prompt_feedback directly. Based on the chain I'm using, I can't get it from generation_info without writing a custom callback, which I can do; it just would be nice if everything was returned natively.
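Something like this minimal sketch (`SafetyInfoHandler` is just a placeholder name; it assumes langchain-core's `BaseCallbackHandler` and its `on_llm_end` hook):

```python
from typing import Any

from langchain_core.callbacks import BaseCallbackHandler
from langchain_core.outputs import LLMResult


class SafetyInfoHandler(BaseCallbackHandler):
    """Capture generation_info (incl. safety ratings) from each LLM call."""

    def __init__(self) -> None:
        self.generation_info: list[dict] = []

    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        # Collect the per-candidate metadata from every generation.
        for gens in response.generations:
            for gen in gens:
                if gen.generation_info:
                    self.generation_info.append(gen.generation_info)


# handler = SafetyInfoHandler()
# chain.invoke({...}, config={"callbacks": [handler]})
# handler.generation_info then holds the safety filter dicts.
```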

You can close this for now.
