Better evaluation results show. #12149
Comments
Hello! Thanks for contributing to the discussion with such thoughtful suggestions. 🌟 Displaying the number of images per category during evaluation sounds like a reasonable enhancement for gaining more granular insights into model performance across different classes. This can definitely help users better understand the specific areas where the model excels or requires more training data. If you are willing to submit a PR for this feature, it would be a fantastic addition to the project. Please ensure your changes maintain the current functionality as an option, perhaps through a user-defined flag in the evaluation script. This way, users can choose the output format that best suits their needs. Looking forward to seeing your contribution!
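The opt-in flag the maintainer suggests could be sketched roughly as follows. Note this is only an illustration: the flag name `--per-class-images` and the script layout are assumptions, not the project's actual CLI.

```python
import argparse

# Hypothetical sketch of an opt-in flag for per-class image counts.
# The argument name is an assumption; the real PR may choose differently.
parser = argparse.ArgumentParser(description="evaluation script (sketch)")
parser.add_argument(
    "--per-class-images",
    action="store_true",
    help="also print the number of evaluation images per class",
)

# Simulate invoking the script with the flag enabled.
opt = parser.parse_args(["--per-class-images"])
print(opt.per_class_images)  # True when the flag is passed
```

With `action="store_true"` the default output format is unchanged, so existing users see no difference unless they opt in.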
@glenn-jocher I have implemented this feature and used the
Hello @sunmooncode! Great to hear that you've implemented the feature. 🚀 For the parameter name, how about
@glenn-jocher Very well, I will update it!
@sunmooncode thanks for the update! Looking forward to your PR. If you need any help, just let us know! 😊
Search before asking
Description
Would it be more reasonable for the current evaluation script to print the number of images for each category, rather than only printing the total number of evaluation images?
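A minimal sketch of the idea: counting, per class, how many images contain at least one instance of that class, and printing it next to the instance count. The `image_labels` data and class names here are made-up illustrations, not the project's real data structures.

```python
from collections import Counter

# Hypothetical per-image ground-truth class lists; in a real evaluation
# script this information would come from the dataloader's labels.
image_labels = [
    [0, 0, 1],  # image 0: two objects of class 0, one of class 1
    [1],        # image 1: one object of class 1
    [0, 2],     # image 2: one object each of classes 0 and 2
]
names = {0: "person", 1: "car", 2: "dog"}

# Images per class: each image contributes at most once to each class
# it contains, hence the set() before counting.
images_per_class = Counter()
for labels in image_labels:
    images_per_class.update(set(labels))

# Instances per class: every object counts.
instances_per_class = Counter(c for labels in image_labels for c in labels)

print(f"{'Class':>10} {'Images':>8} {'Instances':>10}")
for c, name in names.items():
    print(f"{name:>10} {images_per_class[c]:>8} {instances_per_class[c]:>10}")
```

The distinction between "Images" and "Instances" columns makes it clear whether a class is rare because few images contain it or because each image contains few objects of it.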
Use case
This is the current result.
This is the modified result.
Additional
I am not sure if this optimization is beneficial.
Are you willing to submit a PR?