
results and evaluation for 512x352 images #40

Open
ZihaoW123 opened this issue Jul 7, 2023 · 5 comments

Comments

@ZihaoW123

Hi authors, your work is impressive. Thanks for sharing the code base.

However, I find that "utils/metrics.py" only contains evaluation code for 256x176 images, and the FID it calculates seems to be incorrect.

It would greatly help the community if you could share the generated 512x352 images and the evaluation code for that resolution. Looking forward to your kind response.

@ankanbhunia
Owner

Could you please explain your issue regarding the FID calculated by "utils/metrics.py"?

@ZihaoW123
Author

Calculating the FID for 256x176 images with "utils/metrics.py" gives correct results.

To evaluate 512x352 images, I replaced cv2.resize(imread(str(fn)).astype(np.float32), (176, 256)) with cv2.resize(imread(str(fn)).astype(np.float32), (352, 512)) in "utils/metrics.py".
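In other words, the change amounts to something like the following (a minimal sketch; the helper name and the imageio import are my assumptions, since only the resize call itself appears in this thread). Note that cv2.resize takes the target size as (width, height):

```python
import cv2
import numpy as np
from imageio import imread  # assumed source of imread; not shown in the thread

def load_resized(fn, width=352, height=512):
    """Load an image for evaluation at a given resolution.
    cv2.resize expects (width, height), so the original (176, 256)
    produced 256x176 images and (352, 512) produces 512x352."""
    img = imread(str(fn)).astype(np.float32)
    return cv2.resize(img, (width, height))
```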

I find that in InceptionV3 there is a resize function that scales the image from 512x352 down to 299x299 (screenshot of the resize code omitted here).

In order to scientifically test 512x352 image quality, I think the resolution of the image should not be reduced during the evaluation.
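For context, the resize step in question is presumably the resize_input branch found in pytorch-fid style InceptionV3 wrappers (an assumption on my part; the exact code was in the screenshot). It looks roughly like this:

```python
import torch
import torch.nn.functional as F

def preprocess_for_inception(x: torch.Tensor) -> torch.Tensor:
    """Downsample a batch (N, 3, H, W) to the 299x299 input that
    InceptionV3 expects, regardless of the original resolution."""
    return F.interpolate(x, size=(299, 299), mode='bilinear', align_corners=False)

# A 512x352 batch is therefore downsampled before feature extraction:
batch = torch.rand(4, 3, 512, 352)
print(preprocess_for_inception(batch).shape)  # torch.Size([4, 3, 299, 299])
```

So regardless of the input resolution, the Inception features are extracted at 299x299.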

So I would like to know the script you used to evaluate 512x352 images. Thanks.

@ankanbhunia
Owner

The Inception network takes inputs of size (299, 299), so we need to resize the images to this size before calculating FID.

You are right that for higher-resolution images it does not quite make sense to reduce the dimension. However, this is the standard protocol, and other papers evaluate the same way.
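For readers following the thread, this standard protocol means: resize both real and generated images to 299x299, extract Inception features, fit a Gaussian (mean, covariance) to each feature set, and compute the Frechet distance between the two. A minimal sketch of that last step, in the spirit of pytorch-fid's calculate_frechet_distance (not the repository's exact code):

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID between two Gaussians fitted to Inception features:
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * sqrtm(sigma1 @ sigma2))."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
    if np.iscomplexobj(covmean):
        # numerical noise can introduce tiny imaginary components
        covmean = covmean.real
    return diff.dot(diff) + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```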

@ZihaoW123
Author

So I'm wondering if there is a difference between my code and yours for testing 512x352 images.

@nicolasugrinovic

@ZihaoW123 were you able to match the results for 256x176 images obtained with "utils/metrics.py" to the tables in the paper?
