[optim] add missing 'maximize' parameter to LBFGS, NAdam and RAdam optimizers #126642

Closed
KebabRonin opened this issue May 19, 2024 · 2 comments
Labels
actionable · module: loss (Problem is related to loss function) · module: optimizer (Related to torch.optim) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments


KebabRonin commented May 19, 2024

🚀 The feature, motivation and pitch

Currently, the LBFGS, NAdam and RAdam optimizers do not have the maximize parameter in their constructors, which means they cannot be used directly with objectives that should be maximized rather than minimized (like R2Score).
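For context, a minimal sketch of what the flag does, using Adam (which already supports it); the NAdam/RAdam/LBFGS calls in the comment at the end are the requested behavior, not the API at the time of filing:

```python
import torch

# With maximize=True the optimizer ascends the objective, so a score-style
# metric can go to backward() directly instead of being negated by hand.
w = torch.zeros(3, requires_grad=True)
opt = torch.optim.Adam([w], lr=0.1, maximize=True)  # Adam already has the flag

score = (w * torch.tensor([1.0, 2.0, 3.0])).sum()  # quantity we want to maximize
opt.zero_grad()
score.backward()
opt.step()  # w moves in the direction that increases `score`

# Requested here (not available when this issue was filed):
# torch.optim.NAdam([w], lr=0.1, maximize=True)
# torch.optim.RAdam([w], lr=0.1, maximize=True)
# torch.optim.LBFGS([w], lr=0.1, maximize=True)
```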

Alternatives

In case maximize can't be implemented due to a constraint of the optimizer, it would be nice to make it clear that maximize is intentionally not supported. Some possible suggestions:

  • Update the documentation to mention the reason maximize isn't supported for the optimizer
  • Still accept the parameter but raise an error when maximize=True (see the sketch below)
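
A rough sketch of that second option, assuming it would live in the optimizer constructor (illustrative class name and error message, not actual torch.optim code):

```python
import torch

# Illustrative only (not actual torch.optim code): accept the keyword for API
# consistency but reject maximize=True with a clear error.
class LBFGSLike(torch.optim.Optimizer):
    def __init__(self, params, lr=1.0, maximize=False):
        if maximize:
            raise ValueError(
                "This optimizer does not support maximize=True; "
                "negate your objective instead."
            )
        super().__init__(params, dict(lr=lr, maximize=maximize))
```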

Additional context

No response

cc @vincentqb @jbschlosser @albanD @janeyx99 @crcrpar

@drisspg added the module: loss (Problem is related to loss function), module: optimizer (Related to torch.optim) and triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) labels on May 20, 2024
Contributor

janeyx99 commented May 20, 2024

Yes, it is known that these do not support the flag, and the documentation for each excludes mentioning maximize for that reason.

It is easily actionable to add maximize for RAdam and NAdam + tests, perhaps easier than adding the docs lol. I may just add this for funsies later this week, but if someone wants to do this, go for it!

LBFGS I've gotta check; off the top of my head it should work as well, but the implementation is not immediately obvious to me.
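
For reference, the maximize convention in the existing torch.optim optimizers roughly amounts to negating the gradient before the update; a simplified sketch (not the actual RAdam/NAdam code):

```python
import torch

# Simplified sketch of the convention used by the maximize-aware optimizers
# (e.g. SGD, Adam): when maximize=True the gradient is negated before the
# update, turning descent into ascent. Not the actual RAdam/NAdam code.
def sgd_like_step(params, lr=0.01, maximize=False):
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            grad = -p.grad if maximize else p.grad
            p.add_(grad, alpha=-lr)  # p <- p - lr * grad (ascent if negated)
```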

@huihoaan
Contributor

@janeyx99 May I help add this? I have made a PR for RAdam. If #126765 is OK, I will add another PR for NAdam.

pytorchmergebot pushed a commit that referenced this issue May 27, 2024
Fixes #126642

I referenced the maximize implementation in `Adam` and added `RAdam`'s maximize flag. If this PR is OK, I will add another PR for `NAdam`.
Pull Request resolved: #126765
Approved by: https://github.com/janeyx99
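
A hypothetical sanity check of the kind such a change might be verified with (not the actual test from #126765), assuming a PyTorch build that includes RAdam's maximize flag:

```python
import torch

# Hypothetical sanity check (not the actual test from the PR): maximizing an
# objective should give the same trajectory as minimizing its negation.
def run(maximize):
    torch.manual_seed(0)
    w = torch.ones(4, requires_grad=True)
    opt = torch.optim.RAdam([w], lr=0.1, maximize=maximize)
    for _ in range(5):
        obj = (w ** 2).sum()
        loss = obj if maximize else -obj
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()

assert torch.allclose(run(True), run(False))
```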
titaiwangms pushed a commit to titaiwangms/pytorch that referenced this issue May 28, 2024
Aidyn-A pushed a commit to tinglvv/pytorch that referenced this issue May 30, 2024