Hi,

I'm using ReduceLROnPlateau and EarlyStopping with my own custom Concordance Correlation monitor metric (RhoC). I want my networks to train with `.fit` with the callbacks inactive for the first 50 or 100 epochs and only active after that. As far as I'm aware, there aren't arguments to do this, so I created my own version of the lr_scheduler module and modified the `LRScheduler(Callback)` class to take an argument called `epoch_start`.
I've shown the modified class functions at the bottom (`__init__`, `kwargs`, and `on_epoch_end`).
This does the job for me for now, and I can do a similar thing to modify EarlyStopping. I just wanted to check whether there is actually a way to do this already, or a workaround that doesn't require modifying the source code. I also looked into SequentialLR and could apply it in a similar way to this post, just with ConstantLR for the first 50 epochs. That would work for plain PyTorch, or if I were manually coding my own fit function, but I'm unsure how to integrate SequentialLR with skorch's fit and callbacks system.
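For plain PyTorch, what I mean is roughly the sketch below (the model, optimizer, and second-phase scheduler are just placeholders; ReduceLROnPlateau itself can't sit inside SequentialLR because its `step()` needs a metric, so ExponentialLR stands in for the later phase here):

```python
import torch
from torch.optim.lr_scheduler import ConstantLR, ExponentialLR, SequentialLR

# Placeholder model and optimizer, just to show the scheduler wiring.
model = torch.nn.Linear(10, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

scheduler = SequentialLR(
    optimizer,
    schedulers=[
        ConstantLR(optimizer, factor=1.0, total_iters=50),  # hold the lr for 50 epochs
        ExponentialLR(optimizer, gamma=0.95),                # then decay by 5% per epoch
    ],
    milestones=[50],
)

for epoch in range(100):
    # ... forward/backward passes would go here ...
    optimizer.step()
    scheduler.step()
```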
So my questions are:
1. Other than modifying the source code, how can I add an activation delay to callbacks based on epoch number? Is there a way to do this that already exists with skorch's `.fit` and callbacks?
2. How could I implement SequentialLR, or an equivalent set of learning rate schedulers, in callbacks?
This is more for interest, as modifying the source code works for me. Any pointers, let me know :)
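One idea I've had for question 2 is to sidestep SequentialLR entirely and express the fixed part of the schedule as a single LambdaLR policy, since the standard LRScheduler callback forwards extra keyword arguments to the policy (see the `kwargs` method below). This only covers fixed schedules, not ReduceLROnPlateau's metric-driven behaviour; a rough, untested sketch:

```python
from torch.optim.lr_scheduler import LambdaLR
from skorch.callbacks import LRScheduler

def constant_then_decay(epoch, start=50, gamma=0.95):
    """Multiplicative lr factor: 1.0 for the first `start` epochs, then 5% decay per epoch."""
    return 1.0 if epoch < start else gamma ** (epoch - start)

# LambdaLR multiplies the initial lr by the factor returned for each epoch.
lr_callback = LRScheduler(policy=LambdaLR, lr_lambda=constant_then_decay)
```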
My modified `LRScheduler` methods (`__init__`, `kwargs`, `on_epoch_end`):

```python
def __init__(self,
             policy='WarmRestartLR',
             monitor='train_loss',
             event_name="event_lr",
             step_every='epoch',
             epoch_start=1,
             **kwargs):
    self.policy = policy
    self.monitor = monitor
    self.event_name = event_name
    self.step_every = step_every
    self.epoch_start = epoch_start
    # if 'epoch_start' in kwargs:
    #     del kwargs['epoch_start']
    vars(self).update(kwargs)

@property
def kwargs(self):
    # These are the parameters that are passed to the
    # scheduler. Parameters that don't belong there must be
    # excluded.
    excluded = ('policy', 'monitor', 'event_name', 'step_every', 'epoch_start')
    kwargs = {key: val for key, val in vars(self).items()
              if not (key in excluded or key.endswith('_'))}
    return kwargs

def on_epoch_end(self, net, **kwargs):
    if self.step_every != 'epoch':
        return
    if isinstance(self.lr_scheduler_, ReduceLROnPlateau):
        if callable(self.monitor):
            score = self.monitor(net)
        else:
            try:
                score = net.history[-1, self.monitor]
            except KeyError as e:
                raise ValueError(
                    f"'{self.monitor}' was not found in history. A "
                    f"Scoring callback with name='{self.monitor}' "
                    "should be placed before the LRScheduler callback"
                ) from e

        n_epoch = len(net.history)
        if n_epoch <= self.epoch_start:
            print("Not starting lr scheduler yet")
            return
        else:
            self._step(net, self.lr_scheduler_, score=score)
        # ReduceLROnPlateau does not expose the current lr so it can't be recorded
    else:
        if (
                (self.event_name is not None)
                and hasattr(self.lr_scheduler_, "get_last_lr")
        ):
            net.history.record(self.event_name,
                               self.lr_scheduler_.get_last_lr()[0])
        self._step(net, self.lr_scheduler_)
```
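For comparison, I think the same delay could be had without copying the module at all, by subclassing LRScheduler and overriding `on_epoch_end` (plus the `kwargs` property, so `epoch_start` isn't forwarded to the torch scheduler). A rough, untested sketch, assuming the base class behaves as in the excerpt above:

```python
from skorch.callbacks import LRScheduler

class DelayedLRScheduler(LRScheduler):
    """LRScheduler that does nothing for the first `epoch_start` epochs."""

    def __init__(self, *args, epoch_start=50, **kwargs):
        super().__init__(*args, **kwargs)
        self.epoch_start = epoch_start

    @property
    def kwargs(self):
        # Same idea as adding 'epoch_start' to the excluded tuple above:
        # keep it out of the kwargs passed to the torch scheduler.
        kw = super().kwargs
        kw.pop('epoch_start', None)
        return kw

    def on_epoch_end(self, net, **kwargs):
        # Skip scheduler stepping until the delay has elapsed.
        if len(net.history) <= self.epoch_start:
            return
        super().on_epoch_end(net, **kwargs)
```

A similarly small subclass overriding `on_epoch_end` would presumably work for EarlyStopping, which has no scheduler kwargs to worry about.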
My callbacks are defined like:
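Roughly like this (`rhoc_score` stands in for my Concordance Correlation scoring callable, and `LRScheduler` here is the modified version with `epoch_start`; the exact numbers are placeholders):

```python
from torch.optim.lr_scheduler import ReduceLROnPlateau
from skorch.callbacks import EarlyStopping, EpochScoring

# EpochScoring comes before LRScheduler so that 'valid_rhoc' is already in the
# history when the scheduler callback looks it up.
callbacks = [
    EpochScoring(rhoc_score, name='valid_rhoc', lower_is_better=False),
    LRScheduler(policy=ReduceLROnPlateau, monitor='valid_rhoc', mode='max',
                factor=0.5, patience=10, epoch_start=50),
    EarlyStopping(monitor='valid_rhoc', lower_is_better=False, patience=25),
]
```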