Hello, I used the weights saved after the training step for inference, but the results do not match those generated in the last validation step during training. What could be the reason for this?
Hi. Typically, this happens because the randomly sampled noise at inference differs from the noise used during the training-stage validation.
Could you please provide more details? For example, are you training MotionDirector on a single video or on multiple videos?
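To illustrate the point about random noise: diffusion inference starts from freshly sampled Gaussian noise, so two runs with identical weights still produce different outputs unless the random source is seeded. Below is a minimal stdlib sketch of that effect; `sample_noise` is a hypothetical stand-in for the latent-noise sampling a real pipeline performs, not MotionDirector's actual code.

```python
import random

def sample_noise(n, seed=None):
    # Hypothetical stand-in for the Gaussian latent noise a diffusion
    # pipeline draws at the start of inference.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

# Without a fixed seed, two runs start from different noise,
# so the generated videos differ even with identical weights.
a = sample_noise(4)
b = sample_noise(4)

# With the same seed, both runs start from identical noise,
# which is what makes a validation sample reproducible.
c = sample_noise(4, seed=42)
d = sample_noise(4, seed=42)
assert c == d
```

In a diffusers-style inference script, the equivalent fix (assuming your pipeline accepts the standard `generator` argument) is to pass `generator=torch.Generator(device).manual_seed(seed)` with the same seed the validation step used.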
Thanks for your reply, that helps. The mismatch is probably due to different settings between training and inference. I have another question: how well does MotionDirector generalize? For example, if I use a custom DreamBooth weight that differs from the one used during training, will the temporal weights trained by MotionDirector still work in that situation?
If you use DreamBooth to fine-tune only the spatial layers, I think it should work, just like the results shown here. If the temporal layers are also changed, I'm not sure what will happen; you can try it out. Looking forward to your insights.