
Convert SimSwap .pth model to .onnx #437

Open
northumber opened this issue Jul 31, 2023 · 4 comments

Comments

@northumber

Hi, I want to convert the pretrained SimSwap 512 .pth model to the .onnx file format.

I'm not very experienced with Python, so I don't really know what to do. From what I understand, the code to do so looks something like this:

import torch
import torch.onnx

# ModelClass is a placeholder for the SimSwap network definition (the part I'm missing)
torch_model = ModelClass()
model_path = "model.pth"
batch_size = 1

# load the trained weights and switch to inference mode
torch_model.load_state_dict(torch.load(model_path))
torch_model.eval()

# dummy 512x512 input used for tracing
x = torch.randn(batch_size, 3, 512, 512, requires_grad=True)
torch_out = torch_model(x)

torch.onnx.export(
    torch_model,
    x,
    "model.onnx",
    export_params=True,
    opset_version=11,
    do_constant_folding=True,
    input_names=['input'],
    output_names=['output'],
    dynamic_axes={'input': {0: 'batch_size'}, 'output': {0: 'batch_size'}},
)

For a typical pretrained torch model, the torch_model = line is something like models.resnet50(pretrained=True), but a custom model requires its own model class, like ModelClass() in the code above. I don't know what to write there, or where to find the model class for the SimSwap 512 model.
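
For reference, the torchvision case I mean looks roughly like this (resnet50 with a 224x224 input is just an illustration; the SimSwap model class is the part I can't figure out):

import torch
import torch.onnx
import torchvision.models as models

# a stock pretrained model ships with its class, so the export works out of the box
torch_model = models.resnet50(pretrained=True)
torch_model.eval()

x = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    torch_model,
    x,
    "resnet50.onnx",
    export_params=True,
    opset_version=11,
    input_names=['input'],
    output_names=['output'],
)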

@TimothyAlexisVass

TimothyAlexisVass commented Sep 12, 2023

There is more to this than just that model, so you won't be able to simply replace inswapper_128 with it anyway. Your best bet is to create a new plugin for your UI using this, instead of trying to change roop to work the way you want.

@TaiTair

TaiTair commented Jan 10, 2024

I did a bit of digging into the repo and managed to find where the appropriate model is declared by looking at train.py.
The model needs a whole bunch of optional parameters, so I just hard-coded those to get past the errors. You can also find them inside train.py.

I cloned the repo, put the .pth inside checkpoints, made a symlink to the .pth file inside checkpoints/simswap, and then wrote this convert.py script, which I put in the root of the repo.

import torch
import torch.onnx
from models.projected_model import fsModel

# quick-and-dirty options object; these are the values train.py expects,
# hard-coded here just to get fsModel.initialize() to run
opt = lambda: None
opt.name = 'simswap'
opt.verbose = True
opt.isTrain = False
opt.resize_or_crop = 'none'
opt.crop_size = 512
opt.Arc_path = '/redacted/SimSwap/arcface_model/arcface_checkpoint.tar'
opt.gpu_ids = '0'
opt.which_epoch = 550000
opt.gan_mode = 'original'
opt.fp16 = True
opt.checkpoints_dir = '/redacted/SimSwap/checkpoints'
opt.Gdeep = False

model = fsModel()
model.initialize(opt)
model.eval()

# dummy 512x512 input for tracing
dummy_input = torch.randn(1, 3, 512, 512, requires_grad=True)
torch.onnx.export(
    model=model,
    args=dummy_input,
    f="/redacted/SimSwap/checkpoints/550000_net_G.onnx",
    export_params=True,
)

I had to modify BaseModel so that its forward function takes self and *args; otherwise it would complain about being passed two parameters when I exported the model.

def forward(self, *args):
    pass

convert.py seems to run fine and even exports an .onnx now, but the file only weighs 36 bytes unfortunately... I'm not too sure what's in that file, but I don't think it will be of any use.

I'm still learning a lot about tensors, TensorFlow and machine learning. From what I gather, though, .onnx files typically contain the inference graph as well, while .pth files only contain the weights of a trained model, which can cause incompatibilities when converting. So if I understand correctly, converting the weights into the ONNX format is feasible, but the resulting .onnx file would be missing the crucial inference graph and thus be useless?
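
One way to sanity-check that is to open the checkpoint and look at what it actually holds (this assumes it is a plain state_dict, which is how I believe SimSwap saves its generator weights; the filename below is just my guess for the symlinked .pth):

import torch

# load the checkpoint on CPU and inspect its contents
state = torch.load("/redacted/SimSwap/checkpoints/simswap/550000_net_G.pth", map_location="cpu")
print(type(state))                      # an OrderedDict if it is a plain state_dict
for name, tensor in list(state.items())[:10]:
    print(name, tuple(tensor.shape))    # parameter names and shapes, no graph or forward code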

Maybe my little script will put someone on the right path.
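
One follow-up idea (untested): my guess is that the 36-byte file is just an empty graph, since the stubbed-out forward returns nothing for the tracer to record. If I'm reading models/projected_model.py right, the actual generator lives in model.netG and its forward takes the target image plus a 512-dim ArcFace identity embedding, so exporting that sub-network directly might work better:

img = torch.randn(1, 3, 512, 512)
latent = torch.randn(1, 512)          # ArcFace identity embedding of the source face
torch.onnx.export(
    model.netG,                       # export the generator itself, not the fsModel wrapper
    (img, latent),
    "/redacted/SimSwap/checkpoints/550000_net_G.onnx",
    export_params=True,
    opset_version=11,
    input_names=['target', 'source_embedding'],
    output_names=['output'],
)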

@henryruhs

henryruhs commented Apr 17, 2024

Whoever landed here, thank me later: https://huggingface.co/netrunner-exe/Insight-Swap-models-onnx/tree/main

@TaiTair

TaiTair commented Apr 26, 2024

Whoever landed here, thank me later: https://huggingface.co/netrunner-exe/Insight-Swap-models-onnx/tree/main

Thanks man, I used this to make a quick & dirty implementation of SimSwap for ComfyUI: https://github.com/TaiTair/comfyui-simswap
