Request for guidance regarding training #255
Hello,
I trained a 512 model on VGGFace2 for 390k steps and got the following video output. I have also attached one of the samples obtained at step 390k. Do you know where I might be messing up?

I used the SimSwap `colab.ipynb`.
Hello! Maybe a fix is missing from the code? Can you share your checkpoints?
Did you set `--Gdeep True`? Even though 400k steps should be the minimum, you should have seen semi-normal results at 390k. Make sure your weights have been updated properly and that you are pointing to them when launching testing. I believe the results will get better with more iterations, but the current output does not make sense.
I think this is still a problem with the code: it seems that some layers do not load properly. Even after 80,000 steps I already had a bad result, although the faces did change, albeit in poor quality. But unfortunately, without a checkpoint it is impossible to test it against the code and say for sure, or to help...
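One way to check whether layers silently fail to load is to diff the keys the model expects against the keys the checkpoint actually contains. A minimal sketch with hypothetical layer names (not SimSwap's real keys); with PyTorch, `model.load_state_dict(ckpt, strict=False)` returns the same two lists:

```python
# Sketch: diagnosing layers that silently fail to load from a checkpoint.
# The layer names below are hypothetical stand-ins, not SimSwap's real keys.

def diff_state_dicts(model_keys, checkpoint_keys):
    """Return keys the model expects but the checkpoint lacks, and vice versa."""
    model_keys = set(model_keys)
    checkpoint_keys = set(checkpoint_keys)
    missing = sorted(model_keys - checkpoint_keys)      # stay randomly initialized
    unexpected = sorted(checkpoint_keys - model_keys)   # silently dropped on load
    return missing, unexpected

model = ["encoder.0.weight", "encoder.0.bias", "decoder.0.weight"]
ckpt = ["encoder.0.weight", "encoder.0.bias", "up_layer.0.weight"]

missing, unexpected = diff_state_dicts(model, ckpt)
print("missing:", missing)        # layers left untrained after loading
print("unexpected:", unexpected)  # checkpoint weights that go nowhere
```

If `missing` is non-empty after loading, those layers run with random weights at test time, which would explain garbage output despite good training samples.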
I see. The training code seems very straightforward; I lack knowledge in this particular area, but the generator functions may not be set up properly. I really need to learn this next. There's just so much to learn 😭.
Oh man, can you share how to train on VGGFace2 at 512? The dataset is 100 GB, and my Colab only has 70 GB free.
I do not think that he will share his checkpoints or dataset. During my observation of this repository, not many people have contributed to the SimSwap community. Of course, really cool guys like @ftaker887, @instant-high, @woctezuma and others helped solve problems, wrote GUIs, and made suggestions for improving the code (sorry if I forgot to mention someone). But unfortunately, most users are used to receiving help or resources and prefer not to contribute anything in return. This is my opinion; correct me if I'm wrong.
Turn off the mask and print the results.
Judging from the training output, the training itself should be fine.
The problem likely occurs in the test code.
There may be a compatibility problem: the test code and the training model's forward code are not compatible.
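A quick way to probe this kind of train/test mismatch is to check whether the test code's call actually matches the model's `forward` signature. A sketch with a hypothetical generator class (not SimSwap's real interface):

```python
import inspect

# Hypothetical generator whose forward takes five arguments, roughly mirroring
# the call pattern seen in this thread: (img_id, img_att, latent_id, latent_att, for_G).
class TrainGenerator:
    def forward(self, img_id, img_att, latent_id, latent_att, for_G):
        return "swapped"

def check_call_compat(fn, *args):
    """Return True if fn accepts exactly these positional arguments."""
    try:
        inspect.signature(fn).bind(*args)
        return True
    except TypeError:
        return False

g = TrainGenerator()
print(check_call_compat(g.forward, None, "frame", "id_vec", None, True))  # True
print(check_call_compat(g.forward, "frame", "id_vec"))                    # False
```

If the test script was written against an older `forward` signature, this kind of check surfaces the mismatch immediately instead of failing deep inside the model.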
Hi @neuralchen! I am trying to train at 224; epoch 88,500.
Hope this helps solve the problem
With mask:

Without mask:

I also have the same problem. The model trains successfully and the training sample data looks normal, but I get an error while testing.
I think the output is too white; is this normal?
I did a little experiment with the code. If, in `videoswap.py` at line 89, you change

`swap_result = swap_model(None, frame_align_crop_tenor, id_vetor, None, True)[0]`

to

`swap_result = swap_model(None, spNorm(frame_align_crop_tenor), id_vetor, None, True)[0]`

the result will be slightly better, but the face will be very contrasted:
But this affects the other public models, `people` and `512`:
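For context, `spNorm`-style layers typically apply per-channel mean/std normalization, so applying it where the model does not expect it (or skipping it where it does) shifts the input distribution. A minimal sketch, assuming ImageNet statistics (an assumption about SimSwap's `SpecificNorm`, not verified from its source):

```python
import numpy as np

# Per-channel mean/std normalization in CHW layout. The ImageNet statistics
# below are an ASSUMPTION about what spNorm does, not taken from SimSwap.
MEAN = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
STD = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

def sp_norm(x):
    """x: float array in [0, 1] with shape (3, H, W)."""
    return (x - MEAN) / STD

x = np.full((3, 2, 2), 0.5)  # a flat mid-gray test image
y = sp_norm(x)
# Feeding an un-normalized tensor to a model trained on normalized inputs
# (or normalizing twice) plausibly produces the washed-out / high-contrast
# faces described in this thread.
```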

Perhaps this should be handled by parsing the `--name` arg from `base_options` (I didn't manage to do it) and adding

`input_norm = spNorm(frame_align_crop_tenor) if name not in ('people', '512') else frame_align_crop_tenor`

and

`swap_result = swap_model(None, input_norm, id_vetor, None, True)[0]`

Perhaps some layer of the model does not load, or does not load correctly. If we compare with the screenshot of the white face, then in addition to the high contrast there are differences in the eyes, teeth, etc.
Send your ckpt to my email; I will check what has happened.
I sent the link to the Gmail address in your profile.
Hey guys,
Sorry for the late reply; I was busy with some things. Here is the checkpoint I got the results from. I have not debugged it yet; I will look into it and get back to you soon. Also, I did set `--Gdeep True`.

I tried to test the checkpoints. The effect seen in the video was due to the fact that in the test code the `crop_size` option was tied to the beta 512 checkpoint, and `--crop_size 512` tries to load beta 512 from epoch 550000. I solved it this way: #246. But it didn't solve every problem, and now instead of a multi-colored disco face like in the video we get just a boring one-color one :)

With mask


Without mask
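One way to avoid the `--crop_size` / checkpoint mix-up described above is to map crop sizes to checkpoint paths explicitly instead of letting a size imply a particular (beta) checkpoint. A sketch with hypothetical paths, not SimSwap's actual layout:

```python
# Hypothetical crop-size -> checkpoint registry, so --crop_size 512 cannot
# silently fall back to the beta-512 weights. Paths are illustrative only.
CHECKPOINTS = {
    224: "./checkpoints/people/latest_net_G.pth",
    512: "./checkpoints/my_512_run/latest_net_G.pth",  # custom run, not beta 512
}

def checkpoint_for(crop_size):
    """Return the checkpoint registered for this crop size, or fail loudly."""
    try:
        return CHECKPOINTS[crop_size]
    except KeyError:
        raise ValueError(f"no checkpoint registered for crop_size={crop_size}")

print(checkpoint_for(512))
```

Failing loudly on an unregistered size is the point: loading the wrong weights produces plausible-looking but broken swaps, which is much harder to diagnose than an immediate error.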
Finally someone has the same problem as me: https://github.com/neuralchen/SimSwap/issues/251
For the 512 version, `--netG` should be loaded in the same way as in `fs_network_fix.py` instead of `fix_network_512.py`.
After doing this, the error like the one above was gone...
But the face-swap results were still as bad as above...
I tried changing the way the model is initialized, because at training time the generated face-swap results are normal, but the test output is still bad...
I don't understand it.
Hey bud, was wondering if you had trained this model any further. Thanks for posting the other one, greatly appreciated.