How to use the trained model? I trained on my own dataset; after training there are 4 `.pth` files (`100000_net_D.pth`, `100000_net_G.pth`, `100000_optim_D.pth`, `100000_optim_G.pth`). How can I use these model files? #245

Open
opened 2022-04-26 07:56:10 +02:00 by doctorcui · 19 comments
doctorcui commented 2022-04-26 07:56:10 +02:00 (Migrated from github.com)
No description provided.
doctorcui commented 2022-04-26 07:59:38 +02:00 (Migrated from github.com)

I put the newly trained `.pth` files in `./checkpoints/simswap512`. When I use `python test_video_swapsingle.py --crop_size 224 --use_mask --name simswap512 --Arc_path arcface_model/arcface_checkpoint.tar --pic_a_path ./demo_file/Iron_man.jpg --video_path ./demo_file/multi_people_1080p.mp4 --output_path ./output/multi_test_swapsingle.mp4 --temp_path ./temp_results` it does not work.
netrunner-exe commented 2022-04-26 09:19:25 +02:00 (Migrated from github.com)

What size did you train, 512 or 224? If it was 224, try adding `--which_epoch`. In your case: `python test_video_swapsingle.py --crop_size 224 --use_mask --which_epoch 100000 --name simswap512 --Arc_path arcface_model/arcface_checkpoint.tar --pic_a_path ./demo_file/Iron_man.jpg --video_path ./demo_file/multi_people_1080p.mp4 --output_path ./output/multi_test_swapsingle.mp4 --temp_path ./temp_results`
netrunner-exe commented 2022-04-26 09:27:16 +02:00 (Migrated from github.com)

If you trained 512, try commenting out or removing lines 49 and 50 in `test_video_swapsingle.py`. Then everything is the same as in the example above, except `--crop_size`: change 224 to 512.
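The advice above refers to a guard in the test script that overrides options when `--crop_size 512` is passed. The exact contents of lines 49 and 50 are an assumption here, but the idea can be sketched as follows: if the script hard-codes the name and epoch of the officially released 512 checkpoint, a self-trained 512 model can never be selected until that override is removed. The values `'512'` and `550000` below are hypothetical stand-ins for whatever the script hard-codes.

```python
# Hedged sketch of a crop_size guard like the one the comment refers to.
# The hard-coded values are illustrative assumptions, not verified contents
# of test_video_swapsingle.py.

def resolve_opts(crop_size, name, which_epoch):
    """Return the effective options after the script's 512 override."""
    opts = {'name': name, 'which_epoch': which_epoch}
    if crop_size == 512:
        # Lines like these would clobber the user's --name/--which_epoch
        # with values for the released 512 checkpoint; commenting them out
        # lets the flags point at your own trained model instead.
        opts['name'] = '512'            # hypothetical hard-coded override
        opts['which_epoch'] = 550000    # hypothetical hard-coded override
    return opts

print(resolve_opts(224, 'simswap512', 100000))
# → {'name': 'simswap512', 'which_epoch': 100000}
print(resolve_opts(512, 'simswap512', 100000))
# → {'name': '512', 'which_epoch': 550000}
```

With the override in place, `--crop_size 512` silently ignores `--name simswap512 --which_epoch 100000`, which is why removing those two lines is suggested for self-trained 512 models.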
netrunner-exe commented 2022-04-26 09:35:07 +02:00 (Migrated from github.com)

What dataset and GPU did you use for training?
doctorcui commented 2022-04-26 12:22:05 +02:00 (Migrated from github.com)

Thank you for your answer. Another error appears:

`Pretrained network G has fewer layers; The following are not initialized: ['down0', 'first_layer', 'last_layer', 'up0']`
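This warning typically comes from a pix2pixHD-style partial loader: it copies only the checkpoint keys that also exist in the model's state dict and reports the layer names it could not fill. The sketch below is a simplified stand-in (plain dicts instead of PyTorch state dicts, and the key names are assumptions), but the key-matching logic is the same: a generator built with extra layers (`down0`, `up0`, etc.) loading a checkpoint trained without them produces exactly this message.

```python
# Hedged sketch: how a pix2pixHD-style loader can produce the
# "Pretrained network G has fewer layers" warning. Dicts stand in for
# PyTorch state_dicts (layer name -> weights); key names are illustrative.

def partial_load(model_state, checkpoint_state):
    """Copy only keys present in both dicts; report the missing layers."""
    not_initialized = set()
    for key in model_state:
        if key in checkpoint_state:
            model_state[key] = checkpoint_state[key]
        else:
            # Keep the top-level layer name, e.g. 'down0.0.weight' -> 'down0'
            not_initialized.add(key.split('.')[0])
    if not_initialized:
        print('Pretrained network G has fewer layers; '
              'The following are not initialized:')
        print(sorted(not_initialized))
    return model_state, sorted(not_initialized)

# A deeper generator has layers the shallower checkpoint lacks:
model = {'first_layer.weight': 0, 'down0.weight': 0, 'down1.weight': 0,
         'up0.weight': 0, 'last_layer.weight': 0}
ckpt = {'down1.weight': 1}  # hypothetical checkpoint from a shallower net
_, missing = partial_load(model, ckpt)
print(missing)
# → ['down0', 'first_layer', 'last_layer', 'up0']
```

So the warning means the architecture being built at test time does not match the architecture that was trained, not that the checkpoint file is corrupt.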
doctorcui commented 2022-04-26 12:25:51 +02:00 (Migrated from github.com)

> If you trained 512, try commenting out or removing lines 49 and 50 in `test_video_swapsingle.py`. Then everything is the same as in the example above, except `--crop_size`: change 224 to 512.

I downloaded 8000 pictures from the internet, divided them into 80 groups of 100 images each, and followed "Generate the HQ dataset by yourself. (If you want to do so)" to make my dataset.
netrunner-exe commented 2022-04-26 12:58:03 +02:00 (Migrated from github.com)

> Thank you for your answer. Another error appears: `Pretrained network G has fewer layers; The following are not initialized: ['down0', 'first_layer', 'last_layer', 'up0']`

Try #246, and set `--crop_size` to exactly what you trained.
netrunner-exe commented 2022-04-26 13:00:28 +02:00 (Migrated from github.com)

> I downloaded 8000 pictures from the internet, divided them into 80 groups of 100 images each, and followed "Generate the HQ dataset by yourself. (If you want to do so)" to make my dataset.

8000 images is very little for training. The cropped and aligned VGGFace2 dataset contains around 600,000 images, and it is recommended to train for about 400k-600k iterations.
doctorcui commented 2022-04-26 13:30:29 +02:00 (Migrated from github.com)

> 8000 images is very little for training. The cropped and aligned VGGFace2 dataset contains around 600,000 images, and it is recommended to train for about 400k-600k iterations.

Thanks for your advice. I used a 512x512 dataset to train, but when I use `--crop_size 512` to test, an error occurs: `Pretrained network G has fewer layers; The following are not initialized: ['down0', 'first_layer', 'last_layer', 'up0']`
doctorcui commented 2022-04-26 13:33:17 +02:00 (Migrated from github.com)

> Thanks for your advice. I used a 512x512 dataset to train, but when I use `--crop_size 512` to test, an error occurs: `Pretrained network G has fewer layers; The following are not initialized: ['down0', 'first_layer', 'last_layer', 'up0']`

It's strange that when I use `--crop_size 224`, that error does not appear, but the result is really bad.
netrunner-exe commented 2022-04-26 13:57:46 +02:00 (Migrated from github.com)

Did you train 512 or 224? Not the dataset; I mean the `--crop_size` of the command that you used for training.
doctorcui commented 2022-04-26 15:07:59 +02:00 (Migrated from github.com)

`python train.py --name simswap512_test --batchSize 16 --gpu_ids 0 --dataset /path/to/VGGFace2HQ --Gdeep True` → `train.py` doesn't have a `--crop_size` 512/224 parameter.
doctorcui commented 2022-04-26 15:09:50 +02:00 (Migrated from github.com)

> Did you train 512 or 224? Not the dataset; I mean the `--crop_size` of the command that you used for training.

Between 224 and 512, the ONLY difference is `--Gdeep True` or `False`.
netrunner-exe commented 2022-04-26 15:42:18 +02:00 (Migrated from github.com)

> Between 224 and 512, the ONLY difference is `--Gdeep True` or `False`.

Honestly, I don't understand what you mean. @neuralchen wrote very understandable instructions: if you train 224, use command 1 and a dataset cropped to 224x224; if 512, use command 2 and a dataset cropped to 512x512. The error appears if you trained incorrectly or used the `--crop_size` option incorrectly; I mean it occurs if you trained 224 and put `--crop_size 512` at inference, and vice versa. At least that's how it appeared to me.
doctorcui commented 2022-04-26 17:51:38 +02:00 (Migrated from github.com)

Thank you again for your advice! Yes, I know what you mean, but I used a 512x512 dataset and followed the instructions (command 2) to train. When I test the model, I set `--crop_size 512` and the error still happens. Did you successfully complete the 512 training?
doctorcui commented 2022-04-27 05:25:25 +02:00 (Migrated from github.com)

> Honestly, I don't understand what you mean. @neuralchen wrote very understandable instructions: if you train 224, use command 1 and a dataset cropped to 224x224; if 512, use command 2 and a dataset cropped to 512x512.

It's strange that @neuralchen gives two train commands whose only difference is `--Gdeep`. During training, the G model is built from `fs_networks_fix.py`, but when I test, the G model is initialized from `fs_networks_512.py`. That's wrong, because they have different `nn.Sequential` levels. We should use `fs_networks.py` to initialize the trained model; the only thing we need to change is to set `deep` to True or False.
netrunner-exe commented 2022-04-27 07:26:07 +02:00 (Migrated from github.com)

> It's strange that @neuralchen gives two train commands whose only difference is `--Gdeep`. During training, the G model is built from `fs_networks_fix.py`, but when I test, the G model is initialized from `fs_networks_512.py`. That's wrong, because they have different `nn.Sequential` levels.

Perhaps this is necessary in order to correctly train the 512 model, so that the result will be better than the previously published 512 beta.
doctorcui commented 2022-04-27 14:16:09 +02:00 (Migrated from github.com)

> Perhaps this is necessary in order to correctly train the 512 model, so that the result will be better than the previously published 512 beta.

Maybe. So if you follow command 2 to train a 512 G model, running the test with `--crop_size 512` breaks with the model error. I tried `--crop_size 224` and set `deep=True` on line 59 of `fs_model.py`, and finally it works. (PS: I'm very sure I used train command 2, `python train.py --name simswap512_test --batchSize 16 --gpu_ids 0 --dataset /path/to/VGGFace2HQ --Gdeep True`; I trained a 512 model, but it works as 224.)
neuralchen commented 2022-05-03 16:27:18 +02:00 (Migrated from github.com)

The `--Gdeep True`/`False` option is designed to optionally add one downscaling layer and one upsampling layer. This design increases the receptive field of the backbone when processing large images, e.g., 512.
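The effect of an extra downscale/upsample pair can be illustrated with simple size arithmetic. The sketch below is a stand-in for the real convolutional layers, and the layer counts are illustrative assumptions rather than the exact SimSwap architecture; the point is only that one more stride-2 downscale halves the bottleneck resolution, so each residual block covers a larger fraction of the face at 512 input.

```python
# Hedged sketch of the --Gdeep idea: a flag adds one extra downscale
# (and a matching upsample on the decoder side). The number of base
# downscales (3) is an illustrative assumption.

def bottleneck_size(input_size, n_down):
    """Spatial size at the bottleneck after n_down stride-2 downscales."""
    size = input_size
    for _ in range(n_down):
        size //= 2
    return size

# Without the extra layer, a 512 input leaves a large bottleneck,
# so each block sees only a small fraction of the face:
print(bottleneck_size(512, 3))   # → 64
# Gdeep=True adds one more downscale, halving the bottleneck and
# enlarging the effective receptive field relative to the face:
print(bottleneck_size(512, 4))   # → 32
# At 224, the shallower network already reaches a comparable bottleneck:
print(bottleneck_size(224, 3))   # → 28
```

This also explains the loading error discussed above: a generator built with the extra pair has layers (e.g. an additional down/up stage) that a checkpoint trained without it simply does not contain, so the architectures must match between training and inference.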