Commit Graph

36 Commits

Author SHA1 Message Date
iperov 388964e8d0 fix model export. Update tf2onnx to 1.9.2 2021-11-20 16:05:44 +04:00
iperov 853312e84c fix 2021-10-18 13:18:59 +04:00
iperov 71f22957a6 SAEHD: random scale range increased to -0.15..+0.15. Improved lr_dropout's ability to reach a lower loss value.
When random_warp is off, inter_AB network is no longer trained to keep the face more src-like.

Added option "Random hue/saturation/light intensity", applied to the src face set only at the input of the neural network. Stabilizes color perturbations during face swapping. Reduces the quality of color transfer by selecting the closest match in the src faceset, so the src faceset must be diverse enough. A typical fine value is 0.05.

AMP: random scale range increased to -0.15..+0.15. Improved lr_dropout's ability to reach a lower loss value.
2021-10-18 11:03:23 +04:00
iperov b1b5d6f482 AMP, SAEHD: in the sample generator, the random scaling range was increased from -0.05..+0.05 to -0.125..+0.125, which improves the generalization of faces. 2021-10-09 13:58:46 +04:00
iperov 9e0079c6a0 SAEHD, AMP: removed the implicit behaviour of periodically retraining the last 16 "high-loss" samples 2021-09-29 16:48:54 +04:00
iperov 8a897f236f fix blur_out_mask for tf 1.13, fix export model for dx12 live build. 2021-09-11 00:00:15 +04:00
iperov d301a4799f _ 2021-09-06 14:07:20 +04:00
iperov b1990d421a AMP: don't train inters with random_warp-off 2021-09-03 13:50:31 +04:00
iperov 3fe8ce86b1 AMP: enable inter_src+inter_dst training 2021-08-30 18:52:29 +04:00
iperov f346e35e7f fix 2021-08-30 12:57:19 +04:00
iperov 01f1a084b4 AMP: remove dst_denoise option 2021-08-29 11:46:51 +04:00
iperov e53be5e22d fix 2021-08-24 13:18:51 +04:00
iperov 91187ecb95 AMP: some architecture changes result in more stable training. New options blur_out_mask + rtm_dst_denoise. The sample processor count is no longer limited to 8; if you have an AMD processor with 16+ cores, increase the paging file size. 2021-08-24 12:58:23 +04:00
iperov 83b1412da7 Export AMP/SAEHD: added "Export quantized" option 2021-07-30 17:13:46 +04:00
iperov d26d579836 AMP: changed the help text for ct mode. Default inter dims are now 1024 2021-07-29 23:14:30 +04:00
iperov 9e1bc5a153 AMP: return lr_dropout option 2021-07-28 11:00:19 +04:00
iperov 4be135af60 AMP: last high-loss samples behaviour is now the same as in SAEHD 2021-07-28 08:58:32 +04:00
iperov 0748b8d043 AMP: removed the eyes/mouth priority option (now enabled by default), removed the masked training option (now enabled by default). 2021-07-17 21:58:57 +04:00
iperov 959a3530f8 AMP: default morph factor is now 0.5. The preview is shown with 1.0 2021-07-17 10:27:07 +04:00
iperov da8f33ee85 remove fp16 for AMP 2021-07-15 13:23:33 +04:00
iperov 7b8991a3cc fix help 2021-07-15 00:50:34 +04:00
iperov 35877dbfd7 AMP, SAEHD: added option use_fp16: increases training/inference speed and reduces model size, but the model may crash. Enable it only after 1-5k iters.
AMP: removed the lr_dropout option; it is now enabled by default.
2021-07-14 23:03:54 +04:00
iperov ee1bc83a14 AMP, SAEHD: reverted GAN to the December version. 2021-07-07 21:45:43 +04:00
iperov 446b95942f fix 2021-07-01 15:08:11 +04:00
iperov 0444a9a565 default morph factor is now 0.1 2021-07-01 14:29:11 +04:00
iperov 075422e0c5 amp: fixes, removed pretrain option 2021-07-01 14:25:54 +04:00
iperov 63c794b3d0 rename dump_dflive to export_dfm 2021-06-28 18:26:54 +04:00
iperov 9d6b6feb1f _ 2021-06-28 14:27:09 +04:00
iperov 90a74efd89 fix 2021-06-26 23:06:30 +04:00
iperov 3c6bbe22b9 AMP: added Inter dimensions param. The model itself is not changed. Should be equal to or greater than the AutoEncoder dimensions. More dims are better, but require more VRAM. You can fine-tune the model size to fit your GPU. 2021-06-26 10:55:33 +04:00
iperov 5783191849 AMP: code refactoring, fix preview history
added dumpdflive command
2021-06-26 10:44:41 +04:00
iperov 62f1d57871 AMP: fix a merge bug where morph_factor was always = 1.0 2021-06-14 17:58:01 +04:00
iperov 5fac5ee4f3 AMP model: added 'morph_factor' option. [0.1 .. 0.5]
The smaller the value, the more src-like the facial expressions will appear.
The larger the value, the less capacity the neural network has to train a large dst faceset.
A typical fine value is 0.33

AMP model: added ‘pretrain’ mode as in SAEHD
Windows build: Default pretrain dataset is updated with applied Generic XSeg mask
2021-06-01 09:37:55 +04:00
iperov e52b53f87c AMP fix 2021-05-30 09:24:23 +04:00
iperov 11a7993238 fix 2021-05-29 22:23:07 +04:00
iperov 6f86c68e65 add AMP model 2021-05-29 18:57:35 +04:00