Advice on quality of output (not an issue) #259
In case someone overlooked it, you can preserve quality a lot better by changing your temp_directory files to PNG, which is lossless, instead of JPG, which is designed for compression and definitely loses quality. The folder will quickly grow in size because of the uncompressed nature of PNG, but you can always delete the folder once you get your output video.
Interesting. How could you actually implement this? I mean, what code modifications are necessary to use PNG instead?
It's quite simple: go to videoswap.py and, at the bottom where you see jpg, just replace it with png. The difference is quite noticeable. You can also set up moviepy to not compress the video file by exporting it as a .mov file, which is also lossless; no matter how many times you edit it, it will never lose quality. I will upload the altered files to my GitHub today and give some instructions on a few things, like what you can do to get past the image being "not iterable", and how to use GFPGAN to quickly enhance the faces to HD and recompile the video at a higher resolution.
@Fibonacci134 yours would be a tremendous contribution and I will eagerly await it.
Many thanks in advance 👍.
No problem at all! I uploaded the videoswap.py, and I will be posting a tutorial on how to use GFPGAN to enhance. I will also post a reverse2original.py that is a bit faster at inference; I'm still working on it, so it is still a work in progress.
So yesterday after reading this post I downloaded GFPGAN to try it out. All I can say is... Wow. The results are incredible. Thanks for the tip.
@razr112 how does GFPGAN handle multiple faces in a picture?
Also, what if the face is not looking at the camera, but for example to the left or the right?
@Fibonacci134 when you have tested your improved output-quality code, and if you wouldn't mind sharing it, please kindly let us know 👍.
Yes, GFPGAN is amazing!! Quick tip: if you want to enhance just the faces (it's a lot quicker), you can add --bg_upsampler none and it'll only do the faces. You can also have it save only to "restored imgs" by altering the inference.py file. To convert an entire video into sequential images:
ffmpeg -i (name of your video).mp4 -vf fps=25 out%d.png
convert back to video once images are enhanced :
ffmpeg -f image2 -framerate 25 -i out%d.png -vcodec libx264 -crf 22 video.mp4
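The two ffmpeg commands above can be wrapped in a small Python helper if you want to script the extract-enhance-rebuild loop. This is just a sketch; the function names are made up, and actually running the commands requires ffmpeg on your PATH:

```python
import subprocess

def extract_frames_cmd(video: str, fps: int = 25, pattern: str = "out%d.png") -> list:
    # Build the extraction command: video -> numbered PNG frames.
    return ["ffmpeg", "-i", video, "-vf", f"fps={fps}", pattern]

def rebuild_video_cmd(pattern: str = "out%d.png", fps: int = 25, out: str = "video.mp4") -> list:
    # Build the reassembly command: (enhanced) frames -> H.264 video.
    # -crf 22 keeps quality high; lower values mean less compression.
    return ["ffmpeg", "-f", "image2", "-framerate", str(fps), "-i", pattern,
            "-vcodec", "libx264", "-crf", "22", out]

# To actually run them (ffmpeg must be installed):
# subprocess.run(extract_frames_cmd("input.mp4"), check=True)
# ... run GFPGAN over the extracted frames here ...
# subprocess.run(rebuild_video_cmd(), check=True)
```

Using a list of arguments with subprocess.run avoids shell-quoting problems with filenames that contain spaces.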
Model v1.3 is really good, but I prefer v1.
Where exactly do I place that code in the inference.py file?
@Fibonacci134
Hey, that code isn't meant to be put in inference.py; it's just a normal ffmpeg command. To make things simple, all you have to do is install FFmpeg and add the FFmpeg program to the Windows PATH using environment variables. If using Linux, there's no need to do that; it will work without any extra effort. The steps are as follows:
- Put the video in an empty folder
- Right-click inside the folder and open it in a terminal
Now that you've mentioned it, I think I will add the script into the GFPGAN script to automate the process. Give me a couple of days lol
Ah okay. I already use FFmpeg commands to extract the frames. I just couldn't figure out how to implement it into the inference file.
Awesome! Looking forward to it. Implementing the script is way above my level of beginner coding knowledge lol.
@Fibonacci134
Ohh okay, awesome. Lol it's just that there are generally more Windows users, and most are usually not too familiar with ffmpeg. It's great that you use it, such a handy little tool. And no worries bro, we're all beginners in the grand scheme of things lol. I'm guessing you too are self-taught, so you know there's never any structure; we just try to learn things as they come 😂. Will hopefully get to work on some stuff this weekend.
I use ffmpeg and GFPGAN to get better output, and the difference is just legendary. But GFPGAN is a resource-intensive and slow algorithm, which is the only con I have found.
Implementing GFPGAN or GPEN in the main swap pipeline would be an amazing improvement to this repo. If it could be implemented in a similar way to dot, it would be easy to use as an option before swapping, --gpen_type 512 for instance.
Are there any plans to implement something like this to SimSwap?
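For what it's worth, a command-line option like the one suggested could look something like the sketch below. The flag name --gpen_type and its allowed values are taken from the suggestion above, not from any existing SimSwap code:

```python
import argparse

# Hypothetical option on the swap entry point: None means "no
# enhancement", 256/512 would select a GPEN restoration resolution.
parser = argparse.ArgumentParser(description="face swap with optional enhancement")
parser.add_argument("--gpen_type", type=int, choices=[256, 512], default=None,
                    help="run GPEN face restoration at this resolution before output")

args = parser.parse_args(["--gpen_type", "512"])
print(args.gpen_type)  # → 512
```

With choices=[256, 512], argparse rejects any other value with a clear error, so users can't ask for an unsupported model size.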
@epicstar7 you can include GFPGAN cleaning as a step only on the masked results, for minimal overhead, by adding just a few cells to the Colab, at least for single-face swaps in a video (I haven't tried the multi-face case, but it should be relatively straightforward to extend). First, get all the necessary packages:
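The original setup cells are not reproduced in this thread. As a rough sketch, the install step likely follows GFPGAN's standard installation (in a Colab cell each line would be prefixed with "!"); the exact repository URL and package list here are assumptions:

```shell
# Clone GFPGAN and install its runtime dependencies.
git clone https://github.com/TencentARC/GFPGAN.git
cd GFPGAN
pip install basicsr facexlib
pip install -r requirements.txt
python setup.py develop
```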
Then write a new version of videoswap to use. The version below also saves frames as lossless PNGs rather than JPGs, which can improve the quality of the results at the cost of requiring additional disk space.
Then finally run a slightly modified version of the original script; this version also removes the watermark placed by default, and lets you choose whether to use the 224 or 512 crop-size version of the model. (NB: if choosing 512, you will also need to add
!wget https://github.com/neuralchen/SimSwap/releases/download/512_beta/512.zip
!unzip ./512.zip -d ./checkpoints
lines above the original checkpoint download.) Hope that helps!
@fitzgeraldja Oh my god man ! You're my hero, thank you
@fitzgeraldja This is really awesome. I can't get this version to work on my pc though, only https://github.com/mike9251/simswap-inference-pytorch
any chance you would fix the code for that repo? :)
I did that then I got this error:
Traceback (most recent call last):
  File "test_video_swapsingle.py", line 86, in <module>
    no_simswaplogo=opt.no_simswaplogo,use_mask=opt.use_mask,crop_size=crop_size)
  File "C:\SimSwap\SimSwap-main\util\videoswap.py", line 115, in video_swap
    clips = ImageSequenceClip(image_filenames,fps = fps)
  File "C:\Users\amt\anaconda3\envs\simswap\lib\site-packages\moviepy\video\io\ImageSequenceClip.py", line 64, in __init__
    if isinstance(sequence[0], str):
IndexError: list index out of range
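One guess at the cause, given the jpg-to-png change discussed earlier in this thread: if videoswap.py now writes .png frames but the code that collects the frame filenames still looks for *.jpg, image_filenames comes back empty, and ImageSequenceClip raises exactly this IndexError on sequence[0]. A defensive sketch (the directory and helper name are assumptions, not SimSwap's actual code):

```python
import glob
import os

def collect_frames(temp_dir: str = "./temp_results") -> list:
    # Match both extensions so a jpg->png edit can't silently
    # produce an empty frame list and crash later in moviepy.
    frames = sorted(
        glob.glob(os.path.join(temp_dir, "*.jpg"))
        + glob.glob(os.path.join(temp_dir, "*.png"))
    )
    if not frames:
        raise RuntimeError(f"no frames found in {temp_dir}; "
                           "check the extension used when saving frames")
    return frames
```

Failing early with a clear message is friendlier than letting the empty list reach ImageSequenceClip.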
@ziko2222, have you solved it? I have run into the same problem.