add: adding docker file #405

Closed
VisionOra wants to merge 1 commit from add/DockerFile into main
3 changed files with 79 additions and 30 deletions

Dockerfile (new file, 24 additions)

@@ -0,0 +1,24 @@
FROM pytorch/pytorch:1.11.0-cuda11.3-cudnn8-devel
# Refresh NVIDIA's rotated apt signing keys so apt-get update succeeds
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub && \
    apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub
# System dependencies: git for cloning, ffmpeg/libsm6/libxext6 for OpenCV video I/O
RUN apt-get update && \
    apt-get install -y wget git ffmpeg libsm6 libxext6 && \
    apt-get clean
RUN pip install --upgrade pip setuptools
WORKDIR /
COPY ./requirements.txt /SimSwap/requirements.txt
RUN pip install -r /SimSwap/requirements.txt
# The repository and its data are mounted as an external volume at /SimSwap
# Expose GPU 1 to the container (use "all" to expose every GPU)
ENV NVIDIA_VISIBLE_DEVICES=1
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--NotebookApp.token=''", "--NotebookApp.password=''", "--allow-root"]

README.md

@@ -1,5 +1,7 @@
# SimSwap: An Efficient Framework For High Fidelity Face Swapping
## Proceedings of the 28th ACM International Conference on Multimedia
**The official repository with Pytorch**
Our method can realize **arbitrary face swapping** on images and videos with **one single trained model**.
@@ -9,50 +11,71 @@ Training and test code are now available!
We are working on our upcoming paper SimSwap++, stay tuned!
The high resolution version of **_SimSwap-HQ_** is supported!
[![simswaplogo](/docs/img/logo1.png)](https://github.com/neuralchen/SimSwap)
Our paper can be downloaded from [[Arxiv]](https://arxiv.org/pdf/2106.06340v1.pdf) [[ACM DOI]](https://dl.acm.org/doi/10.1145/3394171.3413630)
<!-- [[Google Drive]](https://drive.google.com/file/d/1fcfWOGt1mkBo7F0gXVKitf8GJMAXQxZD/view?usp=sharing)
[[Baidu Drive ]](https://pan.baidu.com/s/1-TKFuycRNUKut8hn4IimvA) Password: ```ummt``` -->
## Attention
**_This project is for technical and academic use only. Please do not apply it to illegal and unethical scenarios._**
**_In the event of a violation of the legal and ethical requirements of the user's country or region, this code repository is exempt from liability._**
**_Please do not ignore the content at the end of this README!_**
If you find this project useful, please star it; that is the greatest appreciation of our work.
## Setup
### Docker Setup
1. Build the Docker image
```bash
docker build . -t simswap
```
2. Run the Docker image
```bash
docker run -it -v <local-git-path>:/SimSwap -p 8888:8888 simswap
```
3. Access the Jupyter notebook at http://localhost:8888
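On hosts with the NVIDIA Container Toolkit installed, the GPU must also be passed through to the container explicitly; a sketch of the same run command with GPU access (`<local-git-path>` is the path to your clone, as above):

```bash
docker run -it --gpus all \
    -v <local-git-path>:/SimSwap \
    -p 8888:8888 \
    simswap
```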
## Top News <img width=8% src="./docs/img/new.gif"/>
**`2023-04-25`**: We fixed the "AttributeError: 'SGD' object has no attribute 'defaults'" bug. If you have already downloaded **arcface_checkpoint.tar**, please **download it again**. You also need to update the scripts in `./models/`.
**`2022-04-21`**: For resource-limited users, we provide the cropped VGGFace2-224 dataset [[Google Drive] VGGFace2-224 (10.8G)](https://drive.google.com/file/d/19pWvdEHS-CEG6tW3PdxdtZ5QEymVjImc/view?usp=sharing) [[Baidu Drive]](https://pan.baidu.com/s/1OiwLJHVBSYB4AY2vEcfN0A) [Password: lrod].
**`2022-04-20`**: Training scripts are now available. We highly recommend training the SimSwap model with our released high-quality dataset [VGGFace2-HQ](https://github.com/NNNNAI/VGGFace2-HQ).
**`2021-11-24`**: We have trained a beta version of **_SimSwap-HQ_** on [VGGFace2-HQ](https://github.com/NNNNAI/VGGFace2-HQ) and open-sourced the checkpoint of this model (if you think SimSwap 512 is cool, please star our [VGGFace2-HQ](https://github.com/NNNNAI/VGGFace2-HQ) repo). Please don't forget to go to [Preparation](./docs/guidance/preparation.md) and [Inference for image or video face swapping](./docs/guidance/usage.md) to check the latest setup.
**`2021-11-23`**: The google drive link of [VGGFace2-HQ](https://github.com/NNNNAI/VGGFace2-HQ) is released.
**`2021-11-17`**: We released a high resolution face dataset, [VGGFace2-HQ](https://github.com/NNNNAI/VGGFace2-HQ), and the method to generate this dataset. This dataset is for research purposes.
**`2021-08-30`**: Docker has been supported, please refer [here](https://replicate.ai/neuralchen/simswap-image) for details.
**`2021-08-17`**: We have updated [Preparation](./docs/guidance/preparation.md). The main change is that the GPU version of onnx is now installed by default, which greatly reduces the time to process a video.
**`2021-07-19`**: **_Obvious border abruptness has been resolved._** We added the ability to use a mask and upgraded the old algorithm for better visual effects; please go to [Inference for image or video face swapping](./docs/guidance/usage.md) for details. Please don't forget to go to [Preparation](./docs/guidance/preparation.md) to check the latest setup. (Thanks for the help from [@woctezuma](https://github.com/woctezuma) and [@instant-high](https://github.com/instant-high))
## The first open-source high-resolution dataset for face swapping!
## High Resolution Dataset [VGGFace2-HQ](https://github.com/NNNNAI/VGGFace2-HQ)
[![logo](./docs/img/vggface2_hq_compare.png)](https://github.com/NNNNAI/VGGFace2-HQ)
## Dependencies
- python3.6+
- pytorch1.5+
- torchvision
@@ -73,29 +96,28 @@ In order to ensure the normal training, the batch size must be greater than 1.
Friendly reminder: due to differences in training settings, a user-trained model will show subtle differences in visual effect from the pre-trained model we provide.
- Train 224 models with VGGFace2 224\*224 [[Google Drive] VGGFace2-224 (10.8G)](https://drive.google.com/file/d/19pWvdEHS-CEG6tW3PdxdtZ5QEymVjImc/view?usp=sharing) [[Baidu Drive]](https://pan.baidu.com/s/1OiwLJHVBSYB4AY2vEcfN0A) [Password: lrod]
For faster convergence and better results, a large batch size (more than 16) is recommended!
**_We recommend training for more than 400K iterations (with batch size 16); 600K~800K is better, and more iterations than that are not recommended._**
```bash
python train.py --name simswap224_test --batchSize 8 --gpu_ids 0 --dataset /path/to/VGGFace2HQ --Gdeep False
```
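The iteration budget above can be translated into approximate epochs for whatever dataset you train on; a small illustrative helper (the 3.2M figure below is a hypothetical dataset size, not a claim about VGGFace2):

```python
def iterations_to_epochs(iterations: int, batch_size: int, dataset_size: int) -> float:
    """Approximate number of passes over the dataset for a given iteration budget."""
    images_seen = iterations * batch_size  # one optimizer step consumes one batch
    return images_seen / dataset_size

# 400K iterations at batch size 16 over a hypothetical 3.2M-image dataset:
print(iterations_to_epochs(400_000, 16, 3_200_000))  # 2.0
```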
[Colab demo for training 224 model](https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/train.ipynb)
For faster convergence and better results, a large batch size (more than 16) is recommended!
- Train 512 models with VGGFace2-HQ 512\*512 [VGGFace2-HQ](https://github.com/NNNNAI/VGGFace2-HQ).
```bash
python train.py --name simswap512_test --batchSize 16 --gpu_ids 0 --dataset /path/to/VGGFace2HQ --Gdeep True
```
## Inference with a pretrained SimSwap model
[Preparation](./docs/guidance/preparation.md)
[Inference for image or video face swapping](./docs/guidance/usage.md)
@@ -104,13 +126,12 @@ python train.py --name simswap512_test --batchSize 16 --gpu_ids 0 --dataset /p
<div style="background: yellow; width:140px; font-weight:bold;font-family: sans-serif;">Stronger feature</div>
[Colab for switching specific faces in multi-face videos](https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/MultiSpecific.ipynb)
[Image face swapping demo & Docker image on Replicate](https://replicate.ai/neuralchen/simswap-image)
## Video
<img src="./docs/img/video.webp"/>
<div>
<img width=24% src="./docs/img/anni.webp"/>
@@ -124,17 +145,16 @@ python train.py --name simswap512_test --batchSize 16 --gpu_ids 0 --dataset /p
</div>
## Results
![Results1](/docs/img/results1.PNG)
![Results2](/docs/img/total.PNG)
<!-- ![video2](/docs/img/anni.webp)
![video3](/docs/img/chenglong.webp)
![video4](/docs/img/zhoujielun.webp)
![video5](/docs/img/zhuyin.webp) -->
**High-quality videos can be found in the link below:**
[[Mama(video) 1080p]](https://drive.google.com/file/d/1mnSlwzz7f4H2O7UwApAHo64mgK4xSNyK/view?usp=sharing)
@@ -145,20 +165,22 @@ python train.py --name simswap512_test --batchSize 16 --gpu_ids 0 --dataset /p
[[Google Drive link for video 3]](https://drive.google.com/file/d/1oftHAnLmgFis4XURcHTccGSWbWSXYKK1/view?usp=sharing)
[[Baidu Drive link for video]](https://pan.baidu.com/s/1WTS6jm2TY17bYJurw57LUg ) Password: `b26n`
[[Online Video]](https://www.bilibili.com/video/BV12v411p7j5/)
## User case
If you have some interesting results after using our project and are willing to share them, you can contact us by email or share them directly in an issue. Later, we may make a separate section to show these results, which should be cool.
At the same time, if you have suggestions for our project, please feel free to ask questions in an issue, or contact us directly via email: [email1](mailto:chenxuanhongzju@outlook.com), [email2](mailto:nicklau26@foxmail.com), [email3](mailto:ziangliu824@gmail.com). (Any of the three can be contacted; just choose one.)
## License
For academic and non-commercial use only. The whole project is under the CC-BY-NC 4.0 license. See [LICENSE](https://github.com/neuralchen/SimSwap/blob/main/LICENSE) for additional details.
## To cite our paper
```
@inproceedings{DBLP:conf/mm/ChenCNG20,
author = {Renwang Chen and
@@ -188,7 +210,7 @@ For academic and non-commercial use only.The whole project is under the CC-BY-NC
[![logo](./docs/img/vggface2_hq_compare.png)](https://github.com/NNNNAI/VGGFace2-HQ)
Learn about our other projects
[[VGGFace2-HQ]](https://github.com/NNNNAI/VGGFace2-HQ);
@@ -211,8 +233,9 @@ Learn about our other projects
## Acknowledgements
<!--ts-->
- [Deepfacelab](https://github.com/iperov/DeepFaceLab)
- [Insightface](https://github.com/deepinsight/insightface)
- [Face-parsing.PyTorch](https://github.com/zllrunning/face-parsing.PyTorch)
- [BiSeNet](https://github.com/CoinCheung/BiSeNet)
<!--te-->

requirements.txt (new file, 2 additions)

@@ -0,0 +1,2 @@
opencv-python
jupyter