1 Commit

Author: sohaibanwaar
SHA1: ca730ada1b
Message: add: adding docker file and requirements.txt
Date: 2023-05-13 15:06:17 +05:00

11 changed files with 73 additions and 108 deletions

Dockerfile (new file, 26 lines)

@@ -0,0 +1,26 @@
FROM pytorch/pytorch:1.11.0-cuda11.3-cudnn8-devel
RUN apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/3bf863cc.pub && \
    apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1804/x86_64/7fa2af80.pub
# The 7fa2af80 key is already fetched above, so a second wget | apt-key add is unnecessary;
# install all apt packages in one layer and use apt-get consistently.
RUN apt-get update && \
    apt-get install -y wget git ffmpeg libsm6 libxext6 libprotobuf-dev protobuf-compiler && \
    apt-get clean
RUN pip install --upgrade pip
RUN pip install --upgrade setuptools
WORKDIR /simpswap
COPY ./requirements.txt /simpswap/requirements.txt
RUN pip install -r /simpswap/requirements.txt
# Expose GPU 1 to the container (adjust to match your hardware)
ENV NVIDIA_VISIBLE_DEVICES 1
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--allow-root"]


@@ -4,8 +4,6 @@
Our method can realize **arbitrary face swapping** on images and videos with **one single trained model**.
***We are recruiting full-time engineers. If you are interested, please send an [email](mailto:chen19910528@sjtu.edu.cn?subject=[GitHub]%20Source%20Han%20Sans) to my team. Please refer to the website for specific recruitment conditions: [Requirements](https://join.sjtu.edu.cn/Admin/QsPreview.aspx?qsid=44f5413a90974114b8f5e643177ef32d)***
Training and test code are now available!
<a href="https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/train.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>
@@ -15,12 +13,7 @@ The high resolution version of ***SimSwap-HQ*** is supported!
[![simswaplogo](/docs/img/logo1.png)](https://github.com/neuralchen/SimSwap)
Our paper can be downloaded from [[Arxiv]](https://arxiv.org/pdf/2106.06340v1.pdf) [[ACM DOI]](https://dl.acm.org/doi/10.1145/3394171.3413630)
### This project also received support from [SocialBook](https://socialbook.io).
<!-- [![logo](./simswaplogo/socialbook_logo.2020.357eed90add7705e54a8.svg)](https://socialbook.io) -->
<img width=30% src="./simswaplogo/socialbook_logo.2020.357eed90add7705e54a8.svg"/>
Our paper can be downloaded from [[Arxiv]](https://arxiv.org/pdf/2106.06340v1.pdf) [[ACM DOI]](https://dl.acm.org/doi/10.1145/3394171.3413630)
<!-- [[Google Drive]](https://drive.google.com/file/d/1fcfWOGt1mkBo7F0gXVKitf8GJMAXQxZD/view?usp=sharing)
[[Baidu Drive ]](https://pan.baidu.com/s/1-TKFuycRNUKut8hn4IimvA) Password: ```ummt``` -->
@@ -36,8 +29,6 @@ If you find this project useful, please star it. It is the greatest appreciation
## Top News <img width=8% src="./docs/img/new.gif"/>
**`2023-09-26`**: We fixed bugs in the Colab notebooks!
**`2023-04-25`**: We fixed the "AttributeError: 'SGD' object has no attribute 'defaults'" bug. If you have already downloaded **arcface_checkpoint.tar**, please **download it again**. You also need to update the scripts in ```./models/```.
**`2022-04-21`**: For resource-limited users, we provide the cropped VGGFace2-224 dataset [[Google Drive] VGGFace2-224 (10.8G)](https://drive.google.com/file/d/19pWvdEHS-CEG6tW3PdxdtZ5QEymVjImc/view?usp=sharing) [[Baidu Drive]](https://pan.baidu.com/s/1OiwLJHVBSYB4AY2vEcfN0A) [Password: lrod].
@@ -61,9 +52,6 @@ If you find this project useful, please star it. It is the greatest appreciation
[![logo](./docs/img/vggface2_hq_compare.png)](https://github.com/NNNNAI/VGGFace2-HQ)
## Dependencies
- python3.6+
- pytorch1.5+
@@ -74,7 +62,6 @@ If you find this project useful, please star it. It is the greatest appreciation
- imageio
- moviepy
- insightface
- ***timm==0.5.4***
## Training
@@ -97,7 +84,7 @@ For faster convergence and better results, a large batch size (more than 16) is
python train.py --name simswap224_test --batchSize 8 --gpu_ids 0 --dataset /path/to/VGGFace2HQ --Gdeep False
```
[Colab demo for training 224 model][ <a href="https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/train.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>](https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/train.ipynb)
[Colab demo for training 224 model](https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/train.ipynb)
For faster convergence and better results, a large batch size (more than 16) is recommended!
@@ -117,7 +104,7 @@ python train.py --name simswap512_test --batchSize 16 --gpu_ids 0 --dataset /p
<div style="background: yellow; width:140px; font-weight:bold;font-family: sans-serif;">Stronger feature</div>
[Colab for switching specific faces in multi-face videos][ <a href="https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/MultiSpecific.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="google colab logo"></a>](https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/MultiSpecific.ipynb)
[Colab for switching specific faces in multi-face videos](https://colab.research.google.com/github/neuralchen/SimSwap/blob/main/MultiSpecific.ipynb)
[Image face swapping demo & Docker image on Replicate](https://replicate.ai/neuralchen/simswap-image)
@@ -171,7 +158,7 @@ At the same time, if you have suggestions for our project, please feel free to a
For academic and non-commercial use only. The whole project is under the CC-BY-NC 4.0 license. See [LICENSE](https://github.com/neuralchen/SimSwap/blob/main/LICENSE) for additional details.
## To cite our papers
## To cite our paper
```
@inproceedings{DBLP:conf/mm/ChenCNG20,
author = {Renwang Chen and
@@ -183,22 +170,6 @@ For academic and non-commercial use only.The whole project is under the CC-BY-NC
year = {2020}
}
```
```
@Article{simswapplusplus,
author = {Xuanhong Chen and
Bingbing Ni and
Yutian Liu and
Naiyuan Liu and
Zhilin Zeng and
Hang Wang},
title = {SimSwap++: Towards Faster and High-Quality Identity Swapping},
journal = {{IEEE} Trans. Pattern Anal. Mach. Intell.},
volume = {46},
number = {1},
pages = {576--592},
year = {2024}
}
```
## Related Projects
@@ -217,6 +188,21 @@ For academic and non-commercial use only.The whole project is under the CC-BY-NC
[![logo](./docs/img/vggface2_hq_compare.png)](https://github.com/NNNNAI/VGGFace2-HQ)
## Installation
```bash
# Git clone
git clone <repo>
# Download weights
bash download_weights.sh
# Build Docker file
docker build -t simswap .
# Run docker file
docker run -it -v <local path of SimSwap Git>:/simpswap --gpus=all -p 8888:8888 simswap:latest
# Access jupyter notebook
http://localhost:8888/tree/simpswap
```
Learn about our other projects
[[VGGFace2-HQ]](https://github.com/NNNNAI/VGGFace2-HQ);


@@ -59,6 +59,9 @@ class fsModel(BaseModel):
self.netG = Generator_Adain_Upsample(input_nc=3, output_nc=3, latent_size=512, n_blocks=9, deep=False)
self.netG.to(device)
# Id network
netArc_checkpoint = opt.Arc_path
netArc_checkpoint = torch.load(netArc_checkpoint, map_location=torch.device("cpu"))


@@ -7,9 +7,13 @@ from .config import device, num_classes
def create_model(opt):
    #from .pix2pixHD_model import Pix2PixHDModel, InferenceModel
    from .fs_model import fsModel
    model = fsModel()
    if opt.model == 'pix2pixHD':
        #from .pix2pixHD_model import Pix2PixHDModel, InferenceModel
        from .fs_model import fsModel
        model = fsModel()
    else:
        from .ui_model import UIModel
        model = UIModel()
    model.initialize(opt)
    if opt.verbose:
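The hunk above makes model selection conditional on `opt.model` instead of always instantiating `fsModel`. A minimal standalone sketch of this factory pattern, using hypothetical `FsModel`/`UIModel` stand-ins rather than the repo's actual modules:

```python
from types import SimpleNamespace

# Hypothetical stand-ins for the repo's fsModel and UIModel classes.
class FsModel:
    def initialize(self, opt):
        self.opt = opt

class UIModel:
    def initialize(self, opt):
        self.opt = opt

def create_model(opt):
    # Same dispatch as the hunk above: 'pix2pixHD' selects the face-swap
    # model, anything else the UI model; both are initialized with opt.
    if opt.model == 'pix2pixHD':
        model = FsModel()
    else:
        model = UIModel()
    model.initialize(opt)
    return model

model = create_model(SimpleNamespace(model='pix2pixHD', verbose=False))
```

Deferring the imports into each branch, as the hunk does, avoids importing the unused model's dependencies.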


@@ -5,7 +5,7 @@
# Created Date: Wednesday January 12th 2022
# Author: Chen Xuanhong
# Email: chenxuanhongzju@outlook.com
# Last Modified: Saturday, 13th May 2023 9:56:35 am
# Last Modified: Thursday, 21st April 2022 8:13:37 pm
# Modified By: Chen Xuanhong
# Copyright (c) 2022 Shanghai Jiao Tong University
#############################################################
@@ -46,7 +46,7 @@ class fsModel(BaseModel):
# Id network
netArc_checkpoint = opt.Arc_path
netArc_checkpoint = torch.load(netArc_checkpoint, map_location=torch.device("cpu"))
self.netArc = netArc_checkpoint
self.netArc = netArc_checkpoint['model'].module
self.netArc = self.netArc.cuda()
self.netArc.eval()
self.netArc.requires_grad_(False)
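This hunk switches from treating the loaded checkpoint as the network itself to unwrapping it via `['model'].module`, matching the re-released **arcface_checkpoint.tar**, which stores a dict whose `'model'` entry is a `DataParallel`-style wrapper. A plain-Python sketch (no torch; `Wrapper` and `Net` are hypothetical stand-ins) of a loader tolerant of both formats:

```python
# Wrapper stands in for torch.nn.DataParallel; Net for the arcface network.
class Wrapper:
    def __init__(self, module):
        self.module = module

class Net:
    pass

def extract_net(checkpoint):
    # Wrapped format: a dict whose 'model' entry is DataParallel-like,
    # with the bare network on its .module attribute.
    if isinstance(checkpoint, dict) and 'model' in checkpoint:
        return checkpoint['model'].module
    # Otherwise assume the checkpoint is the bare network itself.
    return checkpoint

net = Net()
assert extract_net({'model': Wrapper(net)}) is net  # re-released .tar format
assert extract_net(net) is net                      # bare-network format
```

A tolerant loader like this would also avoid the breakage users hit when mixing old and new checkpoint files.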


@@ -13,7 +13,7 @@ class BaseOptions():
self.parser.add_argument('--name', type=str, default='people', help='name of the experiment. It decides where to store samples and models')
self.parser.add_argument('--gpu_ids', type=str, default='0', help='gpu ids: e.g. 0 0,1,2, 0,2. use -1 for CPU')
self.parser.add_argument('--checkpoints_dir', type=str, default='./checkpoints', help='models are saved here')
# self.parser.add_argument('--model', type=str, default='pix2pixHD', help='which model to use')
self.parser.add_argument('--model', type=str, default='pix2pixHD', help='which model to use')
self.parser.add_argument('--norm', type=str, default='batch', help='instance normalization or batch normalization')
self.parser.add_argument('--use_dropout', action='store_true', help='use dropout for the generator')
self.parser.add_argument('--data_type', default=32, type=int, choices=[8, 16, 32], help="Supported data type i.e. 8, 16, 32 bit")
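The `--gpu_ids` option above accepts strings like `'0'`, `'0,1,2'`, or `'-1'` for CPU. A small sketch of how such a string is typically turned into a list of device indices (`parse_gpu_ids` is a hypothetical helper; the repo's own parsing may differ):

```python
def parse_gpu_ids(gpu_ids: str):
    """Parse a --gpu_ids string such as '0', '0,1,2', or '-1' (CPU)."""
    ids = [int(tok) for tok in gpu_ids.split(',') if tok.strip()]
    # By convention, -1 means "no GPUs": return an empty list for CPU mode.
    return [] if ids == [-1] else ids

assert parse_gpu_ids('0') == [0]
assert parse_gpu_ids('0,1,2') == [0, 1, 2]
assert parse_gpu_ids('-1') == []
```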


@@ -24,7 +24,7 @@ class TestOptions(BaseOptions):
self.parser.add_argument("--onnx", type=str, help="run ONNX model via TRT")
self.parser.add_argument("--Arc_path", type=str, default='arcface_model/arcface_checkpoint.tar', help="path to the arcface checkpoint")
self.parser.add_argument("--pic_a_path", type=str, default='G:/swap_data/ID/elon-musk-hero-image.jpeg', help="Person who provides identity information")
self.parser.add_argument("--pic_b_path", type=str, default='./demo_file/multi_people.jpg', help="Person who provides information other than their identity")
self.parser.add_argument("--pic_b_path", type=str, default='G:/swap_data/ID/bengio.jpg', help="Person who provides information other than their identity")
self.parser.add_argument("--pic_specific_path", type=str, default='./crop_224/zrf.jpg', help="The specific person to be swapped")
self.parser.add_argument("--multisepcific_dir", type=str, default='./demo_file/multispecific', help="Dir for multi specific")
self.parser.add_argument("--video_path", type=str, default='G:/swap_data/video/HSB_Demo_Trim.mp4', help="path for the video to swap")
@@ -33,6 +33,6 @@ class TestOptions(BaseOptions):
self.parser.add_argument('--id_thres', type=float, default=0.03, help='identity-similarity threshold for face matching')
self.parser.add_argument('--no_simswaplogo', action='store_true', help='Remove the watermark')
self.parser.add_argument('--use_mask', action='store_true', help='Use mask for better result')
self.parser.add_argument('--crop_size', type=int, default=512, help='Crop size of the input image')
self.parser.add_argument('--crop_size', type=int, default=224, help='Crop size of the input image')
self.isTrain = False
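The default crop size drops from 512 to 224 in this hunk; the HQ value remains reachable via the flag. A minimal argparse sketch of the changed option:

```python
import argparse

# Hypothetical minimal parser mirroring the --crop_size option above.
parser = argparse.ArgumentParser()
parser.add_argument('--crop_size', type=int, default=224,
                    help='Crop size of the input image')

assert parser.parse_args([]).crop_size == 224                       # new default
assert parser.parse_args(['--crop_size', '512']).crop_size == 512   # HQ override
```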

Binary file not shown.

Before: 669 KiB · After: 15 KiB

requirements.txt (new file, 11 lines)

@@ -0,0 +1,11 @@
jupyter
autopep8
sets
isort
Pillow
insightface==0.2.1
onnxruntime
moviepy
imageio==2.4.1
numpy==1.19.5
lpips


@@ -1,64 +0,0 @@
[64 lines of SVG markup deleted: the SocialBook logo, socialbook_logo.2020.357eed90add7705e54a8.svg, 6.0 KiB]


@@ -112,7 +112,7 @@
"outputs": [],
"source": [
"!pip install googledrivedownloader\n",
"!pip install timm==0.5.4\n",
"!pip install timm\n",
"!wget -P SimSwap/arcface_model https://github.com/neuralchen/SimSwap/releases/download/1.0/arcface_checkpoint.tar"
]
},
@@ -140,10 +140,9 @@
},
"outputs": [],
"source": [
"%mkdir /content/TrainingData\n",
"!wget --load-cookies /tmp/cookies.txt \"https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=19pWvdEHS-CEG6tW3PdxdtZ5QEymVjImc' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\\1\\n/p')&id=19pWvdEHS-CEG6tW3PdxdtZ5QEymVjImc\" -O /content/TrainingData/vggface2_crop_arcfacealign_224.tar && rm -rf /tmp/cookies.txt\n",
"%cd /content/\n",
"!tar -xzvf /content/TrainingData/vggface2_crop_arcfacealign_224.tar -C /content/TrainingData\n ",
"!tar -xzvf /content/TrainingData/vggface2_crop_arcfacealign_224.tar\n",
"!rm /content/TrainingData/vggface2_crop_arcfacealign_224.tar"
]
},