diff --git a/README.md b/README.md
index 2fb24dc..ef69748 100644
--- a/README.md
+++ b/README.md
@@ -12,7 +12,7 @@
+
##### This is the fastest build you can get if you have a discrete NVIDIA or AMD GPU, a modern CPU, or Apple Silicon, and you'll receive special priority support. The 2.7 beta is the best version available, with 30+ extra features over the open-source version.
-
-###### These Pre-builts are perfect for non-technical users or those who don't have time to, or can't manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually.
+
+###### These pre-builds are perfect for non-technical users or those who don't have time to, or can't, manually install all the requirements. Just a heads-up: this is an open-source project, so you can also install it manually.
## TL;DR: Live Deepfake in just 3 Clicks
+

+
1. Select a face
2. Select which camera to use
3. Press live!
@@ -109,11 +111,11 @@ This is more likely to work on your computer but will be slower as it utilizes t
**1. Set up Your Platform**
-- Python (3.11 recommended)
-- pip
-- git
-- [ffmpeg](https://www.youtube.com/watch?v=OlNWCpFdVMA) - ```iex (irm ffmpeg.tc.ht)```
-- [Visual Studio 2022 Runtimes (Windows)](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
+- Python (3.11 recommended)
+- pip
+- git
+- [ffmpeg](https://www.youtube.com/watch?v=OlNWCpFdVMA) - `iex (irm ffmpeg.tc.ht)`
+- [Visual Studio 2022 Runtimes (Windows)](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
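As a quick sanity check before continuing, a small script like the following (illustrative only, not part of the repo) can confirm the tools above are on your PATH and that your Python version is recent enough:

```python
import shutil
import sys

def missing_prerequisites(tools=("git", "ffmpeg"), min_python=(3, 11)):
    """Return a list of problems found with the setup described above."""
    problems = [t for t in tools if shutil.which(t) is None]
    if sys.version_info[:2] < min_python:
        problems.append(f"python>={min_python[0]}.{min_python[1]}")
    return problems

if __name__ == "__main__":
    problems = missing_prerequisites()
    print("All prerequisites found" if not problems else f"Missing: {problems}")
```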
**2. Clone the Repository**
@@ -125,7 +127,7 @@ cd Deep-Live-Cam
**3. Download the Models**
1. [GFPGANv1.4](https://huggingface.co/hacksider/deep-live-cam/resolve/main/GFPGANv1.4.onnx)
-2. [inswapper\_128\_fp16.onnx](https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx)
+2. [inswapper_128_fp16.onnx](https://huggingface.co/hacksider/deep-live-cam/resolve/main/inswapper_128_fp16.onnx)
Place these files in the "**models**" folder.
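A small helper like this (a hypothetical sketch, not project code) can confirm both files actually landed in the "models" folder, using the filenames from the links above:

```python
from pathlib import Path

REQUIRED_MODELS = ("GFPGANv1.4.onnx", "inswapper_128_fp16.onnx")

def missing_models(models_dir="models"):
    """Return the required model filenames not yet present in models_dir."""
    root = Path(models_dir)
    return [name for name in REQUIRED_MODELS if not (root / name).is_file()]
```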
@@ -133,14 +135,16 @@ Place these files in the "**models**" folder.
We highly recommend using a `venv` to avoid issues.
-
For Windows:
+
```bash
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt
```
+
For Linux:
+
```bash
# Ensure you use the installed Python 3.10
python3 -m venv venv
@@ -154,10 +158,10 @@ Apple Silicon (M1/M2/M3) requires specific setup:
```bash
# Install Python 3.11 (specific version is important)
-brew install python@3.11
+brew install python@3.11
# Install tkinter package (required for the GUI)
-brew install python-tk@3.10
+brew install python-tk@3.11
# Create and activate virtual environment with Python 3.11
python3.11 -m venv venv
@@ -229,18 +233,20 @@ python3.10 run.py --execution-provider coreml
```
**Important Notes for macOS:**
+
- You **must** use Python 3.10, not newer versions like 3.11 or 3.13
- Always run with the `python3.10` command, not just `python`, if you have multiple Python versions installed
-- If you get error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`
+- If you get an error about `_tkinter` missing, reinstall the tkinter package: `brew reinstall python-tk@3.10`
- If you get model loading errors, check that your models are in the correct folder
- If you encounter conflicts with other Python versions, consider uninstalling them:
+
```bash
# List all installed Python versions
brew list | grep python
-
+
# Uninstall conflicting versions if needed
- brew uninstall --ignore-dependencies python@3.11 python@3.13
-
+ brew uninstall --ignore-dependencies python@3.11 python@3.13
+
# Keep only Python 3.10
brew cleanup
```
@@ -260,7 +266,7 @@ pip install onnxruntime-coreml==1.21.0
python run.py --execution-provider coreml
```
-**DirectML Execution Provider (Windows)**
+**DirectML Execution Provider (Windows AMD/Intel GPU)**
1. Install dependencies:
@@ -269,12 +275,36 @@ pip uninstall onnxruntime onnxruntime-directml
pip install onnxruntime-directml==1.21.0
```
-2. Usage:
+2. Additional packages required:
```bash
-python run.py --execution-provider directml
+pip install opencv-python customtkinter pillow insightface pygrabber tensorflow "numpy<2"
```
+3. Usage:
+
+```bash
+python run.py --execution-provider dml
+```
+
+> **AMD GPU Optimization Notes (RX 5000/6000/7000 series)**
+>
+> Due to a known `amdxc64.dll` multi-session crash on AMD GPUs, this branch includes the following fixes:
+>
+> - All models (face analyser + face swapper) are pre-loaded in the main thread before GUI starts to prevent concurrent DML session initialization crashes
+> - Face detection is throttled (every 5th frame by default) to maximize GPU swap throughput
+> - A global `dml_lock` serializes DML inference calls to prevent race conditions
+> - Detection runs inline in the processing thread (detection thread disabled) to avoid concurrent DML access
+>
+> **Performance on RX 6700 (10GB):**
+>
+> - No enhancer: ~24 FPS
+> - GPEN-256: ~11 FPS
+> - GFPGAN: ~8 FPS
+> - CPU usage: ~20%, GPU usage: ~60-100%
+>
+> For quick launch on AMD, use the included `run-dml.bat` file.
+
**OpenVINO™ Execution Provider (Intel)**
1. Install dependencies:
@@ -289,27 +319,29 @@ pip install onnxruntime-openvino==1.21.0
```bash
python run.py --execution-provider openvino
```
+
## Usage
**1. Image/Video Mode**
-- Execute `python run.py`.
-- Choose a source face image and a target image/video.
-- Click "Start".
-- The output will be saved in a directory named after the target video.
+- Execute `python run.py`.
+- Choose a source face image and a target image/video.
+- Click "Start".
+- The output will be saved in a directory named after the target video.
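The "directory named after the target video" behaviour can be pictured with a small path helper like this (an illustrative sketch, not the project's exact logic):

```python
from pathlib import Path

def output_directory(target_path):
    """Directory named after the target video, alongside it (illustrative)."""
    target = Path(target_path)
    return target.with_name(target.stem)
```

For example, a target of `clips/interview.mp4` would produce output under `clips/interview`.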
**2. Webcam Mode**
-- Execute `python run.py`.
-- Select a source face image.
-- Click "Live".
-- Wait for the preview to appear (10-30 seconds).
-- Use a screen capture tool like OBS to stream.
-- To change the face, select a new source image.
+- Execute `python run.py`.
+- Select a source face image.
+- Click "Live".
+- Wait for the preview to appear (10-30 seconds).
+- Use a screen capture tool like OBS to stream.
+- To change the face, select a new source image.
## Download all models from this Hugging Face link
+
- [**Download models here**](https://huggingface.co/hacksider/deep-live-cam/tree/main)
## Command Line Arguments (Unmaintained)
@@ -341,33 +373,32 @@ Looking for a CLI mode? Using the -s/--source argument will make the run program
## Press
- - [**Ars Technica**](https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns/) - *"Deep-Live-Cam goes viral, allowing anyone to become a digital doppelganger"*
- - [**Yahoo!**](https://www.yahoo.com/tech/ok-viral-ai-live-stream-080041056.html) - *"OK, this viral AI live stream software is truly terrifying"*
- - [**CNN Brasil**](https://www.cnnbrasil.com.br/tecnologia/ia-consegue-clonar-rostos-na-webcam-entenda-funcionamento/) - *"AI can clone faces on webcam; understand how it works"*
- - [**Bloomberg Technoz**](https://www.bloombergtechnoz.com/detail-news/71032/kenalan-dengan-teknologi-deep-live-cam-bisa-jadi-alat-menipu) - *"Get to know Deep Live Cam technology, it can be used as a tool for deception."*
- - [**TrendMicro**](https://www.trendmicro.com/vinfo/gb/security/news/cyber-attacks/ai-vs-ai-deepfakes-and-ekyc) - *"AI vs AI: DeepFakes and eKYC"*
- - [**PetaPixel**](https://petapixel.com/2024/08/14/deep-live-cam-deepfake-ai-tool-lets-you-become-anyone-in-a-video-call-with-single-photo-mark-zuckerberg-jd-vance-elon-musk/) - *"Deepfake AI Tool Lets You Become Anyone in a Video Call With Single Photo"*
- - [**SomeOrdinaryGamers**](https://www.youtube.com/watch?time_continue=1074&v=py4Tc-Y8BcY) - *"That's Crazy, Oh God. That's Fucking Freaky Dude... That's So Wild Dude"*
- - [**IShowSpeed**](https://www.youtube.com/live/mFsCe7AIxq8?feature=shared&t=2686) - *"Alright look look look, now look chat, we can do any face we want to look like chat"*
- - [**TechLinked (Linus Tech Tips)**](https://www.youtube.com/watch?v=wnCghLjqv3s&t=551s) - *"They do a pretty good job matching poses, expression and even the lighting"*
- - [**IShowSpeed**](https://youtu.be/JbUPRmXRUtE?t=3964) - *"What the F***! Why do I look like Vinny Jr? I look exactly like Vinny Jr!? No, this shit is crazy! Bro This is F*** Crazy!"*
-
+- [**Ars Technica**](https://arstechnica.com/information-technology/2024/08/new-ai-tool-enables-real-time-face-swapping-on-webcams-raising-fraud-concerns/) - _"Deep-Live-Cam goes viral, allowing anyone to become a digital doppelganger"_
+- [**Yahoo!**](https://www.yahoo.com/tech/ok-viral-ai-live-stream-080041056.html) - _"OK, this viral AI live stream software is truly terrifying"_
+- [**CNN Brasil**](https://www.cnnbrasil.com.br/tecnologia/ia-consegue-clonar-rostos-na-webcam-entenda-funcionamento/) - _"AI can clone faces on webcam; understand how it works"_
+- [**Bloomberg Technoz**](https://www.bloombergtechnoz.com/detail-news/71032/kenalan-dengan-teknologi-deep-live-cam-bisa-jadi-alat-menipu) - _"Get to know Deep Live Cam technology, it can be used as a tool for deception."_
+- [**TrendMicro**](https://www.trendmicro.com/vinfo/gb/security/news/cyber-attacks/ai-vs-ai-deepfakes-and-ekyc) - _"AI vs AI: DeepFakes and eKYC"_
+- [**PetaPixel**](https://petapixel.com/2024/08/14/deep-live-cam-deepfake-ai-tool-lets-you-become-anyone-in-a-video-call-with-single-photo-mark-zuckerberg-jd-vance-elon-musk/) - _"Deepfake AI Tool Lets You Become Anyone in a Video Call With Single Photo"_
+- [**SomeOrdinaryGamers**](https://www.youtube.com/watch?time_continue=1074&v=py4Tc-Y8BcY) - _"That's Crazy, Oh God. That's Fucking Freaky Dude... That's So Wild Dude"_
+- [**IShowSpeed**](https://www.youtube.com/live/mFsCe7AIxq8?feature=shared&t=2686) - _"Alright look look look, now look chat, we can do any face we want to look like chat"_
+- [**TechLinked (Linus Tech Tips)**](https://www.youtube.com/watch?v=wnCghLjqv3s&t=551s) - _"They do a pretty good job matching poses, expression and even the lighting"_
+- [**IShowSpeed**](https://youtu.be/JbUPRmXRUtE?t=3964) - _"What the F\*\*\*! Why do I look like Vinny Jr? I look exactly like Vinny Jr!? No, this shit is crazy! Bro This is F\*\*\* Crazy!"_
## Credits
-- [ffmpeg](https://ffmpeg.org/): for making video-related operations easy
-- [Henry](https://github.com/henryruhs): One of the major contributor in this repo
-- [deepinsight](https://github.com/deepinsight): for their [insightface](https://github.com/deepinsight/insightface) project which provided a well-made library and models. Please be reminded that the [use of the model is for non-commercial research purposes only](https://github.com/deepinsight/insightface?tab=readme-ov-file#license).
-- [havok2-htwo](https://github.com/havok2-htwo): for sharing the code for webcam
-- [GosuDRM](https://github.com/GosuDRM): for the open version of roop
-- [pereiraroland26](https://github.com/pereiraroland26): Multiple faces support
-- [vic4key](https://github.com/vic4key): For supporting/contributing to this project
-- [kier007](https://github.com/kier007): for improving the user experience
-- [qitianai](https://github.com/qitianai): for multi-lingual support
-- [laurigates](https://github.com/laurigates): Decoupling stuffs to make everything faster!
-- and [all developers](https://github.com/hacksider/Deep-Live-Cam/graphs/contributors) behind libraries used in this project.
-- Footnote: Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop)
-- All the wonderful users who helped make this project go viral by starring the repo ❤️
+- [ffmpeg](https://ffmpeg.org/): for making video-related operations easy
+- [Henry](https://github.com/henryruhs): One of the major contributors to this repo
+- [deepinsight](https://github.com/deepinsight): for their [insightface](https://github.com/deepinsight/insightface) project which provided a well-made library and models. Please be reminded that the [use of the model is for non-commercial research purposes only](https://github.com/deepinsight/insightface?tab=readme-ov-file#license).
+- [havok2-htwo](https://github.com/havok2-htwo): for sharing the code for webcam
+- [GosuDRM](https://github.com/GosuDRM): for the open version of roop
+- [pereiraroland26](https://github.com/pereiraroland26): Multiple faces support
+- [vic4key](https://github.com/vic4key): For supporting/contributing to this project
+- [kier007](https://github.com/kier007): for improving the user experience
+- [qitianai](https://github.com/qitianai): for multi-lingual support
+- [laurigates](https://github.com/laurigates): Decoupling things to make everything faster!
+- and [all developers](https://github.com/hacksider/Deep-Live-Cam/graphs/contributors) behind libraries used in this project.
+- Footnote: Please be informed that the base author of the code is [s0md3v](https://github.com/s0md3v/roop)
+- All the wonderful users who helped make this project go viral by starring the repo ❤️
[](https://github.com/hacksider/Deep-Live-Cam/stargazers)
@@ -377,10 +408,4 @@ Looking for a CLI mode? Using the -s/--source argument will make the run program
## Stars to the Moon 🚀
-
-