Performance drops from 8-10 it/s to 1.5-1.9 it/s #352

Open
opened 2022-12-02 11:20:57 +01:00 by ameen-roayan · 7 comments
ameen-roayan commented 2022-12-02 11:20:57 +01:00 (Migrated from github.com)

I am not sure what exactly is causing this. I went through some threads written recently, but none gave a specific method to verify that it is actually using the GPU.

on a 3090ti
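For a concrete way to confirm which device will be used, a minimal check like the following works; this is a standalone sketch (not part of SimSwap), and the imports are wrapped so it still runs when a package is missing:

```python
# Minimal GPU sanity check (a sketch, not part of SimSwap itself).
def gpu_report():
    lines = []
    try:
        import torch
        lines.append(f"torch {torch.__version__}: cuda available = {torch.cuda.is_available()}")
    except ImportError:
        lines.append("torch: not installed")
    try:
        import onnxruntime
        # 'CUDAExecutionProvider' must appear here for GPU inference to be possible.
        lines.append(f"onnxruntime providers = {onnxruntime.get_available_providers()}")
    except ImportError:
        lines.append("onnxruntime: not installed")
    return lines

if __name__ == "__main__":
    for line in gpu_report():
        print(line)
```

If `torch.cuda.is_available()` is `False` or `CUDAExecutionProvider` is missing from the provider list, inference will silently fall back to the CPU.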

ameen-roayan commented 2022-12-02 19:32:43 +01:00 (Migrated from github.com)

After much reading, I boosted the initial speed to 24 it/s, but it drops back to 1.5-1.9 it/s immediately; GPU utilization sits at around 30% at that point.

johnrom1243 commented 2022-12-08 05:35:20 +01:00 (Migrated from github.com)

How did you manage to increase the iterations? I'm having the same issue on a 3080 12 GB.

DrBlou commented 2022-12-08 20:19:15 +01:00 (Migrated from github.com)

Yes, I would like to know too. I spent days trying to get my RTX 3080 fully utilized, but it stays stuck at 20% max with 1.5-2.5 it/s. Very frustrating.

ameen-roayan commented 2022-12-12 07:34:33 +01:00 (Migrated from github.com)

From what I recall of that session, when checking which device was being used it was always opting for the CPU, so I had to remove onnxruntime and install the GPU version. That sent me down a rabbit hole of other things that needed to be replaced; I can't recall the order of things, it was quite a process. In the end it is still, like everyone else's, capped at 3 max even if it starts at 20.

DrBlou commented 2022-12-12 08:56:36 +01:00 (Migrated from github.com)
Just check out this repo: https://github.com/mike9251/simswap-inference-pytorch
ameen-roayan commented 2022-12-12 09:24:51 +01:00 (Migrated from github.com)

> Just check out this repo: https://github.com/mike9251/simswap-inference-pytorch

Yeah, did that as well, same results unfortunately.

DrBlou commented 2022-12-23 15:35:55 +01:00 (Migrated from github.com)

OK, I managed to make it work :)
![image](https://user-images.githubusercontent.com/92190077/209352545-fbce9217-0afd-4913-b266-dda8a91467b8.png)
![image](https://user-images.githubusercontent.com/92190077/209352604-faae6ae9-f5c2-4491-bc82-b1b32656bdba.png)

I suggest you uninstall everything and start with a fresh, clean install of Anaconda.

Here is what I did to make it work (I am only using the repo https://github.com/mike9251/simswap-inference-pytorch):

Install the latest graphics drivers
Install CUDA 11.6
Install cuDNN 8.6.0.163 (extract its folders into the CUDA folders)
Add the Windows environment variable paths:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\CUPTI
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\CUPTI\include
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\extras\CUPTI\lib64

```
conda create -n simswapgpu python=3.9
conda activate simswapgpu
pip install torch==1.13.1+cu116 torchvision==0.14.1+cu116 torchaudio===0.13.1+cu116 -f https://download.pytorch.org/whl/torch_stable.html
pip install -r requirements.txt
pip install onnxruntime-gpu==1.11.1
```
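As a quick post-install check (a sketch; the version strings in the comment are what this particular setup should report, assuming the cu116 wheels installed cleanly), you can confirm the CUDA build of torch was actually picked up:

```python
# Post-install check (a sketch): report which torch build is installed.
def torch_build():
    try:
        import torch
        # With the pip command above this should be '1.13.1+cu116' and '11.6'.
        return {"version": torch.__version__, "cuda": torch.version.cuda}
    except ImportError:
        return {"version": None, "cuda": None}

print(torch_build())
```

If `cuda` comes back as `None` with torch installed, a CPU-only wheel was installed and the pip step needs to be redone.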

Replace these lines in:

`Anaconda3\envs\simswap\Lib\site-packages\insightface\model_zoo\model_zoo.py`

```python
class ModelRouter:
    def __init__(self, onnx_file):
        self.onnx_file = onnx_file

    def get_model(self):
        session = onnxruntime.InferenceSession(self.onnx_file, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
        input_cfg = session.get_inputs()[0]
```