Same error messages every time for videos (doesn't happen when making a cropped image) #366

Open
opened 2022-12-22 06:17:38 +01:00 by Nugget2920 · 5 comments
Nugget2920 commented 2022-12-22 06:17:38 +01:00 (Migrated from github.com)

File "D:\SimSwap\SimSwap\test_video_swapsingle.py", line 58, in <module>
app = Face_detect_crop(name='antelope', root='./insightface_func/models')
File "D:\SimSwap\SimSwap\insightface_func\face_detect_crop_single.py", line 40, in __init__
model = model_zoo.get_model(onnx_file)
File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
model = router.get_model()
File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
session = onnxruntime.InferenceSession(self.onnx_file, None)
File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 375, in _create_inference_session
raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

Nugget2920 commented 2022-12-22 06:38:21 +01:00 (Migrated from github.com)

Also, pretty much every other time I use it, this error pops up:

raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

CoderDudeBrent commented 2023-01-06 04:16:11 +01:00 (Migrated from github.com)

I'm getting this error in my Anaconda setup when trying to run this. I can't figure it out either. And there is a problem with the Colab version as well.

strider1716 commented 2023-03-08 01:28:08 +01:00 (Migrated from github.com)

Here is how I solved this issue.

Go to "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\model_zoo\model_zoo.py" and edit the file at line 56 or just above it.

I did this in conda, so I edited model_zoo.py in "C:\Users\Jatin\anaconda3\envs\simswap\Lib\site-packages\insightface\model_zoo". I replaced

```python
def get_model(self):
    session = onnxruntime.InferenceSession(self.onnx_file, None)
```

with

```python
def get_model(self):
    session = onnxruntime.InferenceSession(self.onnx_file, providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
```

After this I had errors about np.float in the reverse2original file. I changed np.float to np.float64 on three lines in that Python file.
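For reference on the np.float part: np.float was a deprecated alias for the builtin float and was removed in NumPy 1.24, so any line still using it raises an AttributeError. A quick sketch of the replacement (the array shape here is just an example):

```python
import numpy as np

# np.float was removed in NumPy 1.24; use the builtin float or an
# explicit dtype such as np.float64 instead.
# Old (raises AttributeError on NumPy >= 1.24):
#     mask = np.zeros((4, 4), dtype=np.float)
# New:
mask = np.zeros((4, 4), dtype=np.float64)
print(mask.dtype)  # float64
```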

gillesvandevoorde commented 2023-04-15 18:39:30 +02:00 (Migrated from github.com)

solved it for me too! thanks!

running a GTX 3060 Ti with CUDA 12.0 on Windows 11

  • using Anaconda and the preparation steps
  • Python 3.8
  • conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
  • the changes highlighted above by strider1716
Liam6666 commented 2023-05-11 14:32:09 +02:00 (Migrated from github.com)

mark
