Total new to all this please help #293

Open
opened 2022-07-04 03:26:53 +02:00 by LongjonSlim · 3 comments
LongjonSlim commented 2022-07-04 03:26:53 +02:00 (Migrated from github.com)

I followed the tutorial exactly; however, when I try to run it I get this:

Traceback (most recent call last):
File "test_video_swapsingle.py", line 58, in <module>
app = Face_detect_crop(name='antelope', root='./insightface_func/models')
File "E:\Anaconda\envs\simswap\SimSwap-main\insightface_func\face_detect_crop_single.py", line 40, in __init__
model = model_zoo.get_model(onnx_file)
File "E:\Anaconda\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
model = router.get_model()
File "E:\Anaconda\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
session = onnxruntime.InferenceSession(self.onnx_file, None)
File "E:\Anaconda\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "E:\Anaconda\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 364, in _create_inference_session
"onnxruntime.InferenceSession(..., providers={}, ...)".format(available_providers))
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

Any help would be great, thank you.
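The ValueError above states the required fix: since onnxruntime 1.9, `InferenceSession` must be given an explicit `providers` list. A minimal sketch of the one-line change in insightface's `model_zoo.py` (line 23 in the traceback) might look like this; the `ort_module` parameter is a stand-in introduced here only so the sketch can be exercised without a GPU — in the real file it is simply the imported `onnxruntime` module:

```python
def make_session(onnx_file, ort_module):
    """Create an InferenceSession with an explicit providers list (ORT >= 1.9).

    Sketch of the call that model_zoo.py would need instead of
    `onnxruntime.InferenceSession(self.onnx_file, None)`.
    """
    # ORT tries the providers in order, falling back to the next one,
    # so CPU last is a safe default even on machines without CUDA.
    providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
    return ort_module.InferenceSession(onnx_file, providers=providers)
```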

Christosioan commented 2022-07-04 17:49:00 +02:00 (Migrated from github.com)

I've had the same problem and I tried adding the providers. However, now I am getting this:

app = Face_detect_crop(name='antelope', root='./insightface_func/models', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
TypeError: __init__() got an unexpected keyword argument 'providers'
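This TypeError arises because `Face_detect_crop.__init__` (as shipped) declares no `providers` parameter, so the keyword is rejected before it can reach onnxruntime. A hypothetical sketch of accepting the keyword and forwarding it downward — `FaceDetectCropSketch` and the `loader` parameter are illustrative stand-ins for the real class and for `model_zoo.get_model`, not the project's actual API:

```python
class FaceDetectCropSketch:
    def __init__(self, name, root='./models', providers=None, loader=None):
        # Default to CPU if the caller passes nothing.
        self.providers = providers or ['CPUExecutionProvider']
        # Forward the list to the model loader instead of dropping it here.
        self.model = (loader(f'{root}/{name}.onnx', providers=self.providers)
                      if loader else None)
```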

sshivs commented 2022-07-16 03:42:21 +02:00 (Migrated from github.com)

Not the OP, but some more changes are needed to pass it further, up to the ONNX inference object.
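An end-to-end sketch of the forwarding chain being described: the providers list has to travel from the test script, through the `Face_detect_crop` wrapper, through `model_zoo.get_model`, and finally into `onnxruntime.InferenceSession`. The names below mirror the traceback, but every class here is a stub invented for illustration so the sketch runs without onnxruntime installed:

```python
class StubInferenceSession:
    """Stands in for onnxruntime.InferenceSession (ORT >= 1.9 behavior)."""
    def __init__(self, onnx_file, providers=None):
        if providers is None:
            raise ValueError('explicit providers required since ORT 1.9')
        self.onnx_file = onnx_file
        self.providers = providers

def get_model(onnx_file, providers=None):
    """Stands in for insightface's model_zoo.get_model; forwards providers."""
    return StubInferenceSession(onnx_file, providers=providers)

class FaceDetectCrop:
    """Stands in for Face_detect_crop; accepts and forwards providers."""
    def __init__(self, name, root='.', providers=None):
        self.session = get_model(f'{root}/{name}.onnx', providers=providers)

# Usage mirroring the call in test_video_swapsingle.py:
app = FaceDetectCrop('antelope', root='./insightface_func/models',
                     providers=['CUDAExecutionProvider',
                                'CPUExecutionProvider'])
```

Each layer simply accepts the keyword and hands it to the next; with any link in the chain missing, the call fails with either the TypeError or the ValueError seen above.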

Mayorc1978 commented 2023-03-07 22:54:14 +01:00 (Migrated from github.com)

I have the same problem and haven't solved it; can you specify the other changes needed to make it work? Even with ExponentialML's corrections I still get the same error.
