ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. #316

Open
opened 2022-08-19 10:16:50 +02:00 by santosadrian · 8 comments
santosadrian commented 2022-08-19 10:16:50 +02:00 (Migrated from github.com)

(simswap) C:\Users\foldd\Desktop\SimSwap>python test_video_swapmulti.py --crop_size 224 --use_mask --name people --Arc_path arcface_model/arcface_checkpoint.tar --pic_a_path lo.png --video_path pt.mp4 --output_path ./output/multi_test_swapmulti-pt.mp4 --temp_path ./temp
------------ Options -------------
Arc_path: arcface_model/arcface_checkpoint.tar
aspect_ratio: 1.0
batchSize: 8
checkpoints_dir: ./checkpoints
cluster_path: features_clustered_010.npy
crop_size: 224
data_type: 32
dataroot: ./datasets/cityscapes/
display_winsize: 512
engine: None
export_onnx: None
feat_num: 3
fineSize: 512
fp16: False
gpu_ids: [0]
how_many: 50
id_thres: 0.03
image_size: 224
input_nc: 3
instance_feat: False
isTrain: False
label_feat: False
label_nc: 0
latent_size: 512
loadSize: 1024
load_features: False
local_rank: 0
max_dataset_size: inf
model: pix2pixHD
multisepcific_dir: ./demo_file/multispecific
nThreads: 2
n_blocks_global: 6
n_blocks_local: 3
n_clusters: 10
n_downsample_E: 4
n_downsample_global: 3
n_local_enhancers: 1
name: people
nef: 16
netG: global
ngf: 64
niter_fix_global: 0
no_flip: False
no_instance: False
no_simswaplogo: False
norm: batch
norm_G: spectralspadesyncbatch3x3
ntest: inf
onnx: None
output_nc: 3
output_path: ./output/multi_test_swapmulti-pt.mp4
phase: test
pic_a_path: lo.png
pic_b_path: ./crop_224/zrf.jpg
pic_specific_path: ./crop_224/zrf.jpg
resize_or_crop: scale_width
results_dir: ./results/
semantic_nc: 3
serial_batches: False
temp_path: ./temp
tf_log: False
use_dropout: False
use_encoded_image: False
use_mask: True
verbose: False
video_path: pt.mp4
which_epoch: latest
-------------- End ----------------
Traceback (most recent call last):
File "test_video_swapmulti.py", line 58, in
app = Face_detect_crop(name='antelope', root='./insightface_func/models')
File "C:\Users\foldd\Desktop\SimSwap\insightface_func\face_detect_crop_multi.py", line 40, in init
model = model_zoo.get_model(onnx_file)
File "C:\ProgramData\Anaconda3\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
model = router.get_model()
File "C:\ProgramData\Anaconda3\envs\simswap\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
session = onnxruntime.InferenceSession(self.onnx_file, None)
File "C:\ProgramData\Anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 335, in init
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\ProgramData\Anaconda3\envs\simswap\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 364, in _create_inference_session
"onnxruntime.InferenceSession(..., providers={}, ...)".format(available_providers))
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

skintflickz commented 2022-08-19 13:09:53 +02:00 (Migrated from github.com)

Is there a question in that post somewhere?

santosadrian commented 2022-08-19 14:15:28 +02:00 (Migrated from github.com)

Sorry, I thought that this was not a forum for posting questions. But since you ask, sure... how can I fix this error?

Thank you.

skintflickz commented 2022-08-19 14:23:01 +02:00 (Migrated from github.com)

Ah ok..
Well, the way I fixed it (I use a conda environment, btw):
Uninstall onnxruntime-gpu
pip install onnxruntime-gpu==1.9.0

It has something to do with the latest version 1.12.0 and the CUDA version, I think; I haven't been able to work it out.

Somehow that sorted the problem you have above.

However, it made very little difference to processing speed on my K80: it went from 1.2 it/s to 1.48 it/s.
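
To check whether the downgrade actually took effect, here is a minimal sanity-check sketch (nothing SimSwap-specific; it only queries whatever onnxruntime build is installed in the active environment):

```python
import onnxruntime as ort

# Print the installed ONNX Runtime version and the execution providers
# this particular build exposes; the list depends on whether onnxruntime
# or onnxruntime-gpu is installed and on the local CUDA setup.
print(ort.__version__)
print(ort.get_available_providers())
```

If CUDAExecutionProvider does not appear in that list, requesting it later will not give you GPU inference; the installed ORT build itself has to include it.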

mossan0101 commented 2022-08-29 06:18:42 +02:00 (Migrated from github.com)

Try this one: https://github.com/mike9251/simswap-inference-pytorch
It is faster than the official repository and supports the RTX3000 series for inference.

k128 commented 2022-09-18 02:26:50 +02:00 (Migrated from github.com)

I was able to fix this error by changing site-packages\insightface\model_zoo\model_zoo.py:23
from: session = onnxruntime.InferenceSession(self.onnx_file, None)
to: session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
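
If you would rather not edit a file inside site-packages, an alternative is to wrap InferenceSession in your own code. This is an untested sketch, not part of SimSwap or insightface, and it assumes it runs before insightface builds any session (e.g. near the top of test_video_swapmulti.py):

```python
import onnxruntime

# Keep a reference to the real class, then install a thin wrapper that
# injects a default provider list whenever the caller (here insightface's
# model_zoo.py) does not pass one.
_OriginalInferenceSession = onnxruntime.InferenceSession

def _inference_session_with_providers(path_or_bytes, sess_options=None,
                                      providers=None, **kwargs):
    if providers is None:
        providers = ['CUDAExecutionProvider', 'CPUExecutionProvider']
    return _OriginalInferenceSession(path_or_bytes, sess_options,
                                     providers=providers, **kwargs)

onnxruntime.InferenceSession = _inference_session_with_providers
```

This works only because model_zoo.py looks up onnxruntime.InferenceSession at call time, so the wrapper is picked up as long as the snippet has already run.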

white0rchardpUnK commented 2022-09-28 08:27:16 +02:00 (Migrated from github.com)

> Ah ok..
> Well, the way I fixed it (I use a conda environment, btw):
> Uninstall onnxruntime-gpu
> pip install onnxruntime-gpu==1.9.0
>
> It has something to do with the latest version 1.12.0 and the CUDA version, I think; I haven't been able to work it out.
>
> Somehow that sorted the problem you have above.
>
> However, it made very little difference to processing speed on my K80: it went from 1.2 it/s to 1.48 it/s.

I have the same problem with my 10-series graphics card.
The old version does need to be installed, because the training code has not been updated for the newer onnxruntime release.

woctezuma commented 2023-09-20 23:26:03 +02:00 (Migrated from github.com)

> I was able to fix this error by changing site-packages\insightface\model_zoo\model_zoo.py:23
> from: session = onnxruntime.InferenceSession(self.onnx_file, None)
> to: session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])

I can confirm that editing the following file works.

File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 23

Before:

    def get_model(self):
        session = onnxruntime.InferenceSession(self.onnx_file, None)

After:

    def get_model(self):
        session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['AzureExecutionProvider', 'CPUExecutionProvider'])

Related:

- https://github.com/neuralchen/SimSwap/issues/445

TransAmMan commented 2023-09-27 03:00:55 +02:00 (Migrated from github.com)

> > I was able to fix this error by changing site-packages\insightface\model_zoo\model_zoo.py:23
> > from: session = onnxruntime.InferenceSession(self.onnx_file, None)
> > to: session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
>
> I can confirm that editing the following file works.
>
> File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 23
>
> Before:
>
>     def get_model(self):
>         session = onnxruntime.InferenceSession(self.onnx_file, None)
>
> After:
>
>     def get_model(self):
>         session = onnxruntime.InferenceSession(self.onnx_file, None, providers=['AzureExecutionProvider', 'CPUExecutionProvider'])
>
> Related:
>
> * [Problem ValueError with SimSwap #445](https://github.com/neuralchen/SimSwap/issues/445)

Can you help me implement this fix for a hosted GPU runtime?
