Do not install both onnxruntime and onnxruntime-gpu #124

Open
opened 2023-09-22 14:32:51 +02:00 by andypotato · 6 comments
andypotato commented 2023-09-22 14:32:51 +02:00 (Migrated from github.com)

In requirements-gpu.txt, the requirement for onnxruntime should be removed; only onnxruntime-gpu should be kept.

Installing both onnxruntime and onnxruntime-gpu leads to undefined behavior, as the CUDAExecutionProvider may not be available even though the GPU runtime package is installed. With both installed, onnxruntime.get_device() may return "CPU" or "GPU" unpredictably.

Note that onnxruntime-gpu also contains the CPUExecutionProvider as a fallback, so the CPU-only package is not needed.

glucauze commented 2023-09-22 17:39:53 +02:00 (Migrated from github.com)

Thanks for your feedback, I will try to test and fix that when I have time.

KINGLIFER commented 2023-09-28 15:48:32 +02:00 (Migrated from github.com)

fix?

andypotato commented 2023-09-28 23:55:48 +02:00 (Migrated from github.com)

Workaround for now:

  • Activate venv
  • Uninstall onnxruntime
  • Remove requirement from requirements-gpu.txt to prevent automatic reinstall on startup

Make sure you KEEP onnxruntime-gpu in requirements-gpu.txt and don't uninstall it.
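After applying the workaround, a stdlib-only check can confirm that only the GPU package remains. A sketch (the strings are the PyPI distribution names, not module names):

```python
from importlib import metadata

def installed_versions(pkgs=("onnxruntime", "onnxruntime-gpu")):
    """Map each distribution name to its installed version, or None if absent."""
    versions = {}
    for pkg in pkgs:
        try:
            versions[pkg] = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions

# Expected after the workaround: onnxruntime -> None, onnxruntime-gpu -> a version string.
print(installed_versions())
```

Note that both distributions install the same `onnxruntime` module, which is why a plain `import onnxruntime` cannot tell you which package is actually present.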

venshine commented 2023-10-26 16:08:28 +02:00 (Migrated from github.com)

Workaround for now:

  • Activate venv
  • Uninstall onnxruntime
  • Remove requirement from requirements-gpu.txt to prevent automatic reinstall on startup

Make sure you KEEP onnxruntime-gpu in requirements-gpu.txt and don't uninstall it.

I already uninstalled onnxruntime, but got this error:

Error loading script: faceswaplab.py
    Traceback (most recent call last):
      File "/workspace/stable-diffusion-webui/modules/scripts.py", line 382, in load_scripts
        script_module = script_loading.load_module(scriptfile.path)
      File "/workspace/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
        module_spec.loader.exec_module(module)
      File "<frozen importlib._bootstrap_external>", line 883, in exec_module
      File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
      File "/workspace/stable-diffusion-webui/extensions/sd-webui-faceswaplab/scripts/faceswaplab.py", line 10, in <module>
        from scripts.faceswaplab_api import faceswaplab_api
      File "/workspace/stable-diffusion-webui/extensions/sd-webui-faceswaplab/scripts/faceswaplab_api/faceswaplab_api.py", line 12, in <module>
        from scripts.faceswaplab_swapping import swapper
      File "/workspace/stable-diffusion-webui/extensions/sd-webui-faceswaplab/scripts/faceswaplab_swapping/swapper.py", line 14, in <module>
        import insightface
      File "/root/miniconda3/envs/diffusion/lib/python3.10/site-packages/insightface/__init__.py", line 16, in <module>
        from . import model_zoo
      File "/root/miniconda3/envs/diffusion/lib/python3.10/site-packages/insightface/model_zoo/__init__.py", line 1, in <module>
        from .model_zoo import get_model
      File "/root/miniconda3/envs/diffusion/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py", line 22, in <module>
        class PickableInferenceSession(onnxruntime.InferenceSession):
    AttributeError: module 'onnxruntime' has no attribute 'InferenceSession'
andypotato commented 2023-10-26 16:33:17 +02:00 (Migrated from github.com)

With your venv activated, start up python and type:

import onnxruntime as rt
rt.get_device()

This will print your backend, either GPU or CPU. If you get an error like "no attribute", then onnxruntime is not properly installed.
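If the import itself is in doubt (e.g. after the "no attribute" error above), it can also help to check where Python resolves the module from. A stdlib-only sketch:

```python
import importlib.util

# A partial or broken install can leave a module that imports but is empty;
# spec.origin shows which files Python is actually picking up.
spec = importlib.util.find_spec("onnxruntime")
print(spec.origin if spec else "onnxruntime not found")
```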

venshine commented 2023-10-27 06:14:45 +02:00 (Migrated from github.com)

With your venv activated, start up python and type:

import onnxruntime as rt
rt.get_device()

This will print your backend, either GPU or CPU. If you get an error like "no attribute", then onnxruntime is not properly installed.

I already uninstalled onnxruntime and installed onnxruntime-gpu, but I still get the error.
