Slow inference | GPU Issue #147

Open
opened 2021-11-07 06:46:27 +01:00 by SaroshZDandoti · 5 comments
SaroshZDandoti commented 2021-11-07 06:46:27 +01:00 (Migrated from github.com)

Hello, I can't seem to get my GPU to be used for training.
I have:

- GTX 1650 4GB
- Windows 11
- CUDA 11.5
- ONNX Runtime GPU version: 1.9

I have these installed, but my GPU is not being used.

Training and everything works just fine, but it's very slow.
I'm getting about 2.5 it/s.

Please help me figure out how I can fix this.
I have read other similar issues and can't seem to find an answer.
If I try to install older CUDA versions, they are not available for Windows 11.
Thanks in advance!

Edit: slow inference, not training.
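One quick way to diagnose this is to check which execution providers the installed ONNX Runtime build actually offers. A minimal sketch, assuming the `onnxruntime-gpu` package; `get_available_providers()` and the provider names are real ONNX Runtime APIs, while `preferred_providers` is just a hypothetical helper for illustration. If `CUDAExecutionProvider` is missing from the list, either the plain CPU wheel is installed or the installed CUDA/cuDNN versions don't match what this ORT build was compiled against.

```python
# Sketch: check whether onnxruntime can see the GPU at all.
import importlib.util


def preferred_providers(available):
    """Hypothetical helper: order providers so CUDA is tried first when present."""
    order = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return [p for p in order if p in available]


# Guarded import so the snippet also runs where onnxruntime is absent.
if importlib.util.find_spec("onnxruntime") is not None:
    import onnxruntime as ort

    avail = ort.get_available_providers()
    print("Available providers:", avail)
    # Pass the providers explicitly when creating the session, e.g.:
    # sess = ort.InferenceSession("model.onnx",
    #                             providers=preferred_providers(avail))
```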

jz-exwzd commented 2021-11-08 01:06:19 +01:00 (Migrated from github.com)

Did you write the training code yourself?

SaroshZDandoti commented 2021-11-08 03:38:32 +01:00 (Migrated from github.com)

> Did you write the training code yourself?

What? No, I'm just trying to run the scripts from the Anaconda Prompt. Everything runs fine, but it's very slow.

jz-exwzd commented 2021-11-08 04:22:15 +01:00 (Migrated from github.com)

> > Did you write the training code yourself?
>
> What? No, I'm just trying to run the scripts from the Anaconda Prompt. Everything runs fine, but it's very slow.

It is not training, then, as stated in the title of the issue.
It should be "slow inference".
In fact, they do not provide the code for training, just inference.
Correct me if I am wrong.

SaroshZDandoti commented 2021-11-08 05:21:16 +01:00 (Migrated from github.com)

> > > Did you write the training code yourself?
> >
> > What? No, I'm just trying to run the scripts from the Anaconda Prompt. Everything runs fine, but it's very slow.
>
> It is not training, then, as stated in the title of the issue. It should be "slow inference". In fact, they do not provide the code for training, just inference. Correct me if I am wrong.

Yep, you are right; that's my bad, and I apologize for being careless about it.
Anyway, I don't know what's wrong with it.

I have also checked whether the GPU is available with `torch.cuda.is_available()`, and it returns `True`.
But it seems that the GPU is still not being used at all.
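Note that `torch.cuda.is_available()` only tells you that PyTorch can see the GPU; ONNX Runtime resolves its execution providers separately, so a session can still silently fall back to CPU. A sketch of how to check what a session actually resolved to, assuming the `onnxruntime-gpu` package; `InferenceSession.get_providers()` is a real ONNX Runtime API, while `uses_cuda` and `model.onnx` are hypothetical names for illustration.

```python
# Sketch: a session may fall back to CPU even when CUDA hardware exists.
import importlib.util


def uses_cuda(session_providers):
    """Hypothetical helper: True if the resolved providers include CUDA."""
    return "CUDAExecutionProvider" in session_providers


# Guarded import so the snippet also runs where onnxruntime is absent.
if importlib.util.find_spec("onnxruntime") is not None:
    import onnxruntime as ort

    # sess = ort.InferenceSession(
    #     "model.onnx",  # hypothetical model path
    #     providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    # )
    # print(uses_cuda(sess.get_providers()))  # False means CPU-only inference
```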

whenupdate commented 2021-11-20 17:21:24 +01:00 (Migrated from github.com)

I have the same issue; at most 2 GB of the GPU is being used. Did you solve this issue?
