Slow inference | GPU Issue #147
Hello, I can't seem to get my GPU to be used for training.
I have:
gtx 1650 4GB
windows 11
CUDA: 11.5
ONNX Runtime GPU version: 1.9
I have these installed, but my GPU is not being used.
Training and everything is working just fine, but it's very slow. I'm getting around 2.5 it/s.
Please help me figure out how I can fix this.
I have read other similar issues and can't seem to find an answer.
Older CUDA versions are not available for Windows 11, so I can't install them.
Thanks in advance!
Edit: slow inference, not training.
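A quick first check is whether the installed ONNX Runtime build actually exposes the CUDA execution provider at all. A minimal diagnostic sketch (assuming the `onnxruntime-gpu` package is what you intended to install; note that having the CPU-only `onnxruntime` package installed alongside it can shadow the GPU build):

```python
try:
    import onnxruntime as ort
    # The GPU package (onnxruntime-gpu) exposes "CUDAExecutionProvider";
    # the CPU-only package (onnxruntime) does not.
    providers = ort.get_available_providers()
except ImportError:
    providers = []  # onnxruntime is not importable in this environment

print("Available providers:", providers)
print("CUDA usable:", "CUDAExecutionProvider" in providers)
```

If `CUDAExecutionProvider` does not appear in the list, the session will silently fall back to the CPU, which would explain the slow it/s regardless of what the driver or PyTorch reports.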
Did you write the training code yourself?
What? No, I'm just trying to run the scripts from the Anaconda Prompt; everything runs fine, but it's very slow.
Then it is not training, as stated in the title of your issue.
It should be "slow inference".
In fact, they do not provide the code for training, only for inference.
Correct me if I am wrong.
Yep, you are right; that's my bad, and I apologize for being careless about it.
Anyway, I don't know what's wrong with it.
I have also checked whether the GPU is available with torch.cuda.is_available(), and it returns True.
But it seems the GPU is still not being used at all.
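Note that `torch.cuda.is_available()` only confirms that PyTorch can see the GPU; ONNX Runtime chooses its execution provider per session, and in recent versions it is safer to request the CUDA provider explicitly when creating the session. A minimal sketch (the `"model.onnx"` path in the commented usage is a placeholder, not from this repository):

```python
def preferred_providers(available):
    """Order execution providers so CUDA is tried first, with CPU fallback."""
    order = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    chosen = [p for p in order if p in available]
    return chosen or ["CPUExecutionProvider"]

# Hypothetical usage, assuming onnxruntime-gpu is installed:
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "model.onnx",
#     providers=preferred_providers(ort.get_available_providers()),
# )
# print(sess.get_providers())  # "CUDAExecutionProvider" should come first
```

If the session's active provider list starts with `CPUExecutionProvider` even though CUDA is available, the usual suspects are a CPU-only `onnxruntime` install shadowing `onnxruntime-gpu`, or a CUDA/cuDNN version that does not match what that ONNX Runtime build was compiled against.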
I have the same issue; at most 2 GB of GPU memory is ever used. Did you solve this?