Only using 15% of GPU for videos, is this normal? #315

Open
opened 2022-08-18 02:53:39 +02:00 by skintflickz · 7 comments
skintflickz commented 2022-08-18 02:53:39 +02:00 (Migrated from github.com)
No description provided.
MARCOCHEUNG0124 commented 2022-08-18 08:49:23 +02:00 (Migrated from github.com)

Probably not. I recommend checking your training parameters.
skintflickz commented 2022-08-18 12:57:42 +02:00 (Migrated from github.com)

> Probably no. I recommend you to check your training parameters

I'm using the base training parameters.
It uses exactly 1 GB of VRAM every time I run the script.

Could you offer any suggestions on what I need to adjust to utilise the full GPU?

Also, is it normal that the CPU (AMD Ryzen 5 2600) is running at 50% and using 12 GB of RAM?

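One quick sanity check (assuming an NVIDIA card with the driver's `nvidia-smi` tool installed) is to watch what the GPU is actually doing while the script runs; this is a generic snippet, not specific to this project:

```shell
# Print a snapshot of GPU utilization and memory use; append `-l 1` to
# refresh every second while the training script runs.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=utilization.gpu,memory.used,memory.total --format=csv
else
    echo "nvidia-smi not found - NVIDIA driver may not be installed"
fi
```

If utilization stays low while VRAM sits at a fixed 1 GB, that points at either a small batch size or a memory cap rather than a driver problem.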
skintflickz commented 2022-08-18 14:19:23 +02:00 (Migrated from github.com)

I found this in ONNX Runtime and was wondering if this is the problem?

```python
import onnxruntime as ort

model_path = '<path to model>'

providers = [
    ('CUDAExecutionProvider', {
        'device_id': 0,
        'arena_extend_strategy': 'kNextPowerOfTwo',
        'gpu_mem_limit': 2 * 1024 * 1024 * 1024,
        'cudnn_conv_algo_search': 'EXHAUSTIVE',
        'do_copy_in_default_stream': True,
    }),
    'CPUExecutionProvider',
]

session = ort.InferenceSession(model_path, providers=providers)
```

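For reference, the `gpu_mem_limit` in that snippet caps ONNX Runtime's CUDA memory arena at 2 GiB; whether it explains the low compute utilization is another question, but it would explain a hard ceiling on VRAM use. A small, pure-Python sketch (no onnxruntime import needed) of what the number means and what a hypothetical larger cap would look like, assuming the card has the memory to spare:

```python
# The limit in the snippet above, in bytes:
gpu_mem_limit = 2 * 1024 * 1024 * 1024
print(gpu_mem_limit)  # 2147483648 bytes = 2 GiB

# A hypothetical provider list with a larger 6 GiB arena; the other
# options are the same as in the snippet above.
cuda_options = {
    'device_id': 0,
    'arena_extend_strategy': 'kNextPowerOfTwo',
    'gpu_mem_limit': 6 * 1024 * 1024 * 1024,
    'cudnn_conv_algo_search': 'EXHAUSTIVE',
    'do_copy_in_default_stream': True,
}
providers = [('CUDAExecutionProvider', cuda_options), 'CPUExecutionProvider']
print(providers[0][1]['gpu_mem_limit'])  # 6442450944
```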
MARCOCHEUNG0124 commented 2022-08-22 04:10:54 +02:00 (Migrated from github.com)

> I found this in ONNX was wondering if this is the problem ?
>
> ```python
> import onnxruntime as ort
>
> model_path = '<path to model>'
>
> providers = [
>     ('CUDAExecutionProvider', {
>         'device_id': 0,
>         'arena_extend_strategy': 'kNextPowerOfTwo',
>         'gpu_mem_limit': 2 * 1024 * 1024 * 1024,
>         'cudnn_conv_algo_search': 'EXHAUSTIVE',
>         'do_copy_in_default_stream': True,
>     }),
>     'CPUExecutionProvider',
> ]
>
> session = ort.InferenceSession(model_path, providers=providers)
> ```

Congratulations! I saw you had solved this issue.

machinbidule62 commented 2022-08-23 03:15:11 +02:00 (Migrated from github.com)

How do you solve this? Please.
skintflickz commented 2022-08-23 14:10:04 +02:00 (Migrated from github.com)

> how to solve ? please

I've moved on to other, less frustrating things.

ichason commented 2022-09-23 08:33:09 +02:00 (Migrated from github.com)

> how to solve ? please
