
I am creating a YOLOv8 model and loading some pre-trained weights. I then want to use that model to run inference on some images, but I want to specify that the inference should run on the GPU. Is it possible to do this when creating the YOLO model?

I am loading the model like this:

model = YOLO("yolov8n.pt") 

but when I pass in a device like so:

model = YOLO("yolov8n.pt", device='gpu') 

I get an unexpected argument error:

TypeError: __init__() got an unexpected keyword argument 'device'

3 Answers


To move a YOLO model to the GPU, use the PyTorch .to() syntax like so:

model = YOLO("yolov8n.pt") 
model.to('cuda')

some useful docs here

You can also explicitly run a prediction and specify the device. See docs here

model.predict(source, save=True, imgsz=320, conf=0.5, device=0)  # device can be 'cpu', 0, 1, or 'cuda:0'
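
Putting the two together, a minimal end-to-end sketch could look like the following (the image path is just an illustrative placeholder, and it assumes a CUDA-capable GPU is visible to PyTorch):

import torch
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # load pre-trained weights

if torch.cuda.is_available():
    model.to("cuda")                  # move the weights onto the first GPU

# 'image.jpg' is a placeholder path; device=0 pins inference to GPU 0
results = model.predict("image.jpg", imgsz=320, conf=0.5,
                        device=0 if torch.cuda.is_available() else "cpu")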

3 Comments

I'm having the same issue. I've trained the model with the GPU enabled (yolo detect train model=yolov8m.pt data=data/pasxalitses.yaml imgsz=640 workers=8 batch=16 device=0 epochs=3000 name=yolov8_pasx_t100m) and I've set the model to run on the GPU, but it keeps using the CPU instead. I'm working on a Jetson Orin AGX developer kit, with CUDA enabled for both OpenCV and PyTorch. Is there any other way to enable the GPU?
Updated my answer with some more docs; maybe those will help.
When using the call from the question (model = YOLO("yolov8n.pt", device='gpu')) I get the same error as well. I've tried model = YOLO("dnn_model/yolov8l.pt") and model.to(device) with the device set to cuda, but that doesn't seem to work either. model.predict didn't work either, as it took more than 1 min to run. For model.predict I set device=0, since everything else throws an exception.

The best way to do this is to specify the device in the method for the task at hand (predict/train/val, ...), not only to move the model to the GPU: that alone can lead to a data-model device mismatch.

from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# force inference to run on the CPU with the device flag
results = model.predict(source="0", show=True, stream=True, classes=0, device='cpu')
# train on GPU 1
model.train(data="coco128.yaml", epochs=100, imgsz=640, device=1)
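
The same device argument is accepted by the other task methods too; for example, a validation run on GPU 0 could look like this (a short sketch reusing the coco128.yaml example dataset from above):

model.val(data="coco128.yaml", imgsz=640, device=0)  # validate on GPU 0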



To use the GPU with YOLOv8, ensure that your CUDA and cuDNN versions are compatible with your PyTorch installation.

Then install PyTorch, torchvision, and torchaudio with CUDA support using this command:

pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Then check that PyTorch can see your GPU:

import torch
print(torch.cuda.get_device_name())

If your GPU name is printed, e.g. NVIDIA GeForce RTX 3050 Laptop GPU, then you can use the GPU with YOLOv8.
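
For a slightly more complete sanity check, this sketch (using only standard torch calls) prints what PyTorch can actually see:

import torch

print(torch.__version__)              # installed PyTorch version
print(torch.version.cuda)             # CUDA version PyTorch was built against (None for CPU-only builds)
print(torch.cuda.is_available())      # True if a usable GPU is detected
print(torch.cuda.device_count())      # number of visible GPUs
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of the first GPU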

1 Comment

I get the positive response you mentioned, and nvidia-smi shows heavy GPU use during training, but despite applying all of the above suggestions I can't get inference to use the GPU. All inference is happening on the CPU.
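
One way to narrow this down is to check which device the loaded weights actually sit on and to pass the device explicitly at prediction time as well. A sketch, assuming the underlying nn.Module is exposed as model.model (as in recent ultralytics releases) and using a placeholder image path:

import torch
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
model.to("cuda")

# confirm where the model parameters actually live; should print cuda:0
print(next(model.model.parameters()).device)

# 'image.jpg' is a placeholder path; device=0 forces inference onto GPU 0
results = model.predict("image.jpg", device=0)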
