
I trained a model on top of the pre-trained yolov8n-seg.pt model in YOLO, but its results are worse than the pre-trained model's on the same image. I annotated around 150 images for person detection, using the label 'person', yet the new model performs significantly worse than yolov8n-seg.pt. Why does that happen?

The expectation was that, with more annotated samples, the pre-trained model would be fine-tuned to detect better on new images.
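For reference, fine-tuning with Ultralytics YOLOv8 is usually driven by a dataset YAML like the one below. This is a hypothetical sketch, not the asker's actual config — the paths and filename are assumptions:

```yaml
# data.yaml — hypothetical single-class dataset config for Ultralytics YOLOv8
path: datasets/person   # dataset root directory (assumption)
train: images/train     # training images, relative to 'path'
val: images/val         # validation images, relative to 'path'
names:
  0: person             # the only annotated class
```

Training would then start from the pre-trained weights, e.g. `YOLO("yolov8n-seg.pt").train(data="data.yaml", ...)`.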

  • Please add your training code and clarify the difference in performance between the two models. As the problem is currently described, there could be many reasons for the performance drop. Commented Apr 26, 2024 at 8:10

2 Answers


I have zero experience with YOLOv8. I maintain the previous YOLO framework, Darknet, and related tools like DarkHelp and DarkMark. But for certain, I can tell you that with previous versions of YOLO, if you train with 150 images, it will instantly (or very quickly) forget everything else, and you'll have just those 150 images to build your new weights. The other 80 classes (if for example we're talking about MSCOCO) will be completely forgotten.

If you want to add new images, you need to download the entire dataset for the previous weights, then add your images & annotations to that, and train with the entire thing.
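In YOLO terms, "add your images & annotations to that" means combining the original dataset's images and YOLO-format label files with your own before training. A minimal sketch of such a merge, assuming the common `images/` + `labels/` directory layout (the function name and paths are illustrative, not from the answer):

```python
import shutil
from pathlib import Path

def merge_yolo_datasets(src_dirs, dest_dir):
    """Copy images and YOLO-format .txt labels from several
    dataset directories into one combined dataset directory."""
    dest = Path(dest_dir)
    for sub in ("images", "labels"):
        (dest / sub).mkdir(parents=True, exist_ok=True)
    for src in src_dirs:
        src = Path(src)
        for sub in ("images", "labels"):
            for f in (src / sub).glob("*"):
                # Prefix with the source dir name to avoid filename collisions
                shutil.copy(f, dest / sub / f"{src.name}_{f.name}")
    return dest

# Hypothetical usage: combine the downloaded COCO person subset with
# your own 150 annotated images, then point data.yaml at "combined".
# merge_yolo_datasets(["coco_person", "my_annotations"], "combined")
```

The class indices in the label files must also agree across the merged datasets, otherwise the annotations will point at the wrong class names.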


1 Comment

Okay, thanks for the reply. In my case, though, what seems to have happened is that I had set the image size to 320, whereas the previous model appears to have been trained at an image size of 640. Changing the training image size to 640 significantly improved the results: with an image size of 320, mAP50 in the second training epoch was 0.0695; with 640, it is 0.731.

What seems to have happened is that I had set the image size to 320, whereas the previous model appears to have been trained at an image size of 640. Changing the training image size to 640 significantly improved the results: with an image size of 320, mAP50 in the second training epoch was 0.0695; with 640, it is 0.731.
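The resolution effect can be made concrete: YOLOv8's detection heads operate at strides 8, 16, and 32, so halving the training image size halves the feature-map resolution at every scale, which hurts small objects in particular. A plain arithmetic sketch (the strides are from the YOLOv8 architecture; the helper function itself is illustrative):

```python
# Feature-map sizes at each detection scale for a given training image size.
# YOLOv8 detection heads operate at strides 8, 16, and 32.
STRIDES = (8, 16, 32)

def feature_map_sizes(imgsz: int) -> list[int]:
    # YOLOv8 expects the image size to be divisible by the maximum stride.
    assert imgsz % 32 == 0, "imgsz must be divisible by 32"
    return [imgsz // s for s in STRIDES]

print(feature_map_sizes(320))  # [40, 20, 10]
print(feature_map_sizes(640))  # [80, 40, 20]
```

At imgsz=320 the coarsest feature map is only 10×10 cells, so people occupying a small fraction of the frame have far fewer cells to be detected in than at 640.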

