I am currently training a model using eIQ Toolkit with the COCO2017 dataset. Due to the large size of the dataset (~116,000 images), I reduced it to approximately 20% (about 23,000 images) and filtered out very small bounding boxes.
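For context, this is roughly how I produced the reduced annotation set. The snippet below is only a sketch of the approach, not my exact script: the annotation path, the 20% keep fraction, and the minimum-box-area threshold are illustrative values, and it assumes pycocotools is installed.

```python
# Illustrative sketch: subsample COCO2017 and drop very small bounding boxes.
# Paths and thresholds are placeholders, not the exact values I used.
import json
import random

from pycocotools.coco import COCO

ANN_FILE = "annotations/instances_train2017.json"  # placeholder path
KEEP_FRACTION = 0.20       # keep roughly 20% of the images
MIN_BOX_AREA = 32 * 32     # drop boxes smaller than this (illustrative threshold)

coco = COCO(ANN_FILE)

# Randomly keep ~20% of the image IDs.
img_ids = sorted(coco.getImgIds())
random.seed(0)
kept_ids = set(random.sample(img_ids, int(len(img_ids) * KEEP_FRACTION)))

# Keep only annotations on the sampled images whose boxes are large enough.
kept_anns = [
    ann for ann in coco.dataset["annotations"]
    if ann["image_id"] in kept_ids and ann["bbox"][2] * ann["bbox"][3] >= MIN_BOX_AREA
]
kept_imgs = [img for img in coco.dataset["images"] if img["id"] in kept_ids]

reduced = dict(coco.dataset, images=kept_imgs, annotations=kept_anns)
with open("instances_train2017_reduced.json", "w") as f:
    json.dump(reduced, f)
```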
Here are the training parameters I used.

Additionally, the following logs show the progress of the training.

Unfortunately, I cannot use a GPU due to hardware limitations, so training has been running on the CPU. The first 11 epochs took around 16 hours to complete (roughly 1.5 hours per epoch).
Looking at the logs, the accuracy metric does increase steadily, but the absolute values are extremely low, staying below 0.1, which is concerning. In my experience, training usually starts with an accuracy of at least 20%, but that has not happened here. I have tried adjusting various training parameters, but the results have been largely the same, with accuracy never exceeding 0.1.
My main question is: "Given the steady increase in accuracy, is this behavior considered normal?" While the accuracy seems to improve incrementally, the growth is extraordinarily slow, and it would take about a year to complete the training at this rate.
If this behavior is indeed normal, I assume I will need to upgrade my hardware to support proper training. However, if it's not normal, I am unsure where to start troubleshooting.
Any insights or suggestions would be greatly appreciated. Thank you!
eIQ-AUTO-ML-TOOLKIT