While training the model, training has become very slow. The input image size is 160*160 and the batch size is 32. Normally each step takes less than 1 second, and in earlier training runs it indeed never exceeded 1 second. Today, however, training is very slow: with the same parameters, each step takes about 3 seconds. GPU utilization and video memory usage still look normal, and I don't know how to resolve this.
CPU: Intel Core i5-10210U
GPU: NVIDIA GeForce MX250
RAM: 16 GB
GPU memory: 4 GB
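For reference, this is roughly how the per-step time could be measured. It is a minimal sketch assuming a standard PyTorch training loop; the model, dataloader, and other names below are placeholders, not taken from my actual script:

```python
import time
import torch

def time_training_steps(model, dataloader, optimizer, criterion, device="cuda"):
    """Print the wall-clock time of each training step (illustrative only)."""
    model.train()
    for step, (images, targets) in enumerate(dataloader):
        start = time.perf_counter()
        images, targets = images.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
        if device == "cuda":
            # Wait for queued GPU work to finish so the timing is accurate.
            torch.cuda.synchronize()
        print(f"step {step}: {time.perf_counter() - start:.2f} s")
```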
Hello @shaenhurst,
Can you provide more information about the dataset so that we can attempt to reproduce your issue on our side and analyze it? Are you using a publicly available dataset?
Thanks,
David