Thanks for the work and the notes! They really helped me understand the original project, but I still have a few questions about training with run.py.
I ran it on an A5000 with the batch size set to 4, but the speed barely improved, even though the number of iterations per epoch dropped to a quarter of what it was before.
Also, what is the difference between training with and without canonical images? And what train/loss did you see in your successful run?
Finally, is it possible to reduce how often the validation DataLoader runs? It currently runs every epoch and takes almost half of the total time. Do we really need to reload the dataset again and again?
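To illustrate what I mean, here is a minimal sketch (the `VAL_EVERY` constant and loop shape are my own assumptions, not run.py's actual code) of running validation only every N epochs instead of every epoch:

```python
# Hypothetical sketch: skip validation on most epochs and only run it
# every VAL_EVERY epochs, instead of after every single training epoch.
VAL_EVERY = 5  # assumed interval, tune to taste

def should_validate(epoch: int, val_every: int = VAL_EVERY) -> bool:
    """Return True when validation should run this (1-indexed) epoch."""
    return epoch % val_every == 0

# With VAL_EVERY = 5, validation would run on epochs 5, 10, 15, 20, ...
validated_epochs = [e for e in range(1, 21) if should_validate(e)]
print(validated_epochs)  # [5, 10, 15, 20]
```

Alternatively, if the cost is mostly in rebuilding the DataLoader workers each epoch, would constructing the validation DataLoader once outside the loop (or passing `persistent_workers=True` to PyTorch's `DataLoader`) be safe here?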
Thanks!