Some people have asked me why I use 96x96 images: surely the full image shouldn't make that big of a difference? Well, actually it does. But to give a quantitative answer, I had to run model training with the full image retrieved from the camera, which is 176x144 pixels.
Training results:
176x144: loss: 0.3612 - accuracy: 0.8543 - val_loss: 0.3854 - val_accuracy: 0.8390
96x96:   loss: 0.0098 - accuracy: 0.9958 - val_loss: 0.6703 - val_accuracy: 0.9280
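To show what actually changes between the two runs, here is a minimal training sketch. The dataset paths, layer sizes and epoch count are placeholders rather than my exact setup; the point is that the input resolution is a single constant.

```python
import tensorflow as tf

# The only thing that differs between the two runs is the input resolution:
# (96, 96) vs. the full camera frame (144, 176) as (height, width). Grayscale.
IMG_SIZE = (96, 96)   # swap for (144, 176) to train on the full frame

# Placeholder dataset directories; the real loader will differ.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, color_mode="grayscale",
    label_mode="binary", batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val", image_size=IMG_SIZE, color_mode="grayscale",
    label_mode="binary", batch_size=32)

# Small CNN as a stand-in for the actual architecture.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (1,)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=20)
```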
176x144
Test accuracy model: 0.8973706364631653
Test accuracy quant: 0.897370653095844
96x96
Test accuracy model: 0.9652247428894043
Test accuracy quant: 0.9609838846480068
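The "quant" numbers come from running the quantized .tflite file through the TFLite interpreter on the test set. As a rough sketch of how that comparison can be done (the helper name and the 0.5 decision threshold are just for illustration), assuming the test images are already preprocessed to the model's input shape:

```python
import numpy as np
import tensorflow as tf

def tflite_accuracy(tflite_path, images, labels):
    """Run a (possibly quantized) .tflite model over a test set and return
    its accuracy. Quantization parameters are read from the model itself,
    so the same function works for float and integer models."""
    interpreter = tf.lite.Interpreter(model_path=tflite_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]

    correct = 0
    for image, label in zip(images, labels):
        x = image.astype(np.float32)
        # If the model expects integer input, apply its quantization params.
        if inp["dtype"] != np.float32:
            scale, zero_point = inp["quantization"]
            x = (x / scale + zero_point).astype(inp["dtype"])
        interpreter.set_tensor(inp["index"], x[np.newaxis, ...])
        interpreter.invoke()
        y = interpreter.get_tensor(out["index"])[0]
        # Dequantize the output if needed before thresholding.
        if out["dtype"] != np.float32:
            scale, zero_point = out["quantization"]
            y = (y.astype(np.float32) - zero_point) * scale
        correct += int((y[0] > 0.5) == bool(label))
    return correct / len(labels)
```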
The C array for the 176x144 model comes out at 187,624 bytes, compared to 66,792 bytes for 96x96. I don't need to add that the larger one doesn't fit on the microcontroller!
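The C array is just the .tflite flatbuffer dumped as a byte array for TensorFlow Lite Micro, so its size tracks the model file size. A quick sketch of that conversion in Python (roughly what `xxd -i` produces; file and variable names are placeholders):

```python
from pathlib import Path

def tflite_to_c_array(tflite_path, header_path, var_name="g_model_data"):
    """Dump a .tflite flatbuffer as a C byte array for TensorFlow Lite Micro."""
    data = Path(tflite_path).read_bytes()
    lines = [f"const unsigned char {var_name}[] = {{"]
    for i in range(0, len(data), 12):
        chunk = ", ".join(f"0x{b:02x}" for b in data[i:i + 12])
        lines.append(f"  {chunk},")
    lines.append("};")
    lines.append(f"const unsigned int {var_name}_len = {len(data)};")
    Path(header_path).write_text("\n".join(lines) + "\n")
    print(f"{tflite_path}: {len(data)} bytes")  # 187,624 vs 66,792 here

tflite_to_c_array("model_176x144.tflite", "model_data.h")
```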