Now that I can eliminate training on the PC, and with the Jetson integrated, it takes an important place in the NGBrain system and forms a unified system with the MCU part. I'm going to change the reward method. Instead of a camera watching the rewarded agent (third person), the agent itself sees the reward (first person).
The reward will now be given when the triangle shape moves lower in the frame.
I will add several reward methods: shapes approaching or moving away, moving up or down, and LEDs changing colors (maybe an Android OpenCV program watching the camera and changing the screen color?).
After that, I think I can inject the camera image (passed through some model, pooling, etc.) along with the sensor data into the input layer.
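A minimal sketch of that input-layer idea, assuming average pooling as the downsampling step and a flat concatenation with the sensor vector (shapes and pooling factor are my assumptions, not the actual NGBrain layout):

```python
import numpy as np

def build_input(image, sensors, pool=4):
    """Average-pool a grayscale image and concatenate it with the sensor
    vector to form one flat input-layer vector.
    Pooling factor and normalization are illustrative assumptions."""
    h, w = image.shape
    h, w = h - h % pool, w - w % pool          # crop to a multiple of pool
    img = image[:h, :w].astype(np.float32) / 255.0
    pooled = img.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
    return np.concatenate([pooled.ravel(),
                           np.asarray(sensors, dtype=np.float32)])
```

A real model (e.g. a small CNN on the Jetson) would replace the pooling step, but the concatenation into one input vector stays the same.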