This project uses an Arduino and some light sensors to help the computer generating the image detect where the screen edges are. Looking back at our projection system above, this is the equivalent of finding where the red dots are on the screen. Once the system detects those dots, it can adjust the image so it aligns with them. We aren't fixing the geometry distortions yet, but we could minimize that problem by using many detection points (red dots) around the image, breaking a spherical surface into a series of small flat surfaces.
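To make the alignment step concrete, here is a minimal sketch (my own illustration, not the project's actual code) of fitting the image to detected corner points. It assumes four sensors at the screen corners report their positions in projector coordinates, and it computes a simple axis-aligned offset-and-scale; a real system would use a perspective warp, and with more detection points it would map each small patch separately, which is the "series of small flat surfaces" idea.

```python
def fit_to_corners(corners, image_w, image_h):
    """Hypothetical helper: map an image of size image_w x image_h
    into the axis-aligned rectangle bounded by the detected sensor
    positions.

    corners  -- list of (x, y) positions reported by the corner sensors
    Returns (x_off, y_off, scale_x, scale_y): to place image pixel
    (px, py), draw it at (x_off + px * scale_x, y_off + py * scale_y).
    """
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    x_off, y_off = min(xs), min(ys)
    # Scale so the image spans exactly the detected rectangle.
    scale_x = (max(xs) - min(xs)) / image_w
    scale_y = (max(ys) - min(ys)) / image_h
    return x_off, y_off, scale_x, scale_y
```

For example, sensors detected at (10, 20), (110, 20), (10, 220), and (110, 220) with a 100x200 image yield an offset of (10, 20) and a scale of 1.0 in both axes.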
I love how this little bit of simple hardware, together with some smart computer programming, achieves such nice results! It's fun to watch the little adjustment animation too.
It also makes me wonder if a constant update system could be built ... Maybe by using phototransistors with fast reaction times, and little detection squares that flash at 60 FPS. If the sensor moves out of its square, the PC could then search for the new location in the immediate vicinity, instead of having to rescan the entire picture.
Although it would look weird to have flickering lights at the edge of the screen ...
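That local-search idea can be sketched in a few lines. This is a hypothetical illustration, not anything from the project: the "frame" is a grid of 0/1 cells where 1 marks where the sensor currently reports light, and we check a small window around the marker's last known position before falling back to a full rescan.

```python
def find_marker(frame, last_pos, radius=2):
    """Return the (row, col) of a lit cell (value 1) in frame.

    frame    -- list of lists of 0/1 cells
    last_pos -- (row, col) where the marker was last seen
    radius   -- half-width of the local search window

    Checks the neighbourhood of last_pos first; only if the marker
    moved farther than radius does it rescan the whole frame.
    """
    rows, cols = len(frame), len(frame[0])
    r0, c0 = last_pos
    # Fast path: local search around the previous position.
    for r in range(max(0, r0 - radius), min(rows, r0 + radius + 1)):
        for c in range(max(0, c0 - radius), min(cols, c0 + radius + 1)):
            if frame[r][c] == 1:
                return (r, c)
    # Slow path: full rescan when the marker jumped too far.
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] == 1:
                return (r, c)
    return None
```

With small movements between frames the fast path almost always hits, so tracking cost stays constant per frame instead of growing with the picture size.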