This is a short description of a program for my robot. Its only sensor is a Microsoft Kinect, which serves as both the camera and the depth-field sensor; it is hot-glued on top of a fairly ordinary robot platform from The Machine Labs, the kind used for robotics research.
I took two rather innovative approaches in this project. The first is how the visual input is convolved: the sampling is denser toward the center of the frame, so more input neurons carry detail there, somewhat like a fovea.
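As a rough sketch of the idea (this is not the actual code from hEather.c; the function name and the cubic warp are just one illustrative way to do it), a center-weighted sampling of one image axis can look like this:

```c
#include <math.h>

/* Map input-neuron index i (0..n-1) to a pixel column in a row of
 * `width` pixels, packing samples densely near the center.
 * A cubic warp keeps |u| small near 0, so neighboring neurons in
 * the middle of the range land on nearby pixels, while neurons at
 * the edges are spread far apart. */
int fovea_index(int i, int n, int width)
{
    double u = 2.0 * i / (n - 1) - 1.0;  /* normalize to -1 .. 1  */
    double w = u * u * u;                /* cubic warp            */
    return (int)((w + 1.0) / 2.0 * (width - 1) + 0.5);
}
```

With 9 neurons across a 640-pixel row, the two center neurons land only a few pixels apart while the two edge neurons are separated by well over a hundred, which is the "more detail toward the center" effect described above.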
The second is that the network is recurrent: outputs, including those of the hidden layers, are fed back in as inputs, and the last few frames of video make up most of the input data.
The robot can be "surprised" by video frames it was not expecting. As it learns to predict what it sees, it moves around more and more, until the moment it is "surprised" again.
The program works mostly by predicting each moment. One thing I found very interesting: it seemed quite difficult for the program to send any signal to the motors at all, yet the robot does move often once it is up and running. In fact, it was moving around before I thought the movement code could be working correctly.
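One simple way to sketch the surprise-and-movement loop (a hypothetical illustration, with mean squared error standing in for whatever hEather.c actually measures, and the gain schedule invented for the example) is:

```c
#include <math.h>

/* Mean squared error between the frame the network predicted and
 * the frame that actually arrived: the robot's "surprise". */
double surprise(const double *predicted, const double *actual, int n)
{
    double err = 0.0;
    for (int i = 0; i < n; i++) {
        double d = predicted[i] - actual[i];
        err += d * d;
    }
    return err / n;
}

/* Hypothetical motor gating: move a bit more each tick while the
 * predictions are good, and freeze (gain 0) the moment surprise
 * crosses a threshold. */
double motor_gain(double s, double threshold, double old_gain)
{
    if (s > threshold)
        return 0.0;             /* surprised: stop and watch  */
    double g = old_gain + 0.1;  /* confident: move a bit more */
    return g > 1.0 ? 1.0 : g;
}
```

Under this scheme, motor output ramps up as the world becomes predictable and drops to nothing when an unexpected frame arrives, matching the behavior described above.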
My latest code is not meant to impress anyone, but it should be available here: https://raw.githubusercontent.com/echoline/fanny/master/hEather.c