A few years back I bought a kit from Dawn Robotics to build a robocat, which we called Mia. This included a chassis, two motors, a Dagu Arduino-compatible Mini Driver board and a pan-and-tilt unit, along with a battery holder. I already owned a Raspberry Pi B and the Pi Camera.
My daughter and I were so excited about getting this working that we built it all in one evening, including downloading and building all the software provided by Dawn Robotics. There were plenty of excellent aspects to this code, which allowed devs like myself to hook into it at several levels. The web interface, which used websockets, was particularly instructive, as it gave me an introduction to that field and to HTML5, and it could be used from a mobile device. The video streaming to the web interface and other initiatives such as the OpenCV examples were worthy of praise, though it was a slight shame that OpenCV wasn't integrated more closely into the streaming process.
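To give a flavour of hooking into the code at that level, the py_websockets_bot library let you drive the robot from any Python script over websockets. The sketch below is from memory, so treat the hostname and the exact method names as assumptions rather than the definitive API.

```python
import time
import py_websockets_bot

# "robocat.local" is a placeholder hostname; the method names below are
# recalled from the Dawn Robotics examples and may not match exactly.
bot = py_websockets_bot.WebsocketsBot("robocat.local")

bot.set_motor_speeds(-50.0, 50.0)   # spin on the spot (percent of full speed)
time.sleep(1.0)
bot.set_motor_speeds(0.0, 0.0)      # stop

bot.set_neck_angles(90.0, 90.0)     # centre the pan and tilt
bot.disconnect()
```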
Though this all went well, there were a few problems, especially getting the robocat to move forward in a straight line. The Raspberry Pi also tended to crash because the batteries did not provide a consistent voltage. I managed to get it to go a bit straighter by calibrating the two motors carefully and turning the motor speed right down. The other problem was solved by buying a battery pack and powering the two boards separately, as suggested in later versions of the Dawn Robotics documentation.
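For the straight-line problem, the calibration boiled down to finding a per-motor trim factor and keeping the overall speed low, roughly as in the sketch below. The trim values and the set_motor_speeds call are illustrative assumptions, not the actual calibration code.

```python
# Hypothetical trim factors found by trial and error -- not the real values.
LEFT_TRIM = 1.00
RIGHT_TRIM = 0.92    # the right motor ran slightly fast
MAX_SPEED = 40.0     # keeping the speed right down made the drift manageable

def drive_straight(bot, speed=MAX_SPEED):
    """Scale each motor by its trim so equal commands give roughly equal wheel speeds."""
    bot.set_motor_speeds(speed * LEFT_TRIM, speed * RIGHT_TRIM)
```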
Unfortunately, Dawn Robotics folded. I guess this was because of the amount of work needed to support and develop the software, while it was hard for them to make a reasonable profit on the robotics parts when competing with web stores in the Far East.
Voice Control
My first project was to control Mia by voice, so I bought a clip-on Bluetooth microphone. Unfortunately this proved extremely difficult to connect successfully; or rather, not to connect exactly, but to sync to the audio input. In the end I managed it, though I don't remember exactly how. Strangely, the USB socket used for the Bluetooth dongle was significant: if I used the one nearest the board the sound quality was good, but with the one above it was horrendous. When I migrated the robocat to a Pi 2 I lost the Bluetooth capability.
I also tried another approach, which was to use the microphone via the web page. This had its own annoying issues. Chrome insists on bringing up the "Allow webpage to use microphone" prompt each time, which would be fine for external web pages but is a nuisance within your own domain. The only other alternative was to convert to HTTPS. Again this was a problem, as I didn't have a certificate. Though I managed to create a snake-oil (self-signed) one and was able to serve the web page up securely, the streaming images were more problematic to convert.
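For anyone trying the same thing, serving a page over HTTPS with a self-signed certificate only takes a few lines of Python. This is just a sketch with assumed file names, not the setup I actually used (the Dawn Robotics web server was Tornado-based).

```python
import http.server
import ssl

# Serve the current directory over HTTPS so Chrome will remember the
# microphone permission. "snakeoil.pem"/"snakeoil.key" are assumed names
# for a self-signed certificate and key.
server = http.server.HTTPServer(("0.0.0.0", 4443), http.server.SimpleHTTPRequestHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="snakeoil.pem", keyfile="snakeoil.key")
server.socket = context.wrap_socket(server.socket, server_side=True)
server.serve_forever()
```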
After recording the voice I would send the clip to Google Speech to turn it into text. At the time they allowed unlimited access and were quite accurate. The downside was the necessity to record the whole clip into a file before sending, rather than streaming, which raised the question of how to stop recording after a command. The obvious thing to do was to look for significant gaps, use these to splice the audio into separate files, and then send them to Google. SoX is a useful tool for this, though it works much better with already saved files. One attempt that seemed to work was to get SoX to splice the recording into 1-second clips, test each one for blank audio, and then join the others back together before sending the result to Google (a sketch of this is below). As you can imagine, the latency of the whole pipeline was rather large. However, I could get the robocat to react to commands like "move forward" as well as answer questions like "Who is Angelina Jolie?" correctly. The latter was done via the Wikipedia API, with Ask used for other types of question.
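The splice-and-rejoin step looked roughly like this. It is a sketch rather than the original script: the threshold and file names are assumptions, and it simply shells out to SoX to cut 1-second clips, drop the blank ones and concatenate the rest.

```python
import subprocess

SILENCE_THRESHOLD = 0.02   # assumed amplitude below which a clip counts as blank

def max_amplitude(clip):
    """Run 'sox <clip> -n stat' and parse the 'Maximum amplitude' line from stderr."""
    stats = subprocess.run(["sox", clip, "-n", "stat"],
                           stderr=subprocess.PIPE, text=True).stderr
    for line in stats.splitlines():
        if line.startswith("Maximum amplitude"):
            return float(line.split(":")[1])
    return 0.0

def strip_silence(recording, output="command.wav", max_seconds=30):
    """Cut the recording into 1-second clips, drop the silent ones and
    join the rest back together ready to send to Google."""
    keep = []
    for i in range(max_seconds):
        clip = "clip_%02d.wav" % i
        subprocess.run(["sox", recording, clip, "trim", str(i), "1"])
        if max_amplitude(clip) > SILENCE_THRESHOLD:
            keep.append(clip)
    if keep:
        subprocess.run(["sox"] + keep + [output])   # sox concatenates its inputs
    return output
```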
Computer Vision
Another field I dabbled with in the early days was computer vision. Though I was introduced to the fabulous OpenCV through Dawn Robotics, it was unfortunately not well integrated into the system; i.e. you could draw a circle around an object of interest with OpenCV, but you couldn't then stream the result to the web browser.
My other project in the early days was to get the robocat to identify a face or a green ball and then move the pan and tilt to keep the tracked object in the center of vision.
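The green ball tracking was the classic OpenCV recipe: threshold in HSV, find the largest contour, draw a circle round it, and nudge the pan/tilt towards the image center. A minimal sketch of that loop is below; the HSV bounds and the gain are assumptions that would need tuning on the real camera.

```python
import cv2
import numpy as np

# Rough HSV range for a green ball -- these bounds are assumptions.
LOWER_GREEN = np.array([40, 70, 70])
UPPER_GREEN = np.array([80, 255, 255])

def find_ball(frame):
    """Return the (x, y) center of the largest green blob, or None."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_GREEN, UPPER_GREEN)
    # findContours returns different tuples in OpenCV 3 and 4; [-2] works for both
    contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
    if not contours:
        return None
    (x, y), radius = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
    if radius < 10:                 # ignore specks of green noise
        return None
    cv2.circle(frame, (int(x), int(y)), int(radius), (0, 0, 255), 2)
    return int(x), int(y)

def track(frame, pan_degrees, tilt_degrees, gain=0.05):
    """Nudge the pan/tilt angles so the tracked object drifts towards the image center."""
    found = find_ball(frame)
    if found is None:
        return pan_degrees, tilt_degrees
    h, w = frame.shape[:2]
    error_x = found[0] - w / 2.0
    error_y = found[1] - h / 2.0
    return pan_degrees - gain * error_x, tilt_degrees - gain * error_y
```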
September 27, 2018 at 2:41 pm
can i get the program for this??
September 27, 2018 at 3:20 pm
Yes, the code from Dawn Robotics is still here: https://bitbucket.org/DawnRobotics/ – the relevant projects are py_websockets_bot, raspberry_pi_camera_streamer and raspberry_pi_camera_bot. Unfortunately not much of the original documentation for building the robot is available, as Dawn Robotics is no longer in business, though you should still be able to get the parts. My own code, which is built on top of theirs, can be found at https://bitbucket.org/dani_thomas/ – the relevant projects are the 3D VR robocat and the Wii Mario Cart. Hope you get something together.