Except for reading Chinese (a daily effort), my main project while visiting China is the Grandar robot platform. Under the supervision of a Chinese professor in embedded electronics, my classmate Anton and I will develop AI for a robotics platform made by a company here in Shanghai (for educational purposes). A picture of the robot can be found on the company's website.
We have had a few weeks to familiarize ourselves with the robot and to work around some of its quirks. Here is a rundown of the robot's hardware:
Controller: Pentium 4, 3 GHz CPU
Extensions: VGA monitor, web camera, PS/2 ports for keyboard/mouse, two USB ports, Ethernet. There is a wireless router on top of the robot, but it doesn't seem to be connected internally.
Sensors: 4 x Sharp IR sensors (range approx. 1 m), 5 x sonar (range approx. 8 m)
Actuators: 2 x motors for propulsion (max speed not yet tested)
Battery: max battery life not yet tested
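One consequence of this sensor mix is that the IR sensors only cover about a metre while the sonars reach roughly 8 m, so any obstacle check has to merge the two ranges. A minimal sketch of what that might look like (all names and the preference-for-IR rule are our own illustrative assumptions, not part of the vendor API):

```cpp
#include <algorithm>
#include <array>

// Ranges taken from the hardware rundown above.
constexpr double kIrMaxRange = 1.0;    // metres
constexpr double kSonarMaxRange = 8.0; // metres

struct SensorSnapshot {
    std::array<double, 4> ir;    // 4 x Sharp IR readings, metres
    std::array<double, 5> sonar; // 5 x sonar readings, metres
};

// Distance to the nearest obstacle. Short-range IR readings are
// preferred when in range, since they tend to be more precise
// than sonar at close distances.
double nearestObstacle(const SensorSnapshot& s) {
    double best = kSonarMaxRange;
    for (double d : s.sonar) best = std::min(best, d);
    for (double d : s.ir)
        if (d < kIrMaxRange) best = std::min(best, d);
    return best;
}
```

Something along these lines would let higher-level code ask one question ("how close is the nearest thing?") without caring which sensor answered it.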
The controller (which is, to all intents and purposes, an ordinary desktop computer) runs an outdated Chinese version of Windows XP.
The motors and the IR and sonar sensors are connected to the motherboard through proprietary hardware interfaces, which are accessed through the ASRSystem firmware (closed-source drivers specific to this robot). Because these drivers are pre-compiled for MSVC, we are unfortunately unable to switch to an operating system better suited for developing and deploying a robot.
Our main gripes with the platform as such are the following:
1. The closed-source drivers make it impossible for us to change to a better-suited operating environment (unless we want to write our own hardware drivers).
2. Said drivers work through a pretty clunky API that is ill-suited for embedded robotics (for example, the robot system code contains Win32 bindings capable of throwing up pop-up windows on errors, which is not exactly ideal for an autonomous system).
3. The CPU itself is not bad (3 GHz is faster than most embedded systems), but much of that power is lost to running a desktop version of Windows XP rather than a more lightweight system. Speaking of power, a Pentium 4 is hardly suited to a battery-driven embedded system either. And fast as the CPU is, it is still not fast enough for some of the algorithms we plan to use in the project (namely VSLAM, more on that later).
4. Chinese Windows XP with Visual Studio 2008 is, frankly, a bad development environment. Not to mention that we actually have to write the code on the robot itself (making it basically a desktop computer on wheels).
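To make gripe 2 concrete: an autonomous robot cannot sit waiting for someone to click OK on a MessageBox. Our wrapper's approach is to route driver errors to a pluggable handler that logs and moves on. A rough sketch of the idea (class and handler names are our own, not the vendor's):

```cpp
#include <functional>
#include <iostream>
#include <string>

// Instead of letting driver errors surface as Win32 MessageBox
// pop-ups (which would stall the robot until a human intervenes),
// errors go to a pluggable handler. The default simply logs to
// stderr so the control loop keeps running.
using ErrorHandler = std::function<void(const std::string&)>;

class RobotErrors {
public:
    void setHandler(ErrorHandler h) { handler_ = std::move(h); }
    void report(const std::string& msg) {
        if (handler_) handler_(msg);
    }
private:
    ErrorHandler handler_ = [](const std::string& m) {
        std::cerr << "[robot] " << m << '\n';
    };
};
```

During remote operation the handler could just as well forward the message over the network link, so the operator sees driver errors without the robot ever blocking on them.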
But despair not! We have thought of solutions to many of these problems!
Ideally, we would have a system that is power efficient, boots straight into a robotics application on startup (without logging in with a keyboard and mouse), and is easy to develop for and deploy to. We have none of this, but will have to make do :)
The first step for us was to marginalize the vendor API and write our own wrapper (hiding all the ugly Win32 code). With this as a base, we wrote a network interface using SDL_net (implemented on Windows with Winsock). This lets us connect to the robot remotely, so one or several computers can do the heavy computations and send the results to our backend on the robot. We can also control the robot remotely, for example with a gamepad (as shown in the video if you follow the link below).
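The actual messages we send over that link are ours to define. As an illustration of the kind of wire format involved (the opcode values and layout below are hypothetical, not our final protocol): a one-byte opcode followed by two little-endian 16-bit motor speeds is enough for basic remote driving.

```cpp
#include <cstdint>
#include <vector>

// Illustrative wire format for the remote-control link:
// [opcode][left lo][left hi][right lo][right hi]
enum class Op : std::uint8_t { SetSpeed = 0x01, Stop = 0x02 };

std::vector<std::uint8_t> encodeSetSpeed(std::int16_t left, std::int16_t right) {
    std::vector<std::uint8_t> msg(5);
    msg[0] = static_cast<std::uint8_t>(Op::SetSpeed);
    msg[1] = static_cast<std::uint8_t>(left & 0xFF);
    msg[2] = static_cast<std::uint8_t>((left >> 8) & 0xFF);
    msg[3] = static_cast<std::uint8_t>(right & 0xFF);
    msg[4] = static_cast<std::uint8_t>((right >> 8) & 0xFF);
    return msg;
}

bool decodeSetSpeed(const std::vector<std::uint8_t>& msg,
                    std::int16_t& left, std::int16_t& right) {
    if (msg.size() != 5 || msg[0] != static_cast<std::uint8_t>(Op::SetSpeed))
        return false;
    left  = static_cast<std::int16_t>(msg[1] | (msg[2] << 8));
    right = static_cast<std::int16_t>(msg[3] | (msg[4] << 8));
    return true;
}
```

Keeping the framing this explicit (rather than sending raw structs) sidesteps endianness and padding differences between the Windows box on the robot and the Linux machine driving it.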
Pictures and movies of our tests are available on my classmate's blog.
Our remote computer (which will later be mounted on the robot and run the heavy AI code) runs Ubuntu Linux with SDL_net. The GUI (such as it is) is currently built with Open Computer Vision (OpenCV), which will be used for much more later in the project. The advantages here are clear: with a Linux-based system we are in a much more comfortable development environment. Deployment of robot code is also simpler: as long as the Linux box has an Internet connection, we can remotely deploy and run code on it without touching the actual machine (much less writing code on it).
The next blog post will cover our future plans and goals for the project, but the basic outline is to create an autonomous robot capable of mapping and traversing an unknown environment using visual feedback. The Simultaneous Localization and Mapping (SLAM) problem will have to be addressed, and we will expand on how we plan to extend the robot's hardware and use OpenCV to solve it.