Description
Objective: Train a physical agent (robot) to navigate around the AUTOTRAC roadmap using imitation learning with expert demonstrations.
Motivation: Physical robots are limited in the number of training iterations they can run. (It is much harder for a human to physically reset a robot 10,000 times or more!)
Behavioral cloning is a method where an expert (in this case the driver) demonstrates how the agent should respond to a given input (the camera image) by providing the ideal output (the steering command). Once the dataset is collected, this becomes a supervised learning problem, and the trained model can be rolled out on the agent.
Details
The Robot
Rosbot 2.0 PRO (ROS was used for navigation and robot movement)
Specs
CPU: UpBoard with 4 GB RAM, Quad-Core Intel Atom Z8350 1.92 GHz
RGBD camera: Orbbec Astra with RGB image size 640x480 and depth image size 640x480 at 30 Hz
LiDAR: RpLidar A2, 360 degree and up to 8m range
IMU: 9-Axis Accel/Gyro/Magnetometer sensor with MPU-9250
Creating the Dataset (Expert demonstration)
To create the expert demonstration data, the robot was driven for 7 minutes, producing over 10,000 samples. The linear and angular velocities were exported to a CSV along with the corresponding .jpg images.
The following parameters were used to capture both the camera feed and the output velocities (a recording sketch follows the list):
Input: RGB camera at 30 Hz at 640x480
Output: Angular velocity in [-1.4, 1.4] rad/s (published as a ROS Twist command) at 30 Hz
Constraints: Constant linear velocity of 0.3 m/s
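For reference, below is a minimal sketch of how such a recording node could look under ROS. The topic names (/camera/rgb/image_raw, /cmd_vel), the frames/ output directory, and the CSV layout are illustrative assumptions, not the exact recorder used here.

```python
# Hypothetical recorder node: logs one (image, velocity) pair per camera frame.
import csv

import cv2
import rospy
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from sensor_msgs.msg import Image

bridge = CvBridge()
latest_cmd = Twist()          # most recent commanded velocity from the expert driver

def cmd_callback(msg):
    global latest_cmd
    latest_cmd = msg

def image_callback(msg, writer):
    # Save the frame as a .jpg and append the matching velocities to the CSV.
    stamp = msg.header.stamp.to_nsec()
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    filename = "frames/%d.jpg" % stamp
    cv2.imwrite(filename, frame)
    writer.writerow([filename, latest_cmd.linear.x, latest_cmd.angular.z])

if __name__ == "__main__":
    rospy.init_node("expert_recorder")
    with open("expert_run.csv", "w") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "linear_x", "angular_z"])
        rospy.Subscriber("/cmd_vel", Twist, cmd_callback)
        rospy.Subscriber("/camera/rgb/image_raw", Image, image_callback, callback_args=writer)
        rospy.spin()
```

Each camera frame triggers one row, so a 7-minute drive at 30 Hz yields roughly 12,600 samples, consistent with the 10,000+ figure above.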
Models and Training
Five main models were trained on an NVIDIA GPU-accelerated computer using PyTorch and Torchvision.
Transformations: Rescale to a 224 x 224 image and adjust contrast (grayscale conversion only for model 5)
Optimizer: Adam with a learning rate of 0.0002
Criterion: MSE loss (mean squared error between the predicted angular velocity and the label)
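As a rough sketch, the training setup described above could look like the following in PyTorch. The ExpertDataset class, the stand-in CNN, the contrast-jitter strength, batch size, and epoch count are illustrative assumptions; the five actual architectures are not reproduced here.

```python
# Sketch of the training setup; everything other than the 224x224 resize,
# contrast adjustment, Adam at lr=0.0002, and MSE loss is an illustrative guess.
import csv

import torch
import torch.nn as nn
from PIL import Image
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms


class ExpertDataset(Dataset):
    """Pairs each recorded .jpg frame with its labeled angular velocity."""

    def __init__(self, csv_path, transform):
        with open(csv_path) as f:
            self.rows = list(csv.DictReader(f))
        self.transform = transform

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        row = self.rows[i]
        image = self.transform(Image.open(row["image"]).convert("RGB"))
        return image, torch.tensor(float(row["angular_z"]))


transform = transforms.Compose([
    transforms.Resize((224, 224)),          # rescale camera frames
    transforms.ColorJitter(contrast=0.2),   # contrast adjustment (strength is a guess)
    transforms.ToTensor(),
])

model = nn.Sequential(                      # stand-in CNN regressing a single angular velocity
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)
criterion = nn.MSELoss()                    # predicted angular velocity vs. expert label

loader = DataLoader(ExpertDataset("expert_run.csv", transform), batch_size=64, shuffle=True)
for epoch in range(20):
    for images, angular in loader:
        optimizer.zero_grad()
        loss = criterion(model(images).squeeze(1), angular)
        loss.backward()
        optimizer.step()
```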
I believe overfitting was a problem, since the test results were not great.
Analyzing the matrix results
By looking at the convolutional filters the neural network learned, we can see that it developed edge detection for the inner and outer track edges. All of the convolutional filters reduce to some form of edge detection, and the network uses that filtered data to determine the angular velocity it should output.
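One simple way to produce this kind of visualization is to plot the first convolutional layer's kernels directly. A hedged sketch is below; matplotlib and the filters.png filename are assumptions, not the exact tooling used for this project.

```python
# Sketch: dump the first convolutional layer's kernels as an image grid so
# edge-like patterns (bright/dark transitions) become visible.
import matplotlib.pyplot as plt
import torch


def plot_first_layer_filters(model, path="filters.png"):
    # Grab the first Conv2d in the network (assumes at least one exists).
    conv = next(m for m in model.modules() if isinstance(m, torch.nn.Conv2d))
    weights = conv.weight.detach().cpu()                                   # (out_ch, in_ch, k, k)
    weights = (weights - weights.min()) / (weights.max() - weights.min())  # normalize to [0, 1]

    cols = 8
    rows = (weights.shape[0] + cols - 1) // cols
    fig, axes = plt.subplots(rows, cols, figsize=(2 * cols, 2 * rows))
    for ax in axes.flat:
        ax.axis("off")
    for ax, kernel in zip(axes.flat, weights):
        # (in_ch, k, k) -> (k, k, in_ch); squeeze handles grayscale (in_ch == 1) inputs
        ax.imshow(kernel.permute(1, 2, 0).squeeze().numpy())
    fig.savefig(path)
```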
Real-Time Execution
To run this in real time, the neural network is loaded onto the robot's internal computer. The network is evaluated on each new camera frame, which arrives at 30 Hz. The output is the angular velocity the robot should have, which is converted from the PyTorch output into a ROS publisher message. This publisher sends the commands to the motor controllers, which actuate the motors to reach the desired angular velocity.
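A sketch of such a driver node is shown below. The topic names, the bc_model.pt checkpoint, and the preprocessing are assumptions chosen to mirror the training setup; the constant 0.3 m/s linear velocity and the [-1.4, 1.4] rad/s range come from the constraints above.

```python
# Hypothetical on-board driver node: one forward pass per camera frame, one Twist command out.
import rospy
import torch
from cv_bridge import CvBridge
from geometry_msgs.msg import Twist
from PIL import Image as PILImage
from sensor_msgs.msg import Image
from torchvision import transforms

bridge = CvBridge()
model = torch.load("bc_model.pt", map_location="cpu")     # trained behavioral-cloning model
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def image_callback(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="rgb8")
    tensor = preprocess(PILImage.fromarray(frame)).unsqueeze(0)
    with torch.no_grad():
        angular = model(tensor).item()                    # predicted angular velocity [rad/s]
    cmd = Twist()
    cmd.linear.x = 0.3                                    # constant linear velocity constraint
    cmd.angular.z = max(-1.4, min(1.4, angular))          # clamp to the training range
    cmd_pub.publish(cmd)

if __name__ == "__main__":
    rospy.init_node("bc_driver")
    cmd_pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rospy.Subscriber("/camera/rgb/image_raw", Image, image_callback, queue_size=1)
    rospy.spin()
```

Using queue_size=1 on the subscriber drops stale frames, so if inference cannot keep up with 30 Hz the node always acts on the most recent image rather than falling further behind.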
Results
Trained model: The trained model is able to analyze the image and decide on a reasonable angular velocity, but it struggles in scenarios the expert driver never encountered, such as when half of the robot is off the track. This is expected and a known issue with behavioral cloning, since learning is limited to the states visited by the expert. Improvements could be made by adding more outlying scenarios to the dataset, which would help the model generalize.
Real-Time Execution: While the robot performed well in sections without much change in scenery, there is an underlying problem with execution time. The live camera publishes frames at 30 Hz, but the time it takes the robot to process a frame and compute the neural network output reduces the effective control rate to about 15 Hz. This is enough for the system to drift out of bounds and into an unknown scenario. It could be fixed with a faster computer or by offloading the neural network onto a GPU.
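A quick way to verify this bottleneck (and to estimate the gain from a GPU) is to time the forward pass directly on the target computer. The small sketch below assumes the 224 x 224 input shape from the training transforms.

```python
# Measure how many forward passes per second the target computer can sustain.
import time

import torch


def measure_hz(model, device="cpu", n=100):
    model = model.to(device).eval()
    dummy = torch.zeros(1, 3, 224, 224, device=device)   # one preprocessed RGB frame
    with torch.no_grad():
        model(dummy)                                      # warm-up pass
        start = time.perf_counter()
        for _ in range(n):
            model(dummy)
    return n / (time.perf_counter() - start)              # achievable inference rate in Hz
```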