A few months ago I took a course on programming a robotic automobile, Artificial Intelligence for Robotics. The course is taught by Sebastian Thrun, who heads the Google X Lab and the Google Driverless Car Project. Thrun also founded Udacity, the educational organization offering this and many other free computer science and math courses.
Recently, I decided to port the code I wrote to Java and use the excellent Processing visualization library to illustrate some of the core concepts. My code is hosted on GitHub: https://github.com/stevepepple/car-robot. In a separate post, I'll consider how this code would work in a real car. In short, a real robotic car (like Stanford's award-winning cars Junior and Stanley) uses a map of its environment; the Global Positioning System; optical, LIDAR, radar, and inertial sensors; and, of course, a central on-board computer that gathers data from the car's sensors and controls steering, acceleration, and braking.
The first step in programming a robotic car is determining where the car is within its environment. This is called localization. You can never know for certain where the car is: even the best global positioning system has a margin of error of a few meters, which is not acceptable on the road. Yet there are several localization techniques that provide a strong belief, statistically speaking, about where the robot is located.
The following histogram (Histogram.java in my code) shows a simple example of localization in a one-dimensional world. Each step in the graph represents the robot's belief about its location. Before the robot has moved or tried to determine its location, the probability distribution is uniform. The robot then senses its location based upon a map of landmarks or other real-time observations. The belief for each part of the graph is normalized so that the total probability across steps is equal to one.
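The sense-and-normalize step can be sketched in a few lines of Java. This is a minimal, self-contained version (not the Histogram.java from my repo): the world is a cyclic row of colored cells, and the sensor-model probabilities are made-up values for illustration.

```java
import java.util.Arrays;

// Minimal sketch of 1D histogram localization in a cyclic world of
// colored cells. Sensor hit/miss probabilities are assumed values.
public class HistogramSketch {
    // Multiply each cell's belief by how well its color matches the
    // sensor reading, then normalize so the beliefs sum to one.
    static double[] sense(double[] belief, String[] world, String reading) {
        double[] updated = new double[belief.length];
        double total = 0.0;
        for (int i = 0; i < belief.length; i++) {
            double likelihood = world[i].equals(reading) ? 0.6 : 0.2;
            updated[i] = belief[i] * likelihood;
            total += updated[i];
        }
        for (int i = 0; i < updated.length; i++) updated[i] /= total;
        return updated;
    }

    // Shift the belief by `steps` cells (exact motion, cyclic world).
    static double[] move(double[] belief, int steps) {
        double[] shifted = new double[belief.length];
        for (int i = 0; i < belief.length; i++)
            shifted[(i + steps) % belief.length] = belief[i];
        return shifted;
    }

    public static void main(String[] args) {
        String[] world = {"green", "red", "red", "green", "green"};
        double[] belief = {0.2, 0.2, 0.2, 0.2, 0.2}; // uniform prior
        belief = move(sense(belief, world, "red"), 1);
        System.out.println(Arrays.toString(belief));
    }
}
```

After one sensing step the belief concentrates on the cells whose color matches the reading; repeated sense/move cycles sharpen it further.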
Filters and Bayes' Theorem
Having a robot deterministically navigate a maze is rather simple; the real challenge presented in this course is how a robot can deal with uncertainty. There are several approaches that apply Bayes' rule and other probabilistic methods to help the robot determine the location of landmarks, obstacles, other vehicles, and itself. A Kalman filter is one such method: it allows a robot to continuously track the position of objects in its environment. The Kalman filter stores measurements as two-dimensional Gaussian distributions and uses these values to estimate the current state of the environment and predict how surrounding objects will move.
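To make the idea concrete, here is a one-dimensional Kalman filter sketch. The real tracker works in two dimensions with matrices, but the two update rules are the same; the variances and measurements below are assumed values for illustration, not numbers from my code.

```java
// Minimal 1D Kalman filter sketch: the belief is a single Gaussian
// (mean mu, variance sigma2), alternately updated by measurements
// and motion. All numeric values here are illustrative.
public class KalmanSketch {
    double mu, sigma2;

    KalmanSketch(double mu, double sigma2) {
        this.mu = mu;
        this.sigma2 = sigma2;
    }

    // Measurement update: multiply the belief Gaussian by the
    // measurement Gaussian. The variance always shrinks.
    void measure(double z, double measurementVar) {
        mu = (measurementVar * mu + sigma2 * z) / (measurementVar + sigma2);
        sigma2 = 1.0 / (1.0 / measurementVar + 1.0 / sigma2);
    }

    // Motion update (prediction): shift the mean by the motion and
    // add the motion uncertainty. The variance always grows.
    void predict(double motion, double motionVar) {
        mu += motion;
        sigma2 += motionVar;
    }

    public static void main(String[] args) {
        KalmanSketch k = new KalmanSketch(0.0, 1000.0); // vague prior
        double[] measurements = {5.0, 6.0, 7.0};        // object moving +1/step
        for (double z : measurements) {
            k.measure(z, 4.0);   // assumed sensor variance
            k.predict(1.0, 2.0); // assumed motion and motion variance
        }
        System.out.printf("mu=%.2f sigma2=%.2f%n", k.mu, k.sigma2);
    }
}
```

Even starting from a very uncertain prior, a few measure/predict cycles pull the estimate onto the object's track while the variance settles to a small steady value.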
Another useful technique, which is a central part of my final implementation, is a particle filter. The robot uses a particle filter to determine its location and proximity to obstacles. A particle filter creates thousands of copies of the robot and randomly distributes these copies over the grid. In essence, these particles travel along with the actual robot, and the same measurements and calculations are made for each particle. Each particle is given an importance weight based on its proximity to the robot's location, landmarks, and obstacles, and the robot takes a weighted average of all of the particles. Each time the robot moves, the particles are resampled in proportion to their weights, so unlikely particles die off. The average of the resampled particle set gives the robot a much improved estimate of its location.
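The resampling step can be sketched with the "resampling wheel" taught in the course. The particle positions and weights below are made up for illustration; the point is simply that high-weight particles are drawn more often than low-weight ones.

```java
import java.util.Random;

// Sketch of the particle-filter resampling wheel: draw a new particle
// set where each particle's chance of survival is proportional to its
// importance weight. Positions and weights here are illustrative.
public class ResampleSketch {
    static double[] resample(double[] particles, double[] weights, Random rng) {
        int n = particles.length;
        double[] sampled = new double[n];
        double maxWeight = 0.0;
        for (double w : weights) maxWeight = Math.max(maxWeight, w);
        int index = rng.nextInt(n);       // start at a random spoke
        double beta = 0.0;
        for (int i = 0; i < n; i++) {
            beta += rng.nextDouble() * 2.0 * maxWeight; // spin the wheel
            while (beta > weights[index]) {             // walk past light particles
                beta -= weights[index];
                index = (index + 1) % n;
            }
            sampled[i] = particles[index]; // heavy particles survive more often
        }
        return sampled;
    }

    public static void main(String[] args) {
        double[] particles = {1.0, 2.0, 3.0, 4.0};
        double[] weights = {0.1, 0.1, 0.7, 0.1}; // particle at 3.0 matches best
        double[] next = resample(particles, weights, new Random(42));
        for (double p : next) System.out.println(p);
    }
}
```

Run this a few times and the output set is dominated by the position with the highest weight, which is exactly the "reduction" described above.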
Algorithms that are often used for routing network data are also well suited to creating a robot's driving plan.
A shortest-path algorithm such as Dijkstra's will spider over all of the available paths and calculate the least number of steps required to arrive at the goal. A* search adds a heuristic that estimates the remaining distance to the goal and uses it to decide which direction to explore at each point, thereby eliminating paths that lead away from the goal. Once A* finds the shortest route, this path can be reduced to a single continuous path. (During A*, I kept an updated policy and a history of the actions taken.)
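As a rough sketch of the search itself, here is a minimal A* on a small grid, assuming four-way movement and a Manhattan-distance heuristic. It returns only the shortest path length; my actual planner also records the policy and the actions taken.

```java
import java.util.PriorityQueue;

// Minimal A* sketch on a grid of 0 = free, 1 = obstacle cells,
// with 4-way movement and a Manhattan-distance heuristic.
public class AStarSketch {
    // Returns the number of steps on the shortest path from start to
    // goal, or -1 if the goal is unreachable.
    static int shortestPath(int[][] grid, int[] start, int[] goal) {
        int rows = grid.length, cols = grid[0].length;
        int[][] dirs = {{-1, 0}, {1, 0}, {0, -1}, {0, 1}};
        boolean[][] closed = new boolean[rows][cols];
        // Queue entries: {f = g + h, g, row, col}, expanded cheapest-f first.
        PriorityQueue<int[]> open = new PriorityQueue<>((a, b) -> a[0] - b[0]);
        int h0 = Math.abs(start[0] - goal[0]) + Math.abs(start[1] - goal[1]);
        open.add(new int[]{h0, 0, start[0], start[1]});
        while (!open.isEmpty()) {
            int[] node = open.poll();
            int g = node[1], r = node[2], c = node[3];
            if (closed[r][c]) continue;
            closed[r][c] = true;
            if (r == goal[0] && c == goal[1]) return g;
            for (int[] d : dirs) {
                int nr = r + d[0], nc = c + d[1];
                if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                if (grid[nr][nc] == 1 || closed[nr][nc]) continue;
                int h = Math.abs(nr - goal[0]) + Math.abs(nc - goal[1]);
                open.add(new int[]{g + 1 + h, g + 1, nr, nc});
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        int[][] grid = {
            {0, 1, 0, 0},
            {0, 1, 0, 1},
            {0, 0, 0, 0}
        };
        // The wall forces the search down, around, and back up.
        System.out.println(shortestPath(grid, new int[]{0, 0}, new int[]{0, 3}));
    }
}
```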
However, this path may be too tight for a car to navigate, so the robot needs to calculate a smoother path. The model used for my car simulation is quite simplistic: the car can set its speed, steering, and heading direction. It drives at speeds between 20 and 40 mph and can steer left or right at up to a 45-degree angle. If the path is not smooth enough, the robot cannot stay on track and tends to oscillate back and forth across the desired course. For the simulation, I added noise to the car's movements and sensors. It's quite fun to play with these values and see how the robot behaves under different conditions.
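The smoothing itself can be sketched as the gradient-descent smoother taught in the course: each interior point is repeatedly pulled back toward its original position and toward its neighbors until the path stops changing. The weights below are typical course values, not ones tuned for my simulation.

```java
import java.util.Arrays;

// Sketch of gradient-descent path smoothing: interior points are
// pulled toward the original path (weightData) and toward their
// neighbors (weightSmooth); endpoints stay fixed.
public class SmoothSketch {
    static double[][] smooth(double[][] path, double weightData,
                             double weightSmooth, double tolerance) {
        double[][] newPath = new double[path.length][];
        for (int i = 0; i < path.length; i++) newPath[i] = path[i].clone();
        double change = tolerance;
        while (change >= tolerance) {
            change = 0.0;
            for (int i = 1; i < path.length - 1; i++) {
                for (int j = 0; j < path[i].length; j++) {
                    double old = newPath[i][j];
                    newPath[i][j] += weightData * (path[i][j] - newPath[i][j])
                        + weightSmooth * (newPath[i - 1][j] + newPath[i + 1][j]
                                          - 2.0 * newPath[i][j]);
                    change += Math.abs(old - newPath[i][j]);
                }
            }
        }
        return newPath;
    }

    public static void main(String[] args) {
        // A right-angle turn that a car could not follow exactly.
        double[][] path = {{0, 0}, {0, 1}, {0, 2}, {1, 2}, {2, 2}};
        double[][] smoothed = smooth(path, 0.5, 0.1, 1e-6);
        System.out.println(Arrays.deepToString(smoothed));
    }
}
```

The sharp corner is pulled inward into a gentle curve, which is exactly what keeps the simulated car from oscillating around the planned route.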
The last component of the robot to discuss is the PID (proportional-integral-differential) controller.
The controller uses the smooth path found by the planning algorithms to move the car. As the car moves, the controller calculates the error between the planned path and the car's estimated location. The proportional component steers in proportion to this error, adjusting the wheels toward the path. The differential component prevents the proportional term from over-steering and overshooting the smooth path by easing those steering adjustments as the error shrinks. The integral component accounts for system "bias" over time: for instance, a real car's hardware degrades and may not steer as well as it used to, or its sensors may become more prone to error. The integral term helps correct for this sustained error over time.
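A minimal sketch of such a controller, with illustrative (untuned) gains and a deliberately simplistic motion model:

```java
// Minimal PID steering sketch: the error is the crosstrack error
// (distance from the planned path). Gains and the motion model are
// illustrative, not values from my simulation.
public class PidSketch {
    final double kp, ki, kd; // proportional, integral, differential gains
    double integral = 0.0, previousError = 0.0;

    PidSketch(double kp, double ki, double kd) {
        this.kp = kp;
        this.ki = ki;
        this.kd = kd;
    }

    // Returns a steering correction for the current crosstrack error.
    double steer(double error, double dt) {
        integral += error * dt;                           // accumulates bias
        double derivative = (error - previousError) / dt; // damps overshoot
        previousError = error;
        return -kp * error - ki * integral - kd * derivative;
    }

    public static void main(String[] args) {
        PidSketch pid = new PidSketch(0.1, 0.002, 0.5);
        // Car starts 1 unit off the path y = 0, driving at unit speed.
        double y = 1.0, heading = 0.0, speed = 1.0, dt = 1.0;
        for (int t = 0; t < 100; t++) {
            heading += pid.steer(y, dt);
            y += speed * Math.sin(heading) * dt; // toy motion model
        }
        System.out.printf("final offset: %.4f%n", y);
    }
}
```

Zero out kd in the sketch and the simulated car overshoots and oscillates around the path; zero out ki and a constant steering bias would never be corrected, which mirrors the roles described above.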
All this said, here’s a video of my robot in action!