Researchers at MIT have conducted experiments with a self-driving car whose optical cameras can effectively see around corners, detecting an approaching vehicle by analyzing the moving shadows cast by crossing traffic.
During a test, the detection system, called ShadowCam, was mounted on an autonomous car in a parking garage. The headlights were switched off to simulate nighttime driving conditions, and the system's detection rate was compared with that of a traditional lidar system. According to the researchers, the car equipped with ShadowCam detected another car coming around the corner 0.72 seconds faster than with the lidar system. ShadowCam also determined correctly 86 percent of the time whether the crossing car was moving or stationary. The researchers do note that they had specifically tuned the system to the lighting conditions in the parking garage.
ShadowCam uses computer-vision techniques to detect and classify changes in shadows on the ground. Its input is a series of video frames from a camera aimed at a specific area, such as the ground at a corner. The system detects changes in light intensity from frame to frame, which can indicate an object approaching or receding. Some of these changes are barely visible to the naked eye, but ShadowCam can pick up the properties of the shadow and how they change, and determine whether it belongs to a dynamically moving or a stationary object. The system reacts to a moving object by, for example, having the self-driving car slow down.
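The core idea of frame-to-frame intensity change can be sketched in a few lines. The following is a minimal illustration, not the authors' actual pipeline: it watches a grayscale crop of the ground and flags it as dynamic when consecutive frames differ enough. The `threshold` value is a hypothetical tuning parameter, not a figure from the paper.

```python
import numpy as np

def classify_roi_change(frames, threshold=5.0):
    """Classify a watched patch of ground as 'dynamic' or 'static'.

    frames: list of 2D grayscale arrays cropped to the region of interest
    (e.g. the ground at a corner). `threshold` is a hypothetical tuning
    parameter, not a value from the ShadowCam paper.
    """
    diffs = []
    for prev, cur in zip(frames, frames[1:]):
        # Mean absolute intensity change between consecutive frames.
        diffs.append(np.mean(np.abs(cur.astype(float) - prev.astype(float))))
    # A persistent intensity change suggests a moving shadow.
    return "dynamic" if np.mean(diffs) > threshold else "static"

# Toy example: a dark 'shadow' band sweeping across a bright patch of ground.
frames = []
for t in range(5):
    f = np.full((20, 20), 200.0)
    f[:, t * 4:t * 4 + 4] = 80.0  # moving dark band
    frames.append(f)
print(classify_roi_change(frames))  # -> dynamic
```

A real system would of course work on camera frames rather than synthetic arrays, but the decision structure is the same: aggregate per-pixel change, then threshold.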
ShadowCam first had to be adapted for use on a self-driving car. For this, the researchers used image registration, a common computer-vision technique in which several images are, as it were, overlaid to reveal the variations between them; it is used, for example, in medical imaging. In addition, they used visual odometry, a technique often used for robot navigation, such as on the Mars rovers, which estimates a camera's motion in real time by analyzing its position and geometry across a series of images. More specifically, the researchers used Direct Sparse Odometry, which in essence places the features of an environment in a 3D point cloud, after which software selects only the features in a desired region, such as the ground at a street corner.
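To make the registration step concrete, here is a simple stand-in: phase correlation, a classic image-registration technique that recovers the translation between two overlaid images from their Fourier spectra. This is an assumption-laden simplification; the paper's actual pipeline uses Direct Sparse Odometry, which also handles rotation and full camera motion.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Estimate the integer (row, col) translation between two grayscale
    images using phase correlation, a basic image-registration technique.
    A simplified stand-in for aligning frames; not the paper's DSO pipeline.
    """
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    # Normalized cross-power spectrum; its inverse FFT peaks at the offset.
    cross = np.conj(F1) * F2
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap offsets larger than half the image size to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
ref = rng.random((32, 32))
moved = np.roll(ref, shift=(3, -5), axis=(0, 1))  # circularly shifted copy
print(estimate_shift(ref, moved))  # -> (3, -5)
```

Once consecutive frames are aligned this way, any remaining pixel differences come from the scene itself, such as a creeping shadow, rather than from the camera's own motion.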
A robot equipped with this can also move while keeping track of the patch of pixels where the shadow is located, revealing the subtle deviations between images. The signal then still has to be amplified: the pixels that may contain shadows are boosted in color to increase the signal-to-noise ratio, which makes extremely weak signals from shadow changes much more detectable, the researchers said. If the amplified signal exceeds a certain threshold, based on how much it differs from other shadows in the vicinity, it is classified as dynamic. Based on the strength of the signal, the system can tell the robot to slow down or stop.
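The amplify-then-threshold step described above can be sketched as follows. In this minimal illustration, per-pixel deviations from the temporal mean of the frame stack are multiplied by a gain before thresholding; both `gain` and `threshold` are hypothetical tuning parameters, and the real system's amplification and threshold logic will differ.

```python
import numpy as np

def amplify_and_classify(frames, gain=10.0, threshold=50.0):
    """Amplify weak shadow signals, then classify the region.

    Deviations of each frame from the temporal mean are multiplied by
    `gain`, lifting barely visible shadow changes above the noise floor.
    Both `gain` and `threshold` are hypothetical tuning parameters.
    Returns (label, signal_strength).
    """
    stack = np.stack([f.astype(float) for f in frames])
    mean_img = stack.mean(axis=0)
    # Boost per-pixel deviations from the temporal mean (amplification step).
    amplified = gain * (stack - mean_img)
    # Signal strength: mean absolute amplified deviation across the stack.
    strength = np.abs(amplified).mean()
    label = "dynamic" if strength > threshold else "static"
    return label, strength

# A moving dark band produces a strong amplified signal; a static scene none.
frames = []
for t in range(5):
    f = np.full((20, 20), 200.0)
    f[:, t * 4:t * 4 + 4] = 80.0
    frames.append(f)
print(amplify_and_classify(frames)[0])  # -> dynamic
```

The returned strength is the kind of scalar a controller could map onto a braking decision, slowing down for a weak signal and stopping for a strong one.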
In addition to the tests with the self-driving car, the researchers also experimented with an autonomous wheelchair maneuvering through the corridors of a building. The ‘obstacles’ here were people walking around corners and crossing the wheelchair's path; the system achieved a classification precision of 70 percent. Given the constraints of the two scenarios, such as the controlled lighting conditions in the buildings, the system is still limited. However, according to the researchers, reacting within fractions of a second is crucial for fast-moving autonomous cars. They already envision developing the system further, eventually achieving a kind of X-ray vision in which cars driving at speed can tell ahead of time what is coming out of side streets.
The scientists will present a paper on their research at the International Conference on Intelligent Robots and Systems next week. The research was sponsored in part by the Toyota Research Institute.