Oxford Researchers Modify Nissan Leaf For Cheaper Autonomous Car
Is the future of the self-driving car one of full autonomy, or, as car manufacturers such as Ford have suggested, one of part-time autonomy? In the near-term, the latter option seems far saner, and it’s the approach that underpins new research being shown off by academics at the University of Oxford.
The RobotCar U.K. project is using a modified Nissan Leaf, an all-electric vehicle, fitted with around £5,000 ($7,750) worth of prototype navigation equipment. That system includes a controller PC in the trunk — which can control every function of the car — as well as stereo cameras at the front, lasers discreetly tucked under the front and rear bumpers, and an iPad on the dashboard serving as the user interface.
In time, the researchers hope to develop an autonomous navigation system that costs just £100.
"We are working on a low-cost ‘auto drive’ navigation system, that doesn’t depend on GPS, done with discreet sensors that are getting cheaper all the time. It’s easy to imagine that this kind of technology could be in a car you could buy," said project co-leader Professor Paul Newman.
The system doesn’t use GPS because the satellite-based positioning isn’t accurate enough for the researchers’ needs. Instead, twin cameras watch the road ahead for pedestrians and other hazards, while the lasers build a three-dimensional map of the world around the car — a similar approach to the one Google takes in its autonomous vehicle research, except far cheaper (Google’s LIDAR unit alone costs $70,000) and less conspicuous.
This is where the car’s part-time autonomy comes in — at least in city environments. As Newman put it: "Our approach is made possible because of advances in 3D laser mapping that enable an affordable car-based robotic system to rapidly build up a detailed picture of its surroundings. Because our cities don’t change very quickly robotic vehicles will know and look out for familiar structures as they pass by so that they can ask a human driver, ‘I know this route, do you want me to drive?’, and the driver can choose to let the technology take over."
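The "I know this route" behavior Newman describes amounts to comparing the car's live laser view against its stored 3D map and offering autonomy only when the match is strong. A minimal sketch of that idea — hypothetical function names and a toy point-matching metric, not the RobotCar project's actual software — might look like this:

```python
import math

def match_confidence(live_scan, stored_map, tol=0.5):
    """Fraction of live laser points that land within `tol` metres of a
    point in the stored map. (Toy illustration of place recognition.)"""
    hits = sum(1 for p in live_scan
               if any(math.dist(p, q) <= tol for q in stored_map))
    return hits / len(live_scan)

def should_offer_autonomy(live_scan, stored_map, threshold=0.8):
    """Ask the driver 'do you want me to drive?' only when the
    surroundings look sufficiently familiar."""
    return match_confidence(live_scan, stored_map) >= threshold
```

Because cities change slowly, a stored map stays valid for a long time, so the confidence score tends to be high on routes the car has already learned.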
It’s really a matter of machine learning, the science of probability and good guesswork; and the data the researchers are using comes from the cameras and lasers, but also from road plans, aerial photographs and internet queries. The car needs to learn its environment before it can, metaphorically speaking, take the wheel. (The driver can always take back control by tapping the brakes.)
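The handover rules described above — autonomy is offered only on a learned route, and a brake tap always returns control to the human — can be sketched as a simple state machine. The class and method names here are hypothetical, chosen purely to illustrate the logic:

```python
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"
    AUTONOMOUS = "autonomous"

class HandoverController:
    """Toy model of the driver/robot handover: the car may take the
    wheel on a route it has learned, and tapping the brakes always
    hands control straight back to the driver."""

    def __init__(self):
        self.mode = Mode.MANUAL

    def driver_accepts_offer(self, route_is_learned: bool) -> Mode:
        # The car only offers (and engages) autonomy on familiar routes.
        if route_is_learned:
            self.mode = Mode.AUTONOMOUS
        return self.mode

    def brake_tapped(self) -> Mode:
        # A brake tap unconditionally returns control to the human.
        self.mode = Mode.MANUAL
        return self.mode
```

The key design point is that the override path is unconditional: no confidence score or learned model can keep the system engaged once the driver touches the brakes.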
As for next steps, the team will work on getting the system to understand traffic flow and to evaluate the best routes on its own.