Daffodil International University
Faculty of Engineering => EEE => Topic started by: saikat07 on November 22, 2016, 10:44:23 AM
When Elon Musk unveiled his “Master Plan” for Tesla on the company’s blog, he argued for the electric car’s controversial Autopilot mode in stark ethical terms. It would be “morally reprehensible,” he said, to scale back or disable Tesla’s partially autonomous driving feature because, on balance, Autopilot still saves lives.
There is no doubt that Autopilot and other similar driver-assistance technologies improve safety. But my experience as CEO of EDGE3 Technologies, a vision company developing driver-monitoring systems for vehicles, and as a former professor and head of the Machine Vision Lab at Embry-Riddle Aeronautical University, suggests something else too. Namely, in the rush to achieve fully autonomous driving, we may be sidestepping the proper technology development path and overlooking essential technologies needed to help us get there.
Tesla’s Autopilot, although a great pioneering effort, is in fact a driver-assist feature, and not quite the fully autonomous capability we all dream of. In technical terms, it is an NHTSA Level 2 system, defined as “automation of at least two primary control functions.” Such systems require you to keep your hands on the steering wheel at all times. In August, Tesla removed the Chinese terms for “autopilot” and “self-driving” from its China website—on the heels of an accident in Beijing in which the driver alleged Tesla misrepresented its cars’ capabilities.
There is a clear disconnect between drivers’ expectations and the reality of Autopilot today. The updated “hands on the wheel” requirement and notification that Tesla recently released do not equate to eyes on the road. Many Tesla and other car owners may wander off visually or mentally, even with their hands on the wheel. They instead hope or believe that their Autopilot enables limited self-driving (Level 3), or even fully autonomous driving (Level 4), when in reality they are driving a Level 2 system.
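For readers keeping track of the terminology, the NHTSA automation levels referenced in this article can be summarized roughly as in the sketch below. This is an illustrative summary of the classification only, not code from any vehicle system; the helper function name is invented for this example.

```python
from enum import IntEnum

class NHTSALevel(IntEnum):
    """NHTSA's original automation levels, as referenced in this article."""
    NO_AUTOMATION = 0         # Driver controls everything at all times
    FUNCTION_SPECIFIC = 1     # One automated function, e.g. conventional cruise control
    COMBINED_FUNCTION = 2     # At least two primary control functions automated
                              # (e.g. adaptive cruise + lane centering), like Autopilot
    LIMITED_SELF_DRIVING = 3  # Car drives itself in some conditions, but must be able
                              # to hand control back to the driver with adequate warning
    FULL_SELF_DRIVING = 4     # Car performs all driving functions for the entire trip

def requires_constant_driver_attention(level: NHTSALevel) -> bool:
    """At Level 2 and below, the driver must monitor the road at all times."""
    return level <= NHTSALevel.COMBINED_FUNCTION
```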
Vehicle owners can surely be forgiven for not knowing the level of automation that their vehicles are equipped with. Since it is possible to let a vehicle drive itself on a highway for hundreds of miles, it is easy to understand how a driver may be lulled into a false sense of security, believing the car is fully autonomous. This is problematic. We are starting to see the results: fender benders, and at least one deadly crash earlier this year, caused when the driver was not actively watching the road. Joshua Brown’s Tesla was traveling down the freeway in Autopilot mode with no way to alert him to how dangerous the situation had become. The tragedy was arguably avoidable, had the vehicle known that Brown was, as some reports have suggested, watching a movie on a DVD player.
So how do we get to the next stage of automation—that is, limited self-driving, or Level 3 automation?
NHTSA specifies that, for Level 3, vehicles have to be intelligently aware of their surroundings, understand when a problem is about to occur, and know when and how to cede control of the vehicle to the driver. Cars are certainly getting better at seeing and understanding everything around them, but they are still blind to the one factor that a limited self-driving system needs most in order to know when and how to cede control: the driver.
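To make the missing piece concrete, here is a minimal, hypothetical sketch of the handoff decision a Level 3 system has to make. Every name, signal, and threshold here is invented for illustration; the point is simply that the decision hinges on a driver-attention signal that a system with no in-cabin sensing cannot provide.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    """Hypothetical output of an in-cabin driver-monitoring system."""
    eyes_on_road: bool
    hands_on_wheel: bool

def plan_handoff(hazard_seconds_ahead: float, driver: DriverState) -> str:
    """Illustrative Level 3 handoff logic: ceding control safely requires
    knowing the driver's state, not just what is on the road ahead."""
    # Estimate how long the driver needs to re-engage; an attentive driver
    # can take over quickly, a distracted one needs far more warning.
    # (Times below are placeholder values for illustration.)
    if driver.eyes_on_road and driver.hands_on_wheel:
        takeover_time_needed = 2.0
    elif driver.hands_on_wheel:
        takeover_time_needed = 5.0
    else:
        takeover_time_needed = 10.0

    if hazard_seconds_ahead > takeover_time_needed:
        return "alert driver and cede control"
    # Not enough time for a safe handoff: the vehicle must fall back on its own,
    # e.g. by slowing down or coming to a safe stop.
    return "execute fallback maneuver"
```

Without the DriverState input, a car cannot distinguish an attentive driver from one watching a movie, which is exactly the blind spot described above.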