Street signs are designed for people, but more and more automotive "vision" systems rely on them for guidance. Is there a point at which we need to develop physical infrastructure designed for AI perception?
Security researchers have demonstrated how Tesla's Autopilot driver-assistance system can be tricked into changing speed, swerving, or stopping abruptly simply by projecting fake road signs or virtual objects in front of it.
Their hacks worked on both a Tesla running HW3, which is the latest version of the company's Autopilot driver-assistance system, and the previous generation, HW2.5.
The most concerning finding is that a fake road sign displayed for less than half a second is enough to trigger a response from Tesla's system.
... The researchers, from Ben-Gurion University of the Negev, said their findings "reflect a fundamental flaw of models that detect objects [but] were not trained to distinguish between real and fake objects."
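One commonly discussed mitigation for such split-second "phantoms" is temporal consistency: refusing to act on a detected sign until it has persisted across enough consecutive camera frames. The sketch below is purely illustrative and is not the logic Tesla or Mobileye actually use; the class name, frame rate, and one-second threshold are all assumptions chosen for clarity.

```python
from collections import deque

class SignDebouncer:
    """Illustrative temporal-consistency filter (hypothetical, not any
    vendor's real implementation): a detected sign is acted on only if it
    persists across enough consecutive frames to rule out a phantom that
    flashes for a fraction of a second."""

    def __init__(self, frame_rate_hz=30, min_duration_s=1.0):
        # Number of consecutive frames a sign must appear in
        # before the system treats it as real.
        self.required_frames = int(frame_rate_hz * min_duration_s)
        self.history = deque(maxlen=self.required_frames)

    def observe(self, detected_sign):
        """Record one frame's detection (a label or None). Return the sign
        to act on, or None if nothing has persisted long enough."""
        self.history.append(detected_sign)
        if len(self.history) < self.required_frames:
            return None
        first = self.history[0]
        if first is not None and all(s == first for s in self.history):
            return first
        return None
```

With a 30 fps camera and a one-second threshold, a phantom sign projected for under half a second fills at most ~15 of the required 30 frames and is never acted on, while a real roadside sign that stays in view clears the threshold. The trade-off is added reaction latency, which is why a hard debounce alone is not a complete defense.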
Similar hacks also worked on the Mobileye 630 driver-assistance system, because, like Tesla's, it relies on visual recognition through cameras.
The researchers confirmed that these attacks would not have fooled driver-assistance systems that rely on LIDAR, which measures distances and maps surroundings using lasers.