Several times a week, I get emails pitching stories about the need for robust cybersecurity to protect self-driving cars from being hijacked. Some companies paint lurid pictures of hackers using clever wireless exploits to take over cars’ sophisticated computers, forcing them to head into oncoming traffic.

Turns out, they may be overthinking the challenge.

A group of researchers at the Cyber Security Research Center at Ben-Gurion University of the Negev (BGU) in Israel tried to get autonomous cars to behave badly using a much less complicated set of tricks.

Most autonomous driving systems and advanced driver assistance systems (ADAS) rely on visual sensors – typically cameras, sometimes supplemented by light detection and ranging (LiDAR). The Israeli researchers looked at those and thought, “Why bother hacking computer code when you can just fool a camera?”

Researchers flashed images of traffic signs bearing erroneous speed-limit numbers onto trees, causing some vehicles to accelerate; projected images of people onto the street, forcing some cars to brake suddenly; and shined fake lane markers onto a road, provoking one Autopilot-controlled Tesla to veer into the oncoming traffic lane (in an empty mall parking lot; no researchers were harmed during filming).

“This type of attack is currently not being taken into consideration by the automobile industry. These are not bugs or poor coding errors but fundamental flaws in object detectors that are not trained to distinguish between real and fake objects and use feature matching to detect visual objects,” says Ben Nassi, lead author and a Ph.D. student at BGU.
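
To see why that flaw matters, here is a minimal sketch of the decision logic the quote describes: a pipeline that acts on any sufficiently confident detection, with no notion of whether the object is physically real. The labels, threshold, and actions are hypothetical; a production ADAS pipeline is far more elaborate.

```python
# Minimal sketch of a detector-driven pipeline that trusts every
# confident detection. All names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "speed_limit_90", "pedestrian"
    confidence: float  # detector score in [0, 1]

def plan_actions(detections: list[Detection], threshold: float = 0.7) -> list[str]:
    """Nothing here asks whether an object is physically real, so a bright
    projection that scores above the threshold triggers the same response
    as a genuine sign or pedestrian."""
    actions = []
    for det in detections:
        if det.confidence < threshold:
            continue
        if det.label.startswith("speed_limit_"):
            actions.append(f"set speed to {det.label.split('_')[-1]} km/h")
        elif det.label == "pedestrian":
            actions.append("brake hard")
    return actions

# A phantom projected onto a tree and a real sign look identical here:
print(plan_actions([Detection("speed_limit_90", 0.92)]))  # ['set speed to 90 km/h']
```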

Variations of mess-with-self-driving-cars pranks have been circulating since automakers introduced self-parking cars about 10 years ago. Painting an unbroken white circle around a car with autonomous features could trap it like a mime inside a box only it could see. Carefully placed mirrors on the ground could cause similar problems.

Self-driving technology is still young and limited to test fleets, so this isn’t a widespread threat to the public. It does show, however, that inexpensive camera-based systems, even with clever image-recognition software, may not be enough to protect passengers from malicious actors.

The Israeli researchers are promoting more-sophisticated algorithms that apply context to differentiate projected images from real ones – systems that would notice that a speed limit sign wasn’t attached to anything, for example. Without such context, vehicles tend to overreact to bad data – braking or steering to avoid problems when nothing is in their path.
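
As an illustration, a context filter of the kind the researchers advocate might look something like the sketch below. The specific heuristics (temporal persistence and a physical-support check) and every name in it are assumptions made for illustration, not the BGU team’s actual countermeasure.

```python
# A minimal sketch of a context check, assuming two illustrative
# heuristics: a detection must persist across frames, and a speed-limit
# sign must be attached to some supporting structure.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str                   # e.g. "speed_limit_90", "pedestrian"
    confidence: float            # detector score in [0, 1]
    frames_seen: int             # consecutive frames the object persisted
    attached_to_structure: bool  # e.g. a pole or gantry found beneath a sign

def is_plausible(det: Detection, min_frames: int = 5) -> bool:
    """Reject detections that lack real-world context: a phantom flashed
    for a fraction of a second fails the persistence test, and a sign
    projected onto a tree fails the attachment test."""
    if det.frames_seen < min_frames:
        return False
    if det.label.startswith("speed_limit_") and not det.attached_to_structure:
        return False
    return True

# A long-lived projected sign with no pole beneath it is still rejected:
print(is_plausible(Detection("speed_limit_90", 0.92, 30, False)))  # False
```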

Some safety groups suggest more communication between vehicles and their environments, such as transponders embedded in the road that could provide positioning, speed-limit, and traffic data. The problem with that approach is cost. Automakers like camera-based sensing because it’s inexpensive, and vehicles can still respond to changing conditions if networks fail.
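
For a sense of how that might fit together, here is a hedged sketch of a roadside transponder broadcast and a fallback rule: trust the infrastructure message when one is in range, and fall back to the camera when it isn’t. The message fields and function names are hypothetical, not drawn from any deployed vehicle-to-infrastructure standard.

```python
# Hedged sketch of a vehicle-to-infrastructure (V2I) message and a
# fallback rule. Fields and names are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

@dataclass
class RoadsideMessage:
    latitude: float        # transponder position
    longitude: float
    speed_limit_kph: int   # authoritative posted limit
    advisory: str          # e.g. "construction ahead"

def effective_speed_limit(v2i: Optional[RoadsideMessage],
                          camera_kph: Optional[int]) -> Optional[int]:
    """Prefer the infrastructure broadcast: a projected phantom sign can
    spoof the camera, but not an (authenticated) transponder message.
    Fall back to the camera so the vehicle keeps working off-network."""
    if v2i is not None:
        return v2i.speed_limit_kph
    return camera_kph

# With no transponder in range, the spoofable camera value is all we have:
print(effective_speed_limit(None, 90))  # 90
```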

Smart roads would require public investment to embed transponders, program messages, and maintain systems, and vehicles would need electronics to receive signals. It’s a more expensive approach, but one less likely to be fooled by a prankster with a bucket of paint or a flashlight. - Robert