Self-driving cars can be fooled by displaying virtual objects

Self-driving cars are one of the coolest innovations of the 21st century, and for good reason: you could finally sleep on your daily commute (not recommended, though).

However, as with any piece of technology, they come with flaws. Many of these have been discovered before, but one has only recently come to light.

See: Self-Driving Cars Can Be Tricked Into Misreading Street Signs

Explored by a group of researchers from Ben-Gurion University of the Negev, the attack was tested on two commercial advanced driver-assistance systems (ADASs), a Tesla Model X (Autopilot versions HW2.5 and HW3.0) and a Mobileye 630, by displaying “phantom” objects in front of the two vehicles.

These were not real, physical objects but mere projected images of them; despite this, they caused the cars to incorrectly detect them as real obstacles and stop.

Examples include a virtual road sign and an image of a pedestrian displayed using a projector or a digital billboard. These phantoms caused the Tesla to stop in 0.42 seconds, while the Mobileye 630 reacted even faster, stopping in 0.125 seconds.

Attackers could exploit this maliciously to cause traffic jams and abrupt stops, which could result in accidents.

In response, the researchers propose deploying a countermeasure: a camera-based system able to determine whether a detected object is real or a virtual/phantom projection.

Explaining the approach, the researchers state in their report:

The countermeasure (GhostBusters) uses a “committee of experts” approach and combines the results obtained from four lightweight deep convolutional neural networks that assess the authenticity of an object based on the object’s light, context, surface, and depth. We demonstrate our countermeasure’s effectiveness (it obtains a TPR of 0.994 with an FPR of zero) and test its robustness to adversarial machine learning attacks.
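
The paper describes GhostBusters only at this high level, but the general “committee of experts” pattern is easy to sketch. The PyTorch snippet below is a minimal illustration of that idea, not the researchers’ implementation: the network sizes, the 64x64 input crops, and names such as ExpertCNN and PhantomCommittee are assumptions made purely for demonstration. Four lightweight CNNs each score one aspect of a detected object (light, context, surface, depth), and a learned combiner merges their scores into a single real-versus-phantom probability.

```python
# Minimal sketch of a "committee of experts" phantom detector (illustrative only;
# not the GhostBusters code). Architecture sizes and input shapes are assumptions.
from typing import Dict

import torch
import torch.nn as nn


class ExpertCNN(nn.Module):
    """A lightweight CNN that scores one aspect (light, context, surface, or depth)."""

    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Raw logit: higher means "looks like a real object" according to this expert.
        return self.head(self.features(x).flatten(1))


class PhantomCommittee(nn.Module):
    """Combines the four experts' logits into one real-vs-phantom probability."""

    def __init__(self):
        super().__init__()
        self.experts = nn.ModuleDict({
            "light":   ExpertCNN(),
            "context": ExpertCNN(),
            "surface": ExpertCNN(),
            "depth":   ExpertCNN(in_channels=1),  # e.g. a single-channel depth/disparity map
        })
        self.combiner = nn.Linear(len(self.experts), 1)

    def forward(self, crops: Dict[str, torch.Tensor]) -> torch.Tensor:
        logits = torch.cat([self.experts[name](crops[name]) for name in self.experts], dim=1)
        return torch.sigmoid(self.combiner(logits))  # probability that the object is real


# Usage with dummy inputs: one 64x64 crop per aspect for a single detected object.
model = PhantomCommittee()
crops = {
    "light":   torch.rand(1, 3, 64, 64),
    "context": torch.rand(1, 3, 64, 64),
    "surface": torch.rand(1, 3, 64, 64),
    "depth":   torch.rand(1, 1, 64, 64),
}
p_real = model(crops)
print(f"P(real object) = {p_real.item():.3f}")  # threshold this to decide whether to brake
```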

This is not the first time self-driving cars have been fooled; other researchers had previously managed it using methods such as radio spoofing and tampering with physical objects, for instance by defacing road signs with graffiti.

All of those attacks, however, required highly skilled attackers and left plenty of forensic evidence that could be recovered to apprehend them. The newly discovered attack vector, on the other hand, can be carried out remotely with far less expertise and fewer resources, making it more dangerous.

The in-depth research paper is available here.

To conclude, this piece of research will be of great help not only to the aforementioned companies but also to others in the industry, such as Google, that are working to make autonomous driving capabilities more accurate.

See: Scammers Distribute Malware to Drivers in Speeding Ticket Scam

For the future, it is important to remember that this weakness is not a vulnerability in the traditional sense but an indication that the underlying detection model is not accurate enough and needs improvement.

Did you enjoy reading this article? Do like our page on Facebook and follow us on Twitter.
