CSIRO designs algorithm vaccination technique

In a paper accepted to the 2019 International Conference on Machine Learning (ICML), CSIRO researchers have presented a new method of preventing attacks on algorithms and machine learning systems.

Similar in principle to a vaccine, the innovation is designed to limit the effectiveness of hackers who target algorithms that could drive cars, identify spam emails or diagnose diseases.

For manufacturers wary of adopting algorithms because of the risks of system infiltration, whether that means the copying of proprietary information or a malicious shutdown, the CSIRO technique is intended to provide peace of mind.

CSIRO points out that, as with any new technology, securing it from outside interference is core to its effectiveness, and manufacturing software is no different. In 2017, Italian researchers working with cybersecurity firm Trend Micro found that factory robots could be easily hacked because of lax security practices.

The CSIRO work counters adversarial attacks, which trick machine learning models by overlaying a carefully crafted layer of noise on an image. The noise causes the algorithm to misidentify the image, with potentially devastating consequences.
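To give a sense of how such noise is crafted, the sketch below shows one widely known perturbation technique, the fast gradient sign method (FGSM). It is purely illustrative: the paper does not specify this method, and the PyTorch model, label and epsilon value are assumptions for demonstration.

```python
# Illustrative sketch only: FGSM, one common way to craft the adversarial
# "layer of noise" described above. Not CSIRO's code; model and epsilon
# are assumed for demonstration.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` that nudges the model towards
    misclassifying it, using the gradient of the loss w.r.t. the pixels."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid pixels.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0, 1).detach()
```

The perturbation is typically small enough that a human still sees the original image, yet large enough to push the model across a decision boundary.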

“Adversarial attacks have proven capable of tricking a machine learning model into incorrectly labelling a traffic stop sign as a speed sign, which could have disastrous effects in the real world,” said Dr Richard Nock, machine learning group leader at CSIRO’s Data61, the data and digital specialist arm of the national science agency.

The technique prepares a machine learning algorithm by layering a weak version of an attack over its training images. The algorithm learns from this minor interference and becomes better able to withstand attacks that introduce more complex disruptions.
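A minimal sketch of that “vaccination” idea, in the style of adversarial training, is shown below. It assumes the FGSM helper from the earlier example and a standard training loop; the optimiser, loss weighting and epsilon are illustrative assumptions, not CSIRO's published method.

```python
# Hedged sketch of "vaccination" as adversarial training: expose the model to
# weakly perturbed images during training so it tolerates stronger attacks
# later. Hyperparameters and loop structure are assumptions, not CSIRO's code.
import torch
import torch.nn.functional as F

def vaccinated_training_step(model, optimiser, images, labels, epsilon=0.01):
    # Craft a weak adversarial version of the batch (e.g. via fgsm_attack above).
    weak_adv = fgsm_attack(model, images, labels, epsilon=epsilon)
    optimiser.zero_grad()
    # Learn from both the clean and the weakly perturbed examples.
    loss = (F.cross_entropy(model(images), labels)
            + F.cross_entropy(model(weak_adv), labels))
    loss.backward()
    optimiser.step()
    return loss.item()
```

In this framing, the weak perturbation plays the role of the attenuated pathogen in a vaccine: small enough not to derail training, but representative enough that the model builds resistance.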

According to CSIRO, while algorithms applied to the manufacturing sector promise great improvements in productivity and return on investment, their benefits depend on safety and security, and this innovation points towards a safer future for AI.

“The new techniques against adversarial attacks developed at Data61 will spark a new line of machine learning research and ensure the positive use of transformative AI technologies,” said Adrian Turner, CEO of Data61.