
Machine Learning’s Worst Enemy

In this article, we speak to Alberto Todeschini, Faculty Director of Artificial Intelligence at the University of California, Berkeley, to shed light on “Adversarial Machine Learning”. Find out how vulnerable models are to being fooled and what you need to look out for.

Alberto Todeschini, Faculty Director, Artificial Intelligence, University of California, Berkeley
2019-04-26

In layperson’s terms, how does Adversarial Machine Learning work?


Machine learning works like this: we build a model during a training phase, and then we deploy it on new data. For example, I show my model photographs of edible and inedible wild mushrooms, and then I create an app that can tell me whether a mushroom I find in the forest is edible.
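To make the two phases concrete, here is a minimal sketch of the workflow just described. The mushroom features and the labelling rule are entirely made up for illustration; this is not code from the interview.

```python
# Sketch of the two phases described above: train on labelled examples,
# then use the trained model on new data. Features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 4))          # made-up features of known mushrooms
y_train = (X_train[:, 0] > 0).astype(int)    # 1 = edible, 0 = inedible (synthetic rule)

model = RandomForestClassifier().fit(X_train, y_train)   # training phase

new_mushroom = rng.normal(size=(1, 4))       # "a mushroom I find in the forest"
print("edible" if model.predict(new_mushroom)[0] == 1 else "inedible")   # deployment
```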

With adversarial machine learning, the attacker creates an input that causes these models to misbehave.
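One of the best-known ways of crafting such an input for an image classifier is the Fast Gradient Sign Method. The sketch below is only an illustration of the idea; it assumes a trained PyTorch classifier, and `model`, the image tensor `x`, and the label `y` are hypothetical placeholders.

```python
# Minimal FGSM-style adversarial perturbation (sketch).
# Assumes a trained PyTorch classifier `model`, an input image tensor `x`
# with pixel values in [0, 1], and its true label `y` -- all hypothetical.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return a copy of x with a small perturbation that pushes the model
    away from the correct label (Fast Gradient Sign Method)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)      # how wrong the model is on x
    grad, = torch.autograd.grad(loss, x)     # gradient of the loss w.r.t. the pixels
    x_adv = x + epsilon * grad.sign()        # nudge every pixel to increase the loss
    return x_adv.clamp(0, 1).detach()        # keep pixel values valid
```

Because the perturbation is bounded by `epsilon`, the doctored input still looks normal to a person while the model's prediction often flips.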

In more technical terms, what types of attacks are there?


There are a few different techniques and the field has been around for a while. Personally, I'm fascinated by modern attacks on neural networks. One of the most interesting aspects is that we are learning a lot about the limitations of neural networks by studying these attacks.

There are three types of attacks (a short sketch after this list contrasts the first two):

  • untargeted attacks, which simply cause a model to make a mistake or degrade its performance.
  • targeted attacks, which cause a specific misclassification (say, I want a security camera to classify me as an individual who I know has security clearance, so I can enter a building).
  • reprogramming attacks, which are harder to explain but, in simple terms, can be used to steal computing resources: a machine learning model gets hijacked to perform some other kind of computation. This is a type of "parasitic computing".
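The difference between the first two is easiest to see in code. This sketch reuses the idea from the hypothetical `fgsm_perturb` example above; `model`, `x`, `y_true`, and `y_target` are placeholders.

```python
# Untargeted vs. targeted attack objectives (sketch).
import torch
import torch.nn.functional as F

def untargeted_step(model, x, y_true, epsilon=0.03):
    # Push the input so the loss on the *correct* label goes up: any mistake will do.
    x = x.clone().detach().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y_true), x)
    return (x + epsilon * grad.sign()).clamp(0, 1).detach()

def targeted_step(model, x, y_target, epsilon=0.03):
    # Pull the input so the loss on a *chosen* label goes down, e.g.
    # "classify me as the individual who has security clearance".
    x = x.clone().detach().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x), y_target), x)
    return (x - epsilon * grad.sign()).clamp(0, 1).detach()
```

The only real difference is the objective: an untargeted attack maximizes the loss on the true label, while a targeted attack minimizes the loss on the label the attacker wants.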


Could you give an example?


Image source: https://arxiv.org/pdf/1707.08945.pdf

One recent example that I find fascinating is the "adversarial patch". Let's say I have a self-driving car. A targeted attack could make a "Stop" sign be perceived as a "Speed Limit" sign. This is done by sticking a small patch on the sign (literally just a small sticker that leaves most of the sign unchanged).
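A rough sketch of how such a patch could be optimized is shown below. It is only an illustration of the idea, not the method from the cited paper; `model`, the batch of sign photos `sign_images`, and the class index `speed_limit_class` are hypothetical placeholders.

```python
# Sketch: optimize a small "sticker" so that signs carrying it are
# classified as the attacker's chosen class.
import torch
import torch.nn.functional as F

def train_patch(model, sign_images, speed_limit_class, size=32, steps=500, lr=0.1):
    patch = torch.rand(3, size, size, requires_grad=True)   # the sticker's pixels
    opt = torch.optim.Adam([patch], lr=lr)
    target = torch.full((sign_images.shape[0],), speed_limit_class, dtype=torch.long)
    for _ in range(steps):
        x = sign_images.clone()
        x[:, :, :size, :size] = patch.clamp(0, 1)   # paste the patch onto each photo
        loss = F.cross_entropy(model(x), target)    # reward the "Speed Limit" output
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)               # the finished sticker
```

In practice, physical-world attacks also have to survive changes in viewing angle, distance, and lighting, which is what makes the published results so striking.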


How vulnerable are ML models to these “adversaries”?


Very vulnerable. It turns out that many families of machine learning models, and I personally focus on deep learning models, are very brittle. Furthermore, many attacks are transferable: if I craft an attack against a specific model, other models could also be vulnerable. Think of hardware such as home security systems that get few software updates after they are sold. Attackers can buy one of these and spend years finding exploits.
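Transferability can be illustrated in a few lines, again reusing the hypothetical `fgsm_perturb` sketch from earlier: the attack is crafted against one model and then tried on a different one.

```python
# Sketch: craft the attack on a surrogate model, test it on the real target.
# `surrogate`, `target`, the batch `x`, and labels `y` are hypothetical placeholders.
x_adv = fgsm_perturb(surrogate, x, y)        # attacker only ever touches model A
fooled = target(x_adv).argmax(dim=1).ne(y)   # ...yet model B often fails as well
print("fraction of inputs that fool the target:", fooled.float().mean().item())
```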

Looking at it from a finance perspective, what should institutions be concerned about when it comes to Adversarial ML attacks?


As machine learning becomes ever more pervasive, I have no doubt that we will see a broad range of attacks in finance. This means that concerns about adversarial attacks will increasingly become part of good security practice. I expect a lot of creativity from the attackers.

Which aspects within the finance industry could be affected?

It can affect the whole spectrum. Finance is a very large and varied industry, so what we call the “attack surface” is enormous. There will be new types of identity theft, because I can quickly and easily clone your voice and an image of your face for identification purposes. Similar techniques can also be used for powerful phishing attacks: if someone clones my voice and phones you, you may disclose information thinking you are talking to me, when you are actually talking to an attacker.

There will be misinformation campaigns (fake news and the like) on a larger scale than we have seen so far, because we can generate a virtually unlimited amount of new text to spam social networks with. This can be used to influence political and consumer decisions, affecting the performance of individual corporations and whole economies.

Regarding trading, decisions naturally depend on a number of data inputs, and each is a potential target of poisoning attacks. These decisions also depend on predictions, so those predictions will be the next area of attack. To simplify greatly, we could craft an attack that makes a model misclassify a bad stock as a good one. All of this is still at an early stage, but given the size of this industry, there are countless highly lucrative targets.
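As a toy illustration of the poisoning idea, consider a classifier trained to label stocks as "good" or "bad" from a handful of signals. Everything below is synthetic and hypothetical; it only shows how flipping a small, carefully chosen slice of the training labels degrades the model.

```python
# Toy label-flipping poisoning attack on a synthetic "good vs. bad stock" classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))              # five made-up signals per stock
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # 1 = "good", 0 = "bad" (synthetic rule)

clean = LogisticRegression().fit(X, y)

# The attacker relabels the 100 most clearly "bad" examples as "good".
y_poisoned = y.copy()
worst = np.argsort(X[:, 0] + X[:, 1])[:100]
y_poisoned[worst] = 1

poisoned = LogisticRegression().fit(X, y_poisoned)
print("accuracy before poisoning:", clean.score(X, y))
print("accuracy after poisoning: ", poisoned.score(X, y))
```

Real attacks would be subtler, but the mechanism is the same: corrupt the data the model learns from, and the decisions built on top of it drift in the attacker's favor.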

Is there a way to detect if data has been poisoned/attacked?


It depends. The techniques are evolving and rapidly getting more sophisticated, so even if we harden a system against today’s poisoning and other attacks, the defense may not work against tomorrow’s new attacks. What matters is that specialists and organizations pay attention and take security seriously. Unfortunately, many organizations lag behind on security, which is expensive and requires considerable sophistication. We already see this with the data breaches that happen regularly and on a large scale.


Can one learn how to defend against adversarial attacks by training the models to be robust to these attacks?


It's the usual game of cat and mouse. Every time a researcher has published a technique to make models robust to attacks, someone else, shortly after, has found a new technique that defeats the defense. I predict this situation will continue for a while.

Would training algorithms using an adversarial model make it more robust?


Yes, that has been shown. It’s not currently widely adopted, but there are good statistical reasons why this is the case.
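For context, adversarial training in its simplest form just mixes freshly crafted adversarial examples into every training batch. The sketch below reuses the hypothetical `fgsm_perturb` from earlier; `model`, `loader`, and `optimizer` are placeholders.

```python
# Sketch of one epoch of adversarial training.
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
    model.train()
    for x, y in loader:
        x_adv = fgsm_perturb(model, x, y, epsilon)       # attack the current model
        loss = (F.cross_entropy(model(x), y)             # stay accurate on clean data
                + F.cross_entropy(model(x_adv), y))      # ...and on attacked data
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

As the earlier answer about cat and mouse suggests, a model hardened this way may still fall to a new kind of attack it was not trained against.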

What is the most interesting case study you’ve seen with regard to Adversarial ML?

Colleagues at MIT were able to 3D-print objects that fooled a computer vision classifier. Specifically, a plastic toy turtle, which any human would easily recognize as such, was categorized by the machine learning classifier as a rifle. Now, this may just seem like a curiosity, but imagine the opposite: a rifle that security systems classify as a turtle.

If you were a criminal out to commit an adversarial ML attack, what large scale action could you take?


Let's return to the example of a home security system based on a computer vision neural network, and let's say the gadget sells several million units. Historically, many companies have been bad at shipping security updates. Now you have several million homes susceptible to the same attack.