Adversarial Pattern Recognition

TUT AM-04
04 Dec 2016
08:30 - 12:30
Room Tulum 3-4 / Seats 40


Learning-based pattern classifiers are currently used in several applications, such as biometric recognition, spam filtering, malware detection, and intrusion detection in computer networks, that differ from traditional pattern recognition tasks: an intelligent, adaptive adversary can actively manipulate patterns with the aim of making the classifier ineffective. Traditional machine learning techniques do not take this adversarial nature into account, and as a consequence the performance of standard pattern classifiers can degrade significantly when they are deployed in adversarial tasks. In particular, pattern classifiers can be highly vulnerable to well-crafted, sophisticated attacks that exploit knowledge of the learning and classification algorithms. As pattern recognition techniques are increasingly adopted for security and privacy tasks, they will soon be targeted by specific attacks crafted by skilled attackers. Among a larger number of potential attack scenarios, two main threats against learning algorithms have been identified, referred to as evasion and poisoning attacks [1-10].

Evasion attacks consist of manipulating malicious samples at test time to evade detection; for instance, malware code can be manipulated so that the corresponding sample goes undetected (i.e., is misclassified as legitimate). Evasion attacks are thus already a relevant threat in several real-world application settings.
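As a concrete illustration, the sketch below shows how an attacker who knows the weight vector of a linear classifier can push a detected malicious sample across the decision boundary with a small, bounded perturbation. It is a minimal example on synthetic data using scikit-learn; the data, step size, and manipulation budget are illustrative assumptions, not part of the tutorial material.

# Minimal evasion sketch: gradient-style manipulation of a malicious sample
# against a linear classifier. Data and parameter choices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy surrogate for a detection task: class 1 = "malicious", class 0 = "legitimate".
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a malicious sample that the classifier currently detects.
x0 = X[(y == 1) & (clf.predict(X) == 1)][0]
x = x0.copy()

# For a linear model, the most effective per-unit change moves the sample
# against the weight vector w (the gradient of the discriminant function).
w = clf.coef_[0]
step, max_steps = 0.1, 100   # attacker's step size and manipulation budget
for _ in range(max_steps):
    if clf.predict(x.reshape(1, -1))[0] == 0:   # evasion succeeded
        break
    x -= step * w / np.linalg.norm(w)

print("evaded:", clf.predict(x.reshape(1, -1))[0] == 0)
print("perturbation size (L2):", np.linalg.norm(x - x0))

Bounding the number of steps (or the total perturbation) models the attacker's cost: features of real malware cannot be modified arbitrarily without breaking its malicious functionality.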

Poisoning attacks are more subtle: their goal is to mislead the learning algorithm during the training phase by manipulating only a small fraction of the training data, so as to significantly increase the number of samples misclassified at test time, causing a denial of service [3,10]. These attacks require access to the training data used to learn the classification algorithm, which is possible in some application-specific contexts; for instance, in systems that are re-trained or updated online using data collected during system operation. Another category of systems that may be subject to poisoning attacks includes those that exploit feedback from end users to validate their decisions on submitted samples, and then update the classification model accordingly (e.g., PDFRate, an online tool for detecting malware in PDF files [9]).
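A poisoning attack can likewise be sketched in a few lines. The example below is illustrative only: it uses a simple label-flipping strategy on synthetic data with scikit-learn, and the 10% poisoning rate and the flipping heuristic are assumptions. It contrasts the test accuracy of a classifier trained on clean data with one re-trained after the attacker has flipped the labels of a small fraction of the training samples.

# Minimal poisoning sketch: label flipping on a small fraction of the training
# set. The poisoning rate, heuristic, and data are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, class_sep=0.8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Attacker controls the labels of 10% of the training points (e.g., through
# abused user feedback) and flips those the clean model is most confident
# about, a simple worst-case-style heuristic.
n_poison = int(0.10 * len(y_tr))
margins = np.abs(clean.decision_function(X_tr))
idx = np.argsort(margins)[-n_poison:]
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("test accuracy, clean training set:   ", clean.score(X_te, y_te))
print("test accuracy, poisoned training set:", poisoned.score(X_te, y_te))

Even such a naive attack typically degrades test accuracy; the optimisation-based poisoning strategies studied in [3,10] instead craft poisoning points explicitly to maximise test-time error under the same budget.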

This kind of problem has been named adversarial pattern recognition, and is the subject of an emerging research field in the machine learning community.

The purposes of this tutorial are: (a) to introduce the fundamentals of adversarial machine learning to the ICPR community; (b) to illustrate the design cycle of a learning-based pattern recognition system for adversarial tasks; (c) to present recently proposed techniques to assess the performance of pattern classifiers under attack, evaluate their vulnerabilities, and implement defence strategies that make learning algorithms and pattern classifiers more robust against attacks; (d) to show some applications of adversarial machine learning to pattern recognition tasks such as biometric recognition and spam filtering.