AAMAS-2019 Tutorial

Adversarial Machine Learning


PRESENTERS

Bo Li, University of Illinois at Urbana–Champaign (lbo@illinois.edu)
Yevgeniy Vorobeychik, Washington University in St. Louis (yvorobeychik@wustl.edu)

ABSTRACT

Machine learning has seen a remarkable rate of adoption in recent years across a broad spectrum of industries and applications. Many applications of machine learning techniques are adversarial in nature, insofar as the goal is to distinguish instances which are "bad" from those which are "good". Indeed, adversarial use goes well beyond this simple classification example: forensic analysis of malware, which incorporates clustering and anomaly detection, and even vision systems in autonomous vehicles could all potentially be subject to attack. In response to these concerns, there is an emerging literature on adversarial machine learning, which spans both the analysis of vulnerabilities in machine learning algorithms and algorithmic techniques that yield more robust learning. This tutorial will survey a broad array of these issues and techniques from both the cybersecurity and machine learning research areas. In particular, we consider the problems of adversarial classifier evasion, where the attacker changes behavior to escape being detected, and poisoning, where the training data itself is corrupted. We discuss both evasion and poisoning attacks, first on classifiers and then on other learning paradigms, along with the associated defensive techniques. We then consider specialized techniques for both attacking and defending neural networks, focusing in particular on deep learning techniques and their vulnerabilities to adversarially crafted instances.
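
To make the evasion setting above concrete, the following short Python sketch (not part of the tutorial materials) illustrates the basic idea of an adversarial-example attack in the spirit of the fast gradient sign method: a simple logistic-regression "detector" is trained on synthetic data, and a "bad" instance is perturbed within a small budget so the detector no longer flags it. All data, names (e.g., fgsm_evasion, epsilon), and numeric settings are illustrative assumptions, not artifacts of the tutorial itself.

# Minimal sketch of a classifier-evasion ("adversarial example") attack.
# A logistic-regression "detector" is trained on synthetic data, and a
# fast-gradient-sign-style perturbation is used to lower its "bad" score.
# All data, names, and the epsilon budget are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: "good" (label 0) vs. "bad" (label 1) instances.
n, d = 1000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train the detector (logistic regression) by plain gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n
    w -= 1.0 * grad

def fgsm_evasion(x, w, epsilon=0.5):
    # Perturb x within an L-infinity budget epsilon in the direction that
    # decreases the detector's "bad" score. For this linear model the
    # gradient of -log(1 - sigmoid(w.x)) w.r.t. x is sigmoid(w.x) * w.
    p = sigmoid(x @ w)
    grad_x = p * w
    return x - epsilon * np.sign(grad_x)

# Pick the "bad" instance the detector is most confident about and attack it.
scores = sigmoid(X @ w)
bad_idx = int(np.argmax(np.where(y == 1, scores, -np.inf)))
x = X[bad_idx]
x_adv = fgsm_evasion(x, w)

print("detector score before attack:", float(sigmoid(x @ w)))
print("detector score after attack: ", float(sigmoid(x_adv @ w)))

For deep networks the same idea applies with the input gradient computed by backpropagation through the model, which is the setting discussed in the last part of the tutorial.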


PRESENTERS' SHORT BIOS

Bo Li is an Assistant Professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. Her research interests lie in adversarial deep learning, security, privacy, and game theory. She has developed and analyzed scalable robust learning frameworks that protect learning algorithms against evasion attacks in adversarial environments. She has also analyzed adversarial behavior against learning algorithms in the physical world. She was a recipient of a Symantec Research Labs Graduate Fellowship. She obtained her Ph.D. from Vanderbilt University in 2016.
  
Yevgeniy Vorobeychik is an Associate Professor in the Department of Computer Science and Engineering at Washington University in St. Louis. Previously, he was a Principal Research Scientist at Sandia National Laboratories. Between 2008 and 2010 he was a postdoctoral research associate in the Computer and Information Science Department at the University of Pennsylvania. He received Ph.D. (2008) and M.S.E. (2004) degrees in Computer Science and Engineering from the University of Michigan, and a B.S. degree in Computer Engineering from Northwestern University. His work focuses on game-theoretic modeling of security and privacy, adversarial machine learning, algorithmic and behavioral game theory and incentive design, optimization, agent-based modeling, complex systems, network science, and epidemic control. Dr. Vorobeychik received an NSF CAREER award in 2017 and was invited to give an IJCAI-16 early career spotlight talk. He was nominated for the 2008 ACM Doctoral Dissertation Award and received honorable mention for the 2008 IFAAMAS Distinguished Dissertation Award.
  

TUTORIAL NOTES