Maria Rigaki is a PhD student in the Department of Computer Science at the Czech Technical University (CTU) in Prague. As a member of the Stratosphere Lab, she works on the security and privacy of machine learning as well as applications of AI in cyber security. Before that, she spent many years working as a software developer and systems architect. Her work spanned several domains, including the design and development of solutions for telecommunications, physical security, emergency response systems, and critical infrastructures. In her spare time, Maria enjoys hacking and playing bass guitar.
Machine learning (ML) is becoming an integral part of many products. With a growing number of applications incorporating ML models, it is important to ask what the security and privacy implications are. In this talk, we will first introduce security and privacy attacks on ML at a practical level. Then we will focus on model stealing attacks. These attacks view a deployed model as a black box and attempt to replicate it by creating a "copy-cat" model that can subsequently be used as a white-box model. We will discuss why these attacks are interesting, how they work, what the most important considerations for their success are, and some defenses against them. We will also go through some real-world examples of stealing models that are used in security applications, and provide material and tools that can serve as a starting point for people interested in delving into the area of machine learning security.
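The core idea of a model stealing attack described above can be illustrated with a minimal sketch: query a black-box victim model on attacker-chosen inputs, record the returned labels, and train a substitute on those (input, label) pairs. Everything here is hypothetical and simplified for illustration; the victim's decision rule, the use of a perceptron as the copy-cat, and all parameter choices are assumptions, not the method used in any particular real-world attack.

```python
import random

# Hypothetical "victim" model deployed behind an API. The attacker has only
# black-box access: submit an input, observe the predicted label.
def victim_predict(x):
    # Secret decision rule (unknown to the attacker) that the attack replicates.
    return 1 if 0.3 * x[0] + 0.7 * x[1] > 0.5 else 0

# Step 1: query the black box on attacker-chosen inputs and record the labels.
random.seed(0)
queries = [(random.random(), random.random()) for _ in range(2000)]
labels = [victim_predict(x) for x in queries]

# Step 2: train a "copy-cat" substitute on the (input, label) pairs.
# Here the substitute is a simple perceptron with the standard update rule.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):
    for x, y in zip(queries, labels):
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

def substitute_predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Step 3: measure attack success as agreement between the victim and the
# copy-cat on fresh inputs the attacker did not query during training.
test = [(random.random(), random.random()) for _ in range(1000)]
agreement = sum(substitute_predict(x) == victim_predict(x) for x in test) / len(test)
print(f"victim/copy-cat agreement: {agreement:.2f}")
```

The number of queries is a key cost of the attack, which is one reason query budgets and rate limiting come up as defenses; real attacks target far more complex models and must choose queries much more carefully than the uniform sampling used in this toy example.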