Thursday 15 June 2023

ML Security at SSG: Past, Present and Future

“Security is a process, not a product” – Bruce Schneier


This year, we wrap up an eight-year research effort focused on machine learning (ML) at the Secure Systems Group at Aalto University. In that period, we went from using ML for security to exploring whether ML-based systems are vulnerable to novel attacks, and how to defend against them.


We started off by using ML to provide security guarantees in various systems. For example, we used ML models to detect phishing websites, detect adversaries in IoT systems, and discover financial fraud on a payment platform. However, in the process, we realised that in many domains, ML models are not just an isolated component or a tool but the core of the system. This, in turn, led us to look into the security of the models themselves.
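As an illustration of this first line of work, here is a minimal sketch of a phishing-website detector built from simple URL features. The features, data, and classifier choice are illustrative assumptions, not the approach used in our papers.

    # Minimal sketch of ML for security: a phishing-website detector
    # trained on toy hand-crafted URL features. Everything here is an
    # illustrative placeholder, not the actual SSG approach.
    from sklearn.ensemble import RandomForestClassifier

    def url_features(url):
        """Simple features of the kind often used in phishing detection."""
        return [
            len(url),                      # phishing URLs tend to be long
            url.count("."),                # many subdomains are suspicious
            int("@" in url),               # '@' can hide the real host
            int(url.startswith("https")),  # missing TLS is a weak signal
        ]

    # Hypothetical labelled data: 1 = phishing, 0 = benign.
    urls = ["http://paypal.com.account-verify.example@evil.test/login",
            "https://www.aalto.fi/en"]
    labels = [1, 0]

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit([url_features(u) for u in urls], labels)
    print(clf.predict([url_features("https://example.com/login")]))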


We quickly learnt that models are quite brittle: they can be fooled using evasion attacks (in both the vision and text domains), stolen by a malicious client, and are difficult to use in a privacy-preserving manner. To address these issues, we spent years looking into ways of protecting them, focusing on model extraction and ownership. We and others have shown that model extraction attacks are a realistic threat. We proposed the first model watermarking scheme designed to deter model extraction. Fingerprinting schemes have since emerged as one of the most promising defences against model extraction, and we have highlighted concerns with the leading ones. In particular, robustness against malicious accusers is an understudied aspect of the literature: we have shown that all existing watermarking and fingerprinting schemes are vulnerable to malicious accusers.
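To make the model extraction threat concrete, below is a minimal sketch of such an attack, assuming black-box access to a victim model's prediction API. The victim_predict function, the 28x28-input surrogate architecture, and the query data are hypothetical placeholders, not the specific attacks studied in our work.

    # Minimal model extraction sketch: the adversary queries a victim
    # model's prediction API and trains a local surrogate on the returned
    # labels. `victim_predict` and the surrogate architecture are
    # hypothetical placeholders.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def extract_surrogate(victim_predict, query_loader, epochs=10):
        surrogate = nn.Sequential(nn.Flatten(),
                                  nn.Linear(28 * 28, 128), nn.ReLU(),
                                  nn.Linear(128, 10))
        opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
        for _ in range(epochs):
            for x, _ in query_loader:   # the attacker ignores any true labels
                with torch.no_grad():
                    y = victim_predict(x).argmax(dim=1)  # labels from the API
                loss = F.cross_entropy(surrogate(x), y)
                opt.zero_grad()
                loss.backward()
                opt.step()
        return surrogate  # a functional copy, obtained via queries alone

The resulting surrogate approximates the victim's decision boundary without any access to its training data, which is exactly the kind of theft that watermarking and fingerprinting schemes aim to detect after the fact.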


Since many of us have industry backgrounds, we have continuously focused on how our ideas can be integrated into real systems, and paid attention to real-world deployment considerations. In particular, we raised a concern that is typically overlooked in the academic literature: practitioners have to deploy defences against multiple security concerns simultaneously, and sometimes these defences interact negatively. Understanding the interactions between different defences remains an important open problem.


All in all, our research output has been the collective effort of many researchers across two universities, and it owes much to many collaborations, both academic and industrial. The Secure Systems Group continues its work at the University of Waterloo on a broad range of ML security and privacy topics, including how defences against one concern influence other concerns, and which hardware security mechanisms can be used to secure ML models.


Poster PDF presented at the annual SSG Demo Day 2023.

SSG ML research page.

