Whitepaper – Practical Attacks on Machine Learning Systems

This paper collects a set of notes and research projects conducted by NCC Group on the topic of the security of Machine Learning (ML) systems. The objective is to provide some industry perspective to the academic community, while collating helpful references for security practitioners, to enable more effective security auditing and security-focused code review of ML systems. Details of specific practical attacks and common security problems are described. Some general background information on the broader subject of ML is also included, mostly for context, to ensure that explanations of attack scenarios are clear, and some notes on frameworks and development processes are provided.

On the malicious use of large language models like GPT-3

(Or, “Can large language models generate exploits?”) While attacking machine learning systems is a hot topic for which attacks have begun to be demonstrated, I believe that there are a number of entirely novel, yet-unexplored attack-types and security risks that are specific to large language models (LMs), that may be intrinsically dependent upon things like … Continue reading On the malicious use of large language models like GPT-3