On the malicious use of large language models like GPT-3

(Or, “Can large language models generate exploits?”) While attacks on machine learning systems are a hot research topic, and practical attacks have begun to be demonstrated, I believe there are a number of entirely novel, as-yet-unexplored attack types and security risks specific to large language models (LMs) that may be intrinsically dependent upon things like …