I don’t know how to solve prompt injection
I know how to beat XSS, and SQL injection, and so many other exploits. I have no idea how to reliably beat prompt injection! As a security-minded engineer this really bothers me. I’m excited about the potential of building cool things against large language models. But I want to be confident that I can secure them before I commit to shipping any software that uses this technology.