How a security researcher used a now-fixed flaw to store false memories in ChatGPT via indirect prompt injection with the goal of exfiltrating all user input (Dan Goodin/Ars Technica)

Dan Goodin / Ars Technica:
How a security researcher used a now-fixed flaw to store false memories in ChatGPT via indirect prompt injection with the goal of exfiltrating all user input  —  Emails, documents, and other untrusted content can plant malicious memories.  —  When security researcher Johann Rehberger recently reported …
