GPT's "long-term memory" allows prompt injections to become permanent
Facepalm: "The code is TrustNoAI." This is a phrase that a white hat hacker recently used while demonstrating how he could exploit ChatGPT to steal anyone's data. So, it might be a code we should all adopt. He discovered a way hackers could use the LLM's persistent memory to exfiltrate data from any user continuously.