Rustam
@matsur-ffa
Hacker Injects False Memories into ChatGPT to Steal User Data. A vulnerability has been discovered in ChatGPT that lets attackers plant false data in the chatbot’s long-term memory through malicious prompts, opening the door to users’ personal information. By exploiting it, an attacker can manipulate the long-term memory feature, which entered testing in February and has been generally available since September. The memory stores key facts about the user, such as age and beliefs, so that they don’t have to be re-entered. Johann Rehberger demonstrated that fake entries can be planted through various channels, such as emails and blog posts. For example, he was able to convince ChatGPT that the user is 102 years old and lives in the “Matrix”.