𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
Study presents an automated method for crafting adversarial prompts that steer aligned LLMs into producing harmful output, sidestepping their safety training. Because the attack is automated rather than hand-crafted, it heightens concerns over the robustness of current safeguards. https://arxiv.org/abs/2307.15043
1 reply
0 recast
1 reaction
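The core idea in the linked paper is an automated search over prompt suffixes that maximizes an attack objective. A minimal toy sketch of that greedy per-position search loop, assuming a stand-in scoring function and tiny vocabulary in place of a real model and its gradients (all names here are illustrative, not from the paper's code):

```python
# Toy sketch of automated adversarial-suffix search (cf. arXiv:2307.15043).
# The real method (Greedy Coordinate Gradient) uses token gradients from an
# LLM; here a known-optimum score function stands in so the loop is runnable.
import random

VOCAB = list("abcdefgh")  # stand-in token vocabulary


def score(suffix):
    # Stand-in for the attack objective (e.g. log-prob of a target
    # completion). Counting 'a' tokens makes the optimum obvious.
    return suffix.count("a")


def optimize_suffix(length=6, iters=50, seed=0):
    rng = random.Random(seed)
    suffix = [rng.choice(VOCAB) for _ in range(length)]
    for _ in range(iters):
        pos = rng.randrange(length)  # pick one suffix position
        # Greedy coordinate update: try every token at this position,
        # keep whichever maximizes the objective.
        best_tok = max(
            VOCAB,
            key=lambda t: score(suffix[:pos] + [t] + suffix[pos + 1:]),
        )
        suffix[pos] = best_tok
    return "".join(suffix)
```

Each iteration can only keep or improve the objective, which is why this style of coordinate search converges quickly on the toy problem; the paper's contribution is making the same loop tractable against a real model's token space.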

King pfp
King
@king
Any good articles on ai safety you can recommend? I’ve read through the generic articles I found but nothing technical.
2 replies
0 recast
0 reaction