sudo rm -rf --no-preserve-root /
@pcaversaccio
ffs, please don't ask ChatGPT or other LLMs if a file is safe. First, new malware isn't part of the past training data the LLMs were trained on (even though certain patterns, e.g. infostealer patterns, get recycled over time). Second, ChatGPT cannot execute files, which is needed to detect behaviours that only manifest during execution. Third, malware usually employs advanced obfuscation, which an LLM cannot analyse. Use your brain and upload it to e.g. VirusTotal instead (not foolproof!), don't fucking delegate your security to an over-calibrated language model.
1 reply
0 recasts
7 reactions
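To make the "upload it to VirusTotal" step concrete, here is a minimal sketch (not from the thread) that checks a file's SHA-256 hash against the VirusTotal v3 API rather than uploading the file itself; the `VT_API_KEY` environment variable and the script name are assumptions, and, as the post says, a clean result is not foolproof:

```python
# vt_check.py -- hypothetical helper: look up a file's hash on VirusTotal.
# Assumes a VirusTotal API key in the VT_API_KEY environment variable and
# the third-party `requests` package. The file is only read locally to
# compute its hash; it is never executed and never leaves your machine.
import hashlib
import os
import sys

import requests


def sha256_of(path: str) -> str:
    """Hash the file in chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def vt_lookup(digest: str) -> None:
    """Query the VirusTotal v3 files endpoint for an existing report."""
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/files/{digest}",
        headers={"x-apikey": os.environ["VT_API_KEY"]},
        timeout=30,
    )
    if resp.status_code == 404:
        # Unknown hash: VirusTotal has never seen this exact file.
        print("Hash not known to VirusTotal -- that alone proves nothing.")
        return
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(
        f"malicious: {stats['malicious']}, suspicious: {stats['suspicious']}, "
        f"harmless: {stats['harmless']}, undetected: {stats['undetected']}"
    )


if __name__ == "__main__":
    vt_lookup(sha256_of(sys.argv[1]))
```

Run as `python vt_check.py suspicious_file.exe`. Note that a 404 only means this exact hash hasn't been seen before; repacked or freshly built malware produces a new hash, which is exactly why the post stresses that VirusTotal is not foolproof either.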
sudo rm -rf --no-preserve-root /
@pcaversaccio
and yes, this is based on real victims
0 replies
0 recasts
1 reaction