Tony D’Addeo
@deodad
what’s the security model around all these LLM tools that have access to your code base, shell history, etc?

iSpeakNerd 🧙‍♂️
@ispeaknerd.eth
Security?

Steve
@stevedylandev.eth
Personally I only give access to code that is open source. Services like ChatGPT do have team plans with the “promise” that shared context will be private and not used as training data, but still kinda sketch. I don’t give AI access to my shell. The only exception is Zed, which can read your terminal output, but that’s on a per-session, per-request basis.

ns
@nickysap
It’s the “trust me bro” model if I’m not mistaken

Shashank
@0xshash
for the most part the benefits outweigh the potential risks today? GitHub lets you prevent sharing your snippets for training

Breck Yunits
@breck
E = T / A! (The only security in software businesses in the future is dev velocity)