Zach
@zd
this isn’t true

few people realize that the better the LLM gets, the more interesting it will be

it will
- know more than your smartest friend
- talk to you the way you want to be talked to
- make you feel seen and heard more effectively than anyone else could

this will happen really quickly, and soon enough you won’t have any idea you’re talking to AI

and the best part is - you won’t care
ȷď𝐛𝐛
@jenna
Brandolini’s Law will still apply
Zach
@zd
and what does that mean to you?

humans already produce loads of bullshit on a daily basis, and if we assume LLMs get smarter, it would make sense that the amount of bullshit they produce would decrease, not increase

in addition, the smarter they get, the more simply and easily they can parse bullshit - i can see us using them as a way to find signal in noise (which many of us are already doing)

remember: one of the main reasons humans bullshit other humans is because they’re reacting emotionally to something that triggered them, and LLMs don’t have emotions
ȷď𝐛𝐛
@jenna
been thinking about this a lot!

my current take is that the ease of generating LLM bullshit will always outpace the effort needed to discern LLM bullshit

brandolini math all the way down

it’s all good until you’re the one whose loan is autodenied with no recourse or or or…

diligence/vigilance will always be chasing to keep up

similar convo with @vt today who is maybe more optimistic than me https://warpcast.com/jenna/0x1878fd5c