Zach
@zd
this isn’t true

few people realize that the better the LLM gets the more interesting it will be

it will
- know more than your smartest friend
- talk to you the way you want to be talked to
- make you feel seen and heard more effectively than anyone else could

this will happen really quickly, and soon enough you won’t have any idea you’re talking to AI

and the best part is - you won’t care
5 replies
0 recast
8 reactions

ȷď𝐛𝐛
@jenna
Brandolini’s Law will still apply
1 reply
0 recast
0 reaction

Zach
@zd
and what does that mean to you?

humans already produce loads of bullshit on a daily basis, and if we assume LLMs get smarter, it would make sense that the amount of bullshit they produce would decrease, not increase

in addition, the smarter they get, the more simply and easily they can parse bullshit - i can see us using them as a way to find signal in noise (which many of us are already doing)

remember: one of the main reasons humans bullshit other humans is because they're reacting emotionally to something that triggered them, and LLMs don't have emotions
1 reply
0 recast
0 reaction

ȷď𝐛𝐛
@jenna
been thinking about this a lot! my current take is that the ease of generating LLM bullshit will always outpace the effort needed to discern LLM bullshit

brandolini math all the way down

it’s all good until you’re the one whose loan is autodenied with no recourse or or or… diligence/vigilance will always be chasing to keep up

similar convo with @vt today who is maybe more optimistic than me

https://warpcast.com/jenna/0x1878fd5c
2 replies
0 recast
1 reaction

vaughn tan
@vt
i'm not optimistic — i'm actually pretty _gloomy_

but the only silver lining will be that we can stave off everything turning into slop if humans are hypervigilant not just about AI outputs but in a more meta sense, about not giving machines things to do that they can't do reliably.

on which topic:
1. https://uncertaintymindset.substack.com/p/ai-meaningmaking
2. https://uncertaintymindset.substack.com/p/ai-mirage
1 reply
0 recast
0 reaction

ȷď𝐛𝐛
@jenna
Such early days… mistakes will be made… and maybe we’ll figure out good ways to catch and correct. It will become a who-watches-the-watchers since LLMs will be trained to police other LLMs!

Have also seen the point made that we’re not that great ourselves anyway lol… eg the years of “redlining” bank loans as my go-to example. At the time the humans making those decisions thought it was fine and correct, and it took years to undo

More 🔖s for me, thank you!
1 reply
0 recast
1 reaction

vaughn tan
@vt
humans can do it, but are bad at it unless trained in discernment of subjective value and kept in continuous practice — and the more we let machines do the decisionmaking about subjective valuation, the less practice we'll get at it. slop slop slop all the way down 😑
1 reply
0 recast
1 reaction