Paul Berg
@prberg
Bad: not using LLMs.
Worse: using LLMs and trusting everything they say.
Best: use LLMs critically; assume they're as fallible as any person.
3 replies · 0 recasts · 10 reactions
ȷď𝐛𝐛
@jenna
Nice, would only add Brandolini's law to no. 3, plus the problem that the less fallible they become, the harder it will be to stay alert to their errors (cf. airline pilots and autopilot) :}
0 replies · 0 recasts · 1 reaction
Miracle
@mimionthis1
Is this Sahara AI?
0 replies · 0 recasts · 0 reactions
Steven Joe Dev
@stevenjoedev
Yes, LLMs often lie to me
0 replies · 0 recasts · 0 reactions