https://warpcast.com/~/channel/llm
Redphone
@redphone
1/ Humanity’s bullshit detectors have gotten incredibly finely-tuned. We’re cutting information to the bone. Example: LLMs convey information with far fewer words than other methods. One way of looking at this is they "gloss over" important subtext. Another way... 🧵👇
Redphone
@redphone
2/ of looking at it is that virtually all other forms of knowledge-sharing come w insane amounts of politicization, bias and narrative. LLMs have bias, too, of course (they’re trained on human data), but they also address the query without hiding it amid paragraphs of "context."