Haier (she) ๐ŸŽฉ
@haier
2 replies
1 recast
3 reactions

Haier (she) ๐ŸŽฉ
@haier
https://www.linkedin.com/posts/emollick_reasoning-ai-models-require-training-on-human-activity-7278869803119869953-DN7l?utm_medium=ios_app&utm_source=social_share_sheet&utm_campaign=copy_link
0 reply
0 recast
0 reaction

Adam
@adam-
What's missing from this statement is the role effective prompts play in generating responses. I don't think the majority of people who interact with LLMs know how to do this well enough to bump up against this limitation. In addition to that, there's still the question of hallucinations, so a lack of expertise will be compensated for by the projection of one.
0 reply
0 recast
0 reaction