July pfp
July
@july
For the foreseeable future, I think that LLMs will continue to:
- improve significantly on what can be measured
- struggle on what cannot be measured
10 replies
0 recast
63 reactions

July pfp
July
@july
In many ways, I think the limit is what can be measured. To do better on what cannot be measured, we will need to convert more of what cannot be measured into what can be measured. And I think we continue to vastly underestimate how much cannot be measured, because it is an unknown unknown (we have no idea)
4 replies
0 recast
20 reactions

John Hoang pfp
John Hoang
@jhoang
Second this. AI takes over the known knowns space, pushing humans more into the unknown unknowns.
1 reply
0 recast
6 reactions

downshift pfp
downshift
@downshift.eth
been thinking on this a bit recently… we need models to operate on a substrate besides language for this to happen. probably some substrates need to be domain-specific (chemistry, for example) (cc: @swabbie.eth @rjs)
1 reply
0 recast
3 reactions

Ξric Juta  pfp
Ξric Juta
@ericjuta
inference-time compute to combat the unknown. they are now our allies
0 reply
0 recast
2 reactions

Adam pfp
Adam
@adam-
In a way, LLMs currently deal with unknowns by hallucinating them as knowns and presenting them as such. The onus will still be on people to fact-check or to come up with ways to verify
0 reply
0 recast
0 reaction