Stephan pfp
Stephan
@stephancill
The human brain is several orders of magnitude less energy-intensive than SOTA language models operating at comparable performance levels. This leads me to believe that we’re going in the wrong direction with the infinite scaling of compute, and it makes me more bullish on small reasoning/tool-use models
3 replies
0 recast
12 reactions
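A rough back-of-envelope version of Stephan's claim, assuming the commonly cited ~20 W power draw for the human brain and ~700 W TDP for one NVIDIA H100; the GPU count and per-query latency below are illustrative assumptions, not measurements:

# Back-of-envelope energy comparison: human brain vs. LLM inference.
# All LLM-side numbers are illustrative assumptions, not measurements.

BRAIN_WATTS = 20.0        # commonly cited resting power of the human brain
GPU_WATTS = 700.0         # TDP of one NVIDIA H100, assumed fully utilized
GPUS_PER_QUERY = 8        # assumed GPUs serving one large-model query
SECONDS_PER_QUERY = 10.0  # assumed wall-clock time to answer one query

llm_joules = GPU_WATTS * GPUS_PER_QUERY * SECONDS_PER_QUERY  # 56,000 J
brain_joules = BRAIN_WATTS * SECONDS_PER_QUERY               # 200 J

print(f"LLM energy per query:    {llm_joules:,.0f} J")
print(f"Brain energy, same time: {brain_joules:,.0f} J")
print(f"Ratio: ~{llm_joules / brain_joules:,.0f}x")          # ~280x here

Even under these fairly conservative serving assumptions the gap is well over two orders of magnitude, and amortizing training energy on top of inference widens it further.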

Jason pfp
Jason
@jachian
For the small core of people really into LoRAs, it’s actually ripe for specialized tool-use models on Raspberry Pis. I honestly don’t have the imagination to think of what I’d use a model that’s particularly good at just tool use for on an edge device, though
0 reply
0 recast
0 reaction
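For concreteness, a minimal sketch of the kind of setup Jason is describing: attaching a LoRA adapter to a small open model so it can be fine-tuned for a narrow tool-use task and shipped as a few-MB artifact. This uses the Hugging Face transformers and peft libraries; the base model choice and hyperparameters are illustrative assumptions, not a recommendation.

# Minimal LoRA fine-tuning setup for a small tool-use model.
# Model choice and hyperparameters are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "Qwen/Qwen2.5-0.5B-Instruct"  # assumption: any small causal LM works here
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of the full weights,
# so the trainable artifact is tiny relative to the base model.
config = LoraConfig(
    r=8,               # adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections, a common default
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

After fine-tuning on tool-call traces, the adapter can be merged into the base weights or loaded alongside a quantized base model by an edge runtime such as llama.cpp, which is what makes Raspberry Pi-class deployment plausible.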