shoni.eth
@alexpaden
finally cracked the optimization and started pushing 400,000 casts (texts) per second to inference on the Mac Studio
5 replies
2 recasts
17 reactions
m_j_r
@m-j-r.eth
how deep do they go?
1 reply
0 recast
0 reaction
shoni.eth
@alexpaden
wdym?
1 reply
0 recast
0 reaction
m_j_r
@m-j-r.eth
just curious if this is top-level text or if there's dialogue/inference-time reasoning.
0 reply
0 recast
0 reaction