Michael Huang
@michaelhly
Just shipped a tool to let hub runners generate a @farcaster training corpus for LLM tuning — zero network requests. If you have your hub synced, it should be 100x+ faster at pulling data out of your hub compared to RPC-based methods. Try it out with: `pip install "farglot[cli]"`
2 replies
0 recast
3 reactions
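
For context on the tuning step: below is a minimal sketch of fine-tuning a classifier on a corpus like the one this tool produces, assuming the export is a CSV of cast text plus integer labels. The file name, column names, and base model are illustrative only, not farglot's actual output format or API.

```python
# Minimal fine-tuning sketch. Assumptions: the exported corpus is a CSV named
# "casts.csv" with "text" and integer "label" columns, and the base model is
# an arbitrary choice. None of these names come from farglot itself.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

corpus = load_dataset("csv", data_files="casts.csv")["train"]

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")

def tokenize(batch):
    # Truncate/pad casts to the model's maximum sequence length.
    return tokenizer(batch["text"], truncation=True, padding="max_length")

corpus = corpus.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2  # e.g. spam / not-spam
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="cast-classifier", num_train_epochs=1),
    train_dataset=corpus,
)
trainer.train()

# Persist the tuned model and tokenizer for downstream cast classification.
trainer.save_model("cast-classifier")
tokenizer.save_pretrained("cast-classifier")
```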
Michael Huang
@michaelhly
Also, there's a corresponding analyzer library for using your tuned models for cast classification, to help with reputation ranking, spam detection, or auto-moderation: https://warpcast.com/michaelhly/0xb47dc6
1 reply
0 recast
0 reaction
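
A hedged sketch of the classification use case mentioned above, using the standard Hugging Face pipeline to score casts with the model tuned earlier. The model path and example casts are placeholders, not the analyzer library's actual interface.

```python
# Inference-side sketch: scoring casts with the tuned classifier through the
# standard transformers pipeline. The "cast-classifier" directory and the
# example casts are placeholders; this is not the analyzer library's API.
from transformers import pipeline

classify = pipeline("text-classification", model="cast-classifier")

casts = [
    "gm, who else is running a hub this week?",
    "FREE AIRDROP!!! connect your wallet at sketchy.example to claim",
]

for cast in casts:
    result = classify(cast)[0]
    # Flag likely spam for auto-moderation, or down-weight it in reputation ranking.
    print(f"{result['label']} ({result['score']:.2f}) :: {cast}")
```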
Daniel Lombraña
@teleyinex.eth
Which TUI are you using to build such a beautiful interface?
1 reply
0 recast
0 reaction