Vladyslav Dalechyn
@dalechyn.eth
everyone talks/thinks about truth_terminal, and everyone wants to make their own now. i recently started an attempt at making something similar on farcaster, despite having 0 experience in fine-tuning or prompt engineering and only knowing the very high-level basics of llm concepts. it's also a battle test for my `fhub` library. what i found out during that speedrun:

- pulling casts should output a conversation tree rather than a flat array of casts. each leaf can introduce a new branch of sub-context that the llm has to read properly to generate a thoughtful response (sketch below)
- fine-tuning is certainly needed. i'm assuming most LLMs have those weird 4chan/reddit datasets sitting somewhere in their artificial brain, but that's more of an "i know that exists, i can try to replicate it" kind of neural bond than an "i can be it" one

i believe LLMs will be inevitable in social networks. building a better library for ingesting data will result in better output. someone needs to do it.
2 replies
3 recasts
28 reactions
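
A minimal sketch of the conversation-tree idea above, in TypeScript: fold a flat array of casts into a tree keyed by parent hash, then walk one root-to-leaf branch to get exactly the sub-context the model is replying into. The `Cast`/`CastNode` shapes and both helper functions are illustrative assumptions, not fhub's actual API.

```typescript
interface Cast {
  hash: string
  parentHash?: string // undefined for the root cast of a thread
  author: string
  text: string
}

interface CastNode extends Cast {
  children: CastNode[]
}

// Index casts by hash, then attach each one to its parent's children array.
// Casts with no (known) parent become roots.
function buildTree(casts: Cast[]): CastNode[] {
  const nodes = new Map<string, CastNode>()
  for (const cast of casts) nodes.set(cast.hash, { ...cast, children: [] })

  const roots: CastNode[] = []
  for (const node of nodes.values()) {
    const parent = node.parentHash ? nodes.get(node.parentHash) : undefined
    if (parent) parent.children.push(node)
    else roots.push(node)
  }
  return roots
}

// Collect the root-to-leaf path for a single branch, so the prompt only
// carries the sub-context relevant to the cast being replied to.
function branchContext(roots: CastNode[], leafHash: string): Cast[] {
  const path: Cast[] = []
  const walk = (node: CastNode, trail: Cast[]): boolean => {
    const next = [...trail, node]
    if (node.hash === leafHash) {
      path.push(...next)
      return true
    }
    return node.children.some((child) => walk(child, next))
  }
  roots.some((root) => walk(root, []))
  return path
}
```

The returned path can then be serialized one cast per message (author plus text) and passed to the model as the thread context.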

Vladyslav Dalechyn
@dalechyn.eth
and really, given farcaster's open social graph, one can easily build a trustworthy dataset from the inputs of high-reputation members of the protocol. data-labelling isn't a huge problem out here (rough sketch below).
0 replies
0 recasts
9 reactions
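
A minimal sketch of the data-labelling point above, assuming casts arrive already paired with their author's FID: treat casts from a hand-picked set of high-reputation accounts as trusted examples and leave the rest unlabeled. The FID values, type names, and function here are purely illustrative, not part of fhub or any real reputation score.

```typescript
interface CastSample {
  fid: number // Farcaster ID of the cast's author
  text: string
}

interface LabeledExample {
  text: string
  label: 'trusted' | 'unlabeled'
}

// Hypothetical allowlist of high-reputation FIDs, curated by hand.
const TRUSTED_FIDS = new Set<number>([1, 2, 3])

// Tag each cast so that only trusted-author casts end up in the
// fine-tuning set; everything else can be filtered out downstream.
function labelByReputation(casts: CastSample[]): LabeledExample[] {
  return casts.map((cast): LabeledExample => ({
    text: cast.text,
    label: TRUSTED_FIDS.has(cast.fid) ? 'trusted' : 'unlabeled',
  }))
}
```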

Daniel - Bountycaster
@pirosb3
This is really exciting. Happy to share my learnings from building bountybot! Just reach out if interested.
0 replies
0 recasts
0 reactions