vincent
@pixel
FarcasterGPT
5 replies
1 recast
8 reactions

vincent
@pixel
I actually tried building this, and the RAG pipeline was such a cumbersome process. But now OpenAI has just abstracted the RAG away. Damn. No need to chunk sentences, pick an embedding model, or worry about an embedding DB. Upload, then chat. Insane.
1 reply
0 recast
1 reaction
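
A minimal sketch of the "upload then chat" flow described in the cast above, assuming the openai Python SDK v1 and the Assistants API retrieval tool as announced at DevDay (the API-side equivalent of attaching a file to a custom GPT); the model, file name, and question here are illustrative:

from openai import OpenAI
import time

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the concatenated docs; OpenAI handles chunking, embedding, and storage.
doc = client.files.create(file=open("everything.md", "rb"), purpose="assistants")

# Create an assistant with the built-in retrieval tool attached to that file.
assistant = client.beta.assistants.create(
    model="gpt-4-1106-preview",
    instructions="Answer questions using the attached Farcaster protocol docs.",
    tools=[{"type": "retrieval"}],
    file_ids=[doc.id],
)

# Chat: open a thread, add a question, run the assistant, poll until it finishes.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="What is a Farcaster hub?"
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

# Messages come back newest first; print the assistant's answer.
messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)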

six
@six
What did you upload in this case?
1 reply
0 recast
0 reaction

vincent
@pixel
I ran find . -type f \( -name "*.md" -o -name "*.txt" \) -exec cat {} + > everything.md on the farcasterxyz/protocol GitHub repo, then uploaded the resulting everything.md
1 reply
0 recast
2 reactions
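
For anyone not on a shell with find, a Python sketch that does the same thing as the one-liner above: walk the repo and concatenate every .md and .txt file into one everything.md (same output name as in the cast):

from pathlib import Path

# Concatenate all Markdown and plain-text files under the current directory.
with open("everything.md", "w", encoding="utf-8") as out:
    for path in sorted(Path(".").rglob("*")):
        if (
            path.is_file()
            and path.suffix in (".md", ".txt")
            and path.name != "everything.md"  # skip the output file itself
        ):
            out.write(path.read_text(encoding="utf-8", errors="ignore"))
            out.write("\n")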

six
@six
Every devtools company gonna have their own GPT for docs
1 reply
0 recast
2 reactions

vincent
@pixel
Exactly my first thought. Docs sites will move to this Assistant frontend rather than a VitePress site; only a matter of time
1 reply
0 recast
0 reaction

//trip
@heytrip.eth
You didn't notice any hallucinations?
1 reply
0 recast
0 reaction

vincent
@pixel
None when I played with it just now
1 reply
0 recast
1 reaction

//trip
@heytrip.eth
How many chars/tokens was your doc, and how much room to spare did u have in the context window? And what is the approx cost per question? (This is very interesting: I had the same problem as u mentioned above when trying to roll my own)
1 reply
0 recast
1 reaction

vincent
@pixel
context is ~60 KB, RAG is done by OpenAI, and it costs $20/GB/day for the stored context. I have tried rolling my own LLM+RAG and it was a massive pain; this is way better
0 reply
0 recast
1 reaction
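
Back-of-envelope on the figures in this last cast, taking the ~60 KB document and the quoted $20/GB/day rate at face value (for reference, OpenAI's announced retrieval pricing at launch was $0.20/GB per assistant per day, first GB free, which would make this smaller still):

# Storage cost for the stored context at the rate quoted in the cast.
doc_gb = 60 * 1024 / 1e9            # ~60 KB expressed in GB
rate = 20.0                         # quoted rate, $/GB/day
print(f"${doc_gb * rate:.4f}/day")  # ~$0.0012/day: stored-context cost is negligible

Per-question cost would be dominated by model tokens, which the thread doesn't quantify.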