Gabriel Ayuso
@gabrielayuso.eth
I'm no expert, and I'm biased because of my current work, but fine-tuning (especially LoRA) is one of the most powerful and reliable ways to get LLMs to fit your custom needs. Better than prompt engineering and even embeddings. That's all I can say.
2 replies
0 recast
5 reactions
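For readers unfamiliar with LoRA, here is a minimal sketch of what a LoRA fine-tune setup looks like with the Hugging Face peft library. The base model and hyperparameters below are illustrative assumptions, not anyone's actual configuration:

```python
# Minimal LoRA setup with Hugging Face peft; base model and
# hyperparameters are illustrative placeholders.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # assumed base model

# LoRA freezes the base weights and trains small low-rank adapter
# matrices, so only a tiny fraction of parameters is updated.
config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling factor for the adapter
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the total
```

The adapted model can then be trained with any standard training loop or transformers Trainer; only the adapter weights receive gradients.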
//trip
@heytrip.eth
My pipeline has been:
- test wild prompts on the largest (340B+) model I can find
- find the best outputs and test them on 24Bs
- generate 100+ examples and see if I can get it tuned on something much smaller

And yes, LoRA has proven to be incredible for a lot of my use cases
1 reply
0 recast
0 reaction
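A sketch of the example-generation step in a pipeline like this: run the winning prompt through the large "teacher" model and save prompt/completion pairs as tuning data for the smaller model. The client, model name, prompt template, and task inputs are all placeholder assumptions, not trip's actual setup:

```python
# Collect prompt/completion pairs from a large model to fine-tune a
# smaller one. Model name, prompt, and inputs are placeholders.
import json

from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint hosting the large model

PROMPT_TEMPLATE = "Rewrite the following as a formal summary:\n{task}"
tasks = ["...", "..."]  # stand-ins for 100+ real task inputs

with open("train.jsonl", "w") as f:
    for task in tasks:
        prompt = PROMPT_TEMPLATE.format(task=task)
        out = client.chat.completions.create(
            model="gpt-4o",  # placeholder for the 340B+ teacher model
            messages=[{"role": "user", "content": prompt}],
        )
        # Each JSONL line becomes one supervised example for the small model.
        f.write(json.dumps({
            "prompt": prompt,
            "completion": out.choices[0].message.content,
        }) + "\n")
```

The resulting train.jsonl can feed directly into a LoRA run like the one sketched earlier in the thread.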