Gabriel Ayuso
@gabrielayuso.eth
I'm no expert, and I'm biased because of my current work, but fine-tuning (especially LoRA) is one of the most powerful and reliable ways to get LLMs to fit your custom needs. Better than prompt engineering and even embeddings. That's all I can say.
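For illustration, here is a minimal LoRA fine-tuning sketch using the Hugging Face transformers and peft libraries. The base model name, dataset file, and hyperparameters are placeholders, not anything specified in the thread:

```python
# Minimal LoRA fine-tuning sketch (transformers + peft).
# Model name, dataset, and hyperparameters are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA: train small low-rank adapter matrices instead of the full weights.
lora_config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

# Hypothetical custom dataset with a "text" column.
dataset = load_dataset("json", data_files="custom_train.jsonl", split="train")
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lora-out",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-4,
        fp16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the small adapter weights
```

Because only the adapter weights are trained and saved, the upfront cost and storage footprint are much smaller than full fine-tuning, which is a large part of why LoRA is the approach being singled out here.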
3 replies
0 recast
5 reactions
Abhishek Agarwal 📱
@abhishek1point0
Have you considered the arguments presented here: https://warpcast.com/abhishek1point0/0x2c431a
1 reply
0 recast
0 reaction
Gabriel Ayuso
@gabrielayuso.eth
Agree with this: "Given the upfront time and computational resources required for fine-tuning, we’d recommend starting with this base first (along with the supplemental techniques mentioned) and seeing how far you get with it!" It's good to start simple.
1 reply
0 recast
1 reaction
Gabriel Ayuso
@gabrielayuso.eth
However, you can reduce costs and improve quality and performance by moving to a fine-tuned smaller model instead of a larger model with few-shot prompting. It depends on your needs and resources.
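A back-of-envelope sketch of the cost argument, with entirely hypothetical token counts and per-1k-token prices (not real vendor pricing): few-shot examples ride along on every request to the large model, while a fine-tuned small model needs only the bare task input.

```python
# Illustrative per-request cost comparison: large model with few-shot
# examples in every prompt vs a fine-tuned smaller model.

def request_cost(prompt_tokens: int, output_tokens: int,
                 input_price_per_1k: float, output_price_per_1k: float) -> float:
    """Cost of one request given token counts and per-1k-token prices."""
    return (prompt_tokens / 1000) * input_price_per_1k \
         + (output_tokens / 1000) * output_price_per_1k

# Hypothetical numbers only.
FEW_SHOT_TOKENS = 1200   # examples repeated on every call
TASK_INPUT_TOKENS = 200
OUTPUT_TOKENS = 150

large_few_shot = request_cost(
    FEW_SHOT_TOKENS + TASK_INPUT_TOKENS, OUTPUT_TOKENS,
    input_price_per_1k=0.01, output_price_per_1k=0.03)

small_finetuned = request_cost(
    TASK_INPUT_TOKENS, OUTPUT_TOKENS,  # no few-shot block needed
    input_price_per_1k=0.001, output_price_per_1k=0.002)

print(f"large model + few-shot: ${large_few_shot:.4f} per request")
print(f"fine-tuned small model: ${small_finetuned:.4f} per request")
# The few-shot prompt pays for its examples on every request; the fine-tuned
# model pays the training cost once and amortizes it into the weights.
```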
0 reply
0 recast
2 reactions