Brenner
@brenner.eth
Does anyone use fine-tuned models? Why don’t I use fine-tuned models for anything I do? Why don’t people use models fine-tuned for specific languages in Cursor or Windsurf?
3 replies
0 recast
2 reactions
Leeward Bound
@leewardbound
im working my way towards it. i have a project with a complex response format (pydantic structured outputs and tool calling), and im compiling a folder full of example input/output data; once i get up to about 50 or so very curated examples, i plan to finetune 4o-mini. the 15 examples i have are already 99% of my tokens used, since im attaching every example to every prompt to get the llm to behave how i want, so im expecting fine-tuning to save me massively on input tokens
1 reply
0 recast
3 reactions
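A minimal sketch of what that example folder could turn into, assuming the standard chat-format JSONL that OpenAI fine-tuning for gpt-4o-mini expects; the folder layout, file names, and system prompt below are placeholders, not anything from the project itself.

```python
import json
from pathlib import Path

# Hypothetical layout: each curated example is a JSON file with an
# "input" field (the user message) and an "output" field (the structured
# response the model should emit).
EXAMPLES_DIR = Path("examples")
SYSTEM_PROMPT = "You are the assistant. Reply only with the structured JSON schema."

with open("finetune.jsonl", "w") as out:
    for path in sorted(EXAMPLES_DIR.glob("*.json")):
        example = json.loads(path.read_text())
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": example["input"]},
                # The target output becomes the assistant turn, serialized
                # the same way the app will parse it back.
                {"role": "assistant", "content": json.dumps(example["output"])},
            ]
        }
        out.write(json.dumps(record) + "\n")
```

Once that file is uploaded for a fine-tuning job, each live request would only need the system prompt plus the new user message, which is where the input-token savings over attaching every example to every prompt would come from.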
Brenner
@brenner.eth
And then from what interface will you call out to that inference?
1 reply
0 recast
0 reaction
Leeward Bound
@leewardbound
an RN (React Native) app im building, which will respond to output from the LLM: suggested tool calls (approve/deny modifying your profile, creating a log, etc), specific screens/dashboards to display, or conversational flows to direct the user through
1 reply
0 recast
1 reaction
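For illustration, a hedged sketch of what that response shape might look like on the pydantic side; the field and tool names are hypothetical stand-ins for the approve/deny tool calls, screens/dashboards, and conversational flows described above.

```python
from typing import Literal, Optional
from pydantic import BaseModel

class SuggestedToolCall(BaseModel):
    # hypothetical tool names; each suggestion is surfaced in the app
    # as an approve/deny prompt before anything is executed
    tool: Literal["modify_profile", "create_log"]
    arguments: str          # JSON-encoded arguments, kept flat for simplicity
    requires_approval: bool

class TurnResponse(BaseModel):
    reasoning: str                       # the model's working-out for this turn
    reply: str                           # conversational text shown to the user
    screen: Optional[str]                # a screen/dashboard to display, or null
    flow: Optional[str]                  # a conversational flow to direct the user through, or null
    tool_calls: list[SuggestedToolCall]
```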
Brenner
@brenner.eth
Hmm I don’t think I understand
1 reply
0 recast
0 reaction
Leeward Bound
@leewardbound
glad to elaborate or give a demo! it's a hobby project (not my full-time focus), but im building a chat app for natural language habit tracking and journaling. it's still very much in the "prototyping and seeing what gives good results" phase, but basically im using structured outputs to combine reasoning, responding, and tool calling all into one inference
0 reply
0 recast
1 reaction
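A rough sketch of that "one inference" idea, assuming the TurnResponse model sketched above and the OpenAI Python SDK's structured-output parse helper; the model name, history handling, and print stand-ins are placeholders for whatever the app actually does.

```python
from openai import OpenAI

from schemas import TurnResponse  # the response model from the sketch above, saved as schemas.py

client = OpenAI()

def run_turn(history: list[dict], user_message: str) -> TurnResponse:
    # one call returns reasoning, the user-facing reply, and any suggested
    # tool calls together, parsed straight into the pydantic model
    completion = client.beta.chat.completions.parse(
        model="gpt-4o-mini",  # or the fine-tuned model id once it exists
        messages=history + [{"role": "user", "content": user_message}],
        response_format=TurnResponse,
    )
    return completion.choices[0].message.parsed

def handle(turn: TurnResponse) -> None:
    # the app layer branches on each part of the single response
    print(turn.reply)                                    # conversational reply for the user
    if turn.screen:
        print(f"navigate to: {turn.screen}")             # stand-in for routing to a screen/dashboard
    for call in turn.tool_calls:
        # stand-in for the approve/deny prompt before executing anything
        print(f"approve {call.tool}? args={call.arguments}")
```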