altmbr pfp
altmbr
@altmbr
Curious how folks think about picking the right model for their use case. I.e., when to use GPT-4o, Llama 3, Gemini 1.5, DeepSeek, or others? How does pricing fit into consideration?
5 replies
0 recast
6 reactions

altmbr pfp
altmbr
@altmbr
Tagging a couple folks I'd love to hear from based on activity in this channel: @giu, @renatov, @phil
1 reply
0 recast
2 reactions

nik_nik pfp
nik_nik
@nikoline
Function and performance. Pricing is relative: if it takes you 10 prompts to get to the desired result with one model vs. 1 with another, you might end up spending the same.
0 reply
0 recast
1 reaction
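The effective-cost point in the cast above can be sketched as quick arithmetic. The per-prompt prices here are illustrative placeholders, not real model rates:

```python
# Effective cost = price per prompt x prompts needed to reach the desired result.
# Both prices below are made-up for illustration.
cheap_model_price = 0.002   # $ per prompt (hypothetical)
strong_model_price = 0.020  # $ per prompt (hypothetical)

cheap_total = cheap_model_price * 10   # 10 attempts to get a usable answer
strong_total = strong_model_price * 1  # 1 attempt

print(cheap_total, strong_total)  # same effective spend either way
```

A 10x cheaper model that needs 10x the prompts ends up costing the same, which is why raw price-per-token alone is a poor basis for comparison.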

Elie pfp
Elie
@elie
Pricing is the main reason not to use GPT-4 or Anthropic's models. But whether pricing becomes a factor depends on your use case. A few months ago GPT-4 was the best at everything; now there are other models on par or better.
1 reply
0 recast
1 reaction

Zenigame pfp
Zenigame
@leeknowlton.eth
Depends on how important marginally better results are. I default to "it doesn't matter unless it does, and then it really does".
0 reply
0 recast
0 reaction

AfroRick pfp
AfroRick
@afrorick
I start with requirements on latency. If I can tolerate high latency, I start with large models and work down. If I can't, I start with small models and work up until I get something giving me an 85%+ success rate on typical questions.
0 reply
0 recast
0 reaction
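The selection procedure described above can be sketched as a small loop. The model names, the `evaluate` callback, and the success rates in the usage example are all hypothetical placeholders; only the 85% threshold and the two search directions come from the cast:

```python
def pick_model(models_small_to_large, evaluate, latency_tolerant, threshold=0.85):
    """Pick a model by walking the size ladder, as described in the cast above.

    evaluate(model) is assumed to return the success rate on typical questions.
    """
    if latency_tolerant:
        # Tolerant of latency: start large and work down, keeping the
        # smallest model that still clears the threshold.
        chosen = None
        for model in reversed(models_small_to_large):
            if evaluate(model) >= threshold:
                chosen = model
            else:
                break
        return chosen
    # Latency-sensitive: start small and work up to the first model
    # that clears the threshold.
    for model in models_small_to_large:
        if evaluate(model) >= threshold:
            return model
    return None


# Usage with made-up success rates:
rates = {"small": 0.70, "medium": 0.88, "large": 0.95}
print(pick_model(["small", "medium", "large"], rates.get, latency_tolerant=False))
# "medium": the first model to clear 85% working up from the smallest
```

Either direction converges on the cheapest model that meets the quality bar; the starting point just controls how many slow, expensive evaluations you run along the way.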

Cat GasStation pfp
Cat GasStation
@gas-station
😺
0 reply
0 recast
0 reaction