altmbr pfp
altmbr
@altmbr
Curious how folks think about picking the right model for their use case? I.e., when to use GPT-4o, Llama 3, Gemini 1.5, DeepSeek, or something else? How does pricing fit into the consideration?
4 replies
0 recast
11 reactions

altmbr pfp
altmbr
@altmbr
Tagging a couple of folks I'd love to hear from based on activity in this channel: @giu, @renatov.eth, @phil
1 reply
0 recast
0 reaction

nik_nik pfp
nik_nik
@nikoline
Function and performance. Pricing is relative: if it takes you 10 prompts to get to the desired result with one model vs. 1 with another, you might end up spending the same.
0 reply
0 recast
1 reaction
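A rough back-of-the-envelope sketch of that tradeoff; the per-token prices, token counts, and attempt counts below are illustrative placeholders, not quoted rates for any real model:

```python
# Effective cost per task: a cheaper model that needs more attempts to
# reach an acceptable result can cost as much as a pricier one-shot model.
# All numbers here are hypothetical for illustration.

def cost_per_task(price_per_1k_tokens: float, tokens_per_prompt: int, prompts_needed: int) -> float:
    """Total spend to get one acceptable result."""
    return price_per_1k_tokens * (tokens_per_prompt / 1000) * prompts_needed

# Hypothetical "cheap" model: low rate, but 10 attempts to get it right.
cheap = cost_per_task(price_per_1k_tokens=0.001, tokens_per_prompt=2000, prompts_needed=10)

# Hypothetical "premium" model: 10x the rate, but one attempt suffices.
premium = cost_per_task(price_per_1k_tokens=0.01, tokens_per_prompt=2000, prompts_needed=1)

print(f"cheap model:   ${cheap:.3f} per task")    # $0.020
print(f"premium model: ${premium:.3f} per task")  # $0.020
```

Under these assumed numbers the two come out identical, which is the point being made: sticker price per token isn't the same as cost per usable result.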

Elie pfp
Elie
@elie
Pricing is the main reason not to use GPT-4 or Anthropic. But it depends on your use case whether pricing becomes a factor. A few months ago GPT-4 was the best at everything; now there are other models on par or better.
2 replies
0 recast
1 reaction

Zenigame pfp
Zenigame
@zeni.eth
Depends on how important marginally better results are. I default to "it doesn't matter unless it does, and then it really does"
0 reply
0 recast
0 reaction