Justin Hunter
@polluterofminds
I was building a demo app, and at first I used a locally running instance of Llama 3.1. I wasn’t happy with the results, so I swapped it out for API calls to OpenAI’s GPT-4o model. The results were significantly better. Open source has narrowed the gap, but there is still a gap.
2 replies
0 recasts
8 reactions
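The swap described above is usually a small change because local servers such as Ollama expose an OpenAI-compatible Chat Completions endpoint: only the base URL and model name differ. A minimal sketch of that idea, assuming Ollama's documented default port; the `build_request` helper itself is hypothetical glue code, not a real library:

```python
# Both backends accept the same Chat Completions wire format, so swapping
# a local Llama 3.1 for GPT-4o means changing the base URL and model name.
# URLs/models below are the servers' documented defaults; the helper is
# illustrative only.

BACKENDS = {
    "local": {"base_url": "http://localhost:11434/v1", "model": "llama3.1"},
    "openai": {"base_url": "https://api.openai.com/v1", "model": "gpt-4o"},
}

def build_request(backend: str, prompt: str) -> dict:
    """Return the URL and JSON body for a chat completion request."""
    cfg = BACKENDS[backend]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "body": {
            "model": cfg["model"],
            "messages": [{"role": "user", "content": prompt}],
        },
    }
```

Because only the config dict entry changes, swapping backends in a demo app is a one-line edit rather than a rewrite.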
Caden Chase
@cbxm
that's not an open source gap, that's a compute gap. were you running the 405B model? if not, it's apples to oranges. also... yes, there's still a gap.
1 reply
0 recasts
0 reactions
Justin Hunter
@polluterofminds
Yep, should have clarified. I meant open source models you can reasonably run yourself. You’re right, it’s a compute problem, which means service providers like OpenAI still lead and still win.
1 reply
0 recasts
1 reaction
Caden Chase
@cbxm
unfortunately true :/ I just looked at the cost to rent GPUs that can handle the 405B, and it's probably cheaper to pay the $200/mo for ChatGPT Pro, though that doesn't come with API access... for programming, I'm definitely in the "just use Claude and Cursor" camp, but I think you can get reasonable results out of Llama 3.1 8B for other tasks.
1 reply
0 recasts
1 reaction
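The cost point above checks out on a back-of-envelope basis: 405B parameters at fp16 need roughly 810 GB just for the weights, which is multiple high-end GPUs before any KV cache or serving overhead. A rough sketch, assuming only Llama 3.1 405B's published parameter count; everything else is arithmetic:

```python
# Back-of-envelope VRAM needed just to hold Llama 3.1 405B's weights at
# common precisions. Ignores KV cache and activation memory, which add more.

PARAMS = 405e9  # Llama 3.1 405B parameter count

def weight_memory_gb(bytes_per_param: float) -> float:
    """Approximate GB of memory for the raw weights alone."""
    return PARAMS * bytes_per_param / 1e9

for precision, nbytes in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{precision}: ~{weight_memory_gb(nbytes):,.0f} GB of weights")
```

Even aggressively quantized to 4 bits, the weights alone exceed any single consumer GPU, which is why renting multi-GPU nodes quickly outruns a flat monthly subscription.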