James
@jimmysb1
I'm putting together 3 TB of GPU capacity to run 3 concurrent Llama 3 405B models - mainly to have them cross-reference and edit each other and do their own coding... so I want redundancy in the system. Currently running two shitty AMD systems with 2 40B Llama 3 models. Any hardware suggestions besides Nvidia as the base GPUs, and any suggestions on GitHub repo software to run them and make them agents? Currently using a crappy Ollama interface on both.
notdevin
@notdevin.eth
The other projects I've seen that tried to use AMD were not stoked on their choice. Get the M4 if you don't want Nvidia, but you should just get Nvidia. What are you asking about GitHub?
James
@jimmysb1
If there's a better Llama interface than Ollama!!
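For what it's worth, the cross-review loop James describes could be sketched roughly like this. This is a minimal sketch, not a working agent stack: the model names are placeholders, and `call_model` is a stub standing in for a real backend call (e.g. a POST to Ollama's local `/api/chat` endpoint on port 11434).

```python
# Hypothetical sketch: N locally hosted Llama models take turns
# editing each other's output. `call_model` is a stub; a real version
# would POST to a backend such as Ollama's http://localhost:11434/api/chat.
from typing import Callable

def cross_review(draft: str,
                 models: list[str],
                 call_model: Callable[[str, str], str],
                 rounds: int = 2) -> str:
    """Pass `draft` through each model in turn, asking it to edit the text."""
    text = draft
    for _ in range(rounds):
        for model in models:
            prompt = f"Review and improve the following text:\n\n{text}"
            text = call_model(model, prompt)
    return text

# Stub backend so the sketch runs without a GPU server: it just tags
# the text with the model name instead of actually editing it.
def echo_backend(model: str, prompt: str) -> str:
    body = prompt.split("\n\n", 1)[1]
    return body + f"\n[edited by {model}]"

result = cross_review("hello world",
                      ["llama3-a", "llama3-b"],  # placeholder model names
                      echo_backend,
                      rounds=1)
print(result)
```

Swapping `echo_backend` for a real HTTP call to whichever server hosts the 405B models is the only change needed to make the loop do actual cross-editing.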