shiyuebtc
@shiyuebtc
I am putting together 3 TB of GPU capacity to run 3 concurrent Llama 3 405B models - mainly to have them cross-reference and edit each other's output and do their own coding...so I want redundancy in the system. Currently running two shitty AMD systems with 2 40