PostArchitekt
@postarchitekt
So running Ollama with DeepSeek R1 70B on a 4090 is kinda cool. But the thinking process is painfully slow at about 1.5 tokens/sec. Definitely not usable for real work of course, but for testing results and comparing models it does the job