Max Jackson pfp
Max Jackson
@mxjxn.eth
Got the new RTX 4080 running in the PC last night. Spent the evening downloading Llama 3.1 and getting it running locally. It replies fast as hell to prompts! And of course gave it a test run on some GPU-hungry games. Today I'm going to get ComfyUI and SDXL up and running, and make sure I can render in Blender without issues.
8 replies
3 recasts
20 reactions

๐ŸŒˆ YON pfp
๐ŸŒˆ YON
@yonfrula
i've got llama3.1 and llama2-uncensored and they're fun to play with. i still don't use them too much, only because i don't have a nice ui, everything is from the terminal. would love to set up comfyui some day, but yeah, might need that 4080.
1 reply
0 recast
2 reactions

Max Jackson pfp
Max Jackson
@mxjxn.eth
same, currently in the terminal with them. Gonna try making an agent with a nice GUI that I can run locally. Maybe train it on my casts, give it an agenda and have it cast for me :) Pretty easy to use langchain with just a little dev experience... https://js.langchain.com/v0.2/docs/integrations/llms/ollama/
0 reply
0 recast
0 reaction
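
For anyone following the thread: the langchain page linked above wraps Ollama's local REST server, so you can also talk to a local model with no dependencies at all. A minimal sketch, assuming Ollama's default port 11434 and a pulled llama3.1 model (the `ask` helper and model name are illustrative, not from the thread):

```javascript
// Talk to a locally running Ollama server via its REST API,
// without langchain. Needs Node 18+ (global fetch) and `ollama serve`.
const OLLAMA_URL = "http://localhost:11434/api/generate";

// Build the JSON body that Ollama's /api/generate endpoint expects.
// stream: false asks for one complete reply instead of chunked tokens.
function buildRequest(model, prompt) {
  return { model, prompt, stream: false };
}

// Send a prompt to the local server and return the reply text.
async function ask(model, prompt) {
  const res = await fetch(OLLAMA_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildRequest(model, prompt)),
  });
  const data = await res.json();
  return data.response;
}

// Example usage (requires a running Ollama server):
// ask("llama3.1", "Summarize my last cast in one line.").then(console.log);
```

The langchain `Ollama` integration does the same thing under the hood and adds prompt templates, memory, and agent tooling on top, which is what you'd want for the "cast for me" agent idea.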