johnjjung.eth 🛟
@jj
What’s your local LLM setup? I’ve been using LM Studio with the Continue VS Code extension. Still playing around with models, but the llama3-32k context window has been surprisingly good. Haven’t been able to run Llama 3 70B yet because of the amount of RAM needed.
0 reply
1 recast
2 reactions
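A minimal sketch of what a setup like this looks like in practice, assuming LM Studio's default OpenAI-compatible server at http://localhost:1234/v1 (the same local endpoint the Continue extension can be pointed at); the model identifier is a placeholder for whatever model is loaded:

```python
# Minimal sketch: a chat completion against LM Studio's local server.
# LM Studio exposes an OpenAI-compatible API at http://localhost:1234/v1
# by default; the api_key is ignored locally but must be non-empty.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",
    api_key="lm-studio",
)

# "llama3-32k" mirrors the model mentioned above; substitute the exact
# identifier LM Studio shows for whatever model you have loaded.
response = client.chat.completions.create(
    model="llama3-32k",
    messages=[{"role": "user", "content": "Explain this function to me."}],
)
print(response.choices[0].message.content)
```

Continue can be configured to talk to the same local endpoint, so anything that works here should work from the editor as well.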

fran.eth ↑
@fran.eth
Obsidian + Copilot community plugin (inference from OpenRouter) 10 $degen
0 reply
0 recast
1 reaction
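This route differs only in where inference happens: the Copilot plugin sends requests to OpenRouter's hosted, OpenAI-compatible API rather than a local server. A minimal sketch of the equivalent call, assuming an OPENROUTER_API_KEY environment variable and a placeholder model slug:

```python
# Minimal sketch: the same chat call routed through OpenRouter's hosted
# OpenAI-compatible endpoint instead of a local LM Studio server.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumes the key is exported
)

response = client.chat.completions.create(
    model="meta-llama/llama-3-70b-instruct",  # placeholder slug, not confirmed
    messages=[{"role": "user", "content": "Summarize today's notes."}],
)
print(response.choices[0].message.content)
```

Same client, different base_url: the trade-off is local RAM limits (the 70B problem above) versus per-token cost on a hosted router.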