Stephan pfp
Stephan
@stephancill
need more r/localllama energy in here
6 replies
4 recasts
26 reactions

𝚐π”ͺ𝟾𝚑𝚑𝟾 pfp
𝚐π”ͺ𝟾𝚑𝚑𝟾
@gm8xx8
It would be just a handful of people chatting, maybe fewer. No offense to anyone lol
0 reply
0 recast
1 reaction

Sangohan 🟠 pfp
Sangohan 🟠
@sangohan
Yesterday I tested it by closing a well-advanced window that I had spent two hours refining. I ended up with a window showing a GPT that "thinks" but only with data limited to October 2023 πŸ€¦β€β™‚οΈ. I'll come back when it's connected to the real world πŸ˜‚
0 reply
0 recast
0 reaction

koisose.lol pfp
koisose.lol
@koisose
Already using Llama decentrally with @gaianet cc @mashby2023 @diskrancher.eth. One of my projects creates a commit message based on a file's diff string: https://github.com/koisose/auto-commit-gaia
0 reply
0 recast
0 reaction
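For context, the idea behind a tool like auto-commit-gaia can be sketched roughly as: feed a file's diff to an LLM node that speaks the OpenAI-compatible chat API (which Gaianet nodes expose) and ask it for a commit message. This is a hedged sketch, not code from the linked repo; the endpoint URL, model name, and helper names here are placeholders I made up for illustration.

```python
# Sketch: ask an OpenAI-compatible LLM endpoint (e.g. a Gaianet node)
# to write a git commit message from a diff string. The base_url and
# model name are assumptions, not taken from the actual project.
import json
import urllib.request


def build_commit_prompt(diff: str, max_chars: int = 4000) -> list[dict]:
    """Build an OpenAI-style chat payload asking for a commit message."""
    return [
        {
            "role": "system",
            "content": (
                "Write a concise, imperative-mood git commit message "
                "for the following diff. Respond with the message only."
            ),
        },
        # Truncate very large diffs so the prompt stays within context.
        {"role": "user", "content": diff[:max_chars]},
    ]


def request_commit_message(diff: str, base_url: str) -> str:
    """POST the prompt to a /v1/chat/completions endpoint and return the reply."""
    payload = json.dumps(
        {"model": "llama", "messages": build_commit_prompt(diff)}
    )
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    diff = "--- a/app.py\n+++ b/app.py\n+print('hello')\n"
    # Only the prompt is built here; the network call needs a live node.
    print(build_commit_prompt(diff)[1]["content"])
```

The truncation step matters in practice: diffs can easily exceed a small local model's context window, so capping the prompt keeps the request from failing outright.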

Habi007.eth🫧🎩⭐ pfp
Habi007.eth🫧🎩⭐
@hk-habibur
🀣🀣
0 reply
0 recast
0 reaction

Anthony PeteπŸ§ΎπŸŽ©πŸ–πŸŽ­πŸŒ³ pfp
Anthony PeteπŸ§ΎπŸŽ©πŸ–πŸŽ­πŸŒ³
@odogwupete
I don’t understand it enough yet πŸ˜‚ But the slap πŸ˜‚πŸ˜‚πŸ˜‚
0 reply
0 recast
0 reaction

Eren🎩 pfp
Eren🎩
@baeshy.eth
πŸ˜‚πŸ˜‚
0 reply
0 recast
0 reaction