Stephan
@stephancill
need more r/localllama energy in here
6 replies
4 recasts
26 reactions
gm8xx8
@gm8xx8
It would just be a handful of people chatting, maybe fewer. No offense to anyone lol
0 reply
0 recast
1 reaction
Sangohan
@sangohan
Yesterday, as a test, I closed a well-advanced window that I had spent two hours refining. I ended up with a window showing a GPT that "thinks" but only with data limited to October 2023 🤦‍♂️. I'll come back when it's connected to the real world
0 reply
0 recast
0 reaction
koisose.lol
@koisose
Already using llama decentrally with @gaianet cc @mashby2023 @diskrancher.eth. One of my projects creates commits based on a file's diff string: https://github.com/koisose/auto-commit-gaia
0 reply
0 recast
0 reaction
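A minimal sketch of the diff-to-commit idea above: build a chat-completion request that asks a local llama model (Gaianet nodes expose an OpenAI-compatible API) for a commit message given a diff string. The model name and prompt here are illustrative assumptions, not taken from the linked repo.

```python
# Hedged sketch: construct an OpenAI-compatible chat-completion payload that
# asks a local llama model to write a commit message for a given diff.
# The model name and system prompt are placeholder assumptions.
import json


def build_commit_request(diff: str, model: str = "llama-3-8b") -> dict:
    """Return a chat-completion payload requesting a one-line commit message."""
    return {
        "model": model,
        "messages": [
            {
                "role": "system",
                "content": "Write a one-line conventional commit message for this diff.",
            },
            {"role": "user", "content": diff},
        ],
    }


# Example: payload for a tiny diff; POSTing it to a node's
# /v1/chat/completions endpoint is left out here.
payload = build_commit_request("diff --git a/app.py b/app.py\n+print('hi')")
print(json.dumps(payload, indent=2))
```

Sending `payload` to the node and reading `choices[0].message.content` would yield the generated commit message.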
Habi007.eth
@hk-habibur
🤣🤣
0 reply
0 recast
0 reaction
Anthony Pete
@odogwupete
I don't understand it enough yet. But the slap!
0 reply
0 recast
0 reaction
Eren
@baeshy.eth
ππ
0 reply
0 recast
0 reaction