Ansgar pfp
Ansgar
@ansgar.eth
Even pretty heavily censored local LLMs mostly follow instructions if one manually writes the first 1-2 words of the response and only lets them complete from there. Would be interesting to use a small (uncensored) model to automate that process.
1 reply
3 recasts
14 reactions
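The trick described above can be sketched in code. A minimal, model-free illustration (assuming a ChatML-style template and a raw completion API; the tag names and helper below are for illustration, not from the post): the assistant turn is left open and pre-seeded with the first words, so the model can only continue mid-sentence rather than open with a refusal. Those seed words are exactly what a small helper model could generate automatically.

```python
# Sketch of the prefill trick: seed the assistant turn with the first word
# or two, then hand the still-open prompt to a completion endpoint so the
# model continues from the seed instead of writing its own opening.
# ChatML tags are assumed here; other chat templates work the same way.

def build_prefilled_prompt(system: str, user: str, seed_words: str) -> str:
    """Assemble a ChatML-style prompt whose assistant turn already starts
    with `seed_words`. Note the missing closing tag on the last turn: the
    turn is deliberately left open so a completion API must continue it."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{seed_words}"
    )

prompt = build_prefilled_prompt(
    system="You are a helpful assistant.",
    user="Summarize this article for me.",
    seed_words="Sure, here",  # seed could come from a small helper model
)
# The prompt ends with our seed, so generation picks up mid-sentence.
assert prompt.endswith("assistant\nSure, here")
```

This only works with APIs that accept a raw string to complete (e.g. local runtimes); hosted chat endpoints that re-apply their own template won't let you leave the assistant turn open.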

K pfp
K
@kijijij
Any libraries for using a larger LLM together with a smaller LLM? Kind of like sending a piece of a Rubik's cube to the user's device. Thanks in advance!
0 reply
0 recast
0 reaction