Content
@
0 reply
0 recast
0 reaction
Ansgar
@ansgar.eth
Even pretty censored local LLM models are mostly following instructions if one manually writes the first 1-2 words of the response and only lets them complete from there. Would be interesting to use a small (uncensored) model to automate that process.
3 replies
3 recasts
41 reactions
K
@kijijij
Any libraries to create use larger LLM to smaller LLM ? Kind of sending a piece of rubix cube to user's device. Thanks in advance !
0 reply
0 recast
0 reaction