polymutex pfp
polymutex
@polymutex.eth
One issue with coding with LLMs is that they use old versions of libraries or packages. Not always possible to prompt them out of it. What do you usually do about this?
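
A concrete example of the failure mode (using the openai Python SDK purely as an illustration, it's the same story with lots of packages):

```python
# What a model trained before the 1.0 SDK rewrite tends to emit (fails today):
import openai
resp = openai.Completion.create(model="text-davinci-003", prompt="hi")

# What the current (>= 1.0) SDK actually expects:
from openai import OpenAI
client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
)
```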
2 replies
0 recast
10 reactions

π’‚ _𒍣𒅀_π’Š‘ pfp
π’‚ _𒍣𒅀_π’Š‘
@m-j-r
maybe abliteration can help? https://huggingface.co/blog/mlabonne/abliteration @z0r0z & @nerderlyne might know.
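
the core move there, if I'm reading the blog right, is just projecting one direction out of the hidden states. rough sketch, not their exact code:

```python
import torch

def ablate(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    # Remove each hidden state's component along `direction`
    # (the "refusal direction" in the abliteration recipe).
    d = direction / direction.norm()
    return hidden - (hidden @ d)[..., None] * d
```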
2 replies
0 recast
3 reactions

polymutex pfp
polymutex
@polymutex.eth
This looks interesting but seems like it needs to be done as a training step. I'm looking for a solution that works on already-trained code generation models. Additionally, abliteration is about removing a part of the LLM, whereas when an LLM uses an old library it's (sometimes) due to a lack of knowledge about the new version.
1 reply
0 recast
1 reaction

π’‚ _𒍣𒅀_π’Š‘ pfp
π’‚ _𒍣𒅀_π’Š‘
@m-j-r
w/o domain-specific pretraining like how nvidia tried it (will find the link), isn't this throwing down the "OOD w/o a model" gauntlet? the recent libraries show up in the index of snippets the model (or user) is retrieving, whether it's weights, pastebin, or git. w/ recency bias, at least the user is reducing some lookup cost for weights. has embodied CoT changed this experience for you? agents should first & foremost recognize generative errors, conserving resources with lookups right behind them. https://warpcast.com/nerderlyne/0x335ecb47
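
sketching that generate-then-lookup loop (llm / retrieve_docs here are stand-ins for whatever the agent actually wires up):

```python
import re
from importlib.metadata import version, PackageNotFoundError

def referenced_packages(code: str) -> set[str]:
    # crude top-level import scan; a real agent would parse the AST
    return set(re.findall(r"^\s*(?:from|import)\s+(\w+)", code, re.M))

def generate_with_lookup(task: str, llm, retrieve_docs) -> str:
    code = llm(task)  # first pass: weights alone, possibly stale
    for pkg in referenced_packages(code):
        try:
            v = version(pkg)  # the version actually installed
        except PackageNotFoundError:
            continue
        # weights may predate v, so pull current snippets and retry
        docs = retrieve_docs(pkg, v)
        code = llm(task, context=docs)
    return code
```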
2 replies
0 recast
1 reaction