lhl
@lhl
Yesterday, something incredibly cool was released: the first (afaik) open LoRA MoE proof of concept. It lets you run 6 llama2-7B experts on a single 24GB GPU. Here are my setup notes: https://llm-tracker.info/books/howto-guides/page/airoboros-lmoe
0 reply
0 recast
2 reactions
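
The linked notes cover the actual airoboros-lmoe setup; as a rough illustration of the LoRA-MoE idea the cast describes (one shared llama2-7B base plus several LoRA adapters acting as "experts", with only one adapter active per request), here is a minimal sketch using Hugging Face transformers + peft. The adapter repo names and the keyword router below are hypothetical placeholders, not the real airoboros-lmoe routing logic.

```python
# Minimal sketch of a LoRA "mixture of experts": one shared llama2-7B base
# plus several LoRA adapters, swapped per request. Adapter repo IDs and the
# keyword router are hypothetical, not the airoboros-lmoe implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "meta-llama/Llama-2-7b-hf"
EXPERTS = {                                   # hypothetical adapter checkpoints
    "code": "your-org/llama2-7b-lora-code",
    "reasoning": "your-org/llama2-7b-lora-reasoning",
    "creative": "your-org/llama2-7b-lora-creative",
}

tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.float16, device_map="auto"
)

# Load the first adapter, then attach the rest. The 7B base weights are loaded
# once and shared, so each extra "expert" only adds its small LoRA weights.
names = list(EXPERTS)
model = PeftModel.from_pretrained(base, EXPERTS[names[0]], adapter_name=names[0])
for name in names[1:]:
    model.load_adapter(EXPERTS[name], adapter_name=name)

def route(prompt: str) -> str:
    """Toy keyword router; a real setup would route on embeddings or task tags."""
    if "def " in prompt or "```" in prompt:
        return "code"
    if "story" in prompt.lower():
        return "creative"
    return "reasoning"

def generate(prompt: str, max_new_tokens: int = 256) -> str:
    model.set_adapter(route(prompt))          # activate the chosen expert
    inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(generate("Write a Python function that reverses a string."))
```

Because all experts share the frozen 7B base, memory grows only by the LoRA weights per adapter, which is why several "experts" can fit alongside the base model on a single 24GB card.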