exynos.base.eth
@exynos
Some of the PEFT (parameter-efficient fine-tuning) methods to reduce training cost:
- LoRA: Low-Rank Adaptation
- DoRA: Weight-Decomposed Low-Rank Adaptation

Reference: https://developer.nvidia.com/blog/introducing-dora-a-high-performing-alternative-to-lora-for-fine-tuning/

#dev #ml #deeplearning #llm #web3
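A minimal numpy sketch of the two ideas, under my own assumptions (toy sizes, hypothetical helper names like `lora_forward`; not the NVIDIA or Hugging Face implementation): LoRA freezes the pretrained weight W and trains only a low-rank delta B·A, while DoRA additionally splits W into a per-column magnitude and a direction, applying the low-rank update to the direction only.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4              # r is the LoRA rank (hyperparameter)
W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight

# LoRA: trainable low-rank factors; B is zero-initialized so the
# adapted model starts out identical to the frozen one.
A = rng.standard_normal((r, d_in)) * 0.01
B = np.zeros((d_out, r))

def lora_forward(x, alpha=8.0):
    # Effective weight is W + (alpha / r) * B @ A; only A and B train.
    return W @ x + (alpha / r) * (B @ (A @ x))

# DoRA: decompose W into per-column magnitude m and a direction,
# apply the LoRA delta to the direction, then rescale by trainable m.
m = np.linalg.norm(W, axis=0)            # trainable magnitude, init from W

def dora_weight(alpha=8.0):
    V = W + (alpha / r) * (B @ A)
    return m * (V / np.linalg.norm(V, axis=0))

x = rng.standard_normal(d_in)
# With B zero-initialized, both methods reproduce the frozen model exactly.
assert np.allclose(lora_forward(x), W @ x)
assert np.allclose(dora_weight(), W)

# Trainable-parameter count: r*(d_in + d_out) for LoRA vs d_in*d_out
# for full fine-tuning.
print(r * (d_in + d_out), d_in * d_out)  # 768 vs 8192
```

The cost saving comes from the parameter count: here LoRA trains 768 values against 8192 for full fine-tuning, and the gap widens quickly at real model sizes.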