exynos.base.eth

@exynos

209 Following
463 Followers


exynos.base.eth
@exynos
Hey
0 replies
0 recasts
0 reactions

exynos.base.eth
@exynos
Gn
0 replies
0 recasts
0 reactions

exynos.base.eth
@exynos
Gm cutie 😊
0 replies
0 recasts
0 reactions

exynos.base.eth
@exynos
Gn & Happy New Year 🎊
0 replies
0 recasts
0 reactions

exynos.base.eth
@exynos
Gn
0 replies
0 recasts
0 reactions

exynos.base.eth
@exynos
Gn
0 replies
0 recasts
0 reactions

exynos.base.eth
@exynos
Hey fellas, good night
0 replies
0 recasts
0 reactions

exynos.base.eth
@exynos
Gn
0 replies
0 recasts
0 reactions

exynos.base.eth
@exynos
0 replies
0 recasts
1 reaction

exynos.base.eth
@exynos
#motivation #HODL
0 replies
0 recasts
1 reaction

exynos.base.eth
@exynos
Gm
0 replies
0 recasts
1 reaction

exynos.base.eth
@exynos
Gn
0 replies
0 recasts
1 reaction

exynos.base.eth
@exynos
GN
0 replies
0 recasts
1 reaction

exynos.base.eth
@exynos
Illustration from https://towardsdatascience.com/parameter-efficient-fine-tuning-peft-for-llms-a-comprehensive-introduction-e52d03117f95
0 replies
0 recasts
0 reactions

exynos.base.eth
@exynos
PEFT (Parameter-Efficient Fine-Tuning) is a technique that optimizes fine-tuning by updating only a small fraction of a model's parameters. This reduces memory usage and computational cost, making it faster and cheaper to adapt large language models (LLMs) to specific tasks without requiring massive resources.
Reference: https://huggingface.co/docs/peft/en/index
#AI #ML #LLMs #NLP
1 reply
0 recasts
1 reaction
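
The cast above names the idea; below is a minimal, illustrative sketch of it using the Hugging Face peft library linked in the reference. The checkpoint name and the LoRA hyperparameters (r, lora_alpha, target_modules) are assumptions chosen for demonstration, not values taken from the cast.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a base LLM; "facebook/opt-350m" is just a small example checkpoint.
base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Configure LoRA: small low-rank adapter matrices are injected into the
# chosen modules and trained, while the original weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor for the adapter
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
)

peft_model = get_peft_model(base_model, lora_config)

# Typically reports well under 1% trainable parameters, which is exactly
# where the memory and compute savings described above come from.
peft_model.print_trainable_parameters()
```

The resulting `peft_model` can be passed to a standard training loop or `transformers.Trainer`; only the adapter weights receive gradients.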

exynos.base.eth
@exynos
Gm
0 replies
0 recasts
1 reaction

exynos.base.eth
@exynos
Some of the PEFT (parameter-efficient fine-tuning) methods that reduce training cost:
- LoRA: Low-Rank Adaptation
- DoRA: Weight-Decomposed Low-Rank Adaptation
Reference: https://developer.nvidia.com/blog/introducing-dora-a-high-performing-alternative-to-lora-for-fine-tuning/
#dev #ml #deeplearning #llm #web3
0 replies
0 recasts
0 reactions
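
Following on from the LoRA sketch above, here is how the DoRA variant named in this cast can be enabled. This assumes a recent peft release (0.9.0 or later), which exposes DoRA as a `use_dora` flag on `LoraConfig`; DoRA decomposes each adapted weight into a magnitude and a direction component and applies the low-rank update to the direction.

```python
from peft import LoraConfig, TaskType

# Same illustrative hyperparameters as the LoRA example; only the
# `use_dora` flag changes. Pass this config to get_peft_model() as before.
dora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    use_dora=True,  # assumption: requires a peft version with DoRA support
)
```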

exynos.base.eth
@exynos
JOIN the $PAWS on Telegram 🐕🐶🐾 https://t.me/PAWSOG_bot/PAWS?startapp=lrXlzS1Y LFG! PAWS is the new top dog! 🐾
0 replies
0 recasts
0 reactions

exynos.base.eth
@exynos
JOIN the $PAWS on Telegram https://t.me/PAWSOG_bot/PAWS?startapp=lrXlzS1Y LFG! PAWS is the new top dog! 🐾 #Airdrop #Alpha
0 replies
0 recasts
0 reactions

far.quest
@farquest
Introducing FarHero, the epic 3D trading card game built for Farcaster - welcome to a new era of Farcaster gaming. Genesis FarPacks have been dropped to all existing .cast handles; you can convert your handle on FarHero! 🧵👇
61 replies
265 recasts
661 reactions