Lucas | POAP Studio
@gabo

Oxytocin
@ytocin.eth
Love Perplexity, but in the meantime I've been playing around with Venice AI, since it decentralises the compute, and with MOR it's actually quite cheap to go pro. Using the Nous models it's also web-enabled, so feel free to give it a try. Sharing my ref! https://venice.ai/chat?ref=s5J8K-

Lucas | POAP Studio
@gabo
Checked a bit, looks more barebones. Will go w/ the popular ones if it's the same value prop. How is /venice decentralized? 118 $DEGEN

Oxytocin
@ytocin.eth
I'm coming from hosting my own LLaMA models, so for me this is already a huge heads-up, but yeah, compared to centralised options it's more minimal! From what I understand, instead of running on a server farm like OpenAI/Claude, it runs open-source models on a network of individually hosted nodes.

Lucas | POAP Studio
@gabo
Is there some source for that? And if u were hosting ur own models, why the move to a (decentralized) cloud?

Oxytocin
@ytocin.eth
Not sure how it can be verified for Venice specifically (other than their docs: https://venice.ai/what-is-venice ), but most of the open-source models are on Hugging Face, and you can try them yourself. Here's Nous, for example: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B As for local vs decentralized cloud: I like using Venice mostly on mobile, where it'd be impossible to host these kinds of models. For your machine, if you have enough VRAM or RAM, then local is always better! But something like LLaMA 405B requires quite a beefy computer.
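To put a rough number on "quite a beefy computer": a common back-of-the-envelope estimate is parameters times bytes per weight, plus some overhead for activations and KV cache. This is a minimal sketch under assumed figures (fp16 weights, ~20% overhead), not anyone's official sizing guide:

```python
def vram_needed_gb(params_billion: float,
                   bytes_per_weight: float = 2.0,  # fp16; use 0.5 for 4-bit quantization
                   overhead: float = 0.2) -> float:
    """Rough GPU memory estimate (GB) to serve an LLM: weights plus
    an assumed ~20% overhead for activations and KV cache."""
    weights_gb = params_billion * bytes_per_weight  # billions of params * bytes each = GB
    return weights_gb * (1 + overhead)

# An 8B model like Hermes-2-Theta-Llama-3-8B at fp16: ~19 GB,
# within reach of a single high-end consumer GPU (less if quantized).
print(round(vram_needed_gb(8), 1))    # 19.2

# A 405B model at fp16: ~970 GB, i.e. a multi-GPU server, not a home machine.
print(round(vram_needed_gb(405), 1))  # 972.0
```

The exact overhead varies with context length and batch size, but the order of magnitude is why local hosting tops out well below 405B for most people.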