https://warpcast.com/~/channel/aichannel
Daniel - Bountycaster
@pirosb3
- Ran the same prompt on DeepSeek R1 and o1. DeepSeek gave me a better response and better code 🇨🇳
7 replies
2 recasts
19 reactions
🤷🧑‍🤝‍🧑
@m-j-r.eth
hey, have you tried the quantized ablated version? https://huggingface.co/mradermacher/deepseek-r1-qwen-2.5-32B-ablated-GGUF https://huggingface.co/bartowski/deepseek-r1-qwen-2.5-32B-ablated-GGUF
1 reply
0 recast
3 reactions
Daniel - Bountycaster
@pirosb3
No, what is it like?
1 reply
0 recast
2 reactions
🤷🧑‍🤝‍🧑
@m-j-r.eth
the 4-bit reasoning is alright. I still need to test more.
1 reply
0 recast
2 reactions
neon
@neonrover
what's the loss? still 90%?
1 reply
0 recast
0 reaction
neon
@neonrover
i'm thinking of the similarity between f32 and 8-bit still being ~99%, down from 99.8% i believe on f16. not sure of the term for a model, i guess similarity is what i mean. how much do we lose with 4-bit vs 8-bit vs f16+?
1 reply
0 recast
0 reaction
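The "similarity" loss being asked about can be sketched numerically. The snippet below is a minimal, illustrative experiment, not a measurement of the actual DeepSeek weights: it applies a hypothetical uniform round-to-nearest quantizer to a random tensor and reports cosine similarity against the original. Real GGUF quants (Q4_K, Q8_0, etc.) use block-wise scales and are more accurate than this, so treat the numbers as a rough lower bound on the intuition that 8-bit stays near 1.0 while 4-bit drifts further.

```python
# Sketch: how much weight "similarity" survives n-bit quantization.
# fake_quantize is a hypothetical uniform affine quantize-dequantize,
# NOT the block-wise schemes llama.cpp GGUF files actually use.
import numpy as np

def fake_quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Round w onto 2**bits evenly spaced levels, then map back."""
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / levels
    return np.round((w - lo) / scale) * scale + lo

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened tensors."""
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)  # stand-in weights

for bits in (8, 4):
    print(f"{bits}-bit cosine similarity: {cosine_sim(w, fake_quantize(w, bits)):.4f}")
```

On a Gaussian tensor this prints ~0.9999 for 8-bit and ~0.98 for 4-bit, which matches the flavor of the "99% vs 99.8%" numbers above: the 8-bit to f32 gap is tiny, and 4-bit is where the measurable drift begins.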