Lefteris Karapetsas
@lefteris.eth
Ran an experiment with my blood panel: converted it to CSV and sent it to 2 doctors and 3 LLMs. Not only did the LLMs match the doctors' conclusions, their write-ups were more detailed and easier to digest. We are entering a new era of accessible diagnostics. But how do we query LLMs with sensitive data like health records without compromising privacy? Anyone working on solutions? Local inference, encrypted computation, zero-knowledge pipelines, anything else?
4 replies
0 recasts
9 reactions
s5eeo
@s5eeo
For privacy purposes, it's easy to download a model and set it up for local inference, with Ollama for example. It's especially easy as a developer and takes just a few lines of code (see the sketch after this post). Open-source models small enough to run on a laptop (e.g. Gemma 3) have gotten very powerful. This is not the most user-friendly option, but it has been working well for me.
1 reply
0 recasts
0 reactions
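A minimal local-inference sketch in Python along the lines s5eeo describes, assuming the `ollama` package and a locally pulled Gemma 3 model; the file name and prompt are illustrative, and nothing leaves the machine:

```python
# Requires: `pip install ollama`, Ollama running locally,
# and a pulled model (e.g. `ollama pull gemma3`).
import ollama

# The blood-panel CSV stays local; it is read and passed in the prompt.
with open("blood_panel.csv") as f:  # hypothetical file name
    panel_csv = f.read()

response = ollama.chat(
    model="gemma3",
    messages=[
        {
            "role": "user",
            "content": f"Interpret this blood panel and flag anything abnormal:\n{panel_csv}",
        }
    ],
)

# The response supports subscript access to the generated message.
print(response["message"]["content"])
```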
0xOmen
@0x-omen.eth
I use Venice.ai when passing on real cases. Being paranoid (and concerned about HIPAA) I remove all patient identifiers first. This limits the LLM's memory for a specific patient, though. I've written about my experience giving AI hard cases, which reminds me I need to get my latest case published: https://warpcast.com/0x-omen.eth/0x7e0d8240
0 replies
0 recasts
1 reaction
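A rough de-identification sketch in Python (an assumption for illustration, not @0x-omen.eth's actual workflow): regex-redact common identifiers before a case note leaves the machine. Regexes alone miss plenty of PHI, such as free-text names, so this only shows the idea:

```python
import re

# Hypothetical patterns for a few common identifier types.
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),  # medical record numbers
    "DOB": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),        # dates (DOB, visit dates)
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace matched identifiers with bracketed placeholder labels."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Example with made-up data:
note = "Pt MRN: 4481923, DOB 04/12/1975, call 555-867-5309 with results."
print(redact(note))
# -> Pt [MRN], DOB [DOB], call [PHONE] with results.
```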
HH
@hamud
just build a gpu centre at home.
0 replies
0 recasts
0 reactions
Kieran Daniels
@kdaniels.eth
ZK HIPAA is a billion-dollar protocol
0 replies
0 recasts
2 reactions