Lefteris Karapetsas
@lefteris.eth
Ran an experiment with my blood panel: converted it to CSV and sent it to 2 doctors and 3 LLMs. Not only did the LLMs match the doctors' conclusions, they gave more detail and were easier to digest than the doctors' answers. We are entering a new era of accessible diagnostics. But how do we query LLMs with sensitive data like health records without compromising privacy? Anyone working on solutions? Local inference, encrypted computation, zero-knowledge pipelines, anything else?
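
A minimal sketch of the local-inference option mentioned above, assuming a locally running Ollama server on its default port with a model already pulled; the file name, model name, and prompt are placeholders, not from the original cast. The point is that the panel never leaves the machine:

```python
import requests

# Read the blood panel locally; "blood_panel.csv" is a hypothetical file name.
with open("blood_panel.csv", encoding="utf-8") as f:
    panel_csv = f.read()

prompt = (
    "Here is a blood panel exported as CSV. Summarize the notable values "
    "and flag anything outside the reference ranges.\n\n" + panel_csv
)

# Ollama's local HTTP API on localhost; nothing is sent off the machine.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": prompt, "stream": False},
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```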
4 replies
0 recast
9 reactions
0xOmen
@0x-omen.eth
I use Venice.ai when passing on real cases. Being paranoid (and concerned about HIPAA), I remove all patient identifiers. This limits the LLM's memory for a specific patient, though. I've written about my experience giving AI hard cases, which reminds me I need to get my latest case published: https://warpcast.com/0x-omen.eth/0x7e0d8240
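
A rough sketch of the identifier-stripping step described here; the column names and file names are hypothetical and would need to match the actual export, and dropping a few obvious columns is not full HIPAA de-identification, just an illustration of the workflow:

```python
import csv

# Hypothetical direct-identifier columns to drop before anything is shared.
IDENTIFIER_COLUMNS = {"name", "dob", "mrn", "address", "phone", "email"}

with open("case_export.csv", newline="", encoding="utf-8") as src, \
     open("case_deidentified.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    # Keep only columns that are not direct identifiers.
    kept = [c for c in reader.fieldnames if c.lower() not in IDENTIFIER_COLUMNS]
    writer = csv.DictWriter(dst, fieldnames=kept)
    writer.writeheader()
    for row in reader:
        writer.writerow({c: row[c] for c in kept})
```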
0 reply
0 recast
1 reaction