binji 🔴 pfp
binji 🔴
@binji.eth
can someone make an L2 for 'incentivized LLM training'? it's just $600/month now. decentralized AI training via attestations, with a leaderboard where every month the top verified attesters get the highest % of the sequencer profits redistributed to them. "Knowledge farming"
11 replies
6 recasts
72 reactions
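The payout mechanism described in the cast above (top verified attesters split a cut of sequencer profits each month) could be sketched roughly like this. Everything here is invented for illustration: the attester names, the counts, the pot size, and the top-3 cutoff are all assumptions, not part of any real protocol.

```python
# Toy sketch of the "knowledge farming" payout idea: each month the top
# verified attesters split a sequencer-profit pot proportionally to their
# verified attestation counts. All names and numbers are hypothetical.

def monthly_payouts(attester_counts: dict[str, int],
                    profit_pot: float,
                    top_n: int = 3) -> dict[str, float]:
    """Return {attester: payout} for the top_n attesters by verified count."""
    ranked = sorted(attester_counts.items(), key=lambda kv: kv[1], reverse=True)
    top = ranked[:top_n]
    total = sum(count for _, count in top)
    return {name: profit_pot * count / total for name, count in top}

payouts = monthly_payouts(
    {"alice": 120, "bob": 90, "carol": 60, "dave": 10},
    profit_pot=10_000.0,
)
# alice, bob, and carol split the pot 120:90:60; dave misses the cutoff
```

A real design would also need the "verified" part, i.e. only counting attestations that survived any dispute window.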

kevin j pfp
kevin j
@entropybender
we're adjacently working on this, what would be examples of attestations though?
1 reply
0 recast
1 reaction

binji 🔴 pfp
binji 🔴
@binji.eth
i think we get @osprey, you, @bap, and myself in a room one day and work it out. I think training data needs to be disputable, so any inputs being fed should ideally be able to be disputed, and bad actors should get slashed. I think attestations would help record each input and make the above easier
2 replies
0 recast
1 reaction
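The flow in the reply above (attest each training input, allow disputes, slash bad actors) could be sketched as a minimal in-memory registry. The `Attestation` record, `Registry` class, and stake mechanics are all hypothetical; an actual system would presumably put this on-chain.

```python
# Hypothetical sketch of disputable training-data attestations with slashing.
# The class names, fields, and dispute flow are invented for illustration.
from dataclasses import dataclass


@dataclass
class Attestation:
    attester: str
    data_hash: str    # hash of the training input being attested
    stake: float      # stake at risk if the input is successfully disputed
    disputed: bool = False
    slashed: bool = False


class Registry:
    def __init__(self) -> None:
        self.attestations: list[Attestation] = []

    def attest(self, attester: str, data_hash: str, stake: float) -> int:
        """Record an attestation over a training input; returns its id."""
        self.attestations.append(Attestation(attester, data_hash, stake))
        return len(self.attestations) - 1

    def dispute(self, att_id: int, upheld: bool) -> float:
        """Dispute an attestation; if upheld, the attester's stake is slashed.

        Returns the slashed amount (e.g. to be paid to the challenger)."""
        att = self.attestations[att_id]
        att.disputed = True
        if upheld:
            att.slashed = True
            return att.stake
        return 0.0
```

Usage: `reg.attest("mallory", "0xdead...", stake=100.0)` followed by a successful `reg.dispute(att_id, upheld=True)` marks the record slashed and returns the 100.0 stake.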

Joe / 0xOsprey.eth pfp
Joe / 0xOsprey.eth
@osprey
count me in 🫡
0 reply
0 recast
0 reaction

kevin j pfp
kevin j
@entropybender
yea, our line of thinking as well. also, examining a stackable layer of LLMs for disputes could be cool too, with verifiable compute. this is a test i ran recently trying out opML
1 reply
0 recast
2 reactions
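opML-style verifiable compute, as mentioned in the last reply, generally means posting an inference result optimistically and letting a challenger re-execute and dispute a mismatch. A toy sketch of that commit/re-check step, with a stand-in "model" (the XOR function here is obviously not a real model, and the whole flow is a simplified assumption about how such a scheme works):

```python
# Toy sketch of optimistic result verification: an operator commits to an
# inference output; a challenger re-runs the computation and compares.
# The "model" is a placeholder; real opML disputes bisect the execution trace.
import hashlib


def commit(output: bytes) -> str:
    """Hash commitment to a claimed computation result."""
    return hashlib.sha256(output).hexdigest()


def model(x: bytes) -> bytes:
    """Stand-in for a deterministic ML inference step."""
    return bytes(b ^ 0x5A for b in x)


# operator posts a commitment to the result, optimistically assumed honest
claimed = commit(model(b"input"))

# challenger re-executes; a mismatch would trigger an on-chain dispute
honest_check = commit(model(b"input")) == claimed   # matches: no dispute
fraud_check = commit(model(b"forged")) == claimed   # mismatch: dispute
```

In a real opML setup the dispute then narrows down, step by step, to a single disputed instruction that can be checked on-chain; this sketch only shows the outer optimistic-commitment idea.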