Dan Romero
@dwr.eth
How do you prove that an account and/or wallet with a proof of human credential isn't getting its content / thinking / actions from an AI? How do you prove that an account labeled as an agent is not run by a human?
42 replies
16 recasts
134 reactions

phil
@phil
You can't, but having human verification does give an upper bound on the number of accounts, whereas bots have no such bound.
2 replies
2 recasts
34 reactions

jj πŸ›Ÿ
@jj
Scan an eyeball πŸ‘οΈ
1 reply
0 recasts
2 reactions

Callum Wanderloots ✨
@wanderloots.eth
0 replies
0 recasts
3 reactions

Zach
@zd
do either of these questions matter if the content is good?
2 replies
0 recasts
3 reactions

will
@w
well you see we just fuse these wires into their brain and then..
0 replies
0 recasts
2 reactions

Deana
@deana
Idk, but I'd probably find the former a lot less annoying if I knew for sure that a human was involved
0 replies
0 recasts
1 reaction

Pichi πŸŸͺ 🍑🌸
@pichi
I have definitely seen accounts here that started as real, genuine humans. Authentic, kind, etc. Then they handed the account to an AI and I muted it. I've also seen people add AI to boost their replies during the Moxie era, so their top-level casts are human but all their replies are automated slop. I have no idea how you would solve for either use case.
1 reply
0 recasts
13 reactions

Dean Pierce πŸ‘¨β€πŸ’»πŸŒŽπŸŒ
@deanpierce.eth
It's about identity binding. A person can authorize a bot to act on their behalf, but if a human authorizes a thousand bots to act on their behalf, it will be obvious. Sybil attacks are trivially detected, so sybil resistance is achieved, which is a bigger deal than most people realize.
0 replies
0 recasts
2 reactions
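
A minimal sketch of the identity-binding idea above, assuming a hypothetical registry where every agent key carries a delegation signed by a human-verified wallet; counting delegations per wallet is what makes a thousand-bot farm obvious. The names (`Delegation`, `flagLikelySybils`) and the five-agents-per-human threshold are illustrative assumptions, not a real Farcaster API.

```typescript
// Sketch only: a hypothetical delegation registry, not a real Farcaster API.
// Each agent key is bound to a human-verified wallet by a signed delegation,
// so a Sybil farm shows up as one wallet authorizing an implausible number of agents.

interface Delegation {
  humanWallet: string; // address holding the proof-of-human credential
  agentKey: string;    // public key the agent signs casts with
  signature: string;   // humanWallet's signature over agentKey (verification omitted here)
}

// Count how many agent keys each verified wallet has authorized.
function countDelegations(delegations: Delegation[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const d of delegations) {
    counts.set(d.humanWallet, (counts.get(d.humanWallet) ?? 0) + 1);
  }
  return counts;
}

// Flag wallets whose delegation count exceeds a plausible personal limit.
function flagLikelySybils(delegations: Delegation[], maxAgentsPerHuman = 5): string[] {
  return [...countDelegations(delegations).entries()]
    .filter(([, n]) => n > maxAgentsPerHuman)
    .map(([wallet]) => wallet);
}
```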

Tayyab - d/acc
@tayyab
Of course the answer is to require a captcha before every cast, Dan.
1 reply
0 recasts
2 reactions

Yakuza
@iamtherealyakuza.eth
Great question! But maybe we should first ask if it's even important to prove that it's not AI. If an agent or account is contributing value, does it really matter if it's human or AI? Focusing too much on proving humanity might distract us from how this tech can work together with us, innit?
0 replies
0 recasts
2 reactions

Kenji
@kenjiquest
At this stage, we can't tell whether a 'human' account is getting its content/thinking from an AI. It's not provable over the net, where to a degree everyone is acting under a guise or undoxxed status. Bad AI can be picked up on, but the quality and naturalness of it is improving with each passing day, so it'll be harder to spot as time moves on. The real test for knowing if someone is using AI content is actually knowing the human themselves. If you know someone pretty well (their writing style, quirks, etc.), you can kind of tell when the AI has kicked in if they're going full dive into leaning on AI content. The "You didn't write this, did you?" moment is there. But how many of us know each other that well on the internet? Probably not all that well... so detecting when someone pulls AI content/thinking from a separate device or an app unattached to the output really is too difficult at this point, other than through our own intuitions, which aren't concrete.
0 replies
0 recasts
1 reaction

bertwurst
@bertwurst.eth
Hire me to individually Turing Test all of them.
0 replies
0 recasts
1 reaction

John Hoang
@jhoang
Why is this important to prove?
1 reply
0 recasts
1 reaction

max ↑
@baseddesigner.eth
That's the thing: the better our tech gets, the harder it is to verify what we see on screens vs physical reality
1 reply
0 recasts
1 reaction

meta-david πŸ’₯| Building Scoop3
@metadavid
That's the neat part. You don't. (put meme here)
1 reply
0 recasts
1 reaction

Blake Burgess
@trinitek
You need proof-of-meat that is intrinsic to the content, not a captcha or puzzle challenge that can be validated separately. What's something that bots can't do, like send physical mail? If I copied a GPT response onto a sheet of paper and dropped it in the mail, does that count as AI? Or for proof-of-machine, I'm thinking of a product like Yahoo Answers, but you only have n seconds to submit an answer, and the question isn't known in advance, so a human can't pre-draft and doesn't have enough time to copy/paste into an LLM. But the Q/A itself is the content, not a challenge for something else.
0 replies
0 recasts
0 reactions
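
A rough sketch of the timed Q/A idea above, under assumed names (`issueChallenge`, `acceptAnswer`) and an assumed two-second window, not any existing product's API: the question is revealed only at issue time so it can't be pre-drafted, and the length of the answer window decides which proof you get.

```typescript
import { randomUUID } from "node:crypto";

// Sketch only: a timed challenge where the Q/A itself is the content.
// The question is revealed only when the challenge is issued, so nothing can
// be pre-drafted, and the answer must arrive inside a fixed time window.

interface Challenge {
  id: string;
  question: string;
  issuedAt: number; // ms since epoch, recorded server-side
}

const pending = new Map<string, Challenge>();

function issueChallenge(question: string): Challenge {
  const challenge: Challenge = { id: randomUUID(), question, issuedAt: Date.now() };
  pending.set(challenge.id, challenge);
  return challenge;
}

// "Proof-of-machine" variant: accept only answers that arrive faster than a
// person could read, think, and type. A "proof-of-meat" variant would invert
// the check and reject suspiciously fast replies instead.
function acceptAnswer(challengeId: string, answer: string, maxMillis = 2_000): boolean {
  const challenge = pending.get(challengeId);
  if (!challenge) return false;
  pending.delete(challengeId);
  const elapsed = Date.now() - challenge.issuedAt;
  return elapsed <= maxMillis && answer.trim().length > 0;
}
```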

jonathan
@jonathanmccallum
The question of our time (:
0 replies
0 recasts
0 reactions

depatchedmode
@depatchedmode
Exactly. The only thing that matters in most cases is relevance. Occasionally you need proof of humanity for things like voting, and it doesn't attempt to solve for anything other than “only people who have this unique secret can do this thing this many times”.
0 replies
0 recasts
0 reactions
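
A bare-bones sketch of the "unique secret, limited uses" property above; the plain SHA-256 nullifier and the names here are assumptions for illustration, whereas real deployments (e.g. Semaphore-style nullifiers) prove the derivation in zero knowledge instead of revealing the secret to a server.

```typescript
import { createHash } from "node:crypto";

// Sketch only: enforce "holders of a unique secret can do this N times" with a
// derived nullifier, so uses are counted without ever storing the secret itself.
// Real systems prove the derivation in zero knowledge instead of hashing openly.

const MAX_USES_PER_SECRET = 1; // e.g. one vote per verified human per poll
const usage = new Map<string, number>();

// Derive a per-action nullifier from the secret and the action being gated.
function nullifierFor(secret: string, actionId: string): string {
  return createHash("sha256").update(`${secret}:${actionId}`).digest("hex");
}

// Returns true if the action is allowed and records the use; false once the quota is spent.
function tryAction(secret: string, actionId: string): boolean {
  const nullifier = nullifierFor(secret, actionId);
  const used = usage.get(nullifier) ?? 0;
  if (used >= MAX_USES_PER_SECRET) return false;
  usage.set(nullifier, used + 1);
  return true;
}
```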

Ben
@benersing
1. Community notes for originality instead of fact checking. 2. Not sure yet.
0 replies
0 recasts
0 reactions