Dan Finlay 🦊
@danfinlay
The bots will never be this easy to detect again.

radmadical
@radmadical
I've been working on a blockchain prototype for about a year now (still very conceptual, and very part-time) that uses every single feature as a proof-of-human (PoH) criterion, continually updating a PoH score based on long-term human-like behavior: playing games (each game, each turn, each shot is unique), paying for things, commenting, and so on. The idea is that the cost of faking a personal account becomes prohibitively high, given the time and complexity of raising that score versus the potential payoff. I foresee a blockchain that represents identity alongside any number of applications, each of which could, bit by bit, verify that a real human is behind it through normal usage activity over time. I think that's the way to beat the bots in the long run: make them too expensive and difficult to bother developing. What do you think?
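A minimal sketch of the score mechanic described above, with everything (activity names, weights, half-life, the one-day credit cap) as illustrative assumptions rather than anything specified in the thread. Each activity adds a small credit that scales with the time since the last event, and the score decays while idle, so a high score can only be accumulated through sustained, human-paced use:

```python
# Hypothetical PoH score accumulator. Weights and half-life are
# illustrative assumptions, not part of the actual prototype.
ACTIVITY_WEIGHTS = {
    "game_turn": 0.2,
    "purchase": 1.0,
    "comment": 0.5,
}
HALF_LIFE_DAYS = 90.0  # assumed idle-decay half-life


class PoHScore:
    def __init__(self) -> None:
        self.score = 0.0
        self.last_event = 0.0  # time of last recorded activity, in days

    def record(self, activity: str, now_days: float) -> float:
        """Decay the score for idle time, then credit the activity.

        Credit scales with time since the last event (capped at one day),
        so rapid machine-paced bursts earn almost nothing, while normal
        spaced-out human activity earns full credit.
        """
        elapsed = now_days - self.last_event
        self.score *= 0.5 ** (elapsed / HALF_LIFE_DAYS)
        self.score += ACTIVITY_WEIGHTS.get(activity, 0.0) * min(1.0, elapsed)
        self.last_event = now_days
        return self.score
```

Under these assumptions, ten comments spaced one per day end up worth several times more than ten comments fired off within a single hour, which is the "time cost" property the post is after.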

CryptoShroom
@cryptoshroom
But AI will develop itself and imitate human behavior

radmadical
@radmadical
The other thing is the variation in applications. If the ecosystem were to explode, a high score would be very expensive to produce: you'd need it playing games, listening to music, buying things online, commenting on social media, storing files in the cloud, etc. etc. Those represent incidental effort to an actual human, but a serious cost barrier to bots/AI attempting to manufacture fake "people". Hopefully that drastically reduces, if not eliminates, the number of fake accounts, since the cost-benefit analysis will rarely warrant the necessary effort...
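One hedged way to formalize the cross-application point above: aggregate per-category scores with a geometric mean rather than a sum, so a bot that grinds one application (say, gaming) can't compensate for being absent everywhere else. The category names and the aggregation rule are my assumptions for illustration, not part of the actual design:

```python
import math

# Illustrative application categories; an empty category drags the
# overall score toward zero, so faking cost scales with the ecosystem.
CATEGORIES = ["gaming", "music", "shopping", "social", "storage"]


def effective_score(per_category: dict) -> float:
    """Geometric mean across all categories.

    Any missing or zero category zeroes the effective score, forcing
    a would-be fake account to sustain activity in every application.
    """
    scores = [per_category.get(c, 0.0) for c in CATEGORIES]
    if min(scores) <= 0.0:
        return 0.0
    return math.prod(scores) ** (1.0 / len(scores))
```

With this rule a balanced account scoring 0.8 everywhere beats a "specialist" bot scoring 4.0 in gaming but near zero elsewhere, which captures the incidental-to-humans, expensive-to-bots asymmetry.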

CryptoShroom
@cryptoshroom
However, that scenario, while providing comprehensive certainty against bots, would also be a huge hurdle for new users, and would require legitimate users to “prove” their legitimacy in ways that are questionable from a data-security standpoint.

radmadical
@radmadical
Some good points. To the first: it shouldn't, because the updated criteria would be drawn from the common user data itself. The game you play already has those relationships in it; it's the fact that a regular user ALREADY behaves that way that makes it a useful check against bots. An actual human shouldn't have to change their behavior at all to satisfy the updated criteria, since the criteria are defined BY users' existing behavior to begin with. It's the bots that will have to keep updating, because they don't naturally behave that way: their input is designed specifically to pass the checks, whereas a real user passes them incidentally, just through normal human behavior.
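A toy sketch of the self-calibrating idea above, under heavy assumptions: the acceptance band for some behavioral feature (here, seconds between actions, with made-up numbers) is derived from the observed population of real users, so humans pass incidentally while machine-paced input falls outside the band. The feature choice and the mean-plus/minus-two-standard-deviations band are illustrative, not a specified mechanism:

```python
import statistics

def calibrate_band(population: list, k: float = 2.0):
    """Derive an acceptance band from real-user data.

    Returns (low, high) = population mean +/- k sample standard
    deviations, so the criterion is defined by existing user behavior
    rather than by a fixed threshold a bot could target.
    """
    mean = statistics.fmean(population)
    sd = statistics.stdev(population)
    return (mean - k * sd, mean + k * sd)


def passes(value: float, band) -> bool:
    """Check one account's observed feature value against the band."""
    low, high = band
    return low <= value <= high
```

In a real system the band would be recomputed as user data accumulates, which is what forces bots, not humans, to keep chasing the moving target.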