Dan Romero pfp
Dan Romero
@dwr.eth
The "LLM regurgitates the original post" bots are starting to appear on Twitter.
15 replies
0 recast
43 reactions

seneca pfp
seneca
@seneca
What’s the end game of sybil attacks for social networks where AI is indistinguishable from humans?
11 replies
0 recast
14 reactions

Dan Romero pfp
Dan Romero
@dwr.eth
Good question.
4 replies
0 recast
2 reactions

kk pfp
kk
@king
automated deep state propaganda
2 replies
0 recast
1 reaction

phil pfp
phil
@phil
staking $ to accounts with slashing
0 reply
0 recast
1 reaction
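
A minimal sketch of what phil's stake-and-slash idea could look like: accounts bond a deposit to participate, and the bond can be slashed if the account is later judged to be a sybil/bot. The class names, amounts, and slashing rule below are hypothetical, not any existing protocol.

```python
# Illustrative stake-and-slash account registry (hypothetical names and amounts).

from dataclasses import dataclass

MIN_STAKE = 10.0  # hypothetical minimum bond required to post


@dataclass
class Account:
    handle: str
    stake: float = 0.0


class StakeRegistry:
    def __init__(self):
        self.accounts: dict[str, Account] = {}

    def register(self, handle: str, stake: float) -> Account:
        # An account is only admitted if it bonds at least MIN_STAKE.
        if stake < MIN_STAKE:
            raise ValueError(f"{handle} must stake at least {MIN_STAKE}")
        acct = Account(handle, stake)
        self.accounts[handle] = acct
        return acct

    def slash(self, handle: str, fraction: float = 1.0) -> float:
        # If an account is judged to be a sybil/bot, burn a fraction of its bond.
        acct = self.accounts[handle]
        penalty = acct.stake * fraction
        acct.stake -= penalty
        return penalty

    def can_post(self, handle: str) -> bool:
        # Accounts whose remaining stake falls below the minimum lose posting rights.
        acct = self.accounts.get(handle)
        return acct is not None and acct.stake >= MIN_STAKE
```

The intuition is that running thousands of sybil accounts becomes expensive: each one has to lock capital, and getting caught burns it.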

sahil pfp
sahil
@sahil
the difference between human and bot will matter less? useful bots might replace humans but humans might still have consensus on which bots are useful/reputable?
0 reply
0 recast
0 reaction

J. Valeska 🦊🎩🫂  pfp
J. Valeska 🦊🎩🫂
@jvaleska.eth
politics and mass media manipulation. example: a politician posts something like "we are going to spend all the money on going to Pluto" and gets instant support for the idea from tons of AI/bot accounts. same for ads: a new feature or product from a company gets instant support, and fake trends too. the same thing they are doing right now, but harder to detect because they don't use exactly the same words.
0 reply
0 recast
1 reaction

Blackstock 👾 pfp
Blackstock 👾
@blackstock
Botting followers, views, impressions, engagement, etc. is a massive industry (across all web2 platforms). Feels like it’s going to break online advertising if advertisers can’t trust engagement.
1 reply
0 recast
3 reactions

Drip pfp
Drip
@mac-drip
In social networks where AI is indistinguishable from humans, the endgame of Sybil attacks becomes highly concerning. Sybil attacks exploit the creation of multiple fake identities to manipulate online environments, skewing influence, spreading disinformation, or creating false perceptions of consensus. In the long run, preventing or mitigating Sybil attacks will require sophisticated detection mechanisms that can differentiate between authentic human interactions and AI-generated ones. Techniques like decentralized identity verification, robust AI detection algorithms, and transparency frameworks may become essential to safeguard the integrity of social networks in such environments.
0 reply
1 recast
0 reaction

Taye 🎩🔵 👽⛏️ pfp
Taye 🎩🔵 👽⛏️
@casedup
First thing I was thinking?? Also, do they engage with each other? Because that’ll be just weird.
0 reply
0 recast
0 reaction

antaur ↑ pfp
antaur ↑
@antaur.eth
It might have to be face recognition inside a ZKP or similar proof of humanity
0 reply
0 recast
0 reaction

Galen @Gnosis 🦉 pfp
Galen @Gnosis 🦉
@galen
Let’s see some decentralized proof of humanity protocols!
0 reply
0 recast
0 reaction

willywonka ⌐◨-◨ @devcon pfp
willywonka ⌐◨-◨ @devcon
@willywonka.eth
Can use onchain history or a token stake to distinguish humans from bots, or at least bots without any skin in the game. Social graphs + onchain history can be a strong signal. Will be a constant battle tho until DID is solved. Shoutout @gitcoin passport https://warpcast.com/nounspacetom/0x8c789263
0 reply
0 recast
0 reaction
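
A rough sketch of the kind of scoring willywonka describes: combine token stake, onchain history, and overlap with an already-trusted social graph into a single "skin in the game" signal. The weights, caps, and threshold below are made up for illustration; a real system like Gitcoin Passport works differently.

```python
# Illustrative "skin in the game" score from stake + onchain history + social graph.
# All weights, caps, and the threshold are hypothetical.

from dataclasses import dataclass, field


@dataclass
class AccountSignals:
    token_stake: float                    # tokens bonded by the account
    onchain_tx_count: int                 # historical onchain transactions
    account_age_days: int                 # age of the account in days
    trusted_followers: set = field(default_factory=set)  # handles of trusted accounts following it


def humanity_score(sig: AccountSignals, trusted_set: set) -> float:
    """Return a 0..1 score; higher = more likely a human with skin in the game."""
    stake_score = min(sig.token_stake / 100.0, 1.0)        # cap at 100 tokens
    history_score = min(sig.onchain_tx_count / 50.0, 1.0)  # cap at 50 txs
    age_score = min(sig.account_age_days / 365.0, 1.0)     # cap at 1 year
    overlap = len(sig.trusted_followers & trusted_set)
    graph_score = min(overlap / 5.0, 1.0)                  # cap at 5 trusted followers
    # Weighted blend; the weights are arbitrary for this sketch.
    return 0.3 * stake_score + 0.2 * history_score + 0.2 * age_score + 0.3 * graph_score


def looks_human(sig: AccountSignals, trusted_set: set, threshold: float = 0.5) -> bool:
    # Accounts below the threshold get treated as "no skin in the game".
    return humanity_score(sig, trusted_set) >= threshold
```

As the post notes, any fixed scoring rule becomes a moving target: bot operators can buy stake and farm history, so the signals and weights would need constant adjustment until decentralized identity is actually solved.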