Mike | Abundance 🌟
@abundance
Not enough focus on incentivizing quality. When there are so many projects, tokens, systems, and mechanisms that incentivize engagement, and only a few small projects that try to promote quality content, it's not surprising that we get so much engagement farming. You can't solve this with LLMs alone, because you end up filtering too many accounts that mostly use the platform normally but may also want to profit from the incentives (or that just get caught in the LLM filter regardless). If that's the approach, there needs to be stronger bottom-up user feedback on spammers. And if the incentives to create quality content were stronger, we'd have "quality content farming" instead, by both users and bots, and no one would be complaining about either.
Trigs
@trigs
I think they're just solving it algorithmically, which is also part of the core of your point: they aren't looking at the content, only at the engagement metrics (how many times an account has posted and how much engagement that has generated from the OP). Not that LLMs could actually assess content quality either, but they're not even going that far; they stop right at engagement as the only quality signal.
Mike | Abundance 🌟
@abundance
💯. Big part of the problem. LLMs can potentially get better at detecting real bots/spammers (as well as quality content), but they need to be trained on data from users. Without this bottom-up feedback, users will keep getting caught in these filters for normal behavior.