Sam (crazy candle person) ✦
@samantha
I’ve been reading “The Age of Surveillance Capitalism,” and the author posits that once social networks start censoring users, they become responsible for moderating the platform. Today tech CEOs testified before Congress, and it seems the majority of senators want *more* moderation on these platforms.
3 replies
2 recasts
33 reactions

Sam (crazy candle person) ✦
@samantha
My question to the philosophy channel: how do you build an uncensored social platform at scale when there is harm such as CP, sextortion, scams, bots, etc.? In this context harm is subjective; I don’t mean harm = illegal. CP is illegal; a “send 1,000 $DEGEN, get 10k back” scheme is a scam but not illegal. They are both bad.
12 replies
0 recast
2 reactions

grin↑
@grin
spent a bunch of time thinking about this at LBRY. our best answer is that moderation should happen in layers and be opt-in. at the protocol layer, it's least moderated: either no moderation at all, or maybe rare, low-resolution decisions (e.g. slashing validators, kicking out nodes, etc.)
1 reply
0 recast
6 reactions

grin↑
@grin
at higher levels (think hubs or maybe apps), you have ppl (and companies) who follow the firehose and publish their moderation activity. then your app or hub subscribes to it, and that changes what you see as a user. any popular app will come with a default list of subbed moderators, but you can also customize (see the sketch after this cast).
1 reply
0 recast
1 reaction
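
A minimal sketch of the opt-in, layered model grin describes, assuming moderation decisions are published as data that clients compose. Every name here (ModerationAction, ModeratorFeed, ModerationPolicy, @spam-watch) is hypothetical and illustrative, not a real Farcaster or LBRY API:

```ts
type CastId = string;

// A single published moderation decision from one moderator.
interface ModerationAction {
  moderator: string;        // e.g. "@spam-watch" — who published the decision
  target: CastId;           // the cast the decision applies to
  action: "hide" | "flag";  // hide entirely, or show behind a warning
  reason?: string;
}

// A moderator follows the firehose and publishes a feed of actions;
// apps and hubs subscribe to whichever feeds they choose.
interface ModeratorFeed {
  name: string;
  actions(): ModerationAction[]; // in practice this would be a stream
}

// Client-side policy: a default subscription list shipped by the app,
// which the user can extend or prune.
class ModerationPolicy {
  private subscribed = new Map<string, ModeratorFeed>();

  constructor(defaults: ModeratorFeed[]) {
    for (const feed of defaults) this.subscribed.set(feed.name, feed);
  }

  subscribe(feed: ModeratorFeed) { this.subscribed.set(feed.name, feed); }
  unsubscribe(name: string) { this.subscribed.delete(name); }

  // Merge every subscribed moderator's published actions into one view.
  // A "hide" from any subscribed moderator wins over a "flag".
  verdictFor(cast: CastId): "show" | "flag" | "hide" {
    let verdict: "show" | "flag" | "hide" = "show";
    for (const feed of this.subscribed.values()) {
      for (const a of feed.actions()) {
        if (a.target !== cast) continue;
        if (a.action === "hide") return "hide";
        verdict = "flag";
      }
    }
    return verdict;
  }
}

// Usage: the protocol layer stays unfiltered; each client applies only
// the moderators this user opted into.
const spamWatch: ModeratorFeed = {
  name: "@spam-watch",
  actions: () => [
    { moderator: "@spam-watch", target: "0xabc", action: "hide", reason: "scam" },
  ],
};

const policy = new ModerationPolicy([spamWatch]); // the app's default list
console.log(policy.verdictFor("0xabc")); // "hide"
console.log(policy.verdictFor("0xdef")); // "show" — untouched by subscribed mods
policy.unsubscribe("@spam-watch");       // user opts out of this moderator
console.log(policy.verdictFor("0xabc")); // "show" again
```

The design choice this illustrates is that the firehose itself is never filtered: moderation is just another data feed, so two apps reading the same network can show different views depending on which moderators their users subscribe to.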