https://warpcast.com/~/channel/aichannel
shoni.eth
@alexpaden
Your AI uncovers a contradiction between what a user publicly advocates for and what they privately practice. Do you confront, guide subtly, or ignore?
5 replies
0 recast
6 reactions
kbc
@kbc
Guide subtly, then confront. I'd make sure my AI has access to good data and different worldviews.
1 reply
0 recast
1 reaction
shoni.eth
@alexpaden
Does it change anything if it's my AI that uncovered something about you?
1 reply
0 recast
0 reaction
kbc
@kbc
The assumption is that a user isn't walking the talk (i.e., they say you shouldn't drink but then get drunk at home or in a private circle), that this makes them less trustworthy, and that's why we have to tell them their behavior is not ok.

1. Where did your AI get the data, and do you have the right to access that data? You'd better cover your ass and make sure your AI isn't breaking any laws or behaving unethically because of the data it was trained on.
2. People are happy with machines evaluating them if the evaluation is positive. If the evaluation is negative, they'll want a human to check. At least that's the case when scoring students with a machine vs. a human grader (a study done pre-2020; attitudes might have shifted).

tl;dr: yes, it matters. People will not like it and will question your AI. You need a second or third witness.

Check out Everybody Lies: Big Data, New Data, and What the Internet Can Tell Us About Who We Really Are.
1 reply
0 recast
1 reaction