counterpoint - this particular example is actually a good thing in every way
+ OP clearly states that they were not using it for therapy, did not seek therapy, did not believe the machine could give unbiased therapeutic responses
+ the robot was not in "therapy mode", hadn't been asked for any help or advice
+ OP was working through a project and introduced a human concern (mentioned a stressor)
+ the robot wasn't a cold, distant robot; it heard a human concern and responded humanely (in a manner respectful of the human concern, that is, relative to this person's situation)
this is like the opposite of the "skynet problem", where a cold, hard robot does math, math, math, and decides that wiping out humanity is the best solution. in this case, the robot pre-emptively gave a "caring" (not really, i know how inference works) response. good.
if you're working with a robot in a warehouse and you stub your toe, do you want it to say "oh babe im so sorry lemme kiss it" or "damaged meatbag now belongs in trash"?