Jordan Olmstead
@jchanolm.eth
Simulating Justice: https://jchanolm.substack.com/p/simulating-justice-part-2?r=ru74j Updating Rawls' Veil of Ignorance thought experiment by running simulations to determine principles of justice rational agents would select to govern emerging technologies like AGI and biotech.
1 reply
1 recast
1 reaction

xh3b4sd ↑
@xh3b4sd.eth
I read through some of your writing there because I was very curious about the ideas you mention. Having read a couple of different passages, I am a bit torn. Experimenting and simulating is super cool, but I am not sure what your takeaways are, if there are any in concrete form. Is this a work in progress? I couldn't find a bottom line, nor could I deduce where this goes next. I am not sure whether I read it right, but in the first part it appeared to me that you mix a lot of analysis with your own rather subjective assumptions, and reading those assumptions and statements, I felt they were not very logical or conclusive. In any case, I would like to brainstorm on those things together a bit more, or maybe just chit-chat. The direction you are going in here is certainly super interesting, even if there turns out to be some philosophical disagreement.
1 reply
0 recast
0 reaction

Jordan Olmstead
@jchanolm.eth
Heyy. Yeah, it's part 2 of a 4-part series. Sure, happy to chat.
1 reply
0 recast
0 reaction

xh3b4sd ↑
@xh3b4sd.eth
Ok cool. Can you tell me what "justice" is? How is that defined for you and why is it relevant?
1 reply
0 recast
0 reaction

Jordan Olmstead
@jchanolm.eth
Justice is a state of affairs where people are treated fairly and impartially and receive appropriate consequences for their actions. I am focusing on the *fairness* piece here. In Part I of the series I home in on how social contract theorists like Rawls and Nozick believe that, when it comes to government, principles of justice/fairness are best derived by thinking through the arrangements rational actors would select for governance, and why. It’s relevant because the objective of the piece is to start a conversation about what just regulation of emerging technologies would look like, by exploring what principles of justice rational AI agents placed behind the Veil of Ignorance might select for regulating emerging tech. I think this is interesting from a political philosophy perspective because the arrangements selected by the agents are a rough proxy for what humans might select. Rough, but better than a pure thought experiment.
1 reply
0 recast
0 reaction

xh3b4sd ↑
@xh3b4sd.eth
"receive appropriate consequences" sounds good. Consequences in social systems are usually interpreted as effects of somebody's own actions, which is how it should be. My sense is that what is often touted as justice or fairness has not much to do with direct individual side effects, but rather consequences through higher arching system dynamics. For instance "an eye for an eye" makes sense because you get punished for what you yourself have done. On the other hand, dictated resource redistribution doesn't sound very fair to me from neither point of view. Taking something away from the working man only to give it to another working man sounds pretty arbitrary and frankly, inflammatory. We disincentivize competence for one cohort, and reassure incompetence for another. That's bad. What I am trying to say on a meta level is that there is no justice or fairness in complex systems. Those notions should rather be looked at as tradeoffs. Our systems should optimize for reasonable tradeoffs instead of fairness.
1 reply
0 recast
0 reaction