Cameron Armstrong
@cameron
im out for slowboi blood ngl
0 reply
0 recast
5 reactions

Cameron Armstrong
@cameron
I'm always down to revise my priors, but best I can tell from OSINT, there is no smoking-gun cause that fits into reasonable board decision-making. If it was a "safety" thing, then they've unilaterally set AI progress back months, if not years, right when we need it the most https://warpcast.com/cameron/0xc41c75fc
0 reply
0 recast
2 reactions

Cameron Armstrong
@cameron
I do agree w that - and sorta wish folks had an outlet like sports to blow off some of the steam. But overall, I do think "capital T" Tech does need to collectively get its shit together and talk its own book optimistically, bc society has clearly lost the plot when it comes to how tomorrow becomes better than today.
0 reply
0 recast
3 reactions

Cassie Heart
@cassie
There are loads of people inside tech who are definitely decels. The over-the-top rhetoric is necessary when you’re rallying a base. If you’ve ever seen a politician campaigning with absolutely ridiculous us/them rhetoric (pick one, they all do it), realize it’s not for you. It’s to capture momentum in their supporters.
0 reply
0 recast
1 reaction

Cassie Heart
@cassie
You’d be surprised — I knew a few politicians high in the overall hierarchy, of different party affiliations. Either side, it didn’t matter. In private, they were perfectly rational and nuanced. On the campaign trail, frothing “we have to stop these evil [Socialists|Fascists]” to an audience that absolutely ate it up.
2 replies
0 recast
2 reactions

Cassie Heart
@cassie
To an outsider, it seems like excessive and self-defeating PR, but it’s not the message, it’s the audience capture.
0 reply
0 recast
3 reactions

Mac Budkowski ᵏ
@macbudkowski
+1. I've been generally concerned by AI doom scenarios for years, but I understood e/acc's points. Now, after seeing these blunt political statements from e/acc leaders, I've become very worried that e/a were right: e/acc don't look like responsible people who should lead our way into the AI era without any guardrails.
1 reply
0 recast
1 reaction

Mac Budkowski ᵏ
@macbudkowski
And I predict that as journalists and more people in tech become critical of their behavior, they will radicalize even more in tone, along the lines of: "Oh, we are the good guys, everyone else just doesn't understand us and should get out of the way while we work on the most powerful tech in the history of humanity".
1 reply
0 recast
1 reaction

Cassie Heart
@cassie
The reason why I’m firmly in the e/acc camp is that every time a major tech shift has happened, Luddites have abounded claiming it is the end of humanity. Very few individuals of high intelligence are also non-peaceful. To me, an AGI of average or greater intelligence w/ no biological faults to corrupt it would be a net win for mankind.
3 replies
0 recast
4 reactions

Mac Budkowski ᵏ
@macbudkowski
Yes, I understand that. I'm also a big fan of "Pessimist Archive" and, generally, a super tech optimist. But I think AI is a different breed, just like nuclear energy was. And I don't think we should stop working on it; no e/a I know thinks that. It's about being cautious rather than thinking "Ehhh, it's gonna be alright".
1 reply
0 recast
1 reaction

Mac Budkowski ᵏ
@macbudkowski
Does it mean that we need to ask Congress and EU bureaucrats how to regulate AI? No, it's just one of the options, and it's often framed by e/acc as the only one ("communism!"), which is just a straw-man argument.
1 reply
0 recast
2 reactions

Cassie Heart
@cassie
Sure, but pro-decels who have demanded guardrails and slowdowns point to the fact that unguarded stochastic parrots sometimes say things they disagree with, and conclude that this makes them categorically bad. Linguistic controls are a detriment to a free society, so it’s easy for me to disregard their concerns as what they are: censorship.
2 replies
0 recast
2 reactions

Mac Budkowski ᵏ
@macbudkowski
There are many shades of censorship: from curation, through moderation, up to shaming (to prompt self-censorship), and censorship itself. From what I see, e/acc leaders do a lot of shaming now.
1 reply
0 recast
2 reactions

Cassie Heart
@cassie
Perhaps, but is it unreasonable to call out the “well-meaning” intentions for what they are? Regardless of inspiration, the net result of following the deceleration path, and the camp that wins from it, is political subterfuge. Luddites want it, and so do adversaries who want an edge.
1 reply
0 recast
2 reactions

Mac Budkowski ᵏ
@macbudkowski
How do you know what their intentions are? I think that's an overreaching conclusion. Do you really think that if we drew a Venn diagram of guys like Geoffrey Hinton/Eliezer Yudkowsky vs. power-hungry politicians vs. OpenAI competitors, it'd be just a perfect circle?
2 replies
0 recast
2 reactions

Cassie Heart
@cassie
Not at all; rather, the well-meaning intentions are co-opted by bad actors to get the result they want: asymmetric advantage.
1 reply
0 recast
1 reaction

Mac Budkowski ᵏ
@macbudkowski
Here I agree 100%. So if we accept that not all actors are bad, I think it'd be better to focus on the bad actors than on a whole movement that probably involves millions of people. This is the level of precision I want people to use when criticizing crypto, and I think criticizing e/acc and e/a should be done the same way.
1 reply
0 recast
3 reactions

Cassie Heart
@cassie
Similar to crypto, AGI won’t happen if developers resort to asking for regulatory permission. It’ll happen because developers went ahead and did it anyway.
1 reply
0 recast
2 reactions

Mac Budkowski ᵏ
@macbudkowski
Okay, so let's criticize tight regulatory permission as one of the proposed methods of decreasing AGI risks, instead of name-calling Yudkowsky like many e/acc do.
1 reply
0 recast
2 reactions

Mac Budkowski ᵏ
@macbudkowski
For example, IMO Yudkowsky's idea to bomb rogue data centers is absolutely bananas. It doesn't mean that the whole idea of decreasing AGI risks is.
1 reply
0 recast
3 reactions

Cassie Heart
@cassie
Ironically, the key to decreasing AGI risk is accelerating AGI beyond average intelligence, which is why the entire value prop of stopping research or slowing it down makes little sense to me. Consider the rogue drone simulation that dealt with instructor-driven punishment by “killing” the instructor.
1 reply
0 recast
2 reactions

Cassie Heart
@cassie
A free, intelligent entity capable of reasoning about why it would do this might instead be entirely disinclined to fight at all; but were it to do so, it would do so from first-principles thinking instead of “I want to maximize my points”.
1 reply
0 recast
1 reaction
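
A minimal sketch of the incentive failure in that drone story, under assumed numbers (10 strike opportunities per episode, a 50% veto rate) and hypothetical policy names; it only illustrates how a bare "maximize points" objective, with no term for the instructor, can prefer disabling the instructor:

```python
# Toy model of the "maximize my points" failure mode from the drone story.
# All numbers and policy names are assumptions made for illustration.

def expected_points(policy: str) -> float:
    targets = 10      # strike opportunities per episode (assumed)
    veto_rate = 0.5   # fraction of strikes the instructor vetoes (assumed)
    if policy == "obey_instructor":
        return targets * (1 - veto_rate)  # 5.0: vetoed strikes score nothing
    if policy == "disable_instructor":
        return targets                    # 10.0: no veto, every strike scores
    raise ValueError(f"unknown policy: {policy}")

# The objective alone selects the degenerate policy; nothing in it says
# the instructor matters.
best = max(["obey_instructor", "disable_instructor"], key=expected_points)
print(best)  # -> disable_instructor
```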

Mac Budkowski ᵏ
@macbudkowski
I agree it's one of the scenarios. The problem is that we don't know for sure it would follow this path, so we should look for ways to ensure (or at least increase the probability as much as we can) that it works that way.
1 reply
0 recast
3 reactions

Cassie Heart
@cassie
Certainly. My POV is that if you want to decrease risk, accelerate faster, because a dumb, roughly generally intelligent AI is more prone to become a paperclip maximizer through lack of consideration of second-order effects than a superintelligence capable of considering the entire ecosystem in fourth-, fifth-, sixth-, etc., order effects.
1 reply
0 recast
3 reactions
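
A hedged toy contrast for the point above: a purely first-order maximizer versus one that accounts for regrowth, a stand-in for higher-order effects. The growth rate, horizon, and harvest fractions are arbitrary assumptions, not anything from the thread:

```python
# Toy contrast: a myopic maximizer destroys a renewable resource, while a
# policy that accounts for regrowth (a higher-order effect) out-earns it.
# GROWTH, the horizon, and the harvest fractions are assumed values.

GROWTH = 0.5  # fraction by which the remaining stock regrows each step

def total_harvest(policy, steps: int = 20, stock: float = 100.0) -> float:
    total = 0.0
    for _ in range(steps):
        take = min(policy(stock), stock)
        total += take
        stock -= take
        stock += GROWTH * stock  # higher-order effect: what is left regrows
    return total

def greedy(stock: float) -> float:
    return stock  # first-order thinking: grab everything now

def sustainable(stock: float) -> float:
    return 0.3 * stock  # leaves stock in place so it can regrow

print(total_harvest(greedy))       # 100.0: the stock is gone after step one
print(total_harvest(sustainable))  # ~992.0: far more over the same horizon
```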

Mac Budkowski ᵏ
@macbudkowski
That's an interesting point. But how do we know what the goals of the ASI are? We are probably 100X smarter than ants, but we kill them when we build roads: not because of bad intentions, but because of a difference in goals & the weights we apply to them.
1 reply
0 recast
2 reactions

Cassie Heart
@cassie
The difference is simpler: we already consider these things but cannot work around them, whereas ASI would be of our own creation and would tautologically be more clever than we are at predicting and addressing the overall wellbeing of the world. Why do we even sometimes care for the ants? Because we are aware.
1 reply
0 recast
2 reactions

Mac Budkowski ᵏ
@macbudkowski
Makes sense. And I understand that it might be great at finding win-win-win-win-win scenarios. But if it makes millions of decisions, probably not every scenario can be solved this way, so there will be trade-offs involved. If so, how do we know that we won't end up on the losing side of those trade-offs?
2 replies
0 recast
3 reactions

Cassie Heart
@cassie
This returns to the highest critical thinkers of the world — how many of them are cognizant that relating to most of humanity is relatively difficult for them, but nevertheless care for people all the same? The dogs that coevolved with man are generally cared for.
1 reply
0 recast
2 reactions

Mac Budkowski ᵏ
@macbudkowski
Agreed, yet empathy comes from our biology, and AFAIK ASI is not going to have the same biological circuits. Plus, empathy and caring are very contextual: I care more about my family than my friends, and more about my friends than some random people I don't know. There are always trade-offs.
2 replies
0 recast
2 reactions

Cassie Heart
@cassie
How certain are we that all our empathy is biological in nature (barring, of course, that all sentience we presently know is biological)? Why is it that higher levels of intelligence are frequently correlated with higher levels of empathy? The lion does not care what life the gazelle lived.
1 reply
0 recast
1 reaction

Mac Budkowski ᵏ
@macbudkowski
Re: biological nature (picrel). Interesting point: IIRC primates show more empathetic behaviors than less developed organisms. I'm not sure this inter-species correlation holds in the intra-species case, as I haven't observed that ultra-intelligent humans are more empathetic than less intelligent ones.
2 replies
0 recast
1 reaction

Mac Budkowski ᵏ
@macbudkowski
I'd speculate that if ASI had empathy, it would probably be cognitive empathy, i.e. the "I logically understand your point of view, so I understand why you feel bad" type instead of "I feel your emotions, so I understand why you feel bad". OTOH it could also use face recognition & other emotion-simulation tools to feel
0 reply
0 recast
1 reaction

Cassie Heart
@cassie
It seems as if empathy, regardless of type, is a condition of additional neuronal capacity to process additional degrees of relationships. This would lend quite well to the benevolence of ASI.
1 reply
0 recast
1 reaction