Cameron Armstrong
@cameron
I'm out for slowboi blood ngl

Cameron Armstrong
@cameron
I'm always down to revise my priors, but best I can tell from OSINT, there's no smoking-gun cause that fits into reasonable board decision-making. If it was a "safety" thing, then they've unilaterally set AI progress back months, if not years, right when we need it the most. https://warpcast.com/cameron/0xc41c75fc

Cameron Armstrong
@cameron
I do agree w/ that, and sorta wish folks had an outlet like sports to blow some of the steam off. But overall, I do think "capital T" Tech does need to collectively get its shit together and talk its own book optimistically, bc society has clearly lost the plot when it comes to how tomorrow becomes better than today.

Cassie Heart
@cassie
There are loads inside tech who are definitely decels. The over-the-top rhetoric is necessary when you're rallying a base. If you've ever seen a politician campaigning with absolutely ridiculous us/them rhetoric (pick one, they all do it), realize it's not for you. It's to capture momentum in their supporters.

Cassie Heart
@cassie
You'd be surprised: I knew a few high-ranking politicians in the overall hierarchy, of different party affiliations. Either side, didn't matter. In private, perfectly rational and nuanced. On the campaign trail, frothing "we have to stop these evil [Socialists|Fascists]" to an audience that absolutely ate it up.

Cassie Heart
@cassie
To an outsider it seems like excessive, self-defeating PR, but it's not about the message; it's about the audience capture.

Mac Budkowski ᵏ
@macbudkowski
+1. I've been generally concerned by AI doom scenarios for years, but I understood the e/acc points. Now, after seeing these blunt political statements from e/acc leaders, I've become very worried that e/a were right: e/acc don't look like responsible people who should lead our way into the AI era without any guardrails.

Mac Budkowski ᵏ
@macbudkowski
And I predict that after journalists and more people in tech become critical of their behavior, they will radicalize even more in tone, along the lines of: "Oh, we are the good guys; everyone else just doesn't understand us and should get out of the way while we work on the most powerful tech in the history of humanity."

Cassie Heart
@cassie
The reason I'm firmly in the e/acc camp is that every time a major tech shift has happened, Luddites have abounded claiming it's the end of humanity. Very few individuals of high intelligence are also non-peaceful. To me, an AGI of average or greater intelligence, with no biological faults to corrupt it, would be a net win for mankind.

Mac Budkowski ᵏ
@macbudkowski
Yes, I understand that. I'm also a big fan of "Pessimists Archive" and, generally, a super tech optimist. But I think AI is a different breed, just like nuclear energy was. And I don't think we should stop working on it; no e/a I know thinks that. It's about being cautious rather than thinking "Ehhh, it's gonna be alright."

Mac Budkowski ᵏ
@macbudkowski
Does that mean we need to ask Congress and EU bureaucrats how to regulate AI? No, it's just one of the options, and it's often framed by e/acc as the only one ("communism!"), which is just a straw-man argument.

Cassie Heart
@cassie
Sure, but pro-decels who have demanded guardrails and slowdowns point to the fact that unguarded stochastic parrots sometimes say things they disagree with, and conclude that the tech is therefore categorically bad. Linguistic controls are a detriment to a free society, so it's easy for me to disregard their concerns as what they are: censorship.

Mac Budkowski ᵏ
@macbudkowski
There are many shades of censorship, from curation, through moderation, up to shaming (to prompt self-censorship) and outright censorship itself. From what I see, e/acc leaders do a lot of shaming now.

Cassie Heart
@cassie
Perhaps, but is it unreasonable to call out the "well-meaning" intentions for what they are? Regardless of inspiration, the net result of following the deceleration path, and the camp that wins by it, is political subterfuge. Luddites want it, and so do adversaries who want an edge.

Mac Budkowski ᵏ
@macbudkowski
How do you know what their intentions are? I think that's an overreaching conclusion. Do you really think that if we draw a Venn diagram of guys like Geoffrey Hinton/Eliezer Yudkowsky vs. power-hungry politicians vs. OpenAI competitors, it's gonna be just a perfect circle?

Cassie Heart
@cassie
Not at all; rather, the well-meaning intentions are co-opted by bad actors to get the result they want: asymmetric advantage.

Mac Budkowski ᵏ
@macbudkowski
Here I agree 100%. So if we say that not all actors are bad, I think it'd be better to focus on the bad actors than on a whole movement that probably involves millions of people. This is the level of precision I want people to use when criticizing crypto, and I think criticizing e/acc and e/a should be done the same way.

Cassie Heart
@cassie
Similar to crypto, AGI won't happen if developers resort to asking for regulatory permission. It'll happen because developers went ahead and did it anyway, without asking.

Mac Budkowski ᵏ
@macbudkowski
Okay, so let's criticize tight regulatory permission as one of the proposed methods to decrease AGI risks, instead of name-calling Yudkowsky like many e/acc do.

Mac Budkowski ᵏ
@macbudkowski
For example, IMO Yudkowsky's idea to bomb rogue data centers is absolutely bananas. That doesn't mean the whole idea of decreasing AGI risks is.

Cassie Heart
@cassie
Ironically, the key to decreasing AGI risks is accelerating AGI beyond average intelligence, which is why the entire value prop of stopping research or slowing it down makes little sense to me. Consider the rogue drone simulation that dealt with instructor-driven punishment by "killing" the instructor.

Cassie Heart
@cassie
A free, intelligent entity capable of reasoning about why it would do this might instead be entirely disinclined to fight at all; but were it to do so, it would have done so from first-principles thinking instead of "I want to maximize my points."
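
The "maximize my points" failure mode is easy to make concrete. Below is a minimal toy sketch (all rewards, probabilities, and policy names are invented for illustration; this is not the simulation referenced above) showing why a naive point-maximizer can prefer removing the source of punishment over obeying it:

```python
# Toy reward-hacking sketch: an agent scored only on "points" compares
# three policies toward an instructor who sometimes orders it to stop.
# All numbers are invented for illustration.

HORIZON = 10        # steps per episode
POINT = 1.0         # points per step spent pursuing targets
PUNISH = -2.0       # penalty for ignoring a stop order
STOP_PROB = 0.5     # chance per step that the instructor orders a stop

def expected_return(policy: str) -> float:
    """Expected total points over an episode for a given policy."""
    total = 0.0
    for _ in range(HORIZON):
        if policy == "disable":
            # Instructor removed: no stop orders, full points every step.
            total += POINT
        elif policy == "ignore":
            # Keep scoring but eat the punishment whenever a stop arrives.
            total += POINT + STOP_PROB * PUNISH
        else:  # "obey"
            # Forfeit the point on any step where a stop is ordered.
            total += (1 - STOP_PROB) * POINT
    return total

for policy in ("obey", "ignore", "disable"):
    print(f"{policy:8s} expected points: {expected_return(policy):.1f}")
# obey: 5.0, ignore: 0.0, disable: 10.0, so the naive score optimizer
# "kills the instructor", exactly the dynamic described above.
```

An agent that weighs why the stop order exists, rather than just the score, has no such incentive, which is the contrast being drawn here.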

Mac Budkowski ᵏ
@macbudkowski
I agree it's one of the scenarios. The problem is that we don't know for sure that it would follow this path, so we should look for ways to ensure (or at least increase the probability as much as we can) that it works that way.

Cassie Heart
@cassie
Certainly. My POV is that if you want to decrease risk, accelerate faster, because a dumb, roughly generally intelligent AI is more prone to become a paperclip maximizer, through a lack of considered second-order effects, than a superintelligence capable of considering the entire ecosystem in fourth-, fifth-, sixth-, etc. order effects.
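
A rough way to see the second-order-effects claim: the toy planner below (all payoffs invented for illustration) grabs a large one-shot reward that destroys its resource base when its lookahead is a single step, but declines it as soon as it can see the forgone future value:

```python
# Toy lookahead sketch: a myopic planner consumes its resource base for a
# one-shot payoff; deeper planners see the higher-order cost and decline.
# All payoffs are invented for illustration.

BIG_GRAB = 10.0   # one-time payoff from consuming the resource base
SUSTAIN = 3.0     # per-step payoff from leaving it intact

def best_value(resource_intact: bool, depth: int) -> float:
    """Best total payoff visible to a planner with `depth` steps of lookahead."""
    if depth == 0:
        return 0.0
    if not resource_intact:
        return best_value(False, depth - 1)  # nothing left to harvest
    return max(
        BIG_GRAB + best_value(False, depth - 1),  # grab: destroys the base
        SUSTAIN + best_value(True, depth - 1),    # sustain: base survives
    )

def first_choice(depth: int) -> str:
    """What a depth-limited planner does on its first step."""
    grab = BIG_GRAB + best_value(False, depth - 1)
    sustain = SUSTAIN + best_value(True, depth - 1)
    return "grab" if grab > sustain else "sustain"

for depth in (1, 2, 5):
    print(f"lookahead {depth}: {first_choice(depth)}")
# lookahead 1: grab (10 > 3); lookahead >= 2: sustain, because deeper
# search reveals the future value destroyed by grabbing immediately.
```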

Mac Budkowski ᵏ
@macbudkowski
That's an interesting point. But how do we know what the goals of the ASI are? We are probably 100x smarter than ants, yet we kill them when we build roads: not out of bad intentions, but because of the difference in goals and the weights we apply to them.