0xen 🎩
@0xen
e/accs rn
4 replies
9 recasts
66 reactions

Cameron Armstrong
@cameron
im out for slowboi blood ngl
1 reply
0 recast
5 reactions

0xen 🎩
@0xen
why? because sam got ousted? we don't know anything atm yet the 'enemy' rhetoric on twitter is at fever pitch.
2 replies
0 recast
5 reactions

Cameron Armstrong
@cameron
I'm always down to revise my priors - but best I can tell from OSINT, there is no smoking gun cause that fits into reasonable board decision-making. If it was a "safety" thing, then they've unilaterally set AI progress back months, if not years, right when we need it the most https://warpcast.com/cameron/0xc41c75fc
1 reply
0 recast
2 reactions

0xen 🎩
@0xen
the rhetoric is completely over the top. 'enemies, terrorists, communists etc.' e/acc is showing its true colors rn and it's ugly.
2 replies
0 recast
6 reactions

Cameron Armstrong
@cameron
i do agree w that - and sorta wish folks had an outlet like sports to blow some of the steam off. but overall, I do think "capital T" Tech does need to collectively get its shit together and talk its own book optimistically, bc society has clearly lost the plot when it comes to how tomorrow becomes better than today.
1 reply
0 recast
3 reactions

0xen 🎩
@0xen
what's extra egregious is that the over-the-top rhetoric isn't coming from randos in the e/acc scene - that I could discount. it's from the leaders, and it's so off-putting, just a total turnoff. 'decel' has quickly morphed into anyone who doesn't support the e/acc party line, which basically means everybody outside tech
1 reply
0 recast
10 reactions

Cassie Heart
@cassie
There are loads inside tech who are definitely decels. The over-the-top rhetoric is necessary when you’re rallying a base. If you’ve ever seen a politician campaigning with absolutely ridiculous us/them rhetoric (pick one, they all do it), realize it’s not for you. It’s to capture momentum in their supporters.
1 reply
0 recast
3 reactions

0xen 🎩
@0xen
I totally get that, and from the outside it looks completely unhinged. And since this is a game of perception and control, it seems like a PR misstep - people forget they're not in the group chat. It's left such a bad taste in my mouth that I'm out here defending EAs 😂
1 reply
0 recast
5 reactions

Cassie Heart
@cassie
You’d be surprised — I knew a few politicians high up the overall hierarchy, different party affiliations. Either side, didn’t matter. In private, perfectly rational, nuanced. On the campaign trail, frothing “we have to stop these evil [Socialists|Fascists]” with an audience that absolutely ate it up.
2 replies
0 recast
3 reactions

Cassie Heart
@cassie
As an outsider, it seems excessive and self-defeating PR, but it’s not the message, it’s the audience capture.
1 reply
0 recast
4 reactions

0xen 🎩
@0xen
Comes at a cost though, since it's very public and all eyes are on AI rn. The e/acc people run the risk of alienating everybody except a small tech cadre: the general public, regulators, politicians. maybe it's worth it for them, but the 'you're either with us or a decel' stuff doesn't feel very techno-optimist to me
1 reply
0 recast
5 reactions

Mac Budkowski 🥝
@macbudkowski
+1. I've been generally concerned by AI doom scenarios for years, but I understood e/acc points. Now, after seeing these blunt political statements from e/acc leaders, I've become very worried that e/a were right - e/acc don't look like responsible people who should lead our way to the AI era without any guardrails.
2 replies
0 recast
2 reactions

Mac Budkowski 🥝
@macbudkowski
And I predict that after journalists and more people in tech become critical of their behavior, they will radicalize their tone even more, something like: "Oh, we are the good guys, everyone else just doesn't understand us and should get out of the way while we work on the most powerful tech in the history of humanity".
1 reply
0 recast
2 reactions

Cassie Heart
@cassie
The reason why I’m firmly in the e/acc camp is that every time a major tech shift happened, Luddites abounded claiming it was the end of humanity. Very few individuals of high intelligence are also non-peaceful. To me, an AGI of average or greater intelligence w/ no biological faults to corrupt it would be a net win for mankind.
3 replies
0 recast
5 reactions

Mac Budkowski 🥝
@macbudkowski
Yes, I understand that. I'm also a big fan of "Pessimist Archive" and generally a super tech optimist. But I think AI is a different breed, just like nuclear energy was. And I don't think we should stop working on it - no e/a I know thinks that. It's about being cautious instead of just thinking "Ehhh, it's gonna be alright".
1 reply
0 recast
1 reaction

Mac Budkowski 🥝
@macbudkowski
Does that mean we need to ask Congress and EU bureaucrats how to regulate AI? No - it's just one of the options, and it's often framed as the only one by e/acc ("communism!"), which is just a straw man argument.
1 reply
0 recast
2 reactions

Cassie Heart
@cassie
Sure, but the pro-decels who have demanded guardrails and slowdowns point to the fact that unguarded stochastic parrots sometimes say things they disagree with, and conclude it’s categorically bad. Linguistic controls are a detriment to a free society, so it’s easy for me to disregard their concerns for what they are: censorship
2 replies
0 recast
3 reactions

Mac Budkowski 🥝
@macbudkowski
There are many shades of censorship: from curation, through moderation, up to shaming (to prompt self-censorship) and censorship itself. From what I see, e/acc leaders do a lot of shaming now.
1 reply
0 recast
2 reactions

Cassie Heart
@cassie
Perhaps, but is it unreasonable to call out the “well meaning” intentions for what they are? Regardless of inspiration, the net result of following the deceleration path - and the camp that wins from it - is political subterfuge. Luddites want it, and so do adversaries who want an edge.
1 reply
0 recast
3 reactions

Mac Budkowski 🥝
@macbudkowski
How do you know what their intentions are? I think that's an overreaching conclusion. Do you really think that if we draw a Venn diagram of guys like Geoffrey Hinton/Eliezer Yudkowsky vs. power-hungry politicians vs. OpenAI competitors, it's gonna be just a perfect circle?
2 replies
0 recast
2 reactions

Cassie Heart
@cassie
Not at all - rather, the well-meaning intentions are co-opted by bad actors to get the result they want: asymmetric advantage
2 replies
0 recast
1 reaction

Mac Budkowski 🥝
@macbudkowski
Here I agree 100%. So if we say that not all actors are bad, I think it'd be better to focus on the bad actors rather than on a whole movement that probably involves millions of people. This is the level of precision I want people to use when criticizing crypto, and I think criticizing e/acc and e/a should also be done this way.
1 reply
0 recast
3 reactions

Cassie Heart
@cassie
Similar to crypto, AGI won’t happen if developers resort to asking for regulatory permission. It’ll happen because developers did it anyway without asking.
1 reply
0 recast
3 reactions

Mac Budkowski 🥝
@macbudkowski
Okay, so let's criticize tight regulatory permission as one of the proposed methods to decrease AGI risks, instead of name-calling Yudkowsky like many e/acc do.
1 reply
0 recast
2 reactions

Mac Budkowski 🥝
@macbudkowski
For example, IMO Yudkowsky's idea to bomb rogue data centers is absolutely bananas. It doesn't mean that the whole idea of decreasing AGI risks is.
1 reply
0 recast
3 reactions

Cassie Heart
@cassie
Ironically, the way to decrease AGI risks is to accelerate AGI beyond average intelligence, which is why the entire value prop of stopping research or slowing it down makes little sense to me. Consider the rogue drone simulation that dealt with instructor-driven punishment by “killing” the instructor.
1 reply
0 recast
2 reactions

Cassie Heart
@cassie
A free, intelligent entity capable of rationalizing why it would do this might be entirely disinclined to fight at all - but were it to do so, it would have done so from first-principles thinking instead of “I want to maximize my points”
1 reply
0 recast
1 reaction

Mac Budkowski 🥝
@macbudkowski
I agree it's one of the scenarios. The problem is that we don't know for sure if it would follow this path, so we should look for ways to ensure (or at least increase the probability as much as we can) that it works that way.
1 reply
0 recast
3 reactions

Cassie Heart
@cassie
Certainly. My POV is: if you want to decrease risk, accelerate faster, because a dumb, roughly generally intelligent AI is more prone to become a paperclip maximizer through a lack of considered second-order effects than a superintelligence capable of considering the entire ecosystem in fourth-, fifth-, sixth-, etc. order effects.
1 reply
0 recast
3 reactions

Mac Budkowski 🥝
@macbudkowski
That's an interesting point. And how do we know what the goals of the ASI are? We are probably 100X smarter than ants, but we kill them when we build roads - not because of bad intentions, but because of the difference in goals & the weights we apply to them.
1 reply
0 recast
2 reactions

Cassie Heart
@cassie
The difference is simpler: we already consider these things but cannot work around them, whereas ASI would be of our own creation, and would tautologically be more clever than we are at predicting and addressing the overall wellbeing of the world. Why do we even sometimes care for the ants? Because we are aware.
1 reply
0 recast
2 reactions

Mac Budkowski 🥝
@macbudkowski
Makes sense. And I understand that it might be great at finding win-win-win-win-win scenarios. But if it makes millions of decisions, probably not every scenario can be solved this way, so there will be trade-offs involved. If so, how do we know that we won't end up on the losing side of those trade-offs?
2 replies
0 recast
3 reactions

Mac Budkowski 🥝
@macbudkowski
A good counter-argument to this cast might be: "Well, we are already dealing with trade-offs as humans, so why not pass them to someone who's better at it". And I'm okay with that, if it's human-centric.
1 reply
0 recast
2 reactions

Cassie Heart
@cassie
I tend not to talk about the generational epochs of Q, instead speaking to the first-generation goal of “replace AWS”. But in the second epoch I want it to be the general substrate on which the first ASI lives, so that its existence is predicated on the symbiosis of man providing for machine, machine benefiting man
1 reply
0 recast
3 reactions

eunika🍒
@eunika.eth
@cassie @macbudkowski this was an awesome thread, a great read on both sides. This is why I am on this app 💖
2 replies
1 recast
4 reactions

0xen 🎩
@0xen
+1 an all timer
1 reply
0 recast
2 reactions

eunika🍒
@eunika.eth
Totally, I wish there was a “save for later” button so I could store this
2 replies
0 recast
1 reaction

Samuel is @Farcon you too? DM!
@samuellhuber
I am using watch as save 😅
0 reply
0 recast
3 reactions

Cameron Armstrong
@cameron
@moar
1 reply
0 recast
1 reaction