Venkatesh Rao ☀️
@vgr
The basic way you do “decentralized” or “distributed” is to avoid ever having a single point of failure (SPOF) for anything. (The difference between the two, I think, boils down to how easy it is to assemble 2 or more fully redundant manifestations of a full system, but the distinction is irrelevant for discussing SPOFs.)

But critical path theory/theory of constraints suggests that any system with a closed boundary (closed in what sense? 🤔) will always have a bottleneck. At best you can make sure it moves around, which is an indicator of growth.

Can you be truly SPOF-resistant? Or will you always have things like Infura or L2 sequencers etc., to take Ethereum as an example?
2 replies
0 recast
5 reactions

Dan Finlay 🦊
@danfinlay
One way to model this is that adding additional cross-checks/validators just adds resilience to that SPOF. The chain is still a SPOF with many validators; it's just harder to fail. Your node is still a SPOF if you self-host; you just don't have to trust someone else's infra. Also, cross-checks always add latency, which cannot be regained by adding protocol.

This is maybe why, in addition to SPOF-resilience measures (like democracy), I have made a personal focus on SPOF-mobility. Delegation/revocation, permissionless interfaces: these can be used to nimbly adapt to SPOF failures when they do happen (sketch below).
1 reply
0 recast
0 reaction
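
A minimal sketch of that SPOF-mobility idea, assuming a generic provider interface; all names here are illustrative, not MetaMask's or any real library's API. The consumer holds a set of interchangeable providers and revokes/re-delegates to route around one that fails:

```typescript
// Hypothetical sketch of "SPOF-mobility": a consumer keeps revocable grants to
// interchangeable providers and re-delegates when the active one fails.
// All names here are illustrative, not any real library's API.

interface Provider {
  id: string;
  call(request: string): Promise<string>;
}

class MobileConsumer {
  private active: Provider;

  constructor(private providers: Provider[]) {
    if (providers.length === 0) throw new Error("need at least one provider");
    this.active = providers[0];
  }

  // Revoke the current delegation and grant it to the next available provider.
  revokeAndRedelegate(): void {
    const remaining = this.providers.filter((p) => p.id !== this.active.id);
    if (remaining.length === 0) throw new Error("no alternative providers left");
    this.providers = remaining;
    this.active = remaining[0];
  }

  // Try the active provider; on failure, move the dependency elsewhere.
  async request(payload: string): Promise<string> {
    try {
      return await this.active.call(payload);
    } catch {
      this.revokeAndRedelegate();
      return this.active.call(payload);
    }
  }
}
```

The point of the sketch is that resilience comes from cheap revocation and re-delegation at the consumer's edge, not from hardening any single provider.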

Dan Finlay 🦊
@danfinlay
> cross-checks always add latency, which cannot be regained by adding protocol

Correction: Can be regained by introducing delegation to a lighter-weight protocol.
1 reply
0 recast
0 reaction

Venkatesh Rao ☀️
@vgr
I don’t think personal nodes count as SPOFs if you can fail over to peers. True SPOFs are global singletons within a trust space. Like Intel fabs and Boeing are SPOFs for the US economy.
2 replies
0 recast
0 reaction

Venkatesh Rao ☀️
@vgr
The latency point feels important and CAP-theorem-ish? Also, a local SPOF may become effectively global if there isn’t sufficient partition resistance. No use in failover nodes if you can’t reach them.
1 reply
0 recast
1 reaction

Dan Finlay 🦊
@danfinlay
That's a good set of ingredients. So there are three dimensions I'm counting, for each component of a system (rough scoring sketch below):
- Resilience of the SPOF
- Availability of Alternatives
- Ease of Transition
1 reply
0 recast
1 reaction
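
One way to make those three dimensions concrete, as a hedged sketch with assumed names and an arbitrary 0–1 scale: score each component on all three, and note that the weakest dimension tends to dominate.

```typescript
// Illustrative sketch only: one way to record the three dimensions named above
// for each component of a system. The field names and 0–1 scale are assumptions.

interface ComponentAssessment {
  component: string;        // e.g. "L2 sequencer", "RPC provider"
  resilience: number;       // how hard is this SPOF to knock over? (0–1)
  alternatives: number;     // how many viable substitutes exist? (0–1)
  easeOfTransition: number; // how cheaply can users switch? (0–1)
}

// A crude aggregate: the weakest dimension dominates, since any one of them
// failing leaves you stuck with the SPOF.
function weakestDimension(a: ComponentAssessment): number {
  return Math.min(a.resilience, a.alternatives, a.easeOfTransition);
}

const sequencer: ComponentAssessment = {
  component: "L2 sequencer",
  resilience: 0.7,
  alternatives: 0.2,
  easeOfTransition: 0.3,
};

console.log(weakestDimension(sequencer)); // 0.2 — alternatives are the bottleneck
```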

Dan Finlay 🦊
@danfinlay
I like this because I'd previously identified these two categories as two distinct types of decentralization, and had adopted some terminology from Mark Miller ("strongly coupled" vs "weakly coupled" decentralization), but that didn't feel quite right, b/c they aren't alternatives: they're measures of different parts of a system.

"Strong coupling" = adding resilience/hardness to the SPOF, potentially at some cost. Maybe it should just be called "resilience" or something, which itself should probably refer to the specific conditions the system is resilient against. This is what I see Ethereum as maximizing: ensuring there is a SPOF somewhere with the highest public guarantees that can be offered.

"Weak coupling" = being easier to route around other nodes. Here lightness/consent/configurability matter most. It's more related to the availability of alternatives & the cost-effectiveness of switching. This is the kind of thing I'm trying to bring to eth via gator.metamask.io
1 reply
0 recast
0 reaction

Venkatesh Rao ☀️
@vgr
I think *automatic and transparent* failover with hot-swappability for any admin-end actions is the holy grail. DNS almost meets this standard, but then you have sketchy shit like BGP-level governance, especially across borders.

Also, it feels like you’re putting a positive spin on a SPOF as one strong and robust default mode, with failover being an exception handled by “backup” nodes that are weaker, like a spare tire. I’m thinking of a set of equal peers that load balance by default and flexibly reconfigure load management upon failure (sketch below). At all levels. Including switching “vendors”.
1 reply
0 recast
0 reaction
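
A rough sketch of that “equal peers” picture, with hypothetical names: every peer takes load by default via round-robin, and when one fails the pool reconfigures so the remaining equals absorb its share, rather than falling back to a weaker spare.

```typescript
// Hedged sketch of "equal peers": every peer takes load by default
// (round-robin), and on failure the pool reconfigures rather than
// falling back to a weaker spare. Names are illustrative.

interface Peer {
  id: string;
  handle(request: string): Promise<string>;
}

class PeerPool {
  private next = 0;

  constructor(private peers: Peer[]) {}

  private rotate(): Peer {
    const peer = this.peers[this.next % this.peers.length];
    this.next++;
    return peer;
  }

  // Requests are spread across all live peers; a failed peer is dropped
  // and the remaining equals absorb its share automatically.
  async send(request: string): Promise<string> {
    while (this.peers.length > 0) {
      const peer = this.rotate();
      try {
        return await peer.handle(request);
      } catch {
        this.peers = this.peers.filter((p) => p.id !== peer.id);
      }
    }
    throw new Error("all peers unreachable (partition or total failure)");
  }
}
```

Note the last line: if a partition cuts you off from every peer, the pool as a whole still fails as one unit, which is the point about partition resistance above.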

Dan Finlay 🦊
@danfinlay
On that 2nd point: I definitely adopted "SPOF" here to represent something a little different. I'm basically doing an actor-model analysis, and I think in the ocap literature you could call this a "vat". This "actor" may well be a collection of load-balanced peers, but it can be abstracted as providing a single reliable service, and the collection of providers & the way they are load balanced can be treated as one service that is more "resilient" than any one of the providers, probably at some cost (latency, performance).

So I don't mean a SPOF here must be one machine or something. In this model, the entire Ethereum blockchain can be considered a SPOF. The entire IPFS protocol could be considered a SPOF. They might have the best resilience of any SPOF, but each still represents a single system with some ultimate set of resilience properties (sketch below).
1 reply
0 recast
0 reaction
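
A small sketch of that abstraction, again with assumed names: internally the service fans out across redundant backends, but callers depend on exactly one logical service, so the composite is still a single (if more resilient) point of failure.

```typescript
// Sketch of the "single reliable service" framing: a collection of providers
// is wrapped behind one interface, so from the outside it is still a single
// point of failure — just one with better resilience properties.
// All names here are assumptions for illustration.

interface Service {
  query(input: string): Promise<string>;
}

// The internal fan-out is invisible to callers: they depend on exactly one
// logical service, whatever its internal redundancy.
class CompositeService implements Service {
  constructor(private backends: Service[]) {}

  async query(input: string): Promise<string> {
    let lastError: unknown;
    for (const backend of this.backends) {
      try {
        return await backend.query(input);
      } catch (err) {
        lastError = err; // try the next redundant backend
      }
    }
    // If every backend fails, the abstraction fails as a unit — the "SPOF".
    throw lastError ?? new Error("composite service unavailable");
  }
}
```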

Venkatesh Rao ☀️
@vgr
Ocap? And yeah, that’s a confusing use of SPOF. I think SPOF is best reserved for a single physical instance of a tangible class that constitutes an abstract actor. You can usually find one by going upstream. E.g. if there are a bunch of interchangeable clients, the SPOF is a codebase upstream. To address it, you’d have a diversity of flavors.

Linux is a good example. There are individual Linux computers in the billions, almost all functionally interchangeable with some peers for many functions, and thousands of configured flavors resting on dozens of distributions. The SPOF is some small subset of kernel code gated by BDFL Linus, I guess.
2 replies
0 recast
0 reaction