Cassie Heart
@cassie
Execution and well-conceived integration are the mandatory minimum for realizing the potential of new innovations. Grab a coffee (or hot chocolate if you're @greg), sit back, and relax; we're going on a deep dive.
9 replies
20 recasts
161 reactions

Cassie Heart
@cassie
The Apple ][ is frequently hailed as one of the major milestones in personal computing. Not only did it make home computing more accessible than ever, it also delivered many things we now take for granted that were incredible technological achievements for their time. Did you catch them in the picture above?
1 reply
0 recast
12 reactions

Cassie Heart
@cassie
To help demonstrate the difference, let's look at an example of what others in the industry were offering: the Commodore PET. See the difference now? The Apple ][ could do color - something no other personal computer was doing yet. The Apple ][ had floppy disks that were faster and higher capacity than any other. How?
1 reply
0 recast
7 reactions

Cassie Heart
@cassie
At the time, video signals for home computers utilized NTSC composite video, the kind that worked with television sets. These sets carried color through modulation of a chrominance subcarrier - a technique that typically required expensive dedicated chips to support.
1 reply
0 recast
9 reactions

Cassie Heart
@cassie
But a limited palette of colors could be achieved cheaply, if you could get the timing right, by emitting the signal in a particular way. Wozniak was predictably nerd-sniped by this, and pulled off the trick with the MOS 6502 clocked at 1.023 MHz - 2/7 of the NTSC color subcarrier rate.
2 replies
0 recast
9 reactions
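
A quick sanity check of that timing relationship, as a minimal Python sketch using the standard NTSC color subcarrier of 315/88 MHz (the 2/7 ratio is the one stated above):

```python
from fractions import Fraction

# NTSC color subcarrier: 315/88 MHz ≈ 3.579545 MHz (standard value)
subcarrier_mhz = Fraction(315, 88)

# Apple ][ CPU clock, per the cast above: 2/7 of the subcarrier rate
cpu_clock_mhz = subcarrier_mhz * Fraction(2, 7)

print(float(subcarrier_mhz))  # ≈ 3.5795 MHz
print(float(cpu_clock_mhz))   # ≈ 1.0227 MHz, i.e. the 6502's ~1.023 MHz clock
```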

Cassie Heart
@cassie
Integrating controlled timing from the deepest parts of the hardware up through the software unlocked color displays for home users, while its competitor, the PET, built around the exact same CPU, could not do this. But the division runs deeper. A sentiment was growing that floppy disks were needed for non-hobbyist PCs.
1 reply
0 recast
7 reactions

Cassie Heart
@cassie
Woz rose to the challenge again, this time taking the raw hardware that physically controlled the disk and augmenting it with a cheaper, software- and timing-driven approach. This reduced the number of chips in use (and cost, saving Apple >$300 per drive) while simultaneously making the drive faster and able to store more data.
1 reply
0 recast
8 reactions

Cassie Heart
@cassie
Commodore's PET disk drive design ultimately required two processors, each on the scale of the Apple ]['s main central processing unit, just to function - and for all that expense, the PET still couldn't display color.
2 replies
0 recast
8 reactions

Cassie Heart
@cassie
As I am wont to do, I see a strong parallel here to the crypto industry. When we look at crypto today, the innovations that are happening are frequently far downstream of ossified architectural designs, which results in huge efficiency losses.
1 reply
1 recast
10 reactions

Cassie Heart
@cassie
The original blockchain was designed from a minimalist point of view: encapsulate transactions of a single coin as succinctly as possible, encode them as simple Forth-like scripts, and make their inclusion provable via Merkle proofs.
1 reply
0 recast
8 reactions
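
For the unfamiliar, here is a minimal, hypothetical sketch of a Merkle inclusion proof (not Bitcoin's exact serialization): one sibling hash per level, so the proof grows with log2 of the number of leaves.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def _next_level(level):
    # Duplicate the last node on odd-length levels, then hash pairs together.
    if len(level) % 2:
        level = level + [level[-1]]
    return [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]

def merkle_root(leaves):
    level = [h(l) for l in leaves]
    while len(level) > 1:
        level = _next_level(level)
    return level[0]

def merkle_proof(leaves, index):
    # Collect one sibling hash per level for the leaf at `index`.
    level, proof = [h(l) for l in leaves], []
    while len(level) > 1:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append(level[sibling])
        level = _next_level(level)
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    node = h(leaf)
    for sibling in proof:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

txs = [f"tx-{i}".encode() for i in range(8)]
root = merkle_root(txs)
proof = merkle_proof(txs, 5)
print(verify(txs[5], 5, proof, root))           # True
print(len(proof), "sibling hashes x 32 bytes")  # log2(8) = 3 siblings
```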

Cassie Heart
@cassie
At the time, Merkle trees were one of the most efficient means to commit to contiguous segments of data and verify inclusion. The compactness of the proof, however, leaves a little to be desired, especially when scaling out to the whole of human commerce, or, even more loftily, trying to be a world computer.
2 replies
0 recast
7 reactions

Cassie Heart
@cassie
For starters, in the original design you'd have to hold the entire history to verify a given transaction yourself. Many chains moved to including a Merkle root of the overall ledger state in the block header to avoid that dilemma, but still mandated that full nodes synchronize the full state.
1 reply
0 recast
5 reactions

Cassie Heart
@cassie
Additionally, some have used alternatives to plain Merkle trees, such as Ethereum's choice of PATRICIA-Merkle trees, where a proper proof requires more sibling hashes at every level of the tree - currently about 3,584 bytes. Even re-encoded as a binary tree, a proof still requires nearly a kilobyte.
1 reply
0 recast
6 reactions
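
Rough sibling-count arithmetic behind those figures (a simplification: real PATRICIA-Merkle proofs serialize whole nodes, so exact byte counts differ a bit):

```python
import math

ACCOUNTS = 263_800_000   # 263.80M accounts, per the thread
HASH_BYTES = 32

# Hexary (branching factor 16) tree: up to 15 sibling hashes per level
depth16 = math.ceil(math.log(ACCOUNTS, 16))
print(depth16, depth16 * 15 * HASH_BYTES)  # 7 levels -> 3,360 bytes, near the ~3,584 cited

# Binary tree: only 1 sibling hash per level, but many more levels
depth2 = math.ceil(math.log2(ACCOUNTS))
print(depth2, depth2 * HASH_BYTES)         # 28 levels -> 896 bytes, "nearly a kilobyte"
```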

Cassie Heart
@cassie
Thankfully, bandwidth has improved, but so too have our cryptographic techniques: KZG commitments/proofs offer a very succinct scheme for proving inclusion in a set (more accurately, a vector) - the proof is constant size: the item you wish to prove was included, its position in the vector, and a single elliptic curve point. Nice!
1 reply
0 recast
8 reactions
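
The algebra a KZG opening proof leans on is easy to demo. Below is a toy sketch over a plain prime field: p(z) = y if and only if p(X) - y is exactly divisible by (X - z), and that quotient q(X) is the prover's witness. Real KZG commits to p and q as single elliptic curve points using the reference string and checks the same identity with a pairing; none of that machinery is modeled here.

```python
import random

P = 2**61 - 1  # a Mersenne prime standing in for the BLS12-381 scalar field

def poly_eval(coeffs, x):
    """Evaluate sum(coeffs[i] * x^i) mod P via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def divide_by_linear(coeffs, z):
    """Synthetic division of p(X) by (X - z), coefficients in ascending order.
    Returns (quotient coefficients, remainder); the remainder equals p(z)."""
    d = len(coeffs) - 1
    quotient = [0] * d
    b = coeffs[d] % P
    for i in range(d - 1, -1, -1):
        quotient[i] = b              # coefficient of X^i in q(X)
        b = (coeffs[i] + b * z) % P  # running remainder
    return quotient, b

random.seed(7)
p = [random.randrange(P) for _ in range(8)]  # a degree-7 polynomial (a committed vector)
z = random.randrange(P)
y = poly_eval(p, z)                          # the honestly claimed opening value

shifted = [(p[0] - y) % P] + p[1:]           # p(X) - y
q, rem = divide_by_linear(shifted, z)
print(rem == 0)       # True: a polynomial witness q(X) exists for the true y

bad = [(p[0] - (y + 1)) % P] + p[1:]         # p(X) - (y + 1), a false claim
_, rem_bad = divide_by_linear(bad, z)
print(rem_bad == 0)   # False: no witness, so no valid constant-size proof
```

Once q(X) is committed, the proof itself is a single group element (48 bytes on BLS12-381) regardless of how large the committed vector is.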

Cassie Heart
@cassie
What's not so nice: the time to compute commitments over larger sets. Complexity-wise, constructing the commitment is O(n^2), not to mention you also have to have constructed a structured reference string whose length (more accurately, degree) matches the maximum size of the set.
1 reply
0 recast
5 reactions

Cassie Heart
@cassie
For Ethereum, at 263.80M accounts, we're talking about a structured reference string of degree 2^28, and that's just for today! If you thought contributing to the Ethereum KZG ceremony took long at 2^15 elements, you'd be in a much worse world of pain (by the most recent report of such a setup, a single contribution took two days).
1 reply
0 recast
5 reactions
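
Back-of-the-envelope for that setup, assuming the usual 48-byte compressed BLS12-381 G1 points (the 2^28 and 2^15 element counts are from the casts above):

```python
ACCOUNTS = 263_800_000
degree = 1
while degree < ACCOUNTS:
    degree *= 2
print(degree == 2**28)              # True: next power of two above 263.8M

G1_BYTES = 48                       # compressed BLS12-381 G1 point
print(2**28 * G1_BYTES / 1e9, "GB") # ≈ 12.9 GB of G1 powers alone
print(2**15 * G1_BYTES / 1e6, "MB") # the ceremony's 2^15 setup: ≈ 1.6 MB
```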

Cassie Heart
@cassie
What about a compromise? Keep the tree structure, lose the sibling hashes, and make each traversal step of the tree a KZG commitment/proof. At log16(263.80M) ≈ 7, this reduces the proof size to 7 points (48 bytes each), or 336 bytes. Not bad! If you followed this so far, congrats, you now know what a Verkle tree is!
2 replies
0 recast
9 reactions
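
The arithmetic, using the parameters stated above (branching factor 16, one 48-byte KZG point per level; Ethereum's actual verkle proposal uses a wider branching factor and a different commitment scheme, but the shape of the calculation is the same):

```python
import math

ACCOUNTS = 263_800_000
BRANCHING = 16
POINT_BYTES = 48

depth = math.ceil(math.log(ACCOUNTS, BRANCHING))
print(depth)                # 7 levels
print(depth * POINT_BYTES)  # 336 bytes: one commitment/proof point per level
```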

Cassie Heart
@cassie
So what is Ethereum doing with verkle trees? Nothing yet! Instead, Ethereum is using simple vector commitments over blobs of up to 4096 elements (32 bytes each), allowing up to six of these blobs to be posted per block via a new transaction type, with a dynamic fee market of its own. How's that going for them?
1 reply
0 recast
7 reactions
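
Quick sizing of that scheme, using the figures above (4096 x 32-byte elements per blob, up to six blobs per block, 12-second slots):

```python
FIELD_ELEMENTS = 4096
ELEMENT_BYTES = 32
MAX_BLOBS = 6
SLOT_SECONDS = 12

blob_bytes = FIELD_ELEMENTS * ELEMENT_BYTES
print(blob_bytes // 1024, "KiB per blob")                         # 128 KiB
print(MAX_BLOBS * blob_bytes // 1024, "KiB max per block")        # 768 KiB
print(MAX_BLOBS * blob_bytes // SLOT_SECONDS, "bytes/s ceiling")  # 65,536 bytes/s
```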

Cassie Heart
@cassie
Inscriptions showed up (as I predicted), more L2s showed up than there are blob slots per block, and this is just beginning to ramp up. The competitive fee market, separate from regular transactions, has meant the original dramatic 100x+ price reduction currently sits closer to a modest 3x.
1 reply
0 recast
6 reactions

Cassie Heart
@cassie
If pressure continues, this will soon trend towards blobs becoming _more expensive_ to use than calldata, on top of the accounts producing the blobs still needing to exist in the world state PATRICIA-Merkle tree. But hey, no worries, verkle trees will make that more efficiently verifiable, right? Right?
1 reply
0 recast
6 reactions

Cassie Heart
@cassie
According to certain active voices in the space, they're going to be bringing "the next 1 billion users onchain". What does that look like? Ethereum has accumulated some cruft over time, so despite a healthy average over the last year of about 500k daily active addresses, we're sitting at 263.80M accounts.
1 reply
0 recast
4 reactions

Cassie Heart
@cassie
This is a rough estimate (before maxis surely @ me for this), but we need some idea of how much cruft we can reasonably assume will accumulate, based on the historic data. At a ratio of ~528 addresses per active user, bringing 1 billion users onchain means adding on the order of 528 billion new addresses.
2 replies
0 recast
5 reactions
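
The extrapolation, spelled out with the thread's figures (obviously a crude linear projection):

```python
ACCOUNTS = 263_800_000        # total accounts today
DAILY_ACTIVE = 500_000        # ~average daily active addresses over the last year

ratio = ACCOUNTS / DAILY_ACTIVE
print(round(ratio))           # ~528 addresses per active user

NEW_USERS = 1_000_000_000     # "the next 1 billion users"
print(round(ratio) * NEW_USERS)  # 528,000,000,000 new addresses
```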

Cassie Heart
@cassie
Even with succinct proofs (10 points for a tree that large, or 480 bytes), the sheer scale of data that must be held is staggering. Not to mention every full node must hold it! So what about sharding? Ah, about that...
1 reply
0 recast
5 reactions
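
And the proof math at that scale, under the same branching-factor-16 assumption as before - the proofs stay small, while the state every full node must hold does not:

```python
import math

TOTAL_ADDRESSES = 528_000_000_000
BRANCHING = 16
POINT_BYTES = 48

depth = math.ceil(math.log(TOTAL_ADDRESSES, BRANCHING))
print(depth)                # 10 levels
print(depth * POINT_BYTES)  # 480 bytes per proof
```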

Cassie Heart
@cassie
Given that L2s have been pushed into a turf war over very limited blob space at high prices, some concessions will need to be made here, and ultimately many of these billions of addresses would never be able to live on the L1 proper. But wait – these L2s are mostly just variations of Ethereum itself.
1 reply
0 recast
5 reactions

Cassie Heart
@cassie
How will they manage the scale? Many of them rely on being centralized, making them little more than glorified databases with proofs. But even this has its limits – namely, gas limits. Gas limits exist on Ethereum to ensure that many different computers can successfully verify a block quickly.
1 reply
0 recast
6 reactions

Cassie Heart
@cassie
While many of these centralized L2s make roadmap promises to decentralize, the likelihood of this is contingent on never raising the gas limit too far – as one can very easily see with high-throughput chains like Solana, large blocks have a centralizing effect, because validation becomes limited to very high-end hardware.
1 reply
0 recast
7 reactions

Cassie Heart
@cassie
But the billions of people out there just waiting to be on-chain are not in one country. They do not share the same laws, or the same latencies to the centralized sequencers, and invariably the beauty of decentralization - unfettered access to a new economy - becomes walled off, limited by geography and law.
1 reply
0 recast
5 reactions

Cassie Heart
@cassie
Even so, billions will not fit on a single sequencer, so many L2s would have to work in tandem. The crux of the issue: they still have to reach consensus at the L1 before they can reconcile state with one another, leading to minutes-long latencies between chains even in the best trustless case.
1 reply
0 recast
6 reactions

Cassie Heart
@cassie
All the while, all this magnificent cryptographic novelty is being used at great expense – to build a slower, more expensive floppy disk and monochrome screen. Let's consider what Quilibrium's finalized architecture looks like. Quilibrium also utilizes KZG commitments, and has a partitioned layered proof structure.
1 reply
1 recast
5 reactions

Cassie Heart
@cassie
But where it differs is where the magic happens. At the highest level, replicated across the network, are a mere 256 points (in our case 74 bytes each, or ~19KB). The 256 points are aligned in a bipartite graph with 65,536 points, forming the collective core shard commitments.
2 replies
0 recast
7 reactions
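
Working through the sizes stated in the cast (256 top-level points at 74 bytes each; the 65,536 figure reads as 256 x 256, but take that pairing as my reading of the numbers, not a spec):

```python
TOP_LEVEL_POINTS = 256
POINT_BYTES = 74

print(TOP_LEVEL_POINTS * POINT_BYTES)       # 18,944 bytes ≈ ~19KB replicated network-wide
print(TOP_LEVEL_POINTS * TOP_LEVEL_POINTS)  # 65,536 points forming the core shard commitments
```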

Nico Gallardo 🍄
@nicnode
VDF 🧐🧐🧐
0 reply
0 recast
0 reaction