mert
@0xmert
hi @vitalik.eth — I am genuinely interested in your thoughts on the nomenclature here if you can spare a few mins. Seems there are a ton of disagreements on all dimensions, and it would be helpful imo. https://x.com/0xMert_/status/1804244166584516865
3 replies
8 recasts
126 reactions

Vitalik Buterin
@vitalik.eth
So my current understanding of the scheme (from reading https://www.zkcompression.com/learn/core-concepts):

1. You have a new class of accounts. For these accounts, only the hash of their state is stored onchain.

2. To interact with these accounts, you make a tx which specifies the pre-state-hashes of those N accounts and the post-state-hashes, and provides a validity proof (which I assume means a ZK-SNARK).

3. The new state is required to be public (which is reasonable; otherwise you could send someone a random amount of money and their account would become inaccessible to them. You could get around this by making it a UTXO system, but that would be a significant limitation).

4. QUESTION: the docs say 128 bytes for the validity proof. What proof scheme is this?

5. QUESTION: do the contents of a transaction have to be made public, or just the state delta?

I guess this feels to me like a stateless client architecture more than anything else.
5 replies
4 recasts
154 reactions
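For concreteness, a minimal sketch of the transaction shape described in point 2: pre- and post-state hashes for the N touched accounts plus one validity proof. All names and sizes here are illustrative assumptions, not the actual Light Protocol / ZK Compression types:

```rust
/// Hypothetical payload for a transaction touching N compressed
/// accounts: the hash of each account's state before and after the
/// tx, the published new states (point 3), and one validity proof
/// covering the whole transition.
pub struct CompressedStateTransition {
    /// Hash of each touched account's state before the tx.
    pub pre_state_hashes: Vec<[u8; 32]>,
    /// Hash of each touched account's state after the tx.
    pub post_state_hashes: Vec<[u8; 32]>,
    /// The new states themselves, made public so recipients can
    /// reconstruct their accounts off-chain.
    pub new_states: Vec<Vec<u8>>,
    /// Validity proof; the docs quote 128 bytes (question 4).
    pub validity_proof: [u8; 128],
}
```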

Vitalik Buterin
@vitalik.eth
What confuses me is, how does this achieve anything like the numbers you're claiming??? It feels like if you're doing it separately per-tx, the overhead of verifying a SNARK would be *higher* than the cost of doing a few bit-twiddling and hashing operations (which is what e.g. token transfers are). The gains of ZK rollups come from having *one* SNARK wrapping *many* transactions.
4 replies
3 recasts
38 reactions
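A back-of-envelope version of that amortization argument, with deliberately made-up cost numbers (the real figures depend on the proof system and the chain's fee model):

```rust
/// If one proof covers `txs_per_proof` transitions, each transition
/// only pays its share of the verification cost.
fn amortized_verify_cost(snark_verify_cost: u64, txs_per_proof: u64) -> u64 {
    snark_verify_cost / txs_per_proof
}

fn main() {
    // Illustrative units only, not measured: assume SNARK verification
    // costs ~200_000 and a plain token transfer ~5_000.
    let (verify, transfer) = (200_000u64, 5_000u64);
    // One proof per tx: verification alone dwarfs the work it replaces.
    assert!(amortized_verify_cost(verify, 1) > transfer);
    // One proof wrapping 100 txs: 2_000 each, cheaper than a transfer.
    assert!(amortized_verify_cost(verify, 100) < transfer);
}
```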

mert
@0xmert
thanks for the reply! will just bump the response from the founder of Light for the questions: https://warpcast.com/swen-sjn/0xa6ac8750
1 reply
1 recast
6 reactions

Ajit Tripathi
@chainyoda
Mert is "modularity's man in monolithic Kremlin"
2 replies
1 recast
5 reactions

swen
@swen-sjn
Thank you for taking the time!

(3) It's actually a UTXO model underneath! Currently using a data structure based on Concurrent Merkle trees, which lets the root be patched on-chain and thus allows for some concurrency (https://drive.google.com/file/d/1BOpa5OFmara50fTvL0VIVYjtg-qzHCVc/view). Each transaction specifies input and output accounts, appends the outputs to the tree, and nullifies the inputs.

(4) The proof scheme is Groth16 over bn254, applying a bitmask to the X coordinate in the client and recovering the Y coordinate from X on-chain.

(5) The protocol requires the base layout for each input and output account to be public (simplified: ownerContract, lamports, dataHash). The smart contract invoking the protocol manages the account's data field, so its visibility depends on that contract.
0 replies
0 recasts
11 reactions
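If the scheme is Groth16 with X-coordinate compression as swen describes, that would also account for the 128 bytes in question 4: a Groth16 proof is two G1 points and one G2 point, and storing each as just its X coordinate with the Y parity bitmasked into spare high bits gives 32 + 64 + 32 = 128 bytes (versus 256 uncompressed). Below is a sketch of that layout and of Y recovery on bn254's G1 curve (y^2 = x^3 + 3), using the arkworks types; hypothetical code, not the actual on-chain implementation:

```rust
use ark_bn254::Fq;
use ark_ff::{BigInteger, Field, PrimeField};

/// Assumed layout for a compressed Groth16 proof over bn254:
/// (A: G1, B: G2, C: G1) stored as X coordinates only, with the
/// Y-parity flag bitmasked into otherwise-unused high bits.
/// 32 + 64 + 32 = 128 bytes, matching the figure in the docs.
pub struct CompressedProof {
    pub a: [u8; 32], // G1 point A: X coordinate + flag bit
    pub b: [u8; 64], // G2 point B: X coordinate over Fq2 + flag bit
    pub c: [u8; 32], // G1 point C: X coordinate + flag bit
}

/// Recover a G1 point's Y coordinate from X, as done on-chain per (4).
/// bn254's G1 curve is y^2 = x^3 + 3, so Y is a square root of
/// x^3 + 3; the flag bit picks between the two roots y and -y.
fn recover_y(x: Fq, y_is_odd: bool) -> Option<Fq> {
    let y2 = x.square() * x + Fq::from(3u64); // x^3 + 3
    let y = y2.sqrt()?; // None if x is not on the curve
    Some(if y.into_bigint().is_odd() == y_is_odd { y } else { -y })
}
```

The simplified base layout from (5) would then be the public part of each input/output account (ownerContract, lamports, dataHash), with everything behind dataHash left to the invoking contract.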

Danny
@mad-scientist
How will that help reduce state size? If anything, it opens the door to an attack that increases chain state: anyone can make cheap proofs for transactions that maximize the new state size for each smart contract. (Sorry, I know I'm asking the wrong person, but it's not like I'm going to get an answer there.)
0 replies
0 recasts
0 reactions