https://onchainsummer.xyz
0 reply
26 recasts
26 reactions

jesse.base.eth 🔵
@jessepollak
we aligned internally today that our medium term scaling goal is to target 1 Ggas/s on @base
our current target is 2.5 Mgas/s, so ~400x, with a long term goal of pushing even further past >1000x
it's pronounced "gigagas" and this is our broadband moment
29 replies
40 recasts
379 reactions
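The multiples quoted in the cast can be sanity-checked with a few lines; a back-of-envelope sketch in Python, assuming only the figures stated in the post (2.5 Mgas/s today, 1 Ggas/s goal):

```python
# Back-of-envelope check of the scaling multiple quoted in the cast.
# Figures are the ones stated in the post; nothing else is assumed.
current_target = 2.5e6    # 2.5 Mgas/s, Base's stated current target
medium_term_goal = 1e9    # 1 Ggas/s ("gigagas"), the stated goal

multiple = medium_term_goal / current_target
print(multiple)  # 400.0 -> the "~400x" in the post
```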

androidsixteen
@androidsixteen.eth
What's your OKR for managing state growth and ensuring full nodes can stay synced with the network on consumer hardware? Throughput is great, but you're just big-blocking unless you have a plan for staying decentralized
2 replies
0 recast
1 reaction

shazow
@shazow.eth
Isn't this conflating the decentralization requirements of L1s? It's important for an L1 to be runnable on consumer hardware because there isn't another chain to fall back to when things go wrong. L2s have that luxury, so we can make more generous assumptions (like ignoring full state transition history).
1 reply
0 recast
1 reaction

androidsixteen
@androidsixteen.eth
Are you suggesting that nobody besides the sequencer should need to run a full node for the L2?
1 reply
0 recast
1 reaction

shazow
@shazow.eth
I'm suggesting no one needs to be able to replay the entire history from genesis for L2s, as an example. Yes, many infrastructure participants need to be able to run verifying nodes, but those can be partial/light, etc.
1 reply
0 recast
1 reaction

androidsixteen
@androidsixteen.eth
Splitting optimistic vs. zk rollups:
- zk: main assumption here would be that the prover can keep up with the throughput, which still takes into consideration another property besides gas/s
- optimistic: don't you need a robust set of nodes that can check the sequencer's work?
1 reply
0 recast
0 reaction

shazow
@shazow.eth
1. Yes, but worst case the sequencer goes down and we have to force-include transactions to escape
2. Yes, but they only need the last agreed-upon state (checkpoint) + new proposed state to check the work.
1 reply
0 recast
1 reaction
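The "checkpoint + new proposed state" idea in point 2 can be sketched in a few lines. This is a toy illustration, not real OP Stack code: `state_root`, `apply_batch`, and the (account, delta) transaction format are all made up for the example, and a hash over the state dict stands in for a real Merkle commitment. The point it demonstrates is that the verifier re-executes only the new batch, never history from genesis:

```python
# Toy sketch: verify a sequencer's proposed state root using only the last
# agreed-upon checkpoint plus the new batch. All names/formats are invented
# for illustration; a real rollup uses Merkle commitments, not this hash.
from hashlib import sha256

def state_root(state: dict) -> str:
    # Stand-in for a real state commitment (e.g. a Merkle root).
    return sha256(repr(sorted(state.items())).encode()).hexdigest()

def apply_batch(state: dict, batch: list) -> dict:
    # Toy state transition: each tx is an (account, delta) pair.
    new = dict(state)
    for account, delta in batch:
        new[account] = new.get(account, 0) + delta
    return new

def check_proposal(checkpoint: dict, batch: list, proposed_root: str) -> bool:
    # Re-execute only the batch on top of the checkpoint, compare roots.
    return state_root(apply_batch(checkpoint, batch)) == proposed_root

checkpoint = {"alice": 10, "bob": 5}          # last agreed-upon state
batch = [("alice", -3), ("bob", 3)]           # new proposed transactions
honest_root = state_root(apply_batch(checkpoint, batch))

print(check_proposal(checkpoint, batch, honest_root))  # True
print(check_proposal(checkpoint, batch, "deadbeef"))   # False
```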

androidsixteen
@androidsixteen.eth
1. agreed, it's a partial truth to suggest that we just need to crank gas/s up, without mentioning proving times, let alone having validity proofs live
2. honest question, why don't sequencers just post massive blocks now?
1 reply
0 recast
1 reaction

shazow
@shazow.eth
1. Improving the bottlenecks that prevent us from cranking up gas/s *is* what opstack is working on, don't think it's a generous read otherwise?
2. The blocks are as big as the compressed calldata requires for the L2 transactions? Why would they be more massive?
1 reply
0 recast
0 reaction

shazow
@shazow.eth
2. Oh you mean why they can't crank up the computation gas limit per block? Yea, basically what you said: Infrastructure partner nodes need to be able to keep up (but in L2 land we can do optimizations that we can't do in L1 land).
1 reply
0 recast
0 reaction

androidsixteen
@androidsixteen.eth
1. I'm not shitting on the op team, I'm shitting on "this is our broadband moment" - empty marketing without discussing the work ahead to use validity proofs (which many other teams are also innovating on)
2. Exactly. What optimizations outside of checkpointing are possible?
1 reply
0 recast
0 reaction

shazow
@shazow.eth
1. IMO that's intentionally reading negativity into it, "rollups as broadband" has been discussed at length for months already.
1 reply
0 recast
0 reaction

shazow
@shazow.eth
2. L2s can decide not to worry about client diversity the way L1 has to, because there's no stake-slashing risk from bugs. An L2 can decide to not even use the EVM if it doesn't want to. An L2 doesn't need historic transaction receipts; it can operate off monthly snapshots just fine.
1 reply
0 recast
0 reaction

shazow
@shazow.eth
(More off the top of my head) Sequencers can exercise parallelism in ways L1 could not, because they de facto control transaction flow.
1 reply
0 recast
0 reaction

androidsixteen
@androidsixteen.eth
1. Agree to disagree here. We all have our kool-aid thresholds :)
2. Great points, I like what MegaETH is doing here: https://github.com/megaeth-labs. They're moving state into RAM and improving state trie reads, as well as parallelizing execution. Alt-VM compromises devx, but I get what you're saying.
1 reply
0 recast
0 reaction

shazow
@shazow.eth
From the perspective of "infrastructure partners", anyone relying on rollup state offchain (e.g. Circle) needs to be confident that some rollup transaction is valid. This means either a ZK proof, or waiting out the full optimistic challenge window, or executing it yourself. But think of it from Circle's perspective:
1 reply
0 recast
0 reaction

shazow
@shazow.eth
I could be running a node that stores only the state relevant to the contracts I care about. I'm not a validator, I'm not producing blocks, I can bootstrap off of checkpoints fairly quickly. In this particular example, there are easy optimizations we just can't do for a generalized L1 node, right?
3 replies
0 recast
0 reaction

shazow
@shazow.eth
Regarding the marketing, honestly I agree with you -- I'm not a fan of tech history analogies in general. Are we in the dialup era? Are we in the iphone era? Are we in the HTTP 1.1 era? They're all silly and flawed analogies that mislead more than help, but people love memes.
1 reply
0 recast
0 reaction

androidsixteen
@androidsixteen.eth
Despite all our back and forth, glad we're in agreement here. There's a ton of hopium right now. Look at the comments on OP's thread lol
0 reply
0 recast
0 reaction