Disclosure: The views and opinions expressed here belong solely to the author and do not represent the views and opinions of crypto.news' editorial.
The second quarter of 2025 has been a reality check for blockchain scaling. As capital keeps pouring into rollups and sidechains, the cracks in the layer-2 model are widening. The original promise of L2s was simple: scale up L1s. Yet the costs, delays, and fragmentation in liquidity and user experience keep stacking up.
Summary
- L2s were meant to scale Ethereum, but they have introduced new problems while relying on centralized sequencers that can become single points of failure.
- At their core, L2s handle sequencing and state computation, using Optimistic or ZK Rollups to settle on L1. Each comes with trade-offs: long finality in Optimistic Rollups and heavy computational costs in ZK Rollups.
- Future efficiency lies in separating computation from verification: using centralized supercomputers for computation and decentralized networks for parallel verification, enabling scalability without sacrificing security.
- The "total order" model of blockchains is outdated; moving toward local, account-based ordering can unlock massive parallelism, ending the "L2 compromise" and paving the way for a scalable, future-ready web3 foundation.
New projects like stablecoin payments are starting to question the L2 paradigm, asking whether L2s are truly secure, and whether their sequencers are really single points of failure and censorship. Often, they end up taking a pessimistic view: perhaps fragmentation is simply inevitable in web3.
Are we building a future on a solid foundation or a house of cards? L2s must face and answer these questions. After all, if Ethereum's (ETH) base consensus layer were inherently fast, cheap, and infinitely scalable, the entire L2 ecosystem as we know it today would be redundant. Countless rollups and sidechains were proposed as "L1 add-ons" to mitigate the fundamental constraints of the underlying L1s. It is a form of technical debt: a complex, fragmented workaround that has been offloaded onto web3 users and developers.
To answer these questions, we need to deconstruct the entire concept of an L2 into its fundamental components, revealing a path toward a more robust and efficient design.
An anatomy of L2s
Structure determines function. It is a basic principle in biology that also holds in computer systems. To decide the right structure and architecture for L2s, we must examine their functions carefully.
At its core, every L2 performs two critical functions: sequencing, i.e., ordering transactions; and computing and proving the new state. A sequencer, whether a centralized entity or a decentralized network, collects, orders, and batches user transactions. This batch is then executed, resulting in an updated state (e.g., new token balances). This state must be settled on the L1 for security via Optimistic or ZK Rollups.
Optimistic Rollups assume all state transitions are valid and rely on a challenge period (often 7 days) during which anyone can submit fraud proofs. This creates a major UX trade-off: long finality times. ZK Rollups use zero-knowledge proofs to mathematically verify the correctness of every state transition before it hits L1, enabling near-instant finality. The trade-off is that they are computationally intensive and complex to build. ZK provers themselves can be buggy, leading to catastrophic consequences, and formal verification of these provers, if feasible at all, is very expensive.
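To make the two core functions concrete, here is a minimal, purely illustrative sketch in Python. It is not any production rollup's code: the transaction format, the nonce-based ordering rule, and the flat balance map are all simplifying assumptions. It shows sequencing a batch, executing it into a new state, and hashing that state into the short commitment that would actually be posted to L1.

```python
import hashlib
import json

def sequence(mempool):
    """Order pending transactions into a batch (here: by sender, then nonce)."""
    return sorted(mempool, key=lambda tx: (tx["sender"], tx["nonce"]))

def execute(state, batch):
    """Apply a batch to the current balances, producing the new state."""
    state = dict(state)
    for tx in batch:
        state[tx["sender"]] -= tx["amount"]
        state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]
    return state

def state_commitment(state):
    """Hash the state so only a short commitment needs to be settled on L1."""
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

mempool = [
    {"sender": "alice", "nonce": 1, "to": "bob", "amount": 5},
    {"sender": "alice", "nonce": 0, "to": "carol", "amount": 3},
]
state = {"alice": 10, "bob": 0, "carol": 0}

batch = sequence(mempool)
new_state = execute(state, batch)           # {"alice": 2, "bob": 5, "carol": 3}
commitment = state_commitment(new_state)    # what gets posted to L1
```

Everything before the commitment is off-chain work; the L1 only ever sees the short hash, which is exactly why the security of the claim behind it must be established by fraud proofs or validity proofs.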
Sequencing is a governance and design choice for each L2. Some pick a centralized solution for efficiency (or perhaps for that censorship power; who knows), while others pick a decentralized solution for greater fairness and robustness. Ultimately, each L2 decides how to do its own sequencing.
State claim generation and verification is where we can do much, much better on efficiency. Once a batch of transactions is sequenced, computing the next state is a purely computational task, and it can be done using just a single supercomputer, focused solely on raw speed, without the overhead of decentralization at all. That supercomputer could even be shared among L2s!
Once this new state is claimed, its verification becomes a separate, parallelizable process. A massive network of verifiers can work in parallel to check the claim. This is the very philosophy behind Ethereum's stateless clients and high-performance implementations like MegaETH.
Parallel verification is infinitely scalable
Parallel verification is infinitely scalable. No matter how fast L2s (and that supercomputer) produce claims, the verification network can always catch up by adding more verifiers. The latency here is precisely the verification time, a fixed, minimal amount. This is the theoretical optimum achieved by using decentralization effectively: to verify, not to compute.
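The "verify, don't compute" split can be sketched in a few lines. In this toy model (the claim tuple format and the re-execution rule are assumptions for illustration), each claim is independent, so checking claims is embarrassingly parallel: throughput grows with the worker count, while latency stays at the cost of checking one claim.

```python
from concurrent.futures import ThreadPoolExecutor

def verify_claim(claim):
    """Re-execute the batch against the pre-state and compare with the
    claimed post-state. Each claim is independent of every other claim."""
    pre_state, batch, claimed_post = claim
    post = dict(pre_state)
    for sender, to, amount in batch:
        post[sender] -= amount
        post[to] = post.get(to, 0) + amount
    return post == claimed_post

claims = [
    ({"a": 10, "b": 0}, [("a", "b", 4)], {"a": 6, "b": 4}),  # honest claim
    ({"a": 10, "b": 0}, [("a", "b", 4)], {"a": 9, "b": 4}),  # fraudulent claim
]

# Adding workers raises throughput; each verdict still takes one
# verification's worth of latency.
with ThreadPoolExecutor(max_workers=4) as pool:
    verdicts = list(pool.map(verify_claim, claims))
```

The first claim verifies and the second is rejected; in a real network the rejection would trigger a fraud proof or simply cause honest verifiers to ignore the claim.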
After sequencing and state verification, the L2's job is nearly complete. The final step is to publish the verified state to a decentralized network, the L1, for ultimate settlement and security.
This final step exposes the elephant in the room: blockchains are terrible settlement layers for L2s! The main computational work is done off-chain, yet L2s must pay a huge premium to finalize on an L1. They face a dual overhead: the L1's limited throughput, burdened by its total, linear ordering of all transactions, creates congestion and high costs for posting data. On top of that, they must endure the L1's inherent finality delay.
For ZK Rollups, this delay is minutes. For Optimistic Rollups, it is compounded by a week-long challenge period, a necessary but costly security trade-off.
Farewell to the "total order" myth in web3
Since Bitcoin (BTC), people have been trying hard to squeeze all transactions of a blockchain into a single total order. We are talking about block chains, after all! Unfortunately, this "total order" paradigm is a costly myth and is clearly overkill for L2 settlement. How ironic that one of the world's largest decentralized networks, the "world computer," behaves just like a single-threaded desktop!
It is time to move on. The future is local, account-based ordering, where only transactions interacting with the same account need to be ordered, unlocking massive parallelism and true scalability.
Global ordering of course implies local ordering, but it is also an incredibly naive and simplistic solution. After 15 years of "blockchain," it is time we open our eyes and handcraft a better future. The distributed systems research community has already moved from the strong consistency theory of the 1980s (which is what blockchains implement) to the strong eventual consistency model of around 2015, which unleashes parallelism and concurrency. It is time for the web3 industry to move on as well, to leave the past behind and follow forward-looking scientific progress.
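Account-local ordering has a simple operational meaning: two transactions only need a relative order if they touch a shared account. A minimal sketch (the two-field transaction shape and the union-find grouping are illustrative assumptions, not any specific protocol) partitions a batch into conflict groups; transactions in different groups touch disjoint accounts and can execute concurrently in any relative order.

```python
from collections import defaultdict

def conflict_groups(txs):
    """Union-find over accounts: transactions sharing no account land in
    different groups, so the groups can be executed in parallel."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for tx in txs:
        union(tx["from"], tx["to"])

    groups = defaultdict(list)
    for tx in txs:
        groups[find(tx["from"])].append(tx)
    return list(groups.values())

txs = [
    {"from": "alice", "to": "bob", "amount": 1},
    {"from": "carol", "to": "dave", "amount": 2},  # disjoint accounts
    {"from": "bob", "to": "alice", "amount": 3},
]
groups = conflict_groups(txs)
# The alice/bob transfers share a group and stay ordered relative to each
# other; the carol/dave transfer forms its own group and can run concurrently.
```

Only intra-group order matters here, which is exactly the parallelism a global total order throws away.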
The age of the L2 compromise is over. It is time to build on a foundation designed for the future, from which the next wave of web3 adoption will come.
Xiaohong Chen
Xiaohong Chen is the Chief Technology Officer at Pi Squared Inc., working on fast, parallel, and decentralized systems for payments and settlement. His interests include program correctness, theorem proving, scalable ZK solutions, and applying these techniques to all programming languages. Xiaohong received his BSc in Mathematics at Peking University and his PhD in Computer Science at the University of Illinois Urbana-Champaign.