Imperative Pitfalls Pt. 2 (Execution Bottleneck)
In addition to producing poor UX and sub-optimal outcomes, relying on the imperative, execution-based model leads to inevitable market inefficiencies and rent-seeking.
The Promise of Modular Scalability: Unfulfilled?
The modular blockchain paradigm promised to improve scalability by decoupling execution from the rest of the blockchain stack and outsourcing it to specialized layers. The promise was that these layers could optimize for fast, scalable execution without having to worry about decentralization. By posting proofs to the underlying L1, they could inherit security from the parent chain and operate with weaker security guarantees on the execution layer itself.
Many believed that L2 fees would become negligible as innovations at the consensus and data availability (DA) layers opened up cheap blobspace. The expectation that execution would become the next bottleneck led to a massive focus on faster execution models, especially parallelization, as the primary method of scaling throughput.
This narrative has not panned out as expected. In practice, demand for outcomes far outstrips demand for execution. The imperative approach conflates the two, when in fact outcomes are the real bottleneck.
The Outcome Bottleneck
Sequencers on L2s create a form of artificial scarcity around execution, which lets them convert demand for an asset into demand for priority ordering and extract value from asset holders. If a given outcome (e.g. buying Asset A at price P) has limited availability, demand can easily outstrip supply, no matter how much execution throughput an L2 can support.
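To make this concrete, here is a minimal sketch in Python, using purely hypothetical numbers, of why outcome supply rather than execution capacity is the binding constraint: every buyer's transaction can be executed, but only as many buyers as there are units on offer actually obtain the outcome.

```python
# Minimal sketch (hypothetical numbers): outcome supply, not execution
# capacity, is the binding constraint.

EXECUTION_CAPACITY = 10**9   # transactions per block -- effectively unlimited
UNITS_AT_PRICE_P = 100       # only 100 units of Asset A are offered at price P
BUYERS = 5_000               # buyers who each want to buy 1 unit at price P

# Every buyer's transaction can be executed...
executed = min(BUYERS, EXECUTION_CAPACITY)          # 5,000

# ...but only the first UNITS_AT_PRICE_P of them obtain the desired outcome.
outcomes_filled = min(BUYERS, UNITS_AT_PRICE_P)     # 100
outcomes_missed = BUYERS - outcomes_filled          # 4,900

print(f"transactions executed: {executed}")
print(f"outcomes obtained:     {outcomes_filled}")
print(f"buyers who paid for execution but missed the outcome: {outcomes_missed}")
```

Scaling execution further does nothing here: the remaining buyers still pay for execution without receiving the outcome they wanted.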
In an efficient market, when demand outstrips supply, the price moves in favor of the supplier. But on imperative execution layers, sequencers wield monopolistic control over how supply is allocated to demand. With the power to decide which transactions are included and in what order, they can extract value that should accrue to the seller.
Instead of charging fees that reflect the economic reality of their costs (i.e. the cost of blobspace and computation), sequencers can extract arbitrarily high fees due to their position as intermediaries. This not only results in worse outcomes for all users, it also decreases economic activity.
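The gap between a cost-reflective fee and an extracted fee can be sketched with hypothetical numbers: if the one available unit of an asset is listed at price P while buyers currently value it at a higher price, rational buyers bid priority fees up to roughly their surplus, and the sequencer, not the seller, captures that spread.

```python
# Minimal sketch (hypothetical numbers): a priority-ordering auction lets the
# sequencer capture the spread that an efficient market would hand to the seller.

COST_PER_TX = 0.001          # sequencer's real cost per tx: blobspace + computation (USD)
LISTED_PRICE_P = 100.00      # the one available unit of Asset A is offered at P
MARKET_VALUE = 105.00        # what buyers actually value the unit at right now

# Rational buyers bid priority fees up to their surplus (value minus listed price).
max_priority_bid = MARKET_VALUE - LISTED_PRICE_P     # 5.00

cost_reflective_fee = COST_PER_TX                    # ~0.001
extracted_fee = max_priority_bid                     # ~5.00, set by the ordering auction

print(f"fee reflecting sequencer costs:   ${cost_reflective_fee:.3f}")
print(f"fee extracted via priority order: ${extracted_fee:.2f}")
print(f"value diverted from the seller:   ${extracted_fee - cost_reflective_fee:.2f}")
```

Under these assumed numbers the sequencer's marginal cost is a fraction of a cent, while the extracted priority fee tracks the buyers' surplus; that difference is the rent described above.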
Read this thread for a deeper illustration of why this phenomenon occurs:
The imperative approach values demand for execution over demand for the results of that execution—i.e. outcomes. This undermines the promise of modular scalability. Instead, we need a system that optimizes for outcomes.