Simplifying the L1
2025 May 03
Special thanks to Fede, Danno Ferrin, Justin Drake, Ladislaus and
Tim Beiko for feedback and review
Ethereum aims to be the world ledger: the platform that stores
civilization's assets and records, the base layer for finance,
governance, high-value data authentication, and more. This requires two
things: scalability and resilience.
The Fusaka hard fork aims to increase the amount of data space
available to L2s by 10x, and the current
proposed 2026 roadmap includes a similarly large increase for the
L1. Meanwhile, the Merge upgraded Ethereum to proof of stake, Ethereum's
client diversity has improved rapidly, work on ZK verifiability and
quantum resistance is progressing, and
applications are getting more and
more robust.
The goal of this post is to shine a light on an aspect of
resilience (and ultimately scalability) that is just as important, yet
easy to undervalue: the protocol's simplicity.
One of the best things about Bitcoin is how beautifully
simple the protocol is:

There is a chain, which is made up of a series of blocks. Each block
is connected to the previous block by a hash. Each block's validity is
verified by proof of work, which means... checking that the first few
bytes of its hash are zeroes. Each block contains transactions.
Transactions spend coins that were either created through the mining
process, or outputted by previous transactions. And that's pretty much
it. Even a smart high school student is capable of fully wrapping their
head around and understanding the Bitcoin protocol. A programmer is
capable of writing a client as a hobby project.
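To make that concrete, here is a minimal Python sketch of the validity
check (illustrative only: real Bitcoin hashes an 80-byte header with
double SHA-256 and compares against a full 256-bit difficulty target,
not literal zero bytes):

```python
import hashlib

def block_hash(header: bytes) -> bytes:
    # Bitcoin hashes block headers with double SHA-256.
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

def meets_target(header: bytes, zero_bytes: int = 4) -> bool:
    # "The first few bytes of its hash are zeroes": a simplification
    # of the real target comparison, but the same basic idea.
    return block_hash(header).startswith(b"\x00" * zero_bytes)

def mine(prefix: bytes, zero_bytes: int = 2) -> bytes:
    # Brute-force a nonce until the header meets the target.
    nonce = 0
    while not meets_target(prefix + nonce.to_bytes(8, "little"), zero_bytes):
        nonce += 1
    return prefix + nonce.to_bytes(8, "little")
```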
Keeping the protocol simple brings a number of benefits that are key
to Bitcoin or Ethereum being a credibly neutral
and globally trusted base layer:
- It makes the protocol simpler to reason about, increasing
the number of people who understand and can participate in
protocol research, development and governance. It reduces the risk that
the protocol gets dominated by a technocratic class that has a high
barrier to entry.
- It greatly decreases the cost of creating new
infrastructure that interfaces with the protocol (eg. new
clients, new provers, new logging and other developer tools).
- It reduces long-term protocol maintenance costs.
- It reduces the risk of catastrophic bugs, both in
the specification itself and in the implementation. It also makes it
easier to verify that there are no such bugs.
- It reduces the social attack surface: there are fewer
moving parts, and so fewer places to guard against special
interests.
Historically, Ethereum has often not done this (sometimes because of
my own decisions), and this has contributed to much of our excessive
development expenditure, all kinds of security
risk,
and insularity of R&D culture, often in pursuit of benefits that
have proven illusory. This post will describe how Ethereum 5
years from now can become close to as simple as Bitcoin.
Simplifying the consensus layer

Simulation of 3-slot finality in 3sf-mini
The new consensus layer effort (historically called the "beam chain")
aims to use all of our learnings in consensus theory, ZK-SNARK
development, staking economics and other fields over the last ten years
to create a long-term optimal consensus layer for Ethereum. This
consensus layer is well-positioned to be much simpler than the status
quo beacon chain. Particularly:
- The 3-slot finality redesign removes the concept of
separate slots and epochs, committee shuffling, and many other parts of
the protocol specification that are related to efficiently handling
these mechanisms (as well as other details, eg. sync committees). A
basic implementation of 3-slot finality can be made in about
200 lines of code (a toy sketch of the core finality rule appears
after this list). Unlike Gasper, 3-slot finality also has
near-optimal security properties.
- The reduced number of active validators at a time
means that it becomes safer to use simpler implementations of the fork
choice rule.
- STARK-based aggregation protocols mean that anyone
can be an aggregator, and we do not have to worry about trusting
aggregators, over-paying for repeated bitfields, etc. The complexity of
the aggregation cryptography itself is significant, but it is at least
highly encapsulated
complexity, which poses much lower systemic risk to the
protocol.
- The above two factors also likely enable a simpler and more
robust p2p architecture.
- We have an opportunity to rethink how validator entry, exit,
withdrawal, key transition, inactivity leak and other related mechanisms
work, and simplify them - both in the sense of reducing
line-of-code (LoC) count, and in the sense of creating more legible
guarantees of eg. what the weak subjectivity period is.
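To give a flavor of why the core rule can be so compact, here is a toy
Python sketch loosely in the spirit of 3sf-mini. It is not the actual
protocol: the types and the exact rule are illustrative simplifications
(closer to a two-round FFG-style rule than to real 3-slot finality):

```python
from dataclasses import dataclass, field

THRESHOLD = 2 / 3  # supermajority of the validator set

@dataclass
class Block:
    slot: int
    parent: "Block | None" = None
    votes: set = field(default_factory=set)  # voting validator indices

def is_justified(block: Block, num_validators: int) -> bool:
    # A block is justified once a supermajority has voted for it.
    return len(block.votes) >= THRESHOLD * num_validators

def is_finalized(block: Block, child: Block, num_validators: int) -> bool:
    # A justified block whose direct child is also justified is final;
    # 3-slot finality compresses proposal, justification and
    # finalization into three consecutive slots.
    return (child.parent is block
            and is_justified(block, num_validators)
            and is_justified(child, num_validators))
```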
The nice thing about the consensus layer is that it is relatively
disconnected from EVM execution, which means that there is a relatively
wide latitude to continue to make these types of improvements. The
harder challenge is how to do the same on the execution layer.
Simplifying the execution layer
The EVM has been steadily growing in complexity, and much of that
complexity has proven unnecessary (in many cases my own fault): a
256-bit virtual machine that over-optimized for highly specific forms of
cryptography that are today becoming less and less relevant, and
precompiles that over-optimized for single use cases that are barely
being used.
Attempting to address these present-day realities piecemeal will not
work. It took a huge amount of effort to (only partially!) remove the SELFDESTRUCT opcode,
for a relatively small gain. The recent EOF debate shows the challenges
of doing the same thing to the VM.
As an alternative, I recently proposed a more radical approach:
instead of making medium-sized (but still disruptive) changes to the EVM
for the sake of a 1.5x gain, perform a transition to a new, much
better and simpler VM for the sake of a 100x gain. As with the Merge,
we get fewer points of disruptive change, but each one is much more
meaningful. Specifically, I suggested we replace the EVM with either
RISC-V or whatever VM Ethereum's ZK-provers end up being written in.
This gives us:
- A radical improvement in efficiency, because
(within provers) smart contract execution will run directly, without the
need for interpreter overhead. Data from Succinct shows a potential
100x+ performance improvement in many cases.
- A radical improvement in simplicity: the RISC-V
spec is
absurdly simple compared to the EVM. Alternatives (eg. Cairo) are
similarly simple.
- All the benefits that motivated EOF (eg. code
sections, more static analysis friendliness, larger code size
limits)
- More options for developers: Solidity and Vyper can
add backends to compile to new VMs. At the same time, if we choose
RISC-V, then developers who write in more mainstream languages will be
able to port their code over to the VM.
- Removal of the need for most precompiles, perhaps
with the exception of highly-optimized elliptic curve operations (though
those too will go away once quantum computers hit)
The main downside of this approach is that, unlike EOF, which is
ready today, a new VM will take considerably longer to deliver these
benefits to developers. We can mitigate this by also adding
some limited but high-value EVM improvements (eg. contract code size
limit increase, DUP/SWAP17-32) that could be implemented in the
short term.
This gives us a much simpler VM. The main challenge is: what do
we do with the existing EVM?
A backwards compatibility strategy for VM transition
The biggest challenge with meaningfully simplifying (or even
improving without complexifying) any part of the EVM is how to
balance accomplishing the desired goals with preserving backwards
compatibility for existing applications.
The first thing that is important to understand is that there
isn't one single way to delineate what counts as the "Ethereum
codebase" (even within a single client).

The goal is to minimize the green area: the logic
that nodes have to run in order to participate in Ethereum
consensus (computing the current state, proving, verifying, FOCIL,
"vanilla" block building).
The orange area cannot be decreased: if an execution
layer feature (whether a VM, a precompile, or another mechanism) is
either removed from the protocol spec, or its functionality is altered,
clients that care about processing historical blocks will have to keep
it - but, importantly, new clients (or ZK-EVMs, or formal provers) can
simply ignore the orange area entirely.
The new category is the yellow area: code that is
very valuable for understanding and interpreting the chain
today, or for optimal block building, but is not part of
consensus. One example that exists today is Etherscan's (and
some block builders') support for ERC-4337 user operations. If we
replace some
large Ethereum feature (eg. EOAs, including their support for all kinds
of old transaction types) with an onchain RISC-V implementation, then
consensus code would be considerably simplified, but specialized nodes
would likely continue using their exact same code to interpret them.
Importantly, the orange and yellow areas are encapsulated
complexity: anyone looking to understand the
protocol can skip them, implementations of Ethereum are free to skip
them, and any bugs in those areas do not pose consensus risks.
This means that code complexity in the orange and yellow areas has far
fewer downsides than code complexity in the green area.
The idea of moving code from the green area to the yellow area is
similar in spirit to how Apple ensures long-term backwards compatibility
through
translation layers like Rosetta.
I propose, inspired by recent
writings from the Ipsilon team, the following process for a VM
change (using EVM to RISC-V as an example, but it could also be used for
eg. EVM to Cairo, or even RISC-V to something even better):
- We require any new precompiles to be written with a
canonical onchain RISC-V implementation. This gets the
ecosystem warmed up and started working with RISC-V as a VM.
- We introduce RISC-V as an option for developers to
write contracts alongside the EVM. The protocol natively
supports both RISC-V and EVM, and contracts written in one or
the other can freely interact with each other.
- We replace all precompiles, except elliptic curve
operations and KECCAK (as these require truly optimal speed),
with RISC-V implementations. That is, we do a hardfork
that removes the precompile and simultaneously changes the code of that
address (DAO-fork-style) from being empty to being a RISC-V
implementation. The RISC-V VM is so simple that this is a net
simplification even if we stop here.
- We implement an EVM interpreter in RISC-V (this is
happening anyway, because of ZK-provers) and push it onchain as a smart
contract. Several years after the initial release, existing EVM
contracts switch to being executed through that interpreter.
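A rough sketch of what execution dispatch could look like after step 4,
in Python. Every name here is hypothetical; the point is only that
consensus retains a single native VM:

```python
# Hypothetical constants: a marker for native RISC-V code, and the
# code of the onchain EVM-interpreter contract from step 4.
RISCV_MAGIC = b"\x7fRV"
EVM_INTERPRETER_CODE = b"...riscv code of the evm interpreter..."

def is_riscv(code: bytes) -> bool:
    return code.startswith(RISCV_MAGIC)

def risc_v_execute(code: bytes, data: bytes) -> bytes:
    # Stand-in for the one native VM the consensus spec keeps.
    raise NotImplementedError

def execute(code: bytes, calldata: bytes) -> bytes:
    if is_riscv(code):
        # Native path: consensus "understands" only RISC-V.
        return risc_v_execute(code, calldata)
    # Legacy path: EVM bytecode is now just data, fed to the onchain
    # interpreter (which is itself RISC-V).
    return risc_v_execute(EVM_INTERPRETER_CODE, code + calldata)
```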

Once step 4 is done, many "implementations of the EVM" will remain
and be used for optimized block building, developer tooling and chain
analysis purposes, but they will no longer need to be part of the
critical consensus spec. Ethereum consensus would
"natively" understand only RISC-V.
Simplifying by sharing protocol components
The third and most easily underrated way to reduce total protocol
complexity is to share one standard across different parts of the stack
as much as possible. There is typically very little or no benefit to
using different protocols to do the same thing in different places, but
such patterns appear anyway, largely because different parts of protocol
roadmapping don't talk to each other. Here are a few specific examples
of places where we can simplify Ethereum by ensuring that components are
maximally shared across the stack.
One single shared erasure code

We need an erasure code in three places:
- Data availability sampling - clients verifying that
a block has been published
- Faster P2P broadcasting - nodes being able to
accept a block after receiving n/2 of n pieces, creating an optimal
balance between latency reduction and redundancy
- Distributed history storage - each piece of
Ethereum's history being stored in many chunks, such that (i) the chunks
can be independently verified, and (ii) n/2 chunks in each group can
recover the remaining n/2 chunks, greatly reducing the risk that any
single chunk gets lost
If we use the same erasure code (whether Reed-Solomon, random linear
codes, or otherwise) across the three use cases, we get some important
advantages:
- Minimize total lines of code
- Increase efficiency because if individual nodes
have to download individual pieces of a block (but not the whole block)
for one of the use cases, that data can be used for the other use
case
- Ensure verifiability: the chunks in all three
contexts can be verified against the root
If different erasure codes are used, they should at least be
compatible erasure codes: for example, a Reed-Solomon code
horizontally and a random linear code vertically for DAS chunks, where
the two codes operate over the same field.
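For intuition, here is a minimal Reed-Solomon-style sketch in Python: k
data chunks become 2k coded chunks, and any k surviving chunks recover
all of them. The prime field and the chunking are illustrative, not
Ethereum's actual parameters:

```python
P = 2**31 - 1  # a small prime field, for demonstration only

def interpolate(points, x, p=P):
    # Lagrange interpolation: evaluate the unique polynomial of degree
    # < len(points) passing through `points`, at position x, mod p.
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

def encode(data, p=P):
    # The k data chunks are evaluations at 0..k-1; parity chunks are
    # evaluations of the same polynomial at k..2k-1.
    points = list(enumerate(data))
    return [interpolate(points, x, p) for x in range(2 * len(data))]

def recover(known, k, p=P):
    # `known`: any k surviving (index, value) pairs; rebuild all 2k.
    return [interpolate(known[:k], x, p) for x in range(2 * k)]
```

For example, with k = 2 data chunks [5, 7], encode produces [5, 7, 9,
11], and any two of those (with their indices) passed to recover
rebuild all four.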

One single shared serialization format

Ethereum's serialization format is today arguably only
semi-enshrined, because the data can be re-serialized and broadcast in
any format. The only exception is signature hashes for transactions,
where a canonical format is required for hashing. In the future,
however, the degree of enshrinement of serialization formats will
increase further, for two reasons:
- With full account abstraction (EIP-7701), the full
transaction contents will be visible to the VM
- As gas limits go higher, the execution block data will need
to be put into blobs
When this happens, we have an opportunity to harmonize serialization
across the three layers of Ethereum that currently need it: (i)
execution layer, (ii) consensus layer, (iii) smart contract calling
ABI.
I propose we use SSZ.
SSZ is:
- Easy to decode, including inside smart contracts
(because of its 4-byte-based design and smaller number of edge
cases; see the sketch after this list)
- Already widely in use in the consensus layer
- Highly similar to the existing ABI, making tooling
relatively easy to adapt
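Here is a toy Python illustration of that 4-byte offset scheme, for a
container with one fixed-size field and one variable-size field (the
real SSZ spec covers many more types; this is only the core idea that
makes decoding cheap):

```python
def serialize_container(fixed: bytes, variable: bytes) -> bytes:
    # Fixed-size fields are laid out inline; each variable-size field
    # contributes a 4-byte little-endian offset pointing into the
    # "heap" that follows the fixed section.
    fixed_section_len = len(fixed) + 4  # fixed field + one offset slot
    offset = fixed_section_len.to_bytes(4, "little")
    return fixed + offset + variable

def deserialize_container(data: bytes, fixed_len: int):
    fixed = data[:fixed_len]
    offset = int.from_bytes(data[fixed_len:fixed_len + 4], "little")
    return fixed, data[offset:]
```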
There are efforts to migrate more fully to
SSZ already; we should keep these efforts in mind when planning future
upgrades, and build on them.
One single shared tree

Once we migrate from EVM to RISC-V (or an alternative minimal VM),
the hexary Merkle Patricia tree will become by far the largest
bottleneck to proving block execution, even in the average case.
Migrating to a binary
tree based on a much more optimal hash function will greatly improve
prover efficiency, in addition to reducing data costs for light clients
and other use cases.
When we do this, we should also use the same tree structure for the
consensus layer. This ensures that all of Ethereum, consensus and
execution alike, can be accessed and interpreted using the same
code.
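As a sketch of why this helps: a binary Merkle proof carries one
32-byte sibling per level, versus up to fifteen siblings per level for
a hexary Merkle Patricia node. A minimal Python version, using SHA-256
as a stand-in (a real design would likely choose a more SNARK-friendly
hash, which is where much of the prover win comes from):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves: list) -> bytes:
    # Pad the leaf layer to a power of two, then fold pairwise.
    n = 2 ** max(len(leaves) - 1, 0).bit_length()
    layer = leaves + [b"\x00" * 32] * (n - len(leaves))
    while len(layer) > 1:
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]
```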
From here to there
Simplicity is in many ways similar to decentralization. Both are
upstream of the goal of resilience. Explicitly valuing simplicity requires
some cultural change. The benefits are often illegible, and the cost of
extra effort and turning away some shiny features is felt immediately.
However, as time goes on, the benefits become more and more evident -
and Bitcoin itself is an excellent example.
I propose that we follow
the lead of tinygrad, and have an explicit maximum line-of-code
target for the long-term Ethereum specification, with
the goal of making Ethereum consensus-critical code close to as simple
as Bitcoin. Code that has to do with processing Ethereum's historical
rules will continue to exist, but it should stay outside of
consensus-critical code paths. Alongside this, we should have a general
ethos of choosing the simpler option where possible, favoring
encapsulated complexity over systemic complexity, and making design
choices that provide clearly legible properties and guarantees.
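Such a target could even be enforced mechanically. A hypothetical CI
gate in Python, with an illustrative budget and spec path (neither is a
real, agreed-upon number or location):

```python
import sys
from pathlib import Path

MAX_LINES = 10_000  # illustrative budget, not an agreed number
SPEC_ROOT = "specs/consensus"  # hypothetical spec location

def count_spec_lines(root: str) -> int:
    # Count non-blank, non-comment lines across all spec files.
    return sum(
        1
        for f in Path(root).rglob("*.py")
        for line in f.read_text().splitlines()
        if line.strip() and not line.strip().startswith("#")
    )

if __name__ == "__main__":
    n = count_spec_lines(SPEC_ROOT)
    print(f"consensus-critical spec: {n} lines (budget: {MAX_LINES})")
    sys.exit(0 if n <= MAX_LINES else 1)
```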