It has been almost 7 weeks since we released the major update from Alphanet V1 to V2, and approximately 5 weeks since the WeaveVM gas-economics integration landed on Alphanet.
Since then, Alphanet has seen significant spikes in data ingress, feature development, and usage. Today, we are thrilled to announce the arrival of Alphanet V3.
Alphanet V2 Recap
Before diving into the changes of Alphanet V3, let’s take a look at what V2 brought to us and how it improved upon V1.
Alphanet V2 was a hard fork that introduced a fresh state and integrated gas economics. We halved the block time to 1 second, achieving 500 Mgas/s and approximately 62 MBps of data throughput.
In that period, V2 saw more than 1.5M transactions over 2.8M blocks, an average of ~0.53 tx per block. For the full Alphanet V2 changelog, check out the previous announcement.
What’s New in Alphanet V3
Alphanet V3, the Lovelace upgrade, occurred at block height 2817713 of Alphanet V2, coinciding with Ada Lovelace’s birthday (December 10th, 1815). The upgrade was implemented without a hard fork or dropping the state, and keeps the same JSON-RPC endpoints for both devnet and Alphanet. Let’s jump into the major V3 changes.
Reth: Major ExExes Improvements, Stability and Performance
Alphanet V3 ships major upgrades at the Reth Execution Extensions (ExEx) level: a significant refactor of the WeaveVM ExEx architecture, splitting the ExExes into microservices, and a fix for Arweave GraphQL query issues that cut latency from 25s to 30ms (~800x faster). As a result, 80-90% of Precompile 0x17 requests now complete in under 1 second.
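For context, here is a minimal sketch of what a Precompile 0x17 request can look like from a client, issued as a plain eth_call. The RPC endpoint, precompile address, and calldata encoding below are assumptions for illustration; consult the WeaveVM docs for the actual interface.

```typescript
// Minimal sketch: querying WeaveVM's Arweave-read precompile via eth_call.
// Assumptions (not confirmed by this post): the RPC endpoint, the precompile
// address 0x...17, and passing a raw Arweave tx id as calldata.
const RPC_URL = "https://testnet-rpc.wvm.dev"; // hypothetical endpoint
const PRECOMPILE_0x17 = "0x0000000000000000000000000000000000000017";

async function callPrecompile17(arweaveTxId: string): Promise<string> {
  // Hex-encode the Arweave tx id as calldata (illustrative encoding only).
  const bytes = new TextEncoder().encode(arweaveTxId);
  const data =
    "0x" + Array.from(bytes, (b) => b.toString(16).padStart(2, "0")).join("");

  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_call",
      params: [{ to: PRECOMPILE_0x17, data }, "latest"],
    }),
  });
  const { result } = await res.json();
  return result; // hex-encoded response bytes
}
```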
The ExEx improvements also include R&D on ExExEx (Extended Execution Extensions), which we will explore further throughout December. This enhancement will let the WeaveVM ExEx stack scale to an ever-growing number of extensions without latency issues or thread blocking in Reth bottlenecks. Last but not least, WeaveVM’s custom Reth fork has reached PR #123.
Infra: ARIO Gateways and Turbo for ANS-104 Data Items Bundling
The Lovelace upgrade enhances WeaveVM’s infrastructure stack, particularly the Arweave interface. In V3, we migrated our ANS-104 data item bundler to ARIO’s Turbo. We still post data items to Arweave from the same address, and the move to Turbo has resulted in faster availability on Arweave and optimistic data retrieval from the arweave.net gateway.
The infrastructure upgrades don’t stop here - we will be launching our own ARIO Arweave gateway! The WeaveVM team has been working closely with the ARIO team on an Arweave gateway dedicated to WeaveVM data protocols.
Our gateway will be customized to index the network’s data protocols only, enabling better scalability, faster indexing, fewer constraints, and more efficient GQL queries. This also contributes to gateway decentralization on Arweave! More details about our custom gateway will be shared in December.
UX: WeaveVM Transaction Tags
The new JSON-RPC method eth_sendWvmTransaction on WeaveVM enables sending KV tags along with your signed transactions. A tagging protocol has been shipped on WeaveVM, enabling many new use cases! The new method allows you to send up to 2048 bytes of total tag size (json_stringified_tags.len() <= 2048) to the WeaveVM RPC.
The tags don’t pass through the EL/CL; they go directly from the RPC server (which, in WeaveVM’s case, can be thought of as a sequencer by analogy) into WeaveVM’s GBQ tables (alongside the WeaveVM-Arweave ExEx tables). This tagging protocol offers the same level of trust and optimism as the WeaveVM ExExes; the main difference is how it’s implemented and the pipeline the tags pass through.
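Here is a minimal client-side sketch of the flow described above. This post doesn’t specify the exact parameter shape of eth_sendWvmTransaction, so the param order and the RPC endpoint below are assumptions; the 2048-byte limit is the one documented above.

```typescript
// Minimal sketch of eth_sendWvmTransaction. Assumption: the method takes a
// signed raw type-2 transaction plus a KV tags object; the param shape and
// RPC endpoint are placeholders, not a confirmed interface.
const RPC_URL = "https://testnet-rpc.wvm.dev"; // hypothetical endpoint

async function sendWvmTransaction(
  signedRawTx: string, // "0x..." signed type-2 tx
  tags: Record<string, string>,
): Promise<string> {
  // Enforce the documented limit: json_stringified_tags.len() <= 2048.
  const stringified = JSON.stringify(tags);
  if (new TextEncoder().encode(stringified).length > 2048) {
    throw new Error("tags exceed the 2048-byte limit");
  }

  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_sendWvmTransaction",
      params: [signedRawTx, tags], // param order is an assumption
    }),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error.message);
  return result; // tx hash
}

// Example: tagging a tx for a hypothetical data protocol.
// await sendWvmTransaction(rawTx, { "Content-Type": "application/json", "App-Name": "my-dapp" });
```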
What about fees? Tags longevity? Failure?
- Fees: Adding up to 2048 bytes of tags costs nothing beyond the standard (type 2) tx overhead
- Tag longevity: There are no guarantees of “permanent” tags, as we don’t settle them on Arweave along with the tx (full block). Tags have a minimum longevity of >= 1 year, as decided by the WeaveVM team (keeping tx tags for longer periods could be determined by several onchain activity factors, TBD)
- Failure handling: eth_sendWvmTransaction is optimistic by default - it will index any tx submitted through the RPC regardless of tx status. In parallel, a cronjob checks WeaveVM txs’ availability on Arweave every ~6h and prunes the tags of any tx that hasn’t settled on WeaveVM (see the sketch below)
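To make the failure-handling flow concrete, here is an illustrative sketch of the pruning check. The tag name used to look up settlement on Arweave’s GraphQL gateway is hypothetical, and the actual cronjob logic may differ.

```typescript
// Illustrative sketch of the ~6h pruning pass described above. The tag name
// "WeaveVM:Tx-Hash" is hypothetical; the real settlement check may key off a
// different tag or table. Returns the tx hashes whose tags should be pruned.
async function findUnsettledTxs(txHashes: string[]): Promise<string[]> {
  const toPrune: string[] = [];
  for (const hash of txHashes) {
    if (!(await isSettledOnArweave(hash))) {
      toPrune.push(hash); // the real job would delete the tag rows in GBQ here
    }
  }
  return toPrune;
}

async function isSettledOnArweave(wvmTxHash: string): Promise<boolean> {
  // Query arweave.net's GraphQL gateway for a data item referencing the tx.
  const res = await fetch("https://arweave.net/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      query: `query($values: [String!]!) {
        transactions(tags: [{ name: "WeaveVM:Tx-Hash", values: $values }], first: 1) {
          edges { node { id } }
        }
      }`,
      variables: { values: [wvmTxHash] },
    }),
  });
  const { data } = await res.json();
  return data.transactions.edges.length > 0;
}
```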
What does it unlock?
This new non-breaking transaction type unlocks several possibilities on WeaveVM, some for the first time in the EVM L2 space, such as:
- Data protocols
- Messaging protocols
- Onchain indexing
- Data composability
- Proper data type labeling (rendering-aware tx data)
- An alternative to EIP-4844 blob txs
- Storage for validiums
What’s next?
A precompile for reading transaction tags from the smart contract API is in development. It will make it possible to fetch WeaveVM data from dApps and simplify a process that would otherwise require an oracle; a hypothetical sketch follows the examples below.
- Code example
- Transaction example (explorer UI tag parsing is WIP)
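Since the tag-reading precompile isn’t published yet, the following is a purely speculative sketch of what fetching tags through eth_call might look like; the address and calldata encoding are placeholders invented for illustration.

```typescript
// Purely speculative: the tag-reading precompile is still in development, so
// the address and calldata encoding below are placeholders, not a published
// interface. Shown only to illustrate the oracle-free flow described above.
const RPC_URL = "https://testnet-rpc.wvm.dev"; // hypothetical endpoint
const TAG_READER = "0x0000000000000000000000000000000000000021"; // placeholder

async function readTxTags(txHash: string): Promise<string> {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_call",
      // Placeholder encoding: the 32-byte tx hash as raw calldata.
      params: [{ to: TAG_READER, data: txHash }, "latest"],
    }),
  });
  const { result } = await res.json();
  return result; // hex-encoded tag bytes (format TBD by the WeaveVM team)
}
```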
Metrics: Highest Active Data Throughput
Since December 10th, WeaveVM Alphanet V3 has been running at 0.15-0.20% of total network capacity, amounting to 93-125 KBps of data ingress out of the ~62 MBps ceiling. This figure doesn’t account for the data encoding that the main data ingressers on WeaveVM utilize (archivers, borsh-brotli), which would effectively double network usage to 0.3-0.4%, or approximately 186-250 KBps.
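As a quick sanity check, these figures follow directly from the ~62 MBps capacity mentioned in the V2 recap (using 1 MB = 1000 KB):

```typescript
// Back-of-the-envelope check of the ingress figures above.
const capacityKBps = 62_000; // ~62 MBps total data throughput capacity
const usage = [0.0015, 0.002]; // 0.15% - 0.20% of capacity

const ingress = usage.map((u) => u * capacityKBps);
console.log(ingress); // [93, 124] KBps -> the 93-125 KBps range above

// borsh-brotli encoding roughly doubles the effective usage:
console.log(ingress.map((v) => v * 2)); // [186, 248] KBps -> ~186-250 KBps
```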
Notably, even without considering the encoding impact, the current network usage of data ingress is more or less equal to the combined data throughput of all rollups listed on rollup.wtf.
Archivers: WeaveVM Substrate Data Pipeline, New EVM Network and a Higher TDVA
In Alphanet V2, WeaveVM’s Archiver tooling supported EVM networks only. With the Lovelace upgrade, we are opening the doors to non-EVM networks, starting with Substrate. The first Substrate Archiver instance is for the Humanode Network, which launched Archivers for both its EVM and Substrate networks. Read more about the Substrate Archiver here.
The Substrate Archiver adopts the same architecture, data pipelines, interfaces, and endpoints as the WeaveVM Archiver (EVM), and a node instance can be launched with just a few config file tweaks!
In other WeaveVM Archiver news, WeaveVM welcomes Dymension Hub L1 and the Humanode EVM network (chain IDs 1100 and 5234, respectively). These two new networks have pushed WeaveVM’s TDVA past the $3B threshold!
Ecosystem: WeaveVM Has an Oracle
Last but not least, developers building on top of WeaveVM can now make use of our newest ecosystem partner, SEDA, to compute with any external data source at any scale. Check out SEDA’s WeaveVM integration to power storage-price oracles and bring large, high-frequency datasets onchain.
Keeping up to date with WeaveVM
WeaveVM is hitting new milestones at a rapid pace and constantly evolving – follow along on GitHub and track new announcements on partners, features and blog posts on X, @weavevm.
Get help and share feedback in the Discord: dsc.gg/wvm!