ParaVM Whitepaper
ParaX ZkRollup is a versatile Type 2.5 ZK Rollup solution that offers full compatibility with all EVM opcodes. It builds on Reth as the sequencer, leveraging highly optimized components such as its transaction pool and state storage. Combined with a BFT consensus mechanism such as DiemBFT or HotStuff, it achieves exceptional transactions per second (TPS) and minimal block confirmation times. Furthermore, the choice of ProtoStar and similar non-uniform Incrementally Verifiable Computation (NIVC) proof systems significantly reduces the hardware requirements for Provers, improves parallel efficiency, and reduces circuit sizes. This holistic approach aims to create an efficient, high-performance ZK Rollup within the ParaX ecosystem.
2023 is a remarkable year. It has seen not only the widespread adoption of ChatGPT but also a "Cambrian explosion" of ZK proof systems. From Nova to SuperNova and on to HyperNova, and from HyperPlonk to ProtoStar and then ProtoGalaxy, the academic community keeps pushing the boundaries of what is achievable.
Ethereum has also continued to advance along the "Rollup-centric" roadmap, with Proto-Danksharding and Danksharding imminent. Meanwhile, ZkEVM projects have officially ignited the "ZkEVM Wars" on Twitter, with every project claiming to be EVM-equivalent.
However, we have yet to see a truly satisfactory ZK Rollup. Starknet and zkSync have drifted further toward Type 4, requiring developers to make significant adjustments, or even rewrite their code, to deploy on them. Scroll's prover has strict hardware requirements.
Most current Rollups emulate Ethereum, and it can seem as if the winner will be whoever resembles Ethereum most closely. We believe, however, that today's Rollups are essentially tools for "outsourcing computation": the real significance lies in the data on Layer 1. Smart contract Rollups should not strive to rebuild features already well established in Ethereum, but should instead focus on what Ethereum doesn't provide, such as high performance, privacy, and rapid block confirmation and finalization. This approach contributes more to the Ethereum ecosystem as a whole.
Our ZK Rollup will offer the following key features:
We will develop our solution based on the state-of-the-art Ethereum client Reth, developed by the Paradigm team. Reth offers blazing-fast EVM execution, low-latency RPC services, and remarkable block synchronization speed. We will optimize the underlying data structures, EVM state storage, and transaction pool, and combine them with an extremely fast BFT consensus algorithm (such as DiemBFT or HotStuff) to achieve this goal. You can read more about Reth in Paradigm's article.
Scroll/PSE uses the Halo2 framework developed by Zcash, combined with the KZG10 polynomial commitment scheme. This choice presents a few issues. First, KZG10 involves extensive FFT computation, demanding large amounts of memory in the absence of GPU acceleration. Second, Halo2 struggles with efficient recursion, which leads to performance overhead: proofs must be aggregated across a substantial number of blocks, and the large inputs (opcodes from several blocks) together with the high order of the SRS (Structured Reference String) setup (> 2^22) cause extremely high memory consumption. Third, KZG10 relies on a trusted setup and pairing-friendly curves, making it non-post-quantum secure.
zkSync and Polygon zkEVM have chosen to build on the Plonky2 framework. This choice also presents some problems. First, Plonky2's recursive overhead is higher than that of newer IVC proof systems. Second, the higher recursive overhead makes parallelism hard to exploit, since the parallelization overhead can offset its benefits.
To address these challenges, we will build our ZK proof system on ProtoStar. Compared with running a full verifier circuit (as in Halo2/Plonky2), ProtoStar generates proofs recursively with minimal overhead. This significantly reduces the performance impact of recursion, improves prover performance, and lets proof generation begin without waiting for a required number of blocks: multiple opcode inputs can instead be folded into a single NP instance, achieving "realtime proving."
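To make the folding idea concrete, here is a minimal Rust sketch of the accumulation loop. The names `Instance`, `Accumulator`, and `fold` are hypothetical, and the placeholder arithmetic only imitates the shape of folding; ProtoStar's real scheme operates on committed field elements with very different math.

```rust
// Hypothetical sketch of NIVC-style folding: opcode instances arriving from the
// sequencer are folded one by one into a running accumulator, so proving can
// start immediately instead of waiting for a full batch of blocks.
// `Instance`, `Accumulator`, and `fold` are illustrative names, not ProtoStar's API.

#[derive(Clone, Debug)]
struct Instance {
    opcode: u8,        // which opcode circuit this instance belongs to
    witness: Vec<u64>, // flattened witness values (placeholder)
}

#[derive(Default, Debug)]
struct Accumulator {
    folded: usize,   // number of instances folded so far
    state: Vec<u64>, // running accumulator state (placeholder)
}

impl Accumulator {
    // Fold one NP instance into the accumulator. In ProtoStar this is a cheap
    // algebraic operation, far cheaper than running a full verifier circuit.
    fn fold(&mut self, inst: &Instance) {
        self.folded += 1;
        // Placeholder "random linear combination" standing in for the real folding math.
        for (i, w) in inst.witness.iter().enumerate() {
            if self.state.len() <= i {
                self.state.push(0);
            }
            self.state[i] = self.state[i].wrapping_add(w.wrapping_mul(inst.opcode as u64 + 1));
        }
    }
}

fn main() {
    // Note: instances from *different* opcode circuits fold into one accumulator.
    let stream = vec![
        Instance { opcode: 0x01, witness: vec![40, 80, 120] }, // ADD
        Instance { opcode: 0x02, witness: vec![6, 7, 42] },    // MUL
    ];
    let mut acc = Accumulator::default();
    for inst in &stream {
        acc.fold(inst); // "realtime proving": fold as instances arrive
    }
    println!("folded {} instances", acc.folded);
}
```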
Unlike HyperNova's folding scheme, which is limited to folding instances of the same circuit, ProtoStar has no such restriction. This makes ProtoStar particularly suitable for ZkVM and ZkEVM scenarios: separate opcode circuits can be written without switches to toggle between them, reducing circuit size and achieving an "à la carte" cost profile. Furthermore, inputs from different circuits can be folded together.
As described in Towards a Nova-based ZK VM, the proof generation process of IVC-style proof systems can be parallelized, maximizing utilization of the entire Prover network.
Finally, ProtoStar provides excellent support for lookup arguments and high-degree gates, which are essential when constructing circuits as large as ZkEVM's (including its zk-unfriendly components).
The ProtoStar ecosystem is rapidly evolving, and our team will collaborate with the community to develop and maintain this proof system.
Our Rollup is positioned as Type 2.5: it will modify gas costs only for certain zk-unfriendly opcodes, without altering the original MPT tree or hash algorithms. This approach serves as an alternative path to snarkifying the Ethereum EVM, and in the future the entire solution can be contributed back to Ethereum.
We will ensure that all L2 transactions can be retrieved on L1. This means that even if the sequencer behaves maliciously, users can reconstruct the entire L2 state from L1. After evaluating various data availability schemes like Celestia and EigenLayer, we still consider Proto-Danksharding and future Danksharding the optimal solution. Celestia's reliance on fraud proofs doesn't meet our requirements for data validity and security, and EigenLayer's solution is complex and not yet deployed on mainnet.
Additionally, since Ethereum won’t store Danksharding blobs for extended periods, we will maintain official data copies and pin them to IPFS in v1 to ensure long-term data availability.
Currently, proof generation speed is the last obstacle to the "massive adoption" of ZK. Even unicorns like zkSync haven't cleared it (zkSync blocks are finalized after 24 hours). By choosing an IVC (Incrementally Verifiable Computation) proof system like ProtoStar, we aim to maximize Prover capability, making proof generation realtime, parallelized, and recursive.
ProtoStar eliminates the need for FFT computation at the algorithmic level, removing the least parallelizable part of the proof generation process. According to data from the Spartan paper, proof generation can be up to twice as fast as the fastest existing Groth16 framework. We anticipate ProtoStar achieving proof generation speeds similar to Spartan's, i.e., roughly twice the speed of Groth16.
Combined with parallelism, as mentioned by @Ceperezz.eth in the Zuzalu presentation, we expect proof generation to reach roughly 10-100 times the speed of Groth16 while cutting hardware requirements to one-tenth.
ProtoStar is primarily used for proof generation. Before being put on-chain on L1, we will use FFlonk to compress proofs and verify their correctness, reducing L1 verification costs and L2 gas fees.
There’s a possibility of malicious behavior by sequencers, such as discarding user transactions. To address this, we will provide L2 users with the ability to forcibly insert transactions on L1. Sequencers must include these transactions in the next batch; otherwise, they won’t be able to continue submitting batches.
Note: The above solution is not yet fully finalized, optimizations and modifications are possible in the future. For the latest version of the whitepaper, please refer to the official whitepaper.
A Rollup can be expressed using the following formula:

𝜎′ = 𝛾(𝜎, 𝑇)

Where:
- 𝜎′ represents the new state.
- 𝜎 represents the previous state.
- 𝑇 represents the transactions.
- 𝛾 is the State Transition Function (STF).
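Expressed in code, this relation is just a function signature. Below is a minimal sketch with placeholder Rust types (`StateRoot`, `Tx`, and the byte-mixing body are purely illustrative; the real 𝛾 is full EVM execution over the state trie):

```rust
// A rollup is "just" a state transition function: sigma' = gamma(sigma, T).
type StateRoot = [u8; 32];

#[derive(Debug)]
struct Tx {
    payload: Vec<u8>,
}

// gamma: apply transactions T to the previous state sigma, yielding sigma'.
// A placeholder byte-mixing loop stands in for real EVM execution.
fn state_transition(sigma: StateRoot, txs: &[Tx]) -> StateRoot {
    let mut sigma_prime = sigma;
    for tx in txs {
        for (i, b) in tx.payload.iter().enumerate() {
            sigma_prime[i % 32] ^= *b; // placeholder mixing, not a real STF
        }
    }
    sigma_prime
}

fn main() {
    let genesis: StateRoot = [0u8; 32];
    let txs = vec![Tx { payload: vec![1, 2, 3] }];
    println!("new state root: {:?}", state_transition(genesis, &txs));
}
```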
Currently, many Rollups are smart-contract-rollups, where the state transition process of the blockchain is moved to the “off-chain” Layer 2 computation. The computed result (new state root) and the data used for the computation (transactions) are stored on Layer 1. By using transactions and the old state root, it’s possible to verify whether the new state root is correct, thus validating whether the outcomes of the “outsourced computation” align with the intended requirements.
Currently, the verification of "outsourced computation" primarily involves two types of proofs: validity proofs and fraud proofs. Fraud proofs challenge the correctness of a computation at the bytecode level via binary search, aiming to pinpoint an error. Validity proofs, on the other hand, verify the correctness of the inputs and the computation process itself, aiming to demonstrate correctness. We adopt a ZK Rollup approach based on validity proofs to minimize the time to message finality from L2 to L1, avoiding the 7-day challenge window typically associated with fraud proofs.
Our ZK Rollup will primarily consist of the following components:
Sequencer:
- 1.Block production.
- 2.Sync the L1→L2 Merkle Tree root of transactions to L2 (so L2 contracts can verify it and execute further actions, e.g., claim).
- 3.Sync the L2→L1 Merkle Tree root of transactions to L1 (so L1 contracts can verify it and execute further actions, e.g., claim).
- 4.Generate execution trace (witness).
Coordinator:
- 1.Assign tasks to the Prover network.
- 2.Return proofs generated by Provers to the Sequencer, which submits them to L1 to finalize state changes.
Prover:
- 1.Generate proofs.
- 2.Fold proofs.
Circuits:
- 1.Constrain the correctness of public inputs (public input circuits).
- 1.Block header.
- 2.Transactions.
- 3.Execution trace.
- 4.Lookup tables:
- Fixed table.
- Bytecode table.
- ECDSA signature table.
- Keccak table.
- RW table.
- MPT table.
- 2.Constrain the execution of the EVM on L2 (EVM circuits).
- 1.Correctly transition the opcode context (gas, codehash, program counter, stack pointer).
- 2.Enforce the execution of each opcode according to EVM specifications.
- 3.Constrain the consistency of Storage/Stack/Memory data access (State circuits).
Verifier contract:
- 1.Verify the correctness of proofs.
ZkEVM main contract:
- 1.The main contract for L2 on L1.
- 2.Store L2 blocks' state transitions (stored in batches; with validity proofs there is no need to record each individual block's stateRoot).
- 3.Indirectly call Verifier and Bridge contracts.
Bridge contracts:
- 1.The bridge can be deployed on both L1 and L2.
- 2.Store L1→L2 Merkle Tree root on L1 (synced to L2 by the Sequencer, trustful process requiring trust in Sequencer).
- 3.Store L2→L1 Merkle Tree root on L2 (synced to L1 by the Sequencer, trustless process, no trust needed).
- 4.On L1, manage Locking of L1→L2 assets.
- 5.On L2, manage Minting/Burning of assets.
- 6.Provide cross-chain interfaces for L1→L2 and L2→L1.
- 7.Offer claim interfaces (can be automatically claimed on both ends by off-chain bots, with fees collected upfront).
- 8.Incorporate a universal cross-chain message format (to, data, value, with “to” being an address on another chain); a sketch of this format follows the list.
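A minimal sketch of the universal message format from item 8, with illustrative Rust field names and types; the actual on-chain encoding may differ:

```rust
// Sketch of the universal cross-chain message format (to, data, value).
// Field names and types are illustrative, not the production encoding.

#[derive(Debug)]
struct CrossChainMessage {
    to: [u8; 20],  // destination address on the other chain
    value: u128,   // native asset amount to transfer
    data: Vec<u8>, // calldata to execute at `to` (empty for plain transfers)
}

fn main() {
    let msg = CrossChainMessage {
        to: [0u8; 20],
        value: 1_000_000_000_000_000_000, // 1 ether, in wei
        data: vec![],
    };
    println!("{:?}", msg);
}
```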
Data availability:
- 1.Store committed/finalized transactions of L2.
- 2.Currently considering an approach based on EIP-4844.
Transaction lifecycle (a state-machine sketch follows this list):
- 1.Pending: The initial state of a transaction after it is generated.
- 2.Included: The transaction has been packed into a block but has not yet been submitted to L1.
- 3.Committed: The transaction's block has been submitted to L1 but hasn't yet been verified.
- 4.Finalized: The transaction has been packed into a block and has been verified on L1.
- 5.Failed: The transaction either didn’t pass verification or failed for some reason.
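The lifecycle maps naturally onto a small state machine. A minimal sketch, with illustrative names mirroring the stages above:

```rust
// Transaction lifecycle as a state machine. Variant names mirror the stages above;
// the event strings are illustrative, not a real protocol API.
#[derive(Debug, Clone, Copy)]
enum TxStatus {
    Pending,   // generated, waiting in the transaction pool
    Included,  // packed into an L2 block, not yet submitted to L1
    Committed, // block's batch submitted to L1, validity proof not yet verified
    Finalized, // validity proof verified on L1
    Failed,    // rejected or failed verification
}

// Allowed forward transitions; anything else is ignored in this sketch.
fn next(status: TxStatus, event: &str) -> TxStatus {
    match (status, event) {
        (TxStatus::Pending, "include") => TxStatus::Included,
        (TxStatus::Included, "commit") => TxStatus::Committed,
        (TxStatus::Committed, "finalize") => TxStatus::Finalized,
        (_, "fail") => TxStatus::Failed,
        (s, _) => s,
    }
}

fn main() {
    let mut s = TxStatus::Pending;
    for ev in ["include", "commit", "finalize"] {
        s = next(s, ev);
        println!("{:?}", s);
    }
}
```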
The validity proof doesn’t involve a challenge process, so there’s no need for each block to have a state root on L1. Sequencers usually batch a significant number of blocks together before submitting them to L1 to save on gas fees.
Because batches of blocks are submitted, a batch typically results in a single new state root. Thus, the data on L1 from L2 isn’t a chain of blocks, but rather a chain of batches. All blocks within a batch share the same timestamp.
The actual timestamp of blocks on L2 requires a separate contract to provide.
The lifecycle of a block consists of the following stages:
- 1.Pending: The block has been packed but hasn’t yet been sent to L1.
- 2.Committed: The block becomes “committed” once it’s uploaded to L1.
- 3.Finalized: The block becomes “finalized” after being proven by the validity proof.
Transactions on Layer 2 (L2) typically fall into three categories:
- 1.Cross-chain transactions (from L1 or other compatible L2s).
- 2.Governance transactions (from L1 or L2).
- 3.User transactions.
These transactions have varying priorities:
- 1.Cross-chain governance transactions
- 2.Cross-chain transactions
- 3.Governance transactions
- 4.User transactions
Transactions with higher priority are placed at the front during transaction ordering.
The reason for prioritizing cross-chain transactions is that they involve interactions between two different chains and often occur asynchronously. Such transactions are only valid within a certain time window, and state modifications by other user transactions might cause them to fail; rolling back L2 state in such cases is a complex process.
User-submitted transactions on Layer 2 are initially placed into a public transaction pool (MEV won’t be implemented in V1). At this stage, the transactions are marked as pending.
Once the pending transactions in the pool fill a block up to the BLOCK_GAS_LIMIT, the sequencer orders them using a sequencing algorithm, executes them, and creates a block. When the number of blocks reaches MINIMUM_BATCH_SIZE, the sequencer constructs a batch containing these blocks and submits it to the ZkEVM main contract on L1. Because the transactions have now been uploaded to L1, the status of all blocks within the batch transitions to committed.
Following the implementation of EIP-4844 proto-danksharding, we will save gas by submitting L2 transactions in blobs rather than calldata. This change decreases the gas cost per byte from around 16 to roughly 1, a cost saving of about 16x. With the full implementation of Danksharding, which introduces erasure coding and probabilistic sampling, the reduction will be even more remarkable. It's important to note that Ethereum periodically prunes blobs, so in the medium to long term we will introduce additional committees and storage chains to hold copies of transaction data; in the short term (V1), data will be centrally maintained and pinned to the IPFS network.
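As a back-of-the-envelope illustration of the saving (the 16 and ~1 gas-per-byte figures are the approximations used above, not exact protocol constants, and the batch size is invented for the example):

```rust
// Rough cost comparison for posting one batch of L2 transactions to L1.
// 16 gas/byte approximates nonzero calldata bytes; ~1 gas/byte is the blob
// approximation used in the text (real blob pricing has its own fee market).
fn main() {
    let batch_bytes: u64 = 120_000; // hypothetical batch size
    let calldata_gas = batch_bytes * 16;
    let blob_gas = batch_bytes * 1;
    println!(
        "calldata: {} gas, blob: {} gas, saving: {}x",
        calldata_gas,
        blob_gas,
        calldata_gas / blob_gas
    );
}
```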
During transaction execution, the sequencer generates an execution trace. The coordinator allocates this trace to different Provers, who fold their instances into a single one and generate a single proof, which is returned to the sequencer. The sequencer submits the proof along with the old state root to the ZkEVM contract on-chain, finalizing all transactions within that batch. If verification succeeds, the status of these transactions transitions to finalized.

Figure 1: L1 & L2 Architecture
General message passing between L1 and L2 can be efficiently achieved through a Merkle Tree. In the bridge contracts on both L1 and L2, we will maintain their respective Merkle Tree Roots.
The Merkle Tree Root in the bridge contract on L1 contains digests of all messages from L1→L2. The Sequencer only needs to synchronize this Merkle Tree Root to L2. Anyone can then enact cross-chain messages on L2 by submitting Merkle proofs, which could involve actions like asset minting or contract calls. The account that enacts the cross-chain message receives a fee reward. It’s important to note that the process of synchronizing the L1 Merkle Tree Root to L2 requires trust, as the ZK proof system is not operational on L1, and L2 can’t verify if assets on L1 are genuinely locked. This trust could be resolved through consensus, with native token staking on the Rollup backing it, or through the use of light clients.
Similarly, the Merkle Tree Root in the bridge contract on L2 includes digests of all messages from L2→L1. The Sequencer only needs to synchronize this Merkle Tree Root to L1.
The reason for keeping this Merkle Tree Root separate from the state root is twofold: first, it gives cross-chain transactions an independent verification path; second, it saves gas, because the Merkle tree for cross-chain messages is much shallower than the state tree.

Figure 2: Bridge Process
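A minimal sketch of the Merkle-proof check a bridge contract performs when a message is enacted. A toy 64-bit mixing function stands in for keccak256 so the example stays self-contained; production code would hash full 32-byte digests.

```rust
// Verify that a message digest is included under the bridge's Merkle root.
// `hash_pair` is a toy mixer standing in for keccak256, purely for illustration.
fn hash_pair(left: u64, right: u64) -> u64 {
    let mut x = left ^ right.rotate_left(17);
    x = x.wrapping_mul(0x9E37_79B9_7F4A_7C15);
    x ^ (x >> 31)
}

// `index` encodes the leaf position: bit i tells whether our node is the
// right child (1) or the left child (0) at depth i.
fn verify_merkle_proof(leaf: u64, proof: &[u64], mut index: u64, root: u64) -> bool {
    let mut acc = leaf;
    for sibling in proof {
        acc = if index & 1 == 1 {
            hash_pair(*sibling, acc)
        } else {
            hash_pair(acc, *sibling)
        };
        index >>= 1;
    }
    acc == root
}

fn main() {
    // Build a 2-leaf tree and verify leaf 0 against its root.
    let (l0, l1) = (42u64, 7u64);
    let root = hash_pair(l0, l1);
    assert!(verify_merkle_proof(l0, &[l1], 0, root));
    println!("proof accepted");
}
```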
The Ethereum Virtual Machine (EVM) functions as a state machine composed of several components:
- Stack
- Memory
- Storage
- Program Counter
- Gas
For each call, Geth (Go Ethereum) creates an EVM instance, and each call represents a state transition process. The input to this process is the user-provided calldata, which drives the state machine to its next state.
When outsourcing computation to this state machine, there are several things we need to ensure:
- The previous state has been correctly recognized.
- The input to the state transition function (STF) is correct.
- The state transition (ST) process itself is correct.
If we consider Rollup as this outsourced state machine, the requirements are similar:
- The previous state root has been correctly recognized (validated by a validity proof or after the challenge period of a fraud proof).
- The input transactions are well-formed (with verifiable signatures, correct hashes, etc.).
- The EVM executed state transitions according to the specification, and the execution process strictly maintains consistency in reading and writing to prevent the insertion of extra opcodes.
Because the previous state root has been correctly recognized, Merkle proofs supplied as inputs can demonstrate that the storage values read during the state transition are correct. And because read/write consistency is enforced, the data used throughout execution is accurate.
The purpose of the circuit is to prove that the operation of this state machine aligns with expectations. By providing correct inputs to the circuit pins, the trace generated by the circuit can be encoded as a polynomial, often represented using the simplest form of arithmetic constraints, such as Rank-1 Constraint Systems (R1CS):

(𝐴 ⋅ 𝑧) ∘ (𝐵 ⋅ 𝑧) = 𝐶 ⋅ 𝑧

Here, 𝑧 is the input on the pins, 𝐴 represents the left input of the circuit, 𝐵 the right input, and 𝐶 the output (∘ denotes the entry-wise product). If this equation holds, all constraints are satisfied; in other words, the operation of the entire state machine aligns with expectations, L2 has correctly executed the outsourced computation, and the newly generated state root can be trusted and submitted to L1.
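A minimal sketch of checking R1CS satisfaction directly, using dense matrices over a small prime field purely for illustration (real systems use sparse matrices over a large field, and a prover never reveals the full witness):

```rust
// Check (A·z) ∘ (B·z) = C·z entry-wise: the R1CS satisfiability condition.
// Arithmetic is mod a small prime for illustration only.
const P: u64 = 2_147_483_647; // Mersenne prime 2^31 - 1

fn mat_vec(m: &[Vec<u64>], z: &[u64]) -> Vec<u64> {
    m.iter()
        .map(|row| row.iter().zip(z).fold(0u64, |acc, (a, b)| (acc + a * b) % P))
        .collect()
}

fn r1cs_satisfied(a: &[Vec<u64>], b: &[Vec<u64>], c: &[Vec<u64>], z: &[u64]) -> bool {
    let (az, bz, cz) = (mat_vec(a, z), mat_vec(b, z), mat_vec(c, z));
    az.iter().zip(&bz).zip(&cz).all(|((x, y), w)| (x * y) % P == *w)
}

fn main() {
    // One constraint encoding x * x = y, with pin vector z = (1, x, y) = (1, 3, 9).
    let a = vec![vec![0, 1, 0]]; // selects x
    let b = vec![vec![0, 1, 0]]; // selects x
    let c = vec![vec![0, 0, 1]]; // selects y
    let z = vec![1, 3, 9];
    assert!(r1cs_satisfied(&a, &b, &c, &z));
    println!("constraints satisfied");
}
```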
There are various methods to verify polynomial satisfaction, such as zero checks based on polynomial division or sum-checks over multilinear extensions (MLEs).
Validating the correctness of inputs to the state machine can be complex. For instance, verifying signatures requires ECDSA circuit constraints. However, ECDSA involves hash functions that rely on bitwise operations. Integrating these non-ZK-friendly hash functions directly into the main circuit could result in a significant increase in circuit size and slower proof generation speed.
To address these challenges, Lookup Tables are used: implemented as sub-circuits, they constrain the correctness of specific operations. This approach avoids placing non-ZK-friendly components, and the overhead they would bring, directly in the main circuit.
ZkEVM relies heavily on various Lookup Tables to accommodate these challenges, including but not limited to:
- Fixed Table: Used to constrain bitwise operations and range checks.
- Bytecode Table: Dependent on the keccak table, it ensures that the currently executed opcode indeed belongs to a specific bytecode segment.
- ECDSA Signature Table: Constrained to validate the correctness of transaction signatures.
- Keccak Table: Ensures the correctness of hashes, such as codehash, blockHash, txHash, and MPT (Merkle Patricia Trie) hashes.
- RW (Read/Write) Table: Ensures the consistency of reading and writing operations for stack, memory, and storage.
- MPT Table: Ensures that a Merkle proof, along with its corresponding value, matches the Merkle tree root.
By using Lookup Tables and sub-circuits, ZkEVM can overcome the challenges associated with verifying complex inputs, all while preventing the incorporation of non-ZK-friendly components and managing potential overhead.
The task of verifying the execution process of the state machine is indeed intricate but provides a clearer understanding of the operation. To illustrate this, let’s take the example of verifying the opcode ADD:

Figure 3: Add opcode circuit
- First, we need to verify that the opcode belongs to the bytecode segment [𝑝𝑐,𝑝𝑐+1] of the currently executing contract. This task can be accomplished by performing a lookup in the pre-generated bytecode table.
- We would check if opId is equal to OpcodeId(0x01) for ADD.
- Additionally, we would verify if the opcode at position pc in the bytecode table is ADD.
- Next, we need to ensure that after the opcode executes, the program counter (pc), stack pointer (stack_pointer), and gas counter (gas) are updated correctly.
- For instance, we would check if the new gas counter is the old gas counter plus 3, and similarly for other context variables.
- Finally, we need to verify that the opcode's execution process is correct.
- In the case of ADD, this involves operating on the stack: ADD pops the top two elements and pushes their sum back onto the stack. We need to verify that these stack operations align with expectations.
- This verification requires using lookup tables to ensure stack data read and write consistency.
Similarly, the reading and writing of Memory and Storage also need to be constrained using lookup tables to ensure consistency. The value read in the next operation should match the value written in the previous operation.
This verification process involves intricate steps to ensure the correctness of each operation and transition in the state machine execution. The use of lookup tables and sub-circuits aids in maintaining clarity while handling the complexity of these tasks. The provided image illustrates an example circuit for the opcode ADD verification process.
a) Verify that the opcode is ADD and belongs to the bytecode segment [pc, pc+1]:
- opId === OpcodeId(0x01) for ADD
- lookup(bytecode table, 1, pc) === ADD
b) Validate correct context switching:
- gc → gc + 3 (three read/write records)
- stack_pointer → stack_pointer + 1
- pc → pc + 1
- gas → gas + 3 (ADD costs 3 gas)
c) Confirm the accuracy of opcode execution:
- lookup(Stack, rw table, sp) == 40 (read first operand)
- lookup(Stack, rw table, sp+1) == 80 (read second operand)
- lookup(Stack, rw table, sp) == 120 (write the sum, 40 + 80)
A runnable sketch of these checks follows.
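In the sketch below, plain assertions stand in for circuit constraints and hash maps stand in for the bytecode and rw tables; all names (`Step`, `verify_add`, the slot numbers) are illustrative, not a real zkEVM API.

```rust
// Runnable sketch of the ADD checks above: assertions replace constraints,
// HashMap lookups replace the bytecode and rw tables.
use std::collections::HashMap;

const OP_ADD: u8 = 0x01;

struct Step {
    pc: u64,            // program counter
    gas: u64,           // cumulative gas counter
    gc: u64,            // global read/write counter
    stack_pointer: u64, // current stack top slot
}

fn verify_add(
    bytecode: &HashMap<u64, u8>,
    rw: &HashMap<(u64, u64), u64>,
    cur: &Step,
    next: &Step,
) {
    // a) The opcode at pc must be ADD (bytecode-table lookup).
    assert_eq!(bytecode[&cur.pc], OP_ADD);
    // b) Context switching: 3 rw records, stack shrinks by one slot, pc and gas advance.
    assert_eq!(next.gc, cur.gc + 3);
    assert_eq!(next.stack_pointer, cur.stack_pointer + 1);
    assert_eq!(next.pc, cur.pc + 1);
    assert_eq!(next.gas, cur.gas + 3); // ADD costs 3 gas
    // c) Execution: read both operands, then write their sum (rw-table lookups).
    let a = rw[&(cur.gc, cur.stack_pointer)];
    let b = rw[&(cur.gc + 1, cur.stack_pointer + 1)];
    let sum = rw[&(cur.gc + 2, next.stack_pointer)];
    assert_eq!(sum, a + b);
}

fn main() {
    let bytecode = HashMap::from([(0u64, OP_ADD)]);
    // (gc, stack slot) -> value: read 40, read 80, write 120.
    let rw = HashMap::from([((0, 1022), 40u64), ((1, 1023), 80), ((2, 1023), 120)]);
    let cur = Step { pc: 0, gas: 0, gc: 0, stack_pointer: 1022 };
    let next = Step { pc: 1, gas: 3, gc: 3, stack_pointer: 1023 };
    verify_add(&bytecode, &rw, &cur, &next);
    println!("ADD step verified");
}
```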
Our Rollup relies on the ProtoStar proof system, which offers an exceptionally efficient folding scheme with minimal overhead and supports NIVC (Non-Uniform IVC). As a result, there’s no longer a need to switch circuits to reduce circuit sizes. Additionally, ProtoStar provides highly efficient lookup arguments.
The ZkEVM’s circuit is notably intricate, often necessitating the utilization of two key features from the Plonk protocol:
a) Custom Gates: These specialized circuit components perform specific computations beyond the standard logic gates, enhancing the flexibility of the circuit design.
b) Lookup Arguments: These enable efficient referencing of external data within the circuit, optimizing its efficiency.
ProtoStar effectively supports both of these features, making it the most fitting ZK Proof System for ZkEVM at present.

Figure 4: NIVC
The folding process of ProtoStar can also be parallelized, enabling multiple provers to fold different instances concurrently. These instances are subsequently merged into a single instance, as illustrated in Figure 5.

Figure 5: ProtoStar parallelization
Given that ProtoStar's engineering implementation is still maturing, our team intends to contribute code to foster its development. This collaborative effort aims to accelerate the advancement of ProtoStar, ultimately benefiting the entire ecosystem.
The entire ZkEVM circuit is composed of the following components, with the circuit’s public inputs being:
- Execution Trace
- Access List
- Merkle Proof
- Block Header
- Transactions
- Contract Bytecode
The execution trace encompasses the execution process of all opcodes. The access list includes the addresses that the transaction is about to modify, along with the corresponding account proofs. If the address is a contract account (CA), the access list also contains all storage slots that will be modified within that CA, along with their respective storage proofs.
The contract bytecode records the bytecode of all contracts involved in the transaction. The block header includes the previous block and the current block’s corresponding header. Transactions encompass all the transactions that drive this particular state transition.
The main circuits are:
- EVM Circuit
- State Circuit
- Aggregation Circuit (FFlonk)
The State Circuit provides data to the EVM Circuit through bus mapping and ensures the consistency of Stack, Memory, and Storage read/write operations. The Aggregation Circuit includes a built-in ProtoStar verifier, which converts ProtoStar proofs into FFlonk proofs; the proof size approaches that of Groth16 while on-chain verification costs remain lower than Groth16's.
The supplementary circuits include:
- MPT Circuit
- Keccak Circuit
- Public Input Circuit
- RLP Circuit
- Tx Circuit
- Bytecode Circuit
These circuits are used to constrain the lookup tables and will not be embedded within the main circuit.

Figure 6: Circuit Architecture
ProtoStar is based on Plonk, and in the UltraPlonk arithmetization the circuit is primarily represented as a large matrix, whose rows and columns correspond to steps and inputs, respectively. ZkEVM relies only on the succinctness property of zero-knowledge proofs, so all inputs are public, including:
- Fixed inputs
- Instance inputs
- Advice inputs

Figure 7: Circuit arithmetisation
The Prover will populate this large matrix with the generated public inputs, arranging them from top to bottom for each execution step. An individual execution step can represent the execution of an opcode, as well as operations like reading/writing balances, nonces, and code hashes.
The design of the EVM is dated and has several flaws, such as:
- 1.Being based on 256-bit words rather than the 64-bit words native to contemporary CPUs.
- 2.Lack of separate storage area for function local variables, leading to potential “stack too deep” issues.
- 3.JUMPDEST making jump table optimization more challenging.
There are also some non-zero-knowledge (zk) friendly design aspects:
- 1.Non-use of zk-friendly hashes.
- 2.Having over 100 opcodes, contrary to RISC design principles and resulting in a larger circuit.
To partially address the shortcomings of the EVM and achieve greater scalability, we intend to build a new Virtual Machine (VM) based on the EVM instruction set rather than one that is entirely EVM-identical. This new VM will have the following features:
- 1.Close to 100% compatibility with EVM opcodes.
- 2.Improvements to non-zk-friendly opcodes like bitwise operations, keccak, etc.
- 3.Adjustment of the word size from 256 bits to 64 bits.
- 4....

Figure 8: zkVM comparison
The performance of a zkVM is typically determined by the speed of the ZK proof system and by trace generation in the VM. An IVC-based VM lets us execute larger program code while generating traces and proofs segment by segment, greatly enhancing performance.
Jolt and Lasso form a novel VM technology. Jolt avoids crafting circuits for small opcodes and instead uses lookups to fetch results directly; Lasso ensures the correctness of lookup arguments while supporting very large tables. As a simple example, for the ADD opcode we can retrieve the result with a single table query instead of writing three constraints to simulate its execution. This improves both the performance and the security of the entire system. Auditing a zkVM is usually complex: each opcode requires dozens to hundreds of lines of circuit code, and a single missing constraint can compromise the security of the whole system. Using Jolt replaces the intricate, opcode-by-opcode execution circuits with auditable lookup tables, simplifying the system and strengthening its security. A toy sketch of the lookup idea follows Figure 9.

Figure 9: Jolt & Lasso
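The toy sketch below decomposes operands into 8-bit chunks and answers each chunk addition from a precomputed table instead of arithmetic constraints. The real Jolt/Lasso machinery commits to enormous structured tables with sparse polynomial arguments; this only illustrates the "query instead of constrain" principle, with all names invented for the example.

```rust
// Toy Jolt-style lookup: 8-bit chunked addition answered from a table.

// Table entry: index (a, b, carry_in) -> (sum, carry_out), precomputed once.
fn build_add8_table() -> Vec<(u8, u8)> {
    let mut t = vec![(0u8, 0u8); 256 * 256 * 2];
    for a in 0..=255u16 {
        for b in 0..=255u16 {
            for cin in 0..=1u16 {
                let s = a + b + cin;
                t[(a as usize) << 9 | (b as usize) << 1 | cin as usize] =
                    ((s & 0xff) as u8, (s >> 8) as u8);
            }
        }
    }
    t
}

// 32-bit ADD via four chained table lookups instead of arithmetic constraints.
fn add32_via_lookup(table: &[(u8, u8)], x: u32, y: u32) -> u32 {
    let (mut carry, mut out) = (0u8, 0u32);
    for i in 0..4 {
        let (a, b) = ((x >> (8 * i)) & 0xff, (y >> (8 * i)) & 0xff);
        let (s, c) = table[(a as usize) << 9 | (b as usize) << 1 | carry as usize];
        out |= (s as u32) << (8 * i);
        carry = c;
    }
    out
}

fn main() {
    let table = build_add8_table();
    assert_eq!(add32_via_lookup(&table, 40, 80), 120);
    assert_eq!(add32_via_lookup(&table, u32::MAX, 1), 0); // wrapping, like the EVM
    println!("lookup ADD ok");
}
```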
ParaX VM is not only positioned within the realm of EVM but also as a high-performance and generalized Virtual Machine. It can be utilized not only for zkRollup but also for various generalizable verifiable computations, such as:
- 1.Expanding the computational capabilities of Solidity contracts (e.g., utilizing historical data, executing complex computations).
- 2.Providing cloud platforms with verifiable computation capabilities similar to AWS and GCP.
- 3.Verifying the training data of medium-sized ML models (zkML).
- 4.Delegated computation.
- 5.Creating appchains akin to dYdX and zkLink.
Of course, to fully realize this vision, it would be necessary to support general ISAs (Instruction Set Architectures) such as RISC-V, WebAssembly, MIPS, etc. This would enable programs written in languages like C/C++, Rust, and Go to be compiled into this instruction set.
After the EVM version of ParaX VM goes live, the experience gained will be used to create a zkVM with a general, streamlined instruction set. While still based on NIVC, this zkVM will prioritize high performance and low memory usage as its primary implementation goals, moving away from EVM compatibility.
We believe this market falls under the category of "future computation," and it has substantial potential. Zero-knowledge (zk) protocols remove the need for trust assumptions, allowing us to place confidence in future cloud platforms. However, current solutions like risc0 and zk-WASM still resemble CPUs from years ago: they can only execute programs of a few megabytes in size and come with significant memory consumption.
To enhance the speed of proof generation, in addition to software-level parallelism we plan to introduce hardware acceleration. In the v1 version, we intend to implement CUDA-based acceleration, focusing primarily on the multi-scalar multiplication (MSM) operation. For this purpose, we are considering algorithms such as:
- Pippenger
- Batch Affine
- GLV
A sketch of the Pippenger bucket method follows.
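In the minimal sketch below, i128 addition stands in for elliptic-curve point addition so the example runs anywhere; a real implementation operates on curve points (e.g., BN254 or BLS12-381) and parallelizes across windows, which is exactly what GPU/FPGA backends accelerate.

```rust
// Pippenger bucket method for MSM: compute sum_i scalar_i * point_i using
// mostly group additions. i128 addition is a stand-in for point addition.

const WINDOW_BITS: u32 = 8;
const NUM_BUCKETS: usize = (1 << WINDOW_BITS) - 1; // chunk value 0 is skipped

fn msm_pippenger(scalars: &[u64], points: &[i128]) -> i128 {
    let windows = (64 + WINDOW_BITS - 1) / WINDOW_BITS;
    let mut acc = 0i128;
    // Process scalar windows from most to least significant.
    for w in (0..windows).rev() {
        // "Double" the accumulator WINDOW_BITS times.
        for _ in 0..WINDOW_BITS {
            acc += acc;
        }
        // Throw each point into the bucket named by its scalar's current window.
        let mut buckets = vec![0i128; NUM_BUCKETS];
        for (s, p) in scalars.iter().zip(points) {
            let chunk = ((s >> (w * WINDOW_BITS)) & ((1 << WINDOW_BITS) - 1)) as usize;
            if chunk != 0 {
                buckets[chunk - 1] += p;
            }
        }
        // Running-sum trick: compute sum_j j * bucket_j with only additions.
        let (mut running, mut window_sum) = (0i128, 0i128);
        for b in buckets.iter().rev() {
            running += b;
            window_sum += running;
        }
        acc += window_sum;
    }
    acc
}

fn main() {
    let scalars = [3u64, 70_000, 5];
    let points = [10i128, 20, 30];
    let expected: i128 = scalars.iter().zip(&points).map(|(s, p)| *s as i128 * p).sum();
    assert_eq!(msm_pippenger(&scalars, &points), expected);
    println!("msm ok: {}", expected);
}
```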
Furthermore, we are contemplating collaboration with hardware acceleration vendors like Cysic and Jump to integrate their FPGA and ASIC hardware acceleration solutions. This strategy aims to leverage specialized hardware capabilities to further boost the efficiency and speed of the proof generation process. This combination of software and hardware optimizations can greatly enhance the overall performance of our ZkEVM system.
Below we introduce a decentralized perpetual exchange built on ParaVM:

Figure 10: Architecture of Decentralized Perp Exchange
- 1.An order book abstraction built on a KV database.
- 2.A BFT consensus protocol in which, in each round, the leader node generates a proof that the following nodes verify.
- 3.Improved ZK circuits for trading, with special-purpose circuits designed to speed up trading.
[KS]. Abhiram Kothapalli, Srinath Setty. HyperNova: Recursive arguments for customizable constraint systems. https://eprint.iacr.org/2023/573.
[CBDZ]. Binyi Chen, Benedikt Bünz, Dan Boneh, Zhenfei Zhang. HyperPlonk: Plonk with Linear-Time Prover and High-Degree Custom Gates. https://eprint.iacr.org/2022/1355.
[BC]. Benedikt Bünz, Binyi Chen. ProtoStar: Generic Efficient Accumulation/Folding for Special Sound Protocols. https://eprint.iacr.org/2023/620.
[GW]. Ariel Gabizon, Zachary J. Williamson. fflonK: a Fast-Fourier inspired verifier efficient version of PlonK. https://eprint.iacr.org/2021/1167.