Rollup Plasma for mass exits & complex disputes


#1

Credit to @ben-chain for much of this! Also @barrywhitehat & V for discussions which led to these ideas.

As it turns out, the same scalability techniques which apply to rollup chains (as introduced in this post) also apply to plasma. The difference is that instead of enforcing computation with zkProofs, we use exit games for computational enforcement. This gives us plasma scalability in the optimistic case; pessimistically, we fall back to rollup-level TPS without the use of zkProofs.

We can estimate the gas savings of somewhat optimized exits, aka checkpoints (more info here), with the following python script:
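A minimal sketch of such a script, assuming an 8M gas block limit, 68 gas per calldata byte (pre-Istanbul pricing), ~14 second blocks, keccak256 for Merkleization, and a ~64-byte checkpoint record; these constants are assumptions, so the outputs will differ slightly from the figures below:

    # Rough gas model for batched exits/checkpoints posted as calldata.
    GAS_PER_BLOCK = 8_000_000      # assumed block gas limit
    GAS_PER_CALLDATA_BYTE = 68     # pre-Istanbul non-zero-byte cost
    BLOCK_TIME = 14                # seconds, approximate

    def keccak_gas(num_bytes):
        # keccak256 costs 30 gas plus 6 gas per 32-byte word
        return 30 + 6 * ((num_bytes + 31) // 32)

    def estimate(bytes_per_exit):
        # calldata cost, plus one leaf hash per exit, plus roughly one
        # amortized 64-byte internal-node hash per exit (a binary Merkle
        # tree over n leaves has n - 1 internal nodes)
        gas_per_exit = (bytes_per_exit * GAS_PER_CALLDATA_BYTE
                        + keccak_gas(bytes_per_exit)
                        + keccak_gas(64))
        exits_per_block = GAS_PER_BLOCK / gas_per_exit
        print('total exits in block:', exits_per_block)
        print('gas per exit:', gas_per_exit)
        print('avg exits per second', exits_per_block // BLOCK_TIME)

    estimate(64)   # assumed ~64-byte exit/checkpoint record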

Gas estimates for this ‘batch exit/checkpoint’ are:

total exits in block: 1801.354401805869
gas per exit: 4441.102756892231
avg exits per second 128.0

With this we’re already in the realm of zkRollup chains in the pessimistic case. However, we can optimize further if we want to use this technique for complex dispute resolution in applications that seemed infeasible on plasma (e.g. Uniswap) but work fine on rollup chains.

In these cases we can deduplicate the stateObjects, resulting in savings of ~2x. Further savings may come from deduplicating blockNumber, reducing range bytes, & including a 4-byte deposit registry. With this we may be able to get down to 20 bytes per exit/checkpoint.
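As an illustration, here is a hypothetical 20-byte packed encoding, assuming a 4-byte deposit registry index and two 8-byte range endpoints (the field widths are assumptions; the real layout would depend on the deduplication choices above):

    import struct

    def pack_checkpoint(deposit_id: int, start: int, end: int) -> bytes:
        # ">IQQ": big-endian 4-byte uint + two 8-byte uints = 20 bytes
        return struct.pack(">IQQ", deposit_id, start, end)

    record = pack_checkpoint(deposit_id=7, start=0, end=10_000)
    assert len(record) == 20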

Approximate results for this application specific optimized checkpoint:

total exits in block: 5596.072931276297
gas per exit: 1429.5739348370928
avg exits per second 399.0

This once again reorients plasma intuitions: we get plasma-level scale optimistically, and in the worst case we fall back to rollup scalability.


UPDATE:
For a clear example of the scalability benefit vs. traditional rollup: using the assumptions made in the original rollup post, we can achieve about 2x the TPS with this optimistic approach. The original post assumes the use of tricks which can get transaction size down to ~8 bytes. With 8-byte transactions, we can run the simulator and get these results:

total exits in block: 13081.967213114754
gas per exit: 611.5288220551379
avg exits per second 934.0
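In the gas sketch above, this case is just the same model evaluated at the smaller record size (again, the exact figures depend on the assumed constants):

    estimate(8)   # assumed ~8-byte transactions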

#3

Intriguing!

Still trying to grasp: for the pessimistic fall-back case, if we aren’t SNARKing, how does the TPS match rollup? Wouldn’t the operator, in this case, need to publish all of the data (including signatures)?


#5

Great question! I forgot to mention that this scheme requires optimistic exits & challenge inclusion. This way we don’t need to publish the signatures, because either (see the sketch after this list):

  1. The signature was generated off-chain by the user so they know that their exit / checkpoint is valid; or,
  2. The signature was never produced by the user, and so they know they will be able to challenge inclusion or challenge invalid history & keep their coins safe.
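A toy sketch of the resulting client-side check, assuming each user keeps their own off-chain signatures locally (should_challenge and my_signed_exits are hypothetical names):

    def should_challenge(published_exit, my_signed_exits):
        # Case 1: we generated this signature off-chain, so we already
        # know the exit/checkpoint is valid and nothing needs to be done.
        if published_exit in my_signed_exits:
            return False
        # Case 2: we never signed it, so we challenge inclusion (or
        # invalid history) to keep our coins safe.
        return True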

#6

What happens to coin histories in this case? Do they stay the same? Do we get improvements? Do we make things worse? Or does it not matter?


#7

If you roll up a large number of checkpoints, this removes the histories before those checkpoints for all of those coins. It’s pretty great!


#8

Just to add to this: one of the big ways rollup scales is that it sends ledger updates only as calldata (inputs to the main-chain function); from those, a calculated Merkle tree is the only thing stored on-chain. The way we achieve this scale here is that we post many checkpoints at once in the calldata, but only store a Merkle root of the checkpoints in the contract’s storage, instead of storing every in-progress checkpoint individually.
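A minimal sketch of that batching step, using Python’s sha256 as a stand-in for the contract’s hash function:

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        # hash each checkpoint into a leaf, then fold pairwise
        level = [h(leaf) for leaf in leaves]
        while len(level) > 1:
            if len(level) % 2:            # duplicate the last node on
                level.append(level[-1])   # odd-sized levels
            level = [h(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]

    # The full batch rides in as calldata; only this 32-byte root
    # would be written to the contract's storage.
    root = merkle_root([b"checkpoint-1", b"checkpoint-2", b"checkpoint-3"])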


#9

Thank you for sharing! Interesting!
I have two questions.

  • I suppose that once clients verify the state transitions of the Merkle tree, they don’t have to verify the same blocks again next time. Is that right?
  • Can we implement rollup on plasma as a predicate?

#10

It doesn’t look like the gas calculations take into account either Merkleization costs (they just hash each exit/checkpoint once?) or the cost of event logs. Or is the plan that every user runs an Ethereum full node?

What is the cost of challenging a checkpoint, compared to the cost of challenging a checkpoint in the usual way? I’d prefer the term “stateless” to “rollup”, to make it clearer what’s happening. At some point your stateless exits/checkpoints need to become stateful: you can’t have statelessness all the way down.


#11

I suppose that once clients verify the state transitions of the Merkle tree, they don’t have to verify the same blocks again next time. Is that right?

Not sure I understand the question fully. The process is the same as in our normal constructions. The difference is that after the checkpoints are passed to the function, only a Merkle root of the batch is stored on chain.

Can we implement rollup on plasma as a predicate?

The rollup techniques can absolutely be used in predicates, but if you were to build a rollup for mass exits/checkpoints, it would either need to live on a root Deposit contract, or on a predicate which is itself a nested Deposit contract.


#12

Ah, good catch! I handwaved it because it wouldn’t affect the total gas that much, but I’ve added it in & adjusted the estimates.


#13

Regarding checkpoints:
Actually, I’m not sure what the difference is between many checkpoints via a checkpoint predicate (I think it’s similar to a plasma-inside-plasma predicate) and many checkpoints via rollup.

In the former, we can make one large checkpoint for a range (but there must be no gaps, so all users in one large range should sign it), and this predicate’s state data is only a Merkle root. In the withholding case, users can “challenge inclusion” if they didn’t sign the transaction for the checkpoint predicate.
In the latter, can we make a large checkpoint with gaps in its range?

Maybe I’m missing something.

Of course, I think it’s very cool for complex dispute logic!


#14

Ah, sorry! I finally got what you mean. Users can not only startCheckpoint but also startExit and finalizeExit with a single Merkle root stored on chain plus calldata. It’s efficient. Let me change my question: in this case, must all owners in one large range sign? (If there is at least one user who didn’t sign, they can challenge the checkpoint.)

Added:
Ohh, users can tell whether their coins are included or not by watching the calldata. I’m sorry, it makes sense now. I should have read this more carefully. :sweat_smile:


#15

That’s right! Glad you figured it out; not surprised it was a bit difficult considering there’s no reference implementation / spec. Hopefully we’ll build this soon & get everything audited to make sure that there aren’t any issues that aren’t immediately obvious.


#16

There is one issue with this scheme which needs to be sorted out, and I’m not sure what the best way to do it is. The question is: how do we store challenges, deprecations, etc. when everything is compressed into this root?

I see a few options. The simplest is to treat the rolled-up exits/checkpoints as a single attestation, so that challenges and deprecations on any of the subranges apply to all of them. However, this would be problematic for exits, since any one of the users under the root could produce a signature and cancel the whole thing. It might be okay for checkpoints, though, if the users are willing to download the others’ history.

The other option is the stateless client approach, where we serve a Merkle proof down to a particular exit, record that it is challenged or deprecated, and then use the Merkle proof with the new leaf hash to produce a new root which replaces the old one in storage. However, this is very problematic because it introduces a race condition: each inclusion proof changes the root, making any others which were broadcast at the same time fail. It might be possible to do some sort of batching so that the race condition occurs only once per batch, but it’s unclear what the best way to handle that is (who does the batching?).
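A sketch of that root replacement, assuming sha256 and a proof given as (sibling_hash, sibling_is_left) pairs ordered from leaf to root (all names hypothetical):

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def replace_leaf(old_leaf: bytes, new_leaf: bytes, proof):
        # Walk one Merkle branch twice: once to check the old root,
        # once to compute the root after the leaf is marked deprecated.
        old, new = h(old_leaf), h(new_leaf)
        for sibling, sibling_is_left in proof:
            if sibling_is_left:
                old, new = h(sibling + old), h(sibling + new)
            else:
                old, new = h(old + sibling), h(new + sibling)
        # `old` must match the root in storage and `new` replaces it;
        # any other proof built against the old root now fails, which
        # is exactly the race condition described above.
        return old, new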


#17

The latter option seems attractive.

I’m not sure that this can solve the race condition, but how about separating challenges into two phases?

For example, the deprecation case:

  1. The challenger shows the deprecation, but the DepositContract doesn’t remove the exit (doesn’t replace the Merkle root); instead it stores it as a pendingDeprecation.
  2. Someone actually calls deprecateExit (which replaces the Merkle root) in a batch, by watching many instances of step 1’s calldata.

The incentive for step 2 comes out of the ExitBond.
Users can’t finalizeExit as long as there is at least one pendingDeprecation. (But we need an SSTORE for step 1.)
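A toy sketch of this two-phase flow (all names hypothetical; verification and bond accounting are stubbed out):

    # Phase 1 records pending deprecations without touching the root;
    # phase 2 lets anyone batch-apply them for a cut of the exit bonds.
    pending_deprecations = set()
    exit_root = b"initial-batch-root"

    def show_deprecation(exit_id):
        # phase 1: one SSTORE to record the pending deprecation,
        # but no Merkle-root replacement yet
        pending_deprecations.add(exit_id)

    def deprecate_exits(exit_ids, new_root):
        # phase 2: someone batches many phase-1 records (found in
        # calldata) into a single root replacement, paid from a
        # portion of each ExitBond
        global exit_root
        exit_root = new_root
        pending_deprecations.difference_update(exit_ids)

    def can_finalize_exit():
        # finalizeExit stays blocked while any deprecation is pending
        return not pending_deprecations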


#18

Oooh, interesting!

Users can’t finalizeExit as long as there is at least one pendingDeprecation

How would users prove that there are zero pending deprecations in order to finalizeExit?


#19

Hmm, this is the only thing off the top of my head: how about using a challenge counter for deprecation? However, deprecation then becomes expensive…


#20

I tried writing code for batch checkpoints as a single attestation, and for exits as the latter option Ben mentioned in this thread, by forking PG’s deposit contract.

