Rage against the (EV)Machine part 2: gas costs and optimizations

June 4, 2020

Originally published on Aragon One Official Blog


The amount of gas consumed by a Solidity contract is of paramount importance, as it determines the price users will pay for interacting with it. It's quite a broad topic, and can be considered a particular case of an even broader one in computer science: optimization.

There are many techniques to minimize gas usage, like packing variables smaller than one word (32 bytes) together in structs (to save SSTOREs) or caching storage variables accessed more than once in a function (to save SLOADs). If you have previous experience or intuition about optimization, it will be very useful, but take gas prices into account and make sure they are up to date. As you can see in that table, optimizing the usage of SSTOREs is one of the first things you should probably try. Also, bear in mind that gas prices don't accurately reflect real CPU (or GPU, or ASIC, or whatever) usage, so don't let your previous experience betray you.

Anyway, trying to write a comprehensive guide about how to optimize would exceed the scope of a blog post. In this article, we are going to discuss a particular optimization we did while working on sortition for jury drafting in the Aragon Court. It may be the case that you never have to apply it in your own code, but it’s a good trick to carry around anyway.

Some high-level knowledge about how the Court works is assumed for a better understanding of this article. You can refer either to the white paper or to the blog post about v1 technical details.


The problem of choosing non-ordered elements of a list proportionally to their weight is well formulated in this article about creating a Work token, where some solutions are also proposed. In essence, on each (non-final) round of a dispute, we need to randomly choose a set of jurors from the total number of jurors available, proportionally to the number of tokens they have. Let's visualize jurors as adjacent segments on a line, where the length of each segment represents the number of tokens in a juror's balance. Then we randomly choose a number, and the segment that number falls into determines the selected juror.
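As a toy illustration (in Python rather than Solidity; on-chain, the random number `r` would come from the draft seed), the segment walk looks like this:

```python
def pick_juror(stakes, r):
    """Walk the adjacent segments from zero and return the index of the
    juror whose segment contains r, with 0 <= r < sum(stakes). O(n) reads."""
    acc = 0
    for i, stake in enumerate(stakes):
        acc += stake
        if r < acc:
            return i
    raise ValueError("r is out of range")
```

With stakes `[100, 20, 15, 35]`, any `r` below 100 selects the first juror, while `r = 150` lands in the last segment, [135, 170).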

As you can read in the article linked above, this is not an easy problem to solve efficiently. The most naive approach would be a linear search from the beginning (zero), but that has a complexity of O(n). The next natural idea would be to keep an ordered list of the jurors' cumulative stakes, so we could use binary search and achieve logarithmic search time; but then insertions and updates would become much more expensive, since every cumulative sum after the modified juror would need to be rewritten. Although we can foresee that searches will happen way more often than insertions or updates, we must take into account that writing (SSTORE) is way more expensive than reading (SLOAD) in the EVM, so, again, this doesn't seem like a good path to follow either.
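A Python sketch of that idea (prefix sums plus binary search; the names are ours):

```python
import bisect
from itertools import accumulate

def build_prefix(stakes):
    """Cumulative sums: prefix[i] is the right edge of juror i's segment."""
    return list(accumulate(stakes))

def pick_juror_bisect(prefix, r):
    """O(log n) search for the segment containing r, 0 <= r < prefix[-1]."""
    return bisect.bisect_right(prefix, r)
```

The catch is on the write path: changing juror i's stake forces rewriting every prefix from i onwards, O(n) SSTOREs in the worst case, which is exactly what makes this approach unattractive.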

Luckily, we found a better way to solve this in this article by Kleros. The solution is presented as "a sum tree": we build a tree where each leaf represents a juror's stake, and each internal node contains the sum of all the nodes below it.

You can see an example of such a tree below, with 4 jurors holding 100, 20, 15, and 35 tokens, and the partial sums in the intermediate nodes. The root node holds the total, 170.
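A minimal Python sketch of such a K-ary sum tree (a flat-array layout of our own choosing, not the Court's exact implementation):

```python
class SumTree:
    """Complete K-ary sum tree over a fixed number of leaves: leaves hold
    juror stakes, each internal node holds the sum of its children."""

    def __init__(self, num_leaves, k=2):
        self.k = k
        size = 1
        while size < num_leaves:              # round up to a power of k
            size *= k
        self.offset = (size - 1) // (k - 1)   # number of internal nodes
        self.nodes = [0] * (self.offset + size)

    def set(self, leaf, value):
        """Write a leaf and refresh every sum up to the root: O(log_k n)."""
        i = self.offset + leaf
        delta = value - self.nodes[i]
        while True:
            self.nodes[i] += delta
            if i == 0:
                return
            i = (i - 1) // self.k             # parent in the k-ary heap layout

    def sortition(self, r):
        """Descend from the root picking the child whose subtree contains r,
        with 0 <= r < total. Reads up to k sums per level: O(k * log_k n)."""
        i = 0
        while i < self.offset:
            for c in range(self.k * i + 1, self.k * (i + 1) + 1):
                if r < self.nodes[c]:
                    i = c
                    break
                r -= self.nodes[c]
        return i - self.offset
```

For the 4-juror example above, the root ends up at 170, and r = 130 descends into the third segment, [120, 135).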

With this data structure, the complexity of sortition becomes O(K * log N / log K), where K is the number of children of each node in the tree and N is the number of leaves (jurors, in this case). For writing operations, the complexity becomes O(log N / log K). Therefore, increasing K makes sortition more expensive while making the addition and removal of jurors cheaper.
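To make the trade-off concrete, here is a rough count of storage accesses per operation (our own back-of-the-envelope formulas, assuming a complete tree and worst cases):

```python
import math

def sortition_reads(n, k):
    """Child sums inspected during one descent: up to k per level."""
    return k * math.ceil(math.log(n, k))

def update_writes(n, k):
    """Nodes rewritten when a single leaf changes: one per level plus the root."""
    return math.ceil(math.log(n, k)) + 1

# For n = 10_000 jurors: k = 2 needs ~28 reads per draw and 15 writes per
# update, while k = 16 needs ~64 reads per draw but only 5 writes per update.
```

Since an SSTORE costs an order of magnitude more gas than an SLOAD, cutting writes from 15 to 5 can be worth the extra reads, depending on how often each operation happens.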

As pointed out above, reading (SLOAD) is cheaper than writing (SSTORE), but we also expect reads to happen much more often. We have done some rough calculations, and K = 16 looks like a good trade-off; it's the number we are currently using in the Court. Once we have real data, we will be able to produce better estimates of how often each operation takes place in a regular scenario and come up with more fine-tuned numbers. There's another factor that tips the scales towards optimizing sortition, explained in the next section: checkpointing.

Of course, when working with algorithms, we firmly believe in the motto "you can do better," so we can already think of some paths to explore, like using Fenwick trees, which look promising, especially for low values of K:
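For intuition, here is a compact Python sketch of a Fenwick (binary indexed) tree supporting exactly the two operations we need, point updates and weighted selection (our own illustration, not code from the Court):

```python
class Fenwick:
    """Fenwick tree over juror stakes: O(log n) point updates and an
    O(log n) bit-by-bit descent for weighted selection."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)          # 1-indexed internal array

    def update(self, i, delta):
        """Add delta to the stake of juror i (0-indexed)."""
        i += 1
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i

    def select(self, r):
        """Return the juror whose segment contains r, 0 <= r < total."""
        pos = 0
        mask = 1 << self.n.bit_length()
        while mask:
            nxt = pos + mask
            if nxt <= self.n and self.tree[nxt] <= r:
                pos = nxt
                r -= self.tree[nxt]
            mask >>= 1
        return pos                          # 0-indexed juror
```

With the same 4 jurors as before, `select` behaves exactly like the sum-tree descent but touches at most about log2(n) nodes per draw.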


Now let’s add the time dimension for some more fun.

As you can imagine, the tree is quite dynamic: it changes over time as jurors come and go, activate and deactivate, earn rewards, and get slashed…

In all rounds but the last one, we can simply draft using the current term (the minimum time unit in the Court). There is still the problem of balance updates, which we defer to the next term to ensure consistency during the whole term (that's the point of having terms as the minimum time unit). We could achieve that using queues, as we did initially, which has some upsides and downsides; you can read our initial discussions and gas estimations in this GitHub issue of our repo.

But when you enter the final round, where there is no draft and all jurors are allowed to vote, you definitely need to keep track of historical values, what we call "checkpointing," like MiniMe tokens do, mostly to prevent double voting. For regular rounds, moving tokens around wouldn't give jurors any advantage, as it's a lottery: the chances you gain in the recipient account are symmetrically lost from the sender account. In final rounds, nonetheless, all jurors are allowed to vote, so we need to fix a term and use the balances at that term for all jurors. This prevents jurors from performing Sybil attacks, in which they could double vote by moving tokens from one account to another.
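The idea behind checkpointing can be sketched in a few lines of Python (in the spirit of MiniMe's balanceOfAt; this is our own simplification):

```python
import bisect

class CheckpointedBalance:
    """Append-only history of (term, value) pairs; reads binary-search
    for the last checkpoint at or before the requested term."""

    def __init__(self):
        self.history = []                      # terms strictly increasing

    def write(self, term, value):
        if self.history and self.history[-1][0] == term:
            self.history[-1] = (term, value)   # overwrite same-term entry
        else:
            self.history.append((term, value))

    def read_at(self, term):
        """Balance as of `term`; 0 before the first checkpoint."""
        i = bisect.bisect_right(self.history, (term, float("inf")))
        return self.history[i - 1][1] if i else 0
```

A juror moving tokens at term 5 still votes with the term-4 balance if the final round fixed term 4, so shifting stake between accounts cannot make it count twice.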


The animation in the section above, although not accurate (we don't actually make a complete copy of the tree each time there's an update), gives us the intuition that keeping track of balances over time has a significant impact on gas costs. Jorge already pointed it out when we first considered the idea of checkpointing:

I am particularly worried about the added SLOAD cost while traversing the tree performing a sortition, as for every node in the tree we’d need to get its checkpointed value (even though it is a binary search, it would increase the number of SLOADs by at least a factor of 2 and grows logarithmically with the number of balance updates that a juror has had).

Indeed, it turned out to be a huge problem, a deal-breaker. As you can see in this pull request, we were initially spending 1M gas for a 10-juror draft with big trees. That would mean using almost a whole block for a last regular round draft, which needs 81 jurors. We didn't want to be the next CryptoKitties clogging the network. Besides, it could mean that the business model for the Court might not be viable.

After several failed attempts (which you can check in the pull request mentioned above), we finally came up with the idea of multi-sortition. Essentially, it consists of descending the tree only once per batch, instead of once for each selected juror, and therefore visiting each intermediate node at most once, saving a lot of SLOADs. You can grasp the idea with these two animations:
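As a complement to the animations, here is a Python sketch of the batched descent over a toy nested-tuple tree (our own representation; the random values must be sorted and drawn in [0, total)):

```python
def build(stakes, k=2):
    """Build a k-ary sum tree bottom-up: leaves are (stake, juror_index),
    internal nodes are (subtree_sum, children)."""
    nodes = [(s, i) for i, s in enumerate(stakes)]
    while len(nodes) > 1:
        nodes = [(sum(c[0] for c in nodes[j:j + k]), nodes[j:j + k])
                 for j in range(0, len(nodes), k)]
    return nodes[0]

def multi_sortition(node, values):
    """Resolve a sorted batch of random values in a single descent, so each
    tree node is visited at most once per batch instead of once per value."""
    total, payload = node
    if isinstance(payload, int):          # leaf: it answers all its values
        return [payload] * len(values)
    out, offset, idx = [], 0, 0
    for child in payload:
        hi = offset + child[0]
        j = idx
        while j < len(values) and values[j] < hi:
            j += 1                        # values falling inside this child
        out += multi_sortition(child, [v - offset for v in values[idx:j]])
        idx, offset = j, hi
    return out
```

With stakes `[100, 20, 15, 35]`, the batch `[10, 110, 130, 150]` resolves to jurors `[0, 1, 2, 3]` while reading each internal node only once.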

Regular sortition: