Mappings in memory & function optimisation

Hey everyone,

First of all, thanks for all the effort put into the creation of this language, which gets better and better every year!

There are two things I would love to see improved, though :grin:

  1. Mappings in memory

Key-value stores are super useful for improving the time complexity of algorithms, which helps us save gas! Unfortunately, this saving is often cancelled out by the fact that mappings can only live in storage, making them expensive to manipulate…

I guess it is not straightforward to implement if it is not already, but allowing mappings in memory would be a great feature!
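To make the limitation concrete, here is a minimal sketch (the contract and function names are made up for the example): the deduplication pattern works today with a storage mapping, but every write is a storage operation, and the purely in-memory variant simply does not compile.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract Deduplicate {
    // Works today: the mapping lives in storage, but every write is an SSTORE,
    // and the entries persist between calls with no cheap way to clear them.
    mapping(uint256 => bool) private seen;

    function countUnique(uint256[] calldata values) external returns (uint256 count) {
        for (uint256 i = 0; i < values.length; i++) {
            if (!seen[values[i]]) {
                seen[values[i]] = true; // expensive storage write
                count++;
            }
        }
    }

    // Does not compile: mappings cannot use the memory data location,
    // so the O(1)-lookup trick cannot be kept purely in memory.
    // function countUniqueInMemory(uint256[] calldata values) external pure returns (uint256) {
    //     mapping(uint256 => bool) memory seenLocal; // error: data location must be "storage"
    //     ...
    // }
}
```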

  2. Function optimisation

You might know the two coding best practices SRP (Single Responsibility Principle) and DRY (Don't Repeat Yourself).

When you have a big chunk of code in a function, it is advised to break it into multiple smaller functions that each do only one thing. Also, if the same code appears in two or more functions, it is advised to move it into a single function and call it from each place.

Those improve readability and maintainability.

Unfortunately, in Solidity we tend not to apply them, in order to keep gas costs down.

Not sure how feasible it is, but it would be super useful to implement some kind of optimisation/transpilation at compile time that would let us create many small functions without affecting gas cost.

A naive way I can think of doing that would be to cut and paste the opcodes of private and internal functions into the functions that call them at compile time, removing the need for the JUMP opcode. The cases where this happens would of course need to be tuned depending on whether we want to optimise for deployment cost or runtime cost.
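To illustrate what that would enable, here is a hedged sketch (the contract, function names and fee numbers are invented): the SRP/DRY-friendly version keeps the shared logic in one small internal helper, and the idea is for the compiler to paste that helper's code into each caller so the nicer structure costs nothing at runtime.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract FeeSplitter {
    // DRY/SRP style: the shared fee logic lives in one small helper...
    function _applyFee(uint256 amount, uint256 feeBps) internal pure returns (uint256) {
        return amount - (amount * feeBps) / 10_000;
    }

    // ...and every caller reuses it. Without inlining, each call adds
    // JUMP/JUMPDEST overhead; with the suggested optimisation the helper's
    // opcodes would simply be copied into the callers at compile time.
    function quoteSwap(uint256 amount) external pure returns (uint256) {
        return _applyFee(amount, 30);
    }

    function quoteWithdrawal(uint256 amount) external pure returns (uint256) {
        return _applyFee(amount, 10);
    }
}
```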

Hopefully this feedback helps in some way!


Thank you for the suggestions!

Mappings are mainly possible in storage because of the special way storage is accessible in the EVM. I don’t think it would be a good idea to implement something like mappings in memory in the compiler - you would need to search through lists, cleverly allocate the space and potentially copy over if things grow too big. This is not something you want to have written in a low-level language. We are planning to add generics to Solidity, though, and at that point, it could be possible to implement it as a library.
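As a rough idea of what such a library could look like (the names and layout are purely illustrative, and without generics it only works for one fixed key/value type): a struct of parallel memory arrays with linear-search lookups. The fixed capacity and O(n) lookups are exactly the allocation and search problems mentioned above that make this hard to do well.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Naive illustration only: a fixed-capacity uint256 => uint256 "map" held
// entirely in memory, with linear search for lookups. A real library would
// need smarter allocation and growth handling (and generics to support
// arbitrary key/value types).
library MemoryMap {
    struct Map {
        uint256[] keys;
        uint256[] values;
        uint256 length;
    }

    function create(uint256 capacity) internal pure returns (Map memory m) {
        m.keys = new uint256[](capacity);
        m.values = new uint256[](capacity);
    }

    function set(Map memory m, uint256 key, uint256 value) internal pure {
        // Overwrite an existing key if present...
        for (uint256 i = 0; i < m.length; i++) {
            if (m.keys[i] == key) {
                m.values[i] = value;
                return;
            }
        }
        // ...otherwise append, as long as there is room left.
        require(m.length < m.keys.length, "map full");
        m.keys[m.length] = key;
        m.values[m.length] = value;
        m.length++;
    }

    function get(Map memory m, uint256 key) internal pure returns (bool found, uint256 value) {
        for (uint256 i = 0; i < m.length; i++) {
            if (m.keys[i] == key) {
                return (true, m.values[i]);
            }
        }
        return (false, 0);
    }
}
```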

Function optimization is one of the main things we have been working on over the past years. We don’t do it at the level of Solidity functions, but at the level of Yul functions. To get the full benefit of this, you should compile via Yul instead of via the opcode-based code generator. You can try it out using `solc --experimental-via-ir` - we would love to hear your feedback about gas values and opportunities for optimization! I could write some more paragraphs about that, but since we will release a new version of the compiler this week that contains some improvements in that area, I would rather advise you to read the related blog posts.
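If you want to compare the two pipelines yourself, a minimal setup could look like the sketch below (the contract name is invented; besides `--experimental-via-ir`, only standard `solc` output options are used): compile the same small contract once with the legacy code generator and once via Yul, then compare the resulting bytecode and gas numbers.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Counter.sol - a tiny contract to compare the two code generators with:
//   Legacy pipeline: solc --optimize --bin Counter.sol
//   Via Yul:         solc --experimental-via-ir --optimize --bin Counter.sol
contract Counter {
    uint256 public count;

    // A thin internal helper plus two wrappers around it - the kind of
    // structure the Yul-level optimizer is intended to inline away.
    function _step(uint256 by) internal {
        count += by;
    }

    function increment() external {
        _step(1);
    }

    function incrementBy(uint256 by) external {
        _step(by);
    }
}
```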