Feedback wanted: Modifying Overload Resolution

In Solidity, it is valid to define multiple functions with the same name but different parameter types. The problem is that in some situations, implicit conversions make it impossible to call such functions:

function f(uint64) pure { /* ... */ }
function f(uint32) pure { /* ... */ }

function test() pure {
    uint16 a = 2;
    f(a); // error: `a` is implicitly convertible to both uint32 and uint64
}

The call f(a) results in a compiler error: overload resolution first collects the set of candidate functions purely by name, then eliminates all candidates that cannot be called with the given argument types. If more than one function is left (or none at all), it is not clear which function to call.

In this case, the uint16 value a can be converted to both uint32 and uint64.

This effectively makes f(uint32) impossible to call at all, because any type that is implicitly convertible to uint32 is also implicitly convertible to uint64.

Another prominent example is having library functions that take “bytes memory” and “bytes storage” parameters.
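
A minimal sketch of that case (the library name Bytes and the function process are made up for illustration):

library Bytes {
    function process(bytes storage) internal { /* ... */ }
    function process(bytes memory) internal { /* ... */ }
}

contract C {
    bytes data;

    function g() public {
        // Ambiguous today: `data` matches the storage overload exactly,
        // but is also implicitly convertible (by copying) to bytes memory.
        Bytes.process(data);
    }
}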

We would like to improve this situation. There are at least two ways to do so:

  1. Disallow such declarations altogether: reject any overload set in which one function can never be called because every call to it would leave at least two viable candidates. This might be problematic because of the using statement, which can import multiple functions from multiple places even though you do not really want to use all of those functions.

  2. Make such function calls work by modifying the overload resolution logic as follows:
    If the set of candidate functions contains one whose parameter types exactly match the argument types, call that one. Otherwise, perform overload resolution as before: check whether there is a single function that can be called, taking implicit conversions of the arguments into account.

This way, it would be possible to call the desired function above using f(uint32(a)) or f(uint64(a)).
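
A sketch of how this plays out, reusing f and a from the example above:

function test() pure {
    uint16 a = 2;
    f(uint64(a)); // unambiguous even today: only f(uint64) is viable
    f(uint32(a)); // ambiguous today, since uint32 also converts to uint64;
                  // under option 2, the exact match f(uint32) would be called
}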

A third option would be as follows: If there are multiple functions with the same name, always fail unless the argument and parameter types match exactly for one function.

The first and the third option are breaking changes because they can change the behaviour or validity of contracts that currently compile. The second option appears to be non-breaking, because it only makes more contracts valid.

What do you think? What do you use overloads for? Would you find this feature useful or should it rather be restricted?

Thanks a lot for your feedback!

Option 2 sounds good to me. It’s the behavior I expected when I first ran into this issue. It also aligns with minimizing breaking changes in the language and making them only when they provide significant value, which I don’t think would be the case here.

That said, I think it’s problematic to encourage casting numbers, because downcasting can silently overflow. There is an issue about adding overflow checking to casts: #10195.
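
For example, an explicit downcast truncates silently instead of reverting:

uint256 big = 70000;
uint16 small = uint16(big); // small == 4464 (70000 mod 2**16); no error, no revert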


Considering the described challenge, option 2 seems preferable to me, mostly for backwards compatibility of existing code.
I use overloading quite a lot, but I avoid implicit conversions in my own code. Instead, I use overloads with parameter lists that leave no doubt about which function is meant to be invoked. I understand that importing other people’s code makes this hard to control.

Option 2 sounds reasonable to me. As a developer, it’s fine to cast the arguments in a call when that is needed to choose one of several overloads.

The only time I have run into this issue is when multiple libraries support the same operations on different types, such as mulr(uint256, uint256) and mulr(uint128, uint128). Option 2 would have worked just fine for me, as in the sketch below.
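
A sketch of that situation (mulr stands in for a rounding multiplication; the bodies are omitted):

function mulr(uint256, uint256) pure returns (uint256) { /* ... */ }
function mulr(uint128, uint128) pure returns (uint128) { /* ... */ }

function h(uint128 x, uint128 y) pure {
    // Ambiguous today: the uint128 arguments also convert to uint256.
    // Under option 2, the exact match mulr(uint128, uint128) would be chosen.
    mulr(x, y);
}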

Related specifically to this, I would very much like to see explicit imports like TypeScript’s, where you don’t get wildcard imports by default. Similarly, I would like to see explicit using statements:

import { SafeUnsignedMath } from "SafeMath.sol";
using { add } from SafeUnsignedMath for uint256;
using { add } from SafeSignedMath for int256; // error: you didn't explicitly import SafeSignedMath
uint256 a = 5;
a.sub(5); // error: you didn't explicitly import sub from anywhere

Explicit imports have been available since the beginning, and a change to the “using” directive is being discussed here: Extend "using for" · Issue #9211 · ethereum/solidity · GitHub

Your example could be implemented as “using SafeUnsignedMath.add for uint256;”. I don’t think that, at this stage, it makes sense to distinguish libraries from modules, or internal library functions from free functions, for this purpose.
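
Concretely, with the file and symbol names from your example:

import { SafeUnsignedMath } from "SafeMath.sol"; // explicit import, supported today
using SafeUnsignedMath.add for uint256;          // suggested syntax, under discussion in issue #9211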


Option 2, although I prefer option 3, because I think developers must know what they write and what they want to achieve at all times. Ideally, a call should either clearly succeed or clearly fail, and definitely not the third case of succeeding indirectly: unanticipated successes may lead to huge problems. But option 3 would bring breaking changes, so for backwards compatibility, option 2 is probably much better.

Why is backwards compatibility so important that we are willing to sacrifice clarity and security for it?

@axic Can you explain how option 2 sacrifices those things?

Backwards compatibility is important because breaking changes place a lot of strain on the ecosystem. Tooling and Solidity libraries need to catch up with a new incompatible version, which may force them to break backwards compatibility themselves. Solidity developers will remain on previous versions because they already have a working setup and see no significant value in upgrading, and they may then be left out of tooling and library improvements because the newer ecosystem is incompatible with their code. I’m somewhat exaggerating, but all of this happens to some degree and should be taken seriously.

This doesn’t mean everything should always be backwards compatible, but it should be preferred.
