The assembly/Solidity distinction was a mistake: don't repeat it in Solidity Core!

I sincerely believe that the explicit distinction between inline assembly and “higher-level” code in Solidity was a mistake, and I see that mistake being repeated in some of the example snippets for Solidity Core on the blog. That is why I want to make my case for why it's a mistake and what I think you should aim for instead.

The Problems

Segregating language features between two scopes creates limitations on both sides. This makes coding unnecessarily annoying and/or forces the language itself to duplicate concepts to stay ergonomic, e.g.:

  • The ability to define, export, import and invoke functions
  • Control flow constructs (if-else-if conditional chains, switch, for-loops, while loops, do-while, etc.)
  • The ability to create abstractions over data via structs, custom types and associated methods (via using)
  • Access to low-level unsafe memory manipulation
  • Static typing
  • Access to non-uint256 constants

Furthermore, because of these differences, crossing the boundary becomes annoying: you have to reason about the differing semantics on each side (all variables are untyped u256s vs. statically typed).
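A minimal sketch of what crossing that boundary looks like in Solidity today (the library and function names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

library WordReader {
    // Illustrative example: reading a raw word out of a bytes array.
    function readWord(bytes memory data, uint256 offset)
        internal
        pure
        returns (bytes32 word)
    {
        assembly {
            // Inside the block every value is an untyped 256-bit word:
            // `data` degrades to a raw memory pointer, and the compiler's
            // type and bounds checks no longer apply.
            word := mload(add(add(data, 0x20), offset))
        }
    }
}
```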

Why It’s Not Necessary

The main things you can't do in high-level Solidity, and for which you currently need inline assembly, are:

  • low-level memory control
  • switch control flow statement
  • direct access to certain EVM opcodes/functionality (e.g. raw tstore, tload, byte opcode, log0-log4, etc.)
  • General optimization

All of this functionality could be exposed as builtins directly accessible from the high-level language, and would greatly benefit from it. This is what other mainstream systems programming languages like C, Rust or Zig do: low-level, close-to-the-metal primitives are primarily exposed and accessed via high-level wrappers.
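As a sketch of the difference, compare today's assembly-only access to transient storage with a hypothetical typed builtin (the `evm.tload` name is invented for illustration):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.24; // tload requires the Cancun EVM version

contract LockExample {
    bytes32 constant LOCK_SLOT = keccak256("lock");

    // Today: even a single transient-storage read forces an assembly block.
    function isLocked() public view returns (bool locked) {
        bytes32 slot = LOCK_SLOT;
        assembly {
            locked := tload(slot)
        }
    }

    // Hypothetical: with the opcode lifted into a typed builtin, the same
    // check could be plain high-level code, e.g.
    //     return evm.tload(LOCK_SLOT) != 0;
}
```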

For optimizations, most of what developers hand-roll is functionality the high-level language already provides but implements sub-optimally:

  • abi encoding
  • abi decoding
  • packed bytes handling

The optimizations that users typically apply should simply be incorporated into the compiler, as they usually follow a regular pattern. For the rest, adding unsafe casts and allowing arithmetic and/or logical operations directly on booleans (e.g. a non-branching bitwise or) covers another big portion of common optimizations.
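For example, a non-branching logical or currently requires dropping into assembly, because `||` short-circuits with a jump and `|` is not defined on booleans (a sketch in standard Solidity):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

library BranchlessBool {
    // `a || b` compiles to a conditional jump; Yul's `or()` does not.
    function either(bool a, bool b) internal pure returns (bool c) {
        assembly {
            c := or(a, b)
        }
    }
    // If booleans supported bitwise operators directly, this whole
    // helper could simply be written as `a | b` in high-level code.
}
```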

You could even borrow the unsafe design concept from Rust: a simple function-coloring system that has the developer explicitly designate the boundaries where core compiler invariants must be manually upheld, or simply serves as a general convention for marking tricky lower-level code.
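A sketch of what such a coloring could look like; the `unsafe` keyword here is invented and is not valid Solidity today:

```solidity
// Hypothetical syntax, for illustration only.
library UnsafeBytes {
    // `unsafe` marks that the callee relies on invariants the caller
    // must uphold (here: newLen <= b.length), mirroring Rust's model.
    unsafe function truncate(bytes memory b, uint256 newLen) internal pure {
        assembly {
            mstore(b, newLen) // rewrite the length word in place
        }
    }
}
```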

Why Other Languages Have Inline Assembly

“If other mainstream languages have inline assembly why shouldn’t Solidity?”

My claim is that Solidity’s inline assembly isn’t comparable. Inline assembly in other languages lets you largely bypass the compiler and directly handwrite the instructions that will end up in the final binary. This comes with a large DX tradeoff, but it really does let you control everything when you need to. Solidity’s inline assembly doesn’t give you this power. It gives you lower-level access, but not so far as to let you:

  • write your own control flow with jumps
  • manage the stack
  • select exactly which arithmetic/logic instructions the compiler will ultimately emit
  • include instruction bytes the compiler doesn’t yet recognize

However, this doesn’t mean that Solidity couldn’t or shouldn’t have “real” inline assembly, just that I believe the current approach is suboptimal and should either be ditched or reworked in a fresh compiler.

Conclusion

Inline assembly in its current form provides a sub-optimal developer experience and bloats the compiler by duplicating concepts. Solidity Core should take advantage of its clean-slate mandate to either deprecate inline assembly entirely, lifting lower-level functionality into builtins, or introduce true inline assembly.

4 Likes

Thanks for your post and for engaging with the Core Solidity proposal. Your perspective on the role of inline assembly is valuable, and I appreciate the opportunity to clarify the design philosophy behind our view for Core Solidity.

The main guiding principle for the Core Solidity design is extensibility: the compiler should implement the minimal, most general primitives possible, and everything else should live in the standard library or user code.

This is why we introduced SAIL (Solidity Algebraic Intermediate Language) as a minimal core language. SAIL provides:

  • Algebraic datatypes and pattern matching
  • Generics and type classes
  • Functions and contracts
  • Assembly (Yul) blocks
  • A single built-in type (word)

Everything else (abi.encode, dispatch generation, uint256, bytes, memory pointers, even basic arithmetic operations) is implemented in the standard library using these primitives. The abi.encode implementation in the Core Solidity Deep Dive post is a concrete example: a feature that is a compiler builtin in Classic Solidity becomes a library function in Core Solidity, implemented using SAIL primitives and type classes.

This design has, in our view, some interesting consequences:

  • It eliminates the artificial boundary between “high-level” and “low-level” code. Library authors have access to the same primitives as the compiler writers. Features you identified as requiring assembly (low-level memory control, direct opcode access, optimization) can be exposed as library functions built on SAIL, not as special compiler intrinsics.

  • It reduces compiler bloat. Instead of the compiler duplicating concepts like control flow, function abstraction, or data structures in both high-level and assembly contexts, these are unified in SAIL. The compiler only needs to implement SAIL correctly; all higher-level features are desugaring passes (which could be delegated to libraries via macros) and library code.

  • It enables the kind of optimization you describe. When common patterns like ABI encoding are implemented in the standard library rather than in the compiler, they become visible and modifiable. Users can provide alternative implementations optimized for their specific use cases.

Your comparison to Rust and Zig is apt, but Core Solidity takes inspiration from languages like Lean, where nearly everything is implemented in the language itself and the compiler is “just” a small trusted core. This “library-based language” philosophy is our ultimate goal for Core Solidity.

Finally, you outline three possible paths forward: deprecating inline assembly in favor of builtins,
keeping the current approach, or introducing true inline assembly. I’d say that Core Solidity actually takes a fourth option: rather than lifting assembly primitives up into high-level builtins, we are lowering high-level functionality down to assembly. By making SAIL the foundational layer, and then implementing everything else as desugaring and library code, assembly becomes an integral, unified part of the language rather than a separate concern.

4 Likes

I like the direction of “small trusted core + libraries,” and the Lean-style philosophy.

That said, I don’t think this fully addresses the core DX issue raised: the boundary created by embedding a second language/scope. If SAIL includes “assembly (Yul) blocks” as a primitive, we still have a mode switch where typing, ergonomics, tooling expectations, and semantics differ. That feels like the same segregation problem, just moved into the “core” layer.

Your reply also made me question two things:

  1. “Everything in libraries” still implies builtins, just hidden ones. Is that assessment correct?

    To implement things like memory pointers, basic arithmetic, tstore/tload, log0-log4, byte, etc., something must remain primitive (intrinsics/kernel semantics). If those primitives are only accessible via embedded Yul blocks, then the boundary remains. If they’re exposed as first-class SAIL primitives that libraries can call directly, then I am in favour of the “no special compiler intrinsics” framing, but it would be good to clarify what “built on SAIL” concretely means here.

  2. Macros

    I don’t know the exact macro model you have in mind, but I worry about smart-contract-specific failure modes: hidden semantics, non-obvious builds (macro version drift producing different bytecode), and the risk of recreating “two languages” if the macro layer has different rules or side effects. I don’t see how this is avoided if macros become the primary mechanism for extending the language itself, unless macros are strictly syntax-to-SAIL lowering and are fully deterministic/inspectable.

Curious how you’re thinking about these tradeoffs — specifically whether Yul blocks are a transitional artefact (eventually replaced by first-class SAIL primitives for low-level effects) or a fundamental part of the long-term core design.

1 Like

Yes, exactly. Making the high-level language more expressive by allowing you to implement lower-level constructs yourself, while still requiring delimited “assembly (Yul) blocks”, doesn’t change my point. It will still be annoying to write those functions because of the Yul barrier that has to be crossed.

At most it alleviates the DX issue, since most developers may never interact with this layer if the standard library provides 1:1 wrappers for all the Yul builtins in the higher-level language. But at that point you’ve reimplemented all the inline-Yul machinery just to use it to define thin wrappers around mstore, mload, sstore, log4, etc.
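Concretely, that wrapper layer would consist of trivial typed shims like these (a sketch in current Solidity; names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

library Mem {
    // Thin typed wrappers over single Yul builtins, written once
    // in the standard library.
    function load(uint256 ptr) internal pure returns (bytes32 v) {
        assembly { v := mload(ptr) }
    }

    function store(uint256 ptr, bytes32 v) internal pure {
        assembly { mstore(ptr, v) }
    }
}
```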

1 Like

If my understanding is correct, this system would allow well-versed developers to write high-level constructs in Yul, exposing them in a way that can then be used safely by developers who don’t want to write low-level assembly code. This would mean that most code would entirely lack these low-level constructs, and the library would encapsulate them in a safe way. I think this is kind of the “best of both worlds”? Of course it could be abused, but I think most developers would rather not abuse it. Let me know what you think 🙂
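A small sketch of that pattern in today's Solidity: the assembly lives in one audited library function, and callers only ever see a checked, typed API (names are illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.20;

library PackedReader {
    // Safety is enforced once, at the library boundary; callers never
    // touch assembly themselves.
    function readUint32(bytes memory data, uint256 offset)
        internal
        pure
        returns (uint32 v)
    {
        require(offset + 4 <= data.length, "out of bounds");
        assembly {
            // Load 32 bytes starting at the offset, keep the top 4.
            v := shr(224, mload(add(add(data, 0x20), offset)))
        }
    }
}
```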

3 Likes

I believe that @msooseth correctly summarized my view on this: instead of providing extra primitives for accessing Yul-level operations, they could simply be defined in the standard library. In this way, users will not need to write assembly blocks, since such wrappers would be available at the library level.