Size limit of Array/Mapping on BNB Smart Chain (Scalability Question)

Hi,
I am going to deploy my contract on BNB Smart Chain and I want to ask a scalability-related question. I have a struct similar to the following code:

    struct Purchases {
        uint32 itemId;
        uint32 amount;
        uint48 time;
    }

and I am trying to manage purchase history through a mapping as follows:

    mapping(uint256 => Purchases[]) public purchaseHistory;

// each UserID is mapped to an Array of his Purchases to manage User’s Purchase History.

I am assuming that purchaseHistory will grow by thousands of entries every day. My questions are as follows:

How much data can be stored in a mapping like purchaseHistory for each UserID?
Will the contract explode after some time?
For retrieval, we will use a paginated function that returns a user's purchases, 10 items per page only.

Will it be workable in the long run, or can the contract stop functioning after the array grows to many thousands of records?

Regards, and I will very much appreciate a reply.

How much data can be stored in a mapping like purchaseHistory for each UserID?

Storage space is at a premium, so the compiler tries to pack things for you as much as possible. There is very little overhead here. The mapping itself takes only as much space as its content. Each dynamic array takes a single slot for the length and then just the items. Check Layout of State Variables in Storage and Transient Storage for details.
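As a sketch of where the data from the question ends up in storage (the comments describe the layout rules from the linked docs; slot numbers assume this is the first state variable):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PurchaseLayout {
    struct Purchases {
        uint32 itemId; // 4 bytes
        uint32 amount; // 4 bytes
        uint48 time;   // 6 bytes; 14 bytes used in total, but each array
                       // element still occupies one full 32-byte slot
    }

    // Declared at slot 0. The slot itself stores nothing; for a given
    // userId, the array length lives at keccak256(abi.encode(userId, 0))
    // and the elements follow from the hash of that length slot.
    mapping(uint256 => Purchases[]) public purchaseHistory;
}
```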

One thing to potentially change here in case you want to minimize the total use of storage could be to use a struct of arrays rather than an array of structs. A struct always takes the whole slot, while value types can be packed. This would have the extra overhead of 2 size slots for the arrays but given that you expect thousands of items in each array and items are very short, much more space is wasted on padding between items. The downside is that it would make reads more expensive - you’d need to access 3 slots instead of 1 to get a complete set of information for one purchase - but again, if you’re reading 10 subsequent items at a time you can amortize the storage access cost by reading more than one at a time.
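A possible struct-of-arrays variant of the layout from the question could look like this (hypothetical names, not code from the thread):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PurchaseHistorySoA {
    // Value-type arrays are tightly packed: 8 uint32 values share one
    // slot and 5 uint48 values share one slot, vs. one full slot per
    // purchase with an array of structs. The cost is two extra length
    // slots per user and 3 slot accesses to read one complete purchase.
    struct PurchaseColumns {
        uint32[] itemIds;
        uint32[] amounts;
        uint48[] times;
    }

    // private because public getters cannot be generated for structs
    // containing dynamic arrays
    mapping(uint256 => PurchaseColumns) private purchaseHistory;

    function recordPurchase(uint256 userId, uint32 itemId, uint32 amount) external {
        PurchaseColumns storage h = purchaseHistory[userId];
        h.itemIds.push(itemId);
        h.amounts.push(amount);
        h.times.push(uint48(block.timestamp));
    }
}
```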

Will the contract explode after some time?

The only hard limit on how much data you can store per mapping item is the size of storage, so for all practical purposes it’s pretty much infinite. There’s a soft limit of 2^64 32-byte slots, where the compiler starts to assume that the risk of collisions between mapping items stops being negligible, but that’s still not something you’ll easily reach before running into all kinds of other scaling problems.

Writing thousands of slots per day could add up to something quite expensive, but as long as it's not all in a single transaction, and is instead spread over multiple users and transactions, it's not an issue.

For retrieval, we will use a paginated function that returns a user's purchases, 10 items per page only.

Will it be workable in the long run, or can the contract stop functioning after the array grows to many thousands of records?

Assuming that you want to get the data to display it for the user or for some off-chain processing, it's not an issue either. Just make sure you return it using a view function. Such functions can be executed off-chain using eth_call, which means that you don't pay anything. No transaction is published; everything happens locally on your client, not on the network. If that's your use case, you don't even necessarily need pagination in the contract.
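One possible shape for such a paginated view function, using the struct and mapping from the question (the function name and parameters are just illustrative):

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

contract PurchaseReader {
    struct Purchases {
        uint32 itemId;
        uint32 amount;
        uint48 time;
    }

    mapping(uint256 => Purchases[]) public purchaseHistory;

    // Paginated read; as a view function it can be served off-chain
    // via eth_call at no gas cost.
    function getPurchases(uint256 userId, uint256 page, uint256 pageSize)
        external
        view
        returns (Purchases[] memory items)
    {
        Purchases[] storage all = purchaseHistory[userId];
        uint256 start = page * pageSize;
        uint256 end = start + pageSize;
        if (end > all.length) {
            end = all.length;
        }
        if (start > end) {
            start = end; // past the last page: return an empty array
        }
        items = new Purchases[](end - start);
        for (uint256 i = start; i < end; i++) {
            items[i - start] = all[i];
        }
    }
}
```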

The only exception is if you want to get and process that data in another contract. Then that contract does need to execute the function on chain and pagination matters. In fact, the language automatically does that for you - getter functions for arrays by design return them item by item. In your case getting multiple items at a time would be more efficient if you go for the multi-array solution I mentioned, but generally the number should be low since a contract is not likely to have enough gas to process the whole array anyway.

Generally, there are no hard limits and you can get away with storing quite a lot of data overall, as long as you ensure that it amortizes to a small amount per user and design your contracts properly so that the whole array is never processed on chain.
