Erasure codes and extreme distribution are at the heart of our network, and they are key to the availability and performance our users enjoy.
Whether from a power outage, a disk failure, or even a cosmic ray, data loss and unavailability are inherent to storage networks of any scale. Industry-standard solutions often involve erasure codes (a class of error-correction codes), which transform the data to build in redundancy that can be tuned to whatever ratio is desired.
Erasure codes of the type employed take 4 MB “sectors” (sometimes called “shards”) from the original file (the data sectors) and encode them to produce however many “parity” sectors are desired. These parity sectors contain redundant information about the data sectors, but no two parity sectors are exactly the same. Using a sufficient combination of data and parity sectors, it is possible to recreate the original file even in cases of extreme outage or unavailability.
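The “any sufficient combination recovers the file” property can be sketched with a toy code built on polynomial interpolation over the rationals. This is purely illustrative (function names and values are my own, and production systems such as Reed-Solomon implementations work over finite fields, not Python `Fraction`s), but it shows the same idea: k data values define a unique degree-(k−1) polynomial, parity values are extra evaluations of it, and any k of the resulting shares reconstruct the original data.

```python
from fractions import Fraction

def lagrange_eval(points, x):
    """Evaluate the unique lowest-degree polynomial through `points` at x."""
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for xj, _ in (p for j, p in enumerate(points) if j != i):
            term *= Fraction(x - xj, xi - xj)
        total += term
    return total

def encode(data, n_parity):
    """Produce parity values as further evaluations of the data polynomial."""
    k = len(data)
    pts = list(enumerate(data))
    return [lagrange_eval(pts, x) for x in range(k, k + n_parity)]

def recover(k, available):
    """Rebuild the k data values from any k available (index, value) shares."""
    pts = available[:k]
    return [int(lagrange_eval(pts, x)) for x in range(k)]

data = [7, 3, 9]           # k = 3 tiny "data sectors" (toy values)
parity = encode(data, 3)   # 3 parity sectors -> 2x redundancy
shares = list(enumerate(data + parity))

# Lose any three of the six shares; the remaining three still recover the data.
assert recover(3, [shares[1], shares[3], shares[5]]) == data
```

Real erasure codes achieve the same guarantee with fixed-width finite-field arithmetic so that sectors stay exactly 4 MB after encoding.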
For example, the parameters may be 30 data sectors and 60 parity sectors, which implies a 3x redundancy factor (any 30 of the 90 total sectors suffice to recover the data). Operations are performed one “row” at a time; each row covers 120 MB of the original file (30 data sectors × 4 MB) and produces 90 sectors in total (30 data + 60 parity).
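As a quick sanity check on those numbers (illustrative Python; the variable names are my own, not from the Relayer codebase):

```python
SECTOR_MB = 4                          # sector size used by the network
data_sectors, parity_sectors = 30, 60  # example parameters from the text

total_sectors = data_sectors + parity_sectors      # sectors produced per row
redundancy = total_sectors / data_sectors          # 3.0x redundancy factor
row_data_mb = data_sectors * SECTOR_MB             # original data per row: 120 MB
row_stored_mb = total_sectors * SECTOR_MB          # stored on the network: 360 MB
```

So each 120 MB slice of the file occupies 360 MB of network capacity, and up to 60 of the 90 sectors can be lost before recovery fails.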
The ScPrime network derived its renter functionality from the Sia protocol, which defines how sectors are created and distributed across a network. The original Renter-Host Protocol implementation called for the renter module to distribute the (default) 10 data and 20 parity sectors (3x redundancy) across the same set of hosts, meaning only 30 storage providers by default. In practice this greatly limits performance: transfers are bounded by the combined bandwidth of that fairly small set of providers, and any one of those 30 providers becoming unavailable takes a whole copy’s worth of sectors with it.
The Xa Net Services Relayer innovates on this by giving each row its own unique set of providers to which its sectors are distributed. So while a row may still produce 90 sectors (from the earlier example), those sectors are distributed to a different 90 providers than the row before it (though in practice there is likely to be some overlap between rows). This greatly boosts the parallelism possible for both network uploads and downloads, while also reducing the impact of any particular provider becoming unavailable.
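The per-row distribution idea can be sketched as sampling a fresh provider set for every row instead of reusing one fixed set. This is a hypothetical illustration (the Relayer's actual provider-selection logic is not described here), but it shows how rows naturally spread across a large pool with only incidental overlap:

```python
import random

def pick_providers(pool, sectors_per_row, n_rows, seed=0):
    """Sample an independent provider set per row (hypothetical selection
    logic for illustration; seeded so the example is reproducible)."""
    rng = random.Random(seed)
    return [rng.sample(pool, sectors_per_row) for _ in range(n_rows)]

pool = [f"host-{i}" for i in range(1000)]   # a large provider pool
rows = pick_providers(pool, sectors_per_row=90, n_rows=5)

# Each row uses 90 providers, but the union across rows is far larger,
# so no single provider holds more than a sliver of the file.
distinct = set().union(*(set(r) for r in rows))
```

Losing one provider now costs at most one sector from the rows that happened to use it, rather than one sector from every row.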
This mass parallelism and greater distribution across the network is an essential strength of the Relayer stack. Sufficiently large files may become distributed across thousands of providers, and each Relayer gains much finer granularity for performance tuning and resilience to outages.