
Scaling

Info: Scaling is currently in testing on Flare Testnet Coston2 prior to its launch on Flare Mainnet.

Scaling is an advanced framework designed to optimize the functionality and efficiency of FTSOv2. It operates through data providers who submit feed estimates weighted by their stake in the network. These estimates are processed using a weighted median algorithm to determine consensus feed values. Scaling offers several enhancements:

  • Supports up to 1000 data feeds across various asset classes, including equities, commodities, and cryptocurrencies, and offers access to 2 weeks of historical data.

  • Uses a commit-reveal process across approximately 100 independent data providers every 90 seconds to ensure data integrity and accuracy.

  • Optimizes median value computation and data storage to consume less than 5% of network bandwidth at peak usage.

Architecture

Each phase of Scaling is designed to ensure a secure, efficient, and fair consensus process. The protocol is structured into four phases:

  • Commit: Data providers compute and submit data proposals encoded in a commit hash. To maintain security, the actual feed values are not disclosed at this stage (a minimal sketch of this commit-reveal pattern follows the phase descriptions).

  • Reveal: Data providers reveal their data to one another, alongside the random numbers used to generate their commit hash.

  • Sign: Valid data reveals are used to calculate median values, which are aggregated into an efficient Merkle tree structure; each data provider then signs the resulting Merkle root.

  • Finalization: Once a sufficient weight of signatures for the same Merkle root is collected, a randomly selected provider (or, if it fails to do so, any other entity) can collect and submit the signatures onchain for verification.

Once the finalization phase is complete, the Merkle root is published onchain, making it available to all other smart contracts for verification of calculation results. This structured approach not only maintains data integrity and accuracy but also incentivizes active participation from data providers, contributing to the overall efficiency and reliability of Scaling.
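
The commit-reveal pattern at the heart of the first two phases can be sketched compactly. The example below is a simplified Python model, not the protocol's actual scheme: the hash function, payload layout, and identifiers (commit_hash, the provider id) are illustrative assumptions.

```python
import hashlib
import secrets

def commit_hash(provider: str, feed_values: list[int], nonce: bytes) -> str:
    """Bind a provider to its feed values without revealing them.

    The payload encoding here is illustrative; Scaling defines its own
    hashing and serialization scheme.
    """
    payload = provider.encode() + nonce + b"".join(
        v.to_bytes(8, "big", signed=True) for v in feed_values
    )
    return hashlib.sha256(payload).hexdigest()

# Commit phase: only the hash is published.
values = [250, 200, 100]
nonce = secrets.token_bytes(32)
commitment = commit_hash("provider-1", values, nonce)

# Reveal phase: the values and nonce are disclosed, and anyone can
# verify them against the earlier commitment.
assert commit_hash("provider-1", values, nonce) == commitment
```

Because the random nonce is unpredictable, other participants cannot brute-force the committed values before the reveal.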

Weighted Median Calculation

The calculation of the weighted median in Scaling is a crucial process for ensuring accurate consensus on feed values from various data providers. The calculation begins once all valid data estimates are collected; these are sorted in increasing order of feed value. Each estimate carries a weight corresponding to the voting power of the data provider, which is determined by the amount of stake the provider holds. This weighted approach ensures that providers with a higher stake have a proportionally larger impact on the final median, while also bearing a greater economic cost for misbehavior.

The next step involves calculating the total weight, which is the sum of the weights of all valid data submissions. The median threshold is then determined as half of the total weight. This threshold helps identify the point at which the cumulative weight of the sorted data meets or exceeds half of the total weight, indicating the weighted median.

Starting from the smallest data estimate, the weights are accumulated in the order of the sorted estimates. The weighted median is identified as the data estimate at which the cumulative weight first meets or exceeds the median threshold.
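
The accumulation procedure described above translates directly into code. The following Python sketch is an illustrative model only; the production implementation operates onchain over integer-encoded values with protocol-defined tie-breaking rules:

```python
def weighted_median(estimates: list[tuple[float, float]]) -> float:
    """Weighted median of (feed value, weight) pairs.

    Estimates are sorted by value, then weights are accumulated until
    the running total first meets or exceeds half of the total weight.
    """
    if not estimates:
        raise ValueError("no estimates supplied")
    ordered = sorted(estimates)                 # sort by feed value
    threshold = sum(w for _, w in ordered) / 2  # median threshold M = W / 2
    cumulative = 0.0
    for value, weight in ordered:
        cumulative += weight
        if cumulative >= threshold:
            return value

# The five-provider example worked through below:
print(weighted_median([(250, 4), (200, 2), (100, 1), (150, 3), (300, 1)]))  # 200
```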

An illustrative example clarifying the process

Suppose we have the following data estimates from five providers with their corresponding weights:

Estimate (Feed Value)   Weight
250                     4
200                     2
100                     1
150                     3
300                     1

First, sort these estimates:

Estimate (Feed Value)   Weight
100                     1
150                     3
200                     2
250                     4
300                     1

Calculate the total weight: W = 1 + 3 + 2 + 4 + 1 = 11.

The median threshold M is W / 2 = 5.5.

Accumulate the weights:

  • For 100: Cumulative weight = 1
  • For 150: Cumulative weight = 1 + 3 = 4
  • For 200: Cumulative weight = 4 + 2 = 6

At this point, the cumulative weight (6) exceeds the median threshold (5.5). Therefore, the weighted median is 200.

This weighted median calculation ensures that the consensus feed value reflects the most influential estimates, balancing the data based on the providers' voting power. This method is designed to be robust against outliers and manipulation, ensuring a fair and reliable consensus process.

Incentivization Mechanism

The incentivization mechanism in Scaling is designed to ensure active and accurate participation from data providers while maintaining the integrity of the data submission process. The protocol divides the total reward pool for each voting epoch into three main categories: median closeness rewards, signature rewards, and finalization rewards.

Rewards are calculated by comparing data submissions to the median value. Providers whose submissions fall within the interquartile range (IQR) band are eligible for median closeness rewards. If a submission lies exactly on the boundary of the IQR band, a pseudo-random selection process determines whether it is included. Additional reward bands, defined by Flare governance as fixed percentage ranges around the finalized value, further refine the distribution, ensuring fair and accurate reward allocation.
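
The band check can be illustrated with a simplified, unweighted model. In the sketch below, the IQR is computed over the raw submissions and boundary cases are decided by a coin flip; the actual protocol uses weighted quantiles and its own pseudo-random selection, so treat every detail here as an assumption:

```python
import random
import statistics

def iqr_band(values: list[float]) -> tuple[float, float]:
    """Lower and upper quartiles of the submissions (unweighted, for illustration)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return q1, q3

def eligible_for_closeness_reward(submission: float, all_values: list[float],
                                  rng: random.Random) -> bool:
    low, high = iqr_band(all_values)
    if low < submission < high:
        return True                 # strictly inside the IQR band
    if submission in (low, high):
        # Boundary submissions are included pseudo-randomly; the 50%
        # probability here is a placeholder, not the protocol's rule.
        return rng.random() < 0.5
    return False

submissions = [100.0, 150.0, 200.0, 250.0, 300.0]
print(eligible_for_closeness_reward(200.0, submissions, random.Random(7)))  # True
```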

Penalties are imposed for non-matching or omitted reveals to maintain data integrity. Providers with mismatched or missing reveals face reduced rewards, or even a negative cumulative reward, which cannot be claimed. The protocol also evaluates the quality of random numbers generated during reveals, penalizing omissions to prevent manipulation.

Inflation reward offers, triggered automatically for certain supported feeds, and community reward offers, submitted by any entity before a reward epoch, provide continuous and flexible incentivization. These mechanisms, managed by dedicated smart contracts, ensure that both common and less frequently used feeds are adequately rewarded, promoting balanced and effective participation from data providers. This comprehensive incentivization scheme encourages honest and active participation, ensuring Scaling's efficiency and reliability.