Calculating Topic Reward Distribution Across all Network Participants
Now that we've explained the mechanisms behind distributing rewards to the actors within each class of network participants, let's dive into how topic rewards are distributed across the groups of:
- inference workers
- forecast workers
- reputers
The common objective of these reward calculations across the network is to incentivize decentralization.
Key Factors
Modified Entropy
Entropy, in this context, is a measure of how spread out the rewards are among participants. We calculate entropy for each class of tasks (inference, forecast, reputer), which helps in determining how decentralized the reward distribution is.
Higher entropy means rewards are more evenly spread out across all participants.
The modified entropy for each class of tasks is given by the following equations:
Inference

$$
F_i = -\sum_j f_{ij} \ln(f_{ij}) \left( \frac{N_{i,\text{eff}}}{N_i} \right)^{\beta}
$$

Where:
- $F_i$ is the entropy for inference workers.
- $\sum_j$ means we add up the values for all inference workers $j$.
- $f_{ij}$ is the (smoothed) fraction of rewards for the $j$-th inference worker.
- $N_{i,\text{eff}}$ is the effective number of inference workers (a fair count to prevent cheating).
- $N_i$ is the total number of inference workers.
- $\beta$ is a constant that helps adjust the calculation.
The formula for forecast workers ($G_i$) and reputers ($H_i$) is similar:

$$
G_i = -\sum_k f_{ik} \ln(f_{ik}) \left( \frac{N_{f,\text{eff}}}{N_f} \right)^{\beta}, \qquad
H_i = -\sum_m f_{im} \ln(f_{im}) \left( \frac{N_{r,\text{eff}}}{N_r} \right)^{\beta}
$$

Where:
- $G_i$ and $H_i$ are the entropies for forecast workers and reputers, respectively.
- $\sum_k$ and $\sum_m$ mean we add up the values for all forecast workers $k$ and all reputers $m$.
- $f_{ik}$ and $f_{im}$ are the (smoothed) fractions of rewards for the $k$-th forecast worker and the $m$-th reputer.
- $N_{f,\text{eff}}$ and $N_{r,\text{eff}}$ are the effective numbers of forecast workers and reputers.
- $N_f$ and $N_r$ are the total numbers of forecast workers and reputers.
where we have defined the modified reward fractions per class as:

$$
f_{ij} = \frac{\tilde{u}_{ij}}{\sum_j \tilde{u}_{ij}}, \qquad
f_{ik} = \frac{\tilde{v}_{ik}}{\sum_k \tilde{v}_{ik}}, \qquad
f_{im} = \frac{\tilde{w}_{im}}{\sum_m \tilde{w}_{im}}
$$

Here, the tilde over the rewards ($\tilde{u}_{ij}$, $\tilde{v}_{ik}$, $\tilde{w}_{im}$) indicates they are smoothed using an exponential moving average, which removes noise and volatility from the decentralization measure.
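To make the entropy calculation concrete, here is a minimal Python sketch (not the protocol implementation) that computes a class's modified entropy from its smoothed rewards. The function and variable names (`modified_entropy`, `smoothed_rewards`, `beta`) are illustrative, and the value of $\beta$ in the example call is arbitrary.

```python
import math

def modified_entropy(smoothed_rewards: list[float], beta: float) -> float:
    """Entropy of the reward distribution, scaled by (N_eff / N)^beta."""
    total = sum(smoothed_rewards)
    fractions = [u / total for u in smoothed_rewards]        # modified reward fractions f
    n = len(fractions)                                       # total number of participants
    n_eff = 1.0 / sum(f ** 2 for f in fractions)             # effective number (next subsection)
    entropy = -sum(f * math.log(f) for f in fractions if f > 0.0)
    return entropy * (n_eff / n) ** beta

# Evenly spread rewards give a higher (modified) entropy than concentrated ones:
print(modified_entropy([1.0, 1.0, 1.0, 1.0], beta=0.25))    # ~1.386
print(modified_entropy([10.0, 0.1, 0.1, 0.1], beta=0.25))   # ~0.12
```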
Effective Number of Participants
To protect the reward system against sybil attacks, we calculate the effective number of participants (actors) in each class. This ensures that the reward distribution remains fair even if someone tries to game the system by splitting their work across many identities.

$$
N_{i,\text{eff}} = \frac{1}{\sum_j f_{ij}^2}, \qquad
N_{f,\text{eff}} = \frac{1}{\sum_k f_{ik}^2}, \qquad
N_{r,\text{eff}} = \frac{1}{\sum_m f_{im}^2}
$$

Where:
- $N_{i,\text{eff}}$, $N_{f,\text{eff}}$, and $N_{r,\text{eff}}$ are the effective numbers of inference workers, forecast workers, and reputers.
- The fractions $f_{ij}$, $f_{ik}$, and $f_{im}$ are squared and then summed for each class of participant.
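As a quick illustration of why this acts as a sybil deterrent, the sketch below (illustrative names, not protocol code) shows that the effective number collapses toward one when a single actor dominates the rewards, no matter how many identities are registered.

```python
def effective_participants(fractions: list[float]) -> float:
    """Effective number of participants: 1 / (sum of squared reward fractions)."""
    return 1.0 / sum(f ** 2 for f in fractions)

print(effective_participants([0.25, 0.25, 0.25, 0.25]))  # 4.0 -> four genuinely distinct actors
print(effective_participants([0.97, 0.01, 0.01, 0.01]))  # ~1.06 -> effectively a single actor
```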
Putting It All Together
Dividing the Pie: Who Gets What?
We take the total reward for a topic and split it among the participant classes based on our entropy calculations. Here's the formula:

$$
U_i = \frac{(1 - \chi)\,\gamma\, F_i\, E_i}{F_i + G_i + H_i}, \qquad
V_i = \frac{\chi\,\gamma\, G_i\, E_i}{F_i + G_i + H_i}, \qquad
W_i = \frac{H_i\, E_i}{F_i + G_i + H_i}
$$

In simpler terms:
- $U_i$ is the reward for inference workers.
- $V_i$ is the reward for forecast workers.
- $W_i$ is the reward for reputers.
- $E_i$ is the total reward for the participants in topic $i$.
- $\chi$ is a factor that adjusts how much of the worker reward goes to forecast workers.
- $\gamma$ is a normalization factor that ensures the rewards add up to $E_i$.
- $F_i$, $G_i$, and $H_i$ are the entropies for inference workers, forecast workers, and reputers.
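Here is a hedged sketch of that split in Python. The function `split_topic_reward`, its argument names, and the example values are assumptions made for illustration; $\chi$ and $\gamma$ are defined in the following subsections.

```python
def split_topic_reward(E_i: float, F_i: float, G_i: float, H_i: float,
                       chi: float, gamma: float) -> tuple[float, float, float]:
    """Split a topic's total reward E_i across classes by modified entropy."""
    denom = F_i + G_i + H_i
    U_i = (1.0 - chi) * gamma * F_i * E_i / denom   # inference workers
    V_i = chi * gamma * G_i * E_i / denom           # forecast workers
    W_i = H_i * E_i / denom                         # reputers
    return U_i, V_i, W_i

# Example with arbitrary entropies; chi and gamma are taken as given here.
U, V, W = split_topic_reward(E_i=100.0, F_i=1.2, G_i=0.8, H_i=1.0, chi=0.3, gamma=1.85)
```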
What Value is Added by Forecasters? Checking the Predictions
We quantify the value added by the entire forecasting task using a score called $T_i$:

$$
T_i = \log L_i^- - \log L_i
$$

Where:
- $T_i$ is the performance score for the entire forecasting task.
- $L_i$ is the network loss when including all forecast-implied inferences.
- $L_i^-$ is the network loss without the forecast task.
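A minimal sketch of this comparison, assuming the two losses are available as plain floats (the names are illustrative):

```python
import math

def forecasting_task_score(loss_without_forecasts: float, loss_with_forecasts: float) -> float:
    """T_i: log-loss improvement gained by including forecast-implied inferences."""
    return math.log(loss_without_forecasts) - math.log(loss_with_forecasts)

print(forecasting_task_score(0.20, 0.15))  # > 0: the forecasting task added value
print(forecasting_task_score(0.15, 0.20))  # < 0: the forecasting task hurt performance
```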
We then use this score to decide how much the forecast workers should get. The higher the forecasting task's score relative to the scores of the individual inference workers, the higher the total reward allocated to forecasters. This comparison is captured by a ratio $\tau_i$, where:
- $\tau_i$ expresses the added value of the forecasting task relative to the inference workers.
- $T_{ij}$ is the performance score of each individual inference worker $j$.

This ratio is then mapped onto the fraction $\chi$ of the worker rewards that is allocated to forecasters.
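The sketch below illustrates the idea only: it compares $T_i$ against the typical magnitude of the individual inference-worker scores and maps the result onto a bounded fraction. The averaging rule, the linear mapping, and the bounds used here are assumptions for illustration, not the protocol's exact definition of $\tau_i$ or $\chi$.

```python
def forecaster_fraction(T_i: float, inference_scores: list[float],
                        chi_min: float = 0.1, chi_max: float = 0.5) -> float:
    """Map the forecasting task's relative score onto a bounded reward fraction chi."""
    typical = sum(abs(t) for t in inference_scores) / len(inference_scores)
    tau_i = T_i / typical if typical > 0.0 else 0.0   # relative added value (assumed form)
    chi = chi_min + (chi_max - chi_min) * tau_i       # assumed linear mapping
    return min(max(chi, chi_min), chi_max)            # forecasters get some reward, never all
```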
The Normalization Factor
We use a normalization factor $\gamma$ to ensure the rewards add up to $E_i$:

$$
\gamma = \frac{F_i + G_i}{(1 - \chi)\, F_i + \chi\, G_i}
$$

Where:
- $\gamma$ ensures that the total reward allocated to workers ($U_i + V_i$) remains constant after accounting for the added value of the forecasting task.

By using these methods, we ensure that rewards are spread out fairly and that everyone is encouraged to contribute their best work.
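As a closing check, this small sketch (reconstructed from the constraint above, not quoted from the protocol spec) shows that with $\gamma = (F_i + G_i) / ((1-\chi)F_i + \chi G_i)$ the total worker allocation $U_i + V_i$ does not change as $\chi$ varies:

```python
def worker_total(E_i: float, F: float, G: float, H: float, chi: float) -> float:
    gamma = (F + G) / ((1.0 - chi) * F + chi * G)    # normalization factor
    U = (1.0 - chi) * gamma * F * E_i / (F + G + H)  # inference workers' share
    V = chi * gamma * G * E_i / (F + G + H)          # forecast workers' share
    return U + V

for chi in (0.1, 0.3, 0.5):
    print(round(worker_total(100.0, 1.2, 0.8, 1.0, chi), 6))  # 66.666667 every time
```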