Calculating Topic Reward Distribution Across All Network Participants

Now that we've explained the mechanisms for distributing rewards among the actors of each participant class, let's dive into how topic rewards are distributed across the groups of:

  • inference workers
  • forecast workers
  • reputers

The overarching objective of this reward distribution is to incentivize decentralization across the network.

Key Factors

Modified Entropy

Entropy, in this context, is a measure of how spread out the rewards are among participants. We calculate entropy for each class of tasks (inference, forecast, reputer), which helps in determining how decentralized the reward distribution is.

Higher entropy means rewards are more evenly spread out across all participants.

The modified entropy for each class of tasks is given by the following equations:

Inference

$$
F_i = - \sum_j f_{ij} \ln \left( f_{ij} \left( \frac{N_i^{\text{eff}}}{N_i} \right)^\beta \right)
$$

Where:

  • $F_i$ is the entropy for inference workers.
  • $\sum_j$ means we add up the values for all inference workers $j$.
  • $f_{ij}$ is the (smoothed) fraction of rewards for the $j$-th inference worker.
  • $N_i^{\text{eff}}$ is the effective number of inference workers (a fair count to prevent cheating).
  • $N_i$ is the total number of inference workers.
  • $\beta$ is a constant that helps adjust the calculation.

The formulas for forecast workers ($G_i$) and reputers ($H_i$) are similar:

$$
G_i = - \sum_k f_{ik} \ln \left( f_{ik} \left( \frac{N_f^{\text{eff}}}{N_f} \right)^\beta \right)
$$

$$
H_i = - \sum_m f_{im} \ln \left( f_{im} \left( \frac{N_r^{\text{eff}}}{N_r} \right)^\beta \right)
$$

Where:

  • $G_i$ and $H_i$ are the entropies for forecast workers and reputers, respectively.
  • $\sum_k$ and $\sum_m$ mean we add up the values for all forecast workers $k$ and all reputers $m$.
  • $f_{ik}$ and $f_{im}$ are the (smoothed) fractions of rewards for the $k$-th forecast worker and the $m$-th reputer.
  • $N_f^{\text{eff}}$ and $N_r^{\text{eff}}$ are the effective numbers of forecast workers and reputers.
  • $N_f$ and $N_r$ are the total numbers of forecast workers and reputers.

where we have defined modified reward fractions per class as:

$$
f_{ij} = \frac{\tilde{u}_{ij}}{\sum_j \tilde{u}_{ij}}, \quad f_{ik} = \frac{\tilde{v}_{ik}}{\sum_k \tilde{v}_{ik}}, \quad f_{im} = \frac{\tilde{w}_{im}}{\sum_m \tilde{w}_{im}}
$$

Here, the tilde over the rewards indicates they are smoothed using an exponential moving average to remove noise and volatility from the decentralization measure.
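As a sketch, the modified entropy for one class of actors can be computed directly from its smoothed rewards. The `beta` value and the reward vectors below are illustrative assumptions, and the EMA smoothing is assumed to have already been applied:

```python
import numpy as np

def modified_entropy(smoothed_rewards, beta=0.25):
    """Modified entropy of one class's reward distribution.

    beta is an illustrative value, not the network's actual parameter.
    """
    f = smoothed_rewards / smoothed_rewards.sum()   # reward fractions f_ij
    n = len(f)                                      # total participants N
    n_eff = 1.0 / np.sum(f**2)                      # effective participants N^eff
    return -np.sum(f * np.log(f * (n_eff / n) ** beta))

# An even distribution has higher entropy than a concentrated one:
even = np.array([1.0, 1.0, 1.0, 1.0])
skew = np.array([10.0, 0.5, 0.5, 0.5])
assert modified_entropy(even) > modified_entropy(skew)
```

The assertion illustrates the point made above: the more evenly rewards are spread, the higher the entropy, and hence the stronger the incentive for decentralization.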

Effective Number of Participants

To protect the reward system against Sybil attacks, we calculate the effective number of participants (actors). This ensures that the reward distribution remains fair even if someone tries to game the system by registering many identities.

$$
N_i^{\text{eff}} = \frac{1}{\sum_j f_{ij}^2}, \quad N_f^{\text{eff}} = \frac{1}{\sum_k f_{ik}^2}, \quad N_r^{\text{eff}} = \frac{1}{\sum_m f_{im}^2}
$$

Where:

  • $N_i^{\text{eff}}$, $N_f^{\text{eff}}$, and $N_r^{\text{eff}}$ are the effective numbers of inference workers, forecast workers, and reputers.
  • The fractions $f_{ij}$, $f_{ik}$, and $f_{im}$ are squared and then added up for each class of actors.
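This quantity is the inverse participation ratio. A minimal sketch (with hypothetical fractions) shows why it is a "fair count": one dominant actor surrounded by token participants still counts as roughly one, regardless of the raw headcount:

```python
import numpy as np

def effective_participants(fractions):
    """Inverse participation ratio: N^eff = 1 / sum(f^2)."""
    f = np.asarray(fractions)
    return 1.0 / np.sum(f**2)

# Four equal participants count fully...
print(effective_participants([0.25, 0.25, 0.25, 0.25]))  # → 4.0
# ...but one dominant actor plus three token ones counts as roughly one:
print(effective_participants([0.97, 0.01, 0.01, 0.01]))  # ≈ 1.06
```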

Putting It All Together

Dividing the Pie: Who Gets What?

We take the total topic reward and split it among the three classes of actors based on our entropy calculations. Here are the formulas:

$$
U_i = \frac{(1 - \chi)\gamma F_i E_{i,t}}{F_i + G_i + H_i}, \quad V_i = \frac{\chi \gamma G_i E_{i,t}}{F_i + G_i + H_i}, \quad W_i = \frac{H_i E_{i,t}}{F_i + G_i + H_i}
$$

In simpler terms:

  • $U_i$ is the reward for inference workers.
  • $V_i$ is the reward for forecast workers.
  • $W_i$ is the reward for reputers.
  • $E_{i,t}$ is the total reward for the participants in topic $t$.
  • $\chi$ is a factor that adjusts how much reward goes to forecast workers.
  • $\gamma$ is a normalization factor that ensures the rewards add up to $E_{i,t}$.
  • $F_i$, $G_i$, and $H_i$ are the entropies for inference workers, forecast workers, and reputers.
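The split can be sketched with hypothetical numbers ($\chi$ and the entropies are taken as given here; $\gamma$ uses the normalization formula defined later in this section):

```python
# Hypothetical entropies and total topic reward:
F_i, G_i, H_i = 1.2, 0.9, 1.0   # inference, forecast, reputer entropies
E_it = 100.0                    # total topic reward E_{i,t}
chi = 0.3                       # forecaster fraction (assumed)

# Normalization factor gamma = (F + G) / ((1 - chi) F + chi G):
gamma = (F_i + G_i) / ((1 - chi) * F_i + chi * G_i)
total = F_i + G_i + H_i

U_i = (1 - chi) * gamma * F_i * E_it / total   # inference workers
V_i = chi * gamma * G_i * E_it / total         # forecast workers
W_i = H_i * E_it / total                       # reputers

# gamma guarantees the three shares exhaust the topic reward:
assert abs(U_i + V_i + W_i - E_it) < 1e-9
```

Note that $U_i + V_i = \gamma \left[ (1-\chi) F_i + \chi G_i \right] E_{i,t} / (F_i + G_i + H_i)$, so the normalization makes the workers' combined share independent of $\chi$.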

What Value is Added by Forecasters? Checking the Predictions

We quantify the value added by the entire forecasting task using a score called $T_i$:

$$
T_i = \log L_i^- - \log L_i
$$

Where:

  • $T_i$ is the performance score for the entire forecasting task.
  • $L_i^-$ is the network loss when excluding all forecast-implied inferences (i.e., without the forecasting task).
  • $L_i$ is the network loss with the forecasting task included.

We then use this score to decide how much the forecast workers should get. The higher their score relative to inference workers, the higher the total reward allocated to forecasters:

$$
\tau_i \equiv \alpha \frac{T_i - \min(0, \max_j T_{ij})}{\left| \max_j T_{ij} \right|} + (1 - \alpha) \tau_{i-1}
$$

Where:

  • $\tau_i$ is a ratio expressing the added value of the forecasting task relative to the inference workers.
  • $T_{ij}$ is the performance score for each inference worker $j$.
  • $\alpha$ is the smoothing factor of the exponential moving average over $\tau_{i-1}$.
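One update of this exponential moving average can be sketched as follows; all scores and the smoothing factor are hypothetical values:

```python
# Hypothetical scores: T_i for the forecasting task, T_ij per inference worker.
T_i = 0.05
T_ij = [0.02, -0.01, 0.04]
alpha = 0.1          # EMA smoothing factor (assumed value)
tau_prev = 0.3       # tau from the previous epoch

best_inference = max(T_ij)
tau_i = (alpha * (T_i - min(0.0, best_inference)) / abs(best_inference)
         + (1 - alpha) * tau_prev)
```

Subtracting $\min(0, \max_j T_{ij})$ and dividing by $|\max_j T_{ij}|$ compares the forecasting task against the best inference worker, so $\tau_i$ grows when forecasters add more value than the best inference alone.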

This ratio is then mapped onto a fraction of the worker rewards that is allocated to forecasters:

$$
\chi = \begin{cases} 0.1 & \text{if } \tau_i < 0, \\ 0.4 \tau_i + 0.1 & \text{if } 0 \leq \tau_i < 1, \\ 0.5 & \text{if } \tau_i \geq 1. \end{cases}
$$
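This piecewise mapping is straightforward to implement; the sketch below mirrors the three cases:

```python
def forecaster_fraction(tau):
    """Map the relative added value tau onto the forecaster reward share chi."""
    if tau < 0:
        return 0.1              # no added value: floor of 10%
    if tau < 1:
        return 0.4 * tau + 0.1  # linear ramp in between
    return 0.5                  # high added value: cap at 50%

assert forecaster_fraction(-0.5) == 0.1
assert abs(forecaster_fraction(0.5) - 0.3) < 1e-9
assert forecaster_fraction(2.0) == 0.5
```

The floor guarantees forecasters are never starved of rewards entirely, while the cap keeps inference workers' share from collapsing.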

The Normalization Factor

We use a normalization factor $\gamma$ to ensure the rewards add up to $E_{i,t}$:

$$
\gamma = \frac{F_i + G_i}{(1 - \chi)F_i + \chi G_i}
$$

Where:

  • $\gamma$ ensures that the total reward allocated to workers ($U_i + V_i$) remains constant after accounting for the added value of the forecasting task.

By using these methods, we ensure that rewards are spread out fairly and that every participant is encouraged to contribute their best work.