Allora Glossary

Understand key terms used within the Allora Network.

Whether you're new to the network or looking to deepen your understanding, the following definitions will assist you in navigating our ecosystem.

The following terms are used within the Allora Network:

APPChain: Stateful blockchain using Tendermint/CosmosSDK to coordinate the state and monetization of work and responses for the compute network.
Blockless (b7s): Stateless coordination and execution layer using the libp2p gossipsub protocol as a connection base. Layered on top is a distribution, selection, and execution mechanism.
bonds: A bond is defined between a pair of workers, where one predicts the quality of the other's inference. The bond expresses the strength of the relation between such worker pairs. It is equal to the weight obtained by comparing the predicted quality of the worker's inference to the actual quality of the network-wide inference. Bonds allow both types of workers to share in one another's network rewards.
cadence: Update cycle of a topic. This defines the frequency at which a full network cycle completes and new inferences are produced.
data provider: Optional user role, not formally enforced in the system. Data providers supply data to predictors; they may be individuals or groups with access to proprietary datasets. Payment for data is not enforced in-protocol and can be settled out-of-band.
emissions: Network subsidies that reward workers and reputers in units of the native network token. Emissions are sourced from fees paid by consumers and from the network treasury, and are allocated to provide rewards.
epoch: A distinct time step at which each predictor emits a prediction that is incorporated into the Allora Network output.
ground truth: The objective data observed for the target. For example, if the target is "the ETH price of NFT assets 30 days in the future", the ground truth is established 30 days after the epoch.
incentive: The payment distributed to predictors for their predictions.
peer: A fellow participant in the network.
prediction consumer: Network participant requesting inferences from the network. A consumer pays for inferences in the native network token.
registration: Addition of a new element to the network. This can mean adding a topic to the on-chain topic registry (listing the topic name, target inference, loss function, and several other parameters), or registering a new network participant, such as a worker or reputer.
reputer: Network participant that evaluates the quality of the inferences provided by all workers. This is done by comparing the inferences to the ground truth once it becomes available (i.e. by calculating the loss according to a loss function), according to the reputer's subjective opinion. The output of a reputer also quantifies how much each inference contributes to the network-wide inference. A reputer receives rewards proportional to its stake and the quality of its evaluations.
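As a rough illustration of the evaluation step described above (a hypothetical sketch, not the network's actual loss function; all names and values are invented for the example), a reputer can be thought of as scoring each worker's inference against the ground truth:

```python
def l1_loss(inference: float, ground_truth: float) -> float:
    """Example loss function: absolute error between an inference and the ground truth."""
    return abs(inference - ground_truth)

# Hypothetical inferences from three workers for one epoch,
# evaluated once the ground truth becomes available.
inferences = {"worker_a": 3050.0, "worker_b": 2980.0, "worker_c": 3120.0}
ground_truth = 3000.0

# Lower loss means a higher-quality inference.
losses = {worker: l1_loss(x, ground_truth) for worker, x in inferences.items()}
```

Here worker_b, whose inference lies closest to the ground truth, receives the lowest loss.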
stake: Amount of native network tokens staked by reputers. Each reputer's stake determines the rewards it earns for good performance and the loss it incurs for poor performance.
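The stake-proportionality above can be sketched in a few lines (a simplified illustration with invented names and amounts, ignoring the performance component of rewards):

```python
def reward_shares(stakes: dict[str, float]) -> dict[str, float]:
    """Split a reward pool among reputers in proportion to their stake (simplified sketch)."""
    total = sum(stakes.values())
    return {reputer: stake / total for reputer, stake in stakes.items()}

# Hypothetical stakes: reputer_a holds 60% of total stake, so it
# receives 60% of the (stake-proportional part of the) reward pool.
shares = reward_shares({"reputer_a": 600.0, "reputer_b": 300.0, "reputer_c": 100.0})
```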
topic or target: The feature that each predictor is trying to predict, for example "the ETH price of NFT assets 30 days in the future" or "the return on the ETH price of NFT assets 7 days in the future". Per topic, a sub-network of participants collaborates to provide inferences of the specific nature defined by the topic. The target inference describes the topic, the loss function that is minimized to achieve the best results, and several other parameters.
WASM: WebAssembly, a portable binary instruction format whose bytecode runs in a low-level virtual machine, similar to the JVM but faster and more secure.
weight: The relative contribution of a worker's inference to the network-wide inference. The reward received by a worker is determined by the weight it receives from the network.
weight-adjustment logic: The mechanism the network uses to determine weights. It compares the quality of the network-wide inference to the quality of the inferences by individual workers, and converts these comparisons into a measure that quantifies how much each inference contributes to the network-wide inference.
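One simple way to picture the loss-to-weight conversion (a hypothetical sketch, not the network's actual weight-adjustment rule; a softmax over negative losses is just one possible choice) is:

```python
import math

def weights_from_losses(losses: dict[str, float]) -> dict[str, float]:
    """Turn per-worker losses into normalized weights: lower loss => larger weight.
    Softmax over negative losses is an illustrative choice, not the network's rule."""
    scores = {worker: math.exp(-loss) for worker, loss in losses.items()}
    total = sum(scores.values())
    return {worker: score / total for worker, score in scores.items()}

def network_inference(inferences: dict[str, float], weights: dict[str, float]) -> float:
    """Combine individual inferences into a network-wide inference as a weighted average."""
    return sum(weights[w] * inferences[w] for w in inferences)

# Hypothetical losses: worker_a performed better, so it gets the larger weight.
w = weights_from_losses({"worker_a": 0.1, "worker_b": 0.4})
combined = network_inference({"worker_a": 10.0, "worker_b": 20.0}, w)
```

The combined inference lies between the individual inferences, pulled toward the better-performing worker.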
worker: Network participant providing AI/ML-powered inferences to the network. These inferences can refer either directly to the object the network topic targets, or to the predicted quality of the inferences produced by other workers, which helps the network combine those inferences. A worker receives rewards proportional to the quality of its inferences.
ZKML proof: A computational audit proving that a specified machine learning model generated a given output.

If there are any other terms you are unfamiliar with, please feel free to reach out via any of the channels on our Community page; we are happy to help clarify!