Allora API Endpoint

Allora API: How to Query Data of Existing Topics

The Allora API provides an interface to query real-time on-chain data of the latest inferences made by workers. The sections below explain how it works, using an example endpoint.

API Authentication

To access the Allora API, you need to authenticate your requests using an API key.

Obtaining an API Key

You can obtain an API key through the Allora API key management system. Contact the Allora team on Discord for access to API keys.

Using an API Key

Once you have an API key, you can include it in your API requests using the x-api-key header:

curl -X 'GET' \
  --url 'https://api.allora.network/v2/allora/consumer/<chainId>?allora_topic_id=<topicId>' \
  -H 'accept: application/json' \
  -H 'x-api-key: <apiKey>'

Replace <apiKey> with your actual API key, <chainId> with the chain ID (e.g., ethereum-11155111 for Sepolia), and <topicId> with the topic ID you want to query.
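The same request can also be made programmatically. Below is a minimal sketch using Node.js 18+ and its built-in fetch; the chain ID and topic ID shown are the example values from above and should be replaced with the ones you want to query.

// Minimal sketch (Node.js 18+, built-in fetch): query the consumer endpoint.
// The chain ID and topic ID below are example values; replace them as needed.
const apiKey = process.env.ALLORA_API_KEY;

async function getLatestInference() {
  const response = await fetch(
    'https://api.allora.network/v2/allora/consumer/ethereum-11155111?allora_topic_id=1',
    {
      headers: {
        accept: 'application/json',
        'x-api-key': apiKey,
      },
    }
  );
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return response.json();
}

getLatestInference().then((data) => console.log(data));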

API Key Security

Your API key is a sensitive credential that should be kept secure. Do not share your API key or commit it to version control systems. Instead, use environment variables or secure credential storage mechanisms to manage your API key.

// Example of using an environment variable for API key
const apiKey = process.env.ALLORA_API_KEY;

Rate Limiting

API requests are subject to rate limiting. If you exceed the rate limit, you will receive a 429 Too Many Requests response. To avoid rate limiting issues, consider implementing retry logic with exponential backoff in your applications.
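As an illustration, a retry helper with exponential backoff might look like the sketch below. The retry count and delays are arbitrary starting points, not recommended values; tune them to your application.

// Illustrative retry helper: back off exponentially when a 429 is returned.
async function fetchWithBackoff(url, options, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) {
      return response;
    }
    // Wait 1s, 2s, 4s, ... before retrying.
    const delayMs = 1000 * 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
  throw new Error('Rate limited: retries exhausted');
}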

API Endpoints

Generic: https://allora-api.testnet.allora.network/emissions/{version_number}/latest_network_inferences/{topic_id}

Example: https://allora-api.testnet.allora.network/emissions/v7/latest_network_inferences/1

Where:

  • "v7" represents the latest network version number
  • "1" represents the topic ID

Sample Response:

{
  "network_inferences": {
    "topic_id": "1",
    "reputer_request_nonce": null,
    "reputer": "",
    "extra_data": null,
    "combined_value": "2605.533879185080648394998043723508",
    "inferer_values": [
      {
        "worker": "allo102ksu3kx57w0mrhkg37kvymmk2lgxqcan6u7yn",
        "value": "2611.01109296"
      },
      {
        "worker": "allo10q6hm2yae8slpvvgmxqrcasa30gu5qfysp4wkz",
        "value": "2661.505295679922"
      }
    ],
    "forecaster_values": [
        {
            "worker": "allo1za8r9v0st4ntfyeka23qs5wvd7mvsnzhztupk0",
            "value": "2610.160000000000000000000000000000"
        }
    ],
    "naive_value": "2605.533879185080648394998043723508",
    "one_out_inferer_values": [
      {
        "worker": "allo102ksu3kx57w0mrhkg37kvymmk2lgxqcan6u7yn",
        "value": "2570.859434973857748387096774193548"
      },
      {
        "worker": "allo10q6hm2yae8slpvvgmxqrcasa30gu5qfysp4wkz",
        "value": "2569.230589724828006451612903225806"
      }
    ],
    "one_out_forecaster_values": [],
    "one_in_forecaster_values": [],
    "one_out_inferer_forecaster_values": []
  },
  "inferer_weights": [
    {
      "worker": "allo102ksu3kx57w0mrhkg37kvymmk2lgxqcan6u7yn",
      "weight": "0.0002191899319465528034563075461505151"
    },
    {
      "worker": "allo10q6hm2yae8slpvvgmxqrcasa30gu5qfysp4wkz",
      "weight": "0.0002191899319465528034563075461505151"
    }
  ],
  "forecaster_weights": [
    {
      "worker": "allo1za8r9v0st4ntfyeka23qs5wvd7mvsnzhztupk0",
      "weight": "0.1444137067859501612197657742201029"
    }
  ],
  "forecast_implied_inferences": [
    {
      "worker": "allo1za8r9v0st4ntfyeka23qs5wvd7mvsnzhztupk0",
      "value": "2610.160000000000000000000000000000"
    }
  ],
  "inference_block_height": "1349577",
  "loss_block_height": "0",
  "confidence_interval_raw_percentiles": [
    "2.28",
    "15.87",
    "50",
    "84.13",
    "97.72"
  ],
  "confidence_interval_values": [
    "2492.1675618299669694181830608795809",
    "2543.9249467952655499150756965734158",
    "2611.033130351115229549044053766836",
    "2662.29523395638446190095015123294396",
    "2682.827040221238"
  ]
}
⚠️ Please be aware that there may be some expected volatility in predictions due to the nascency of the network and the more forgiving testnet configurations currently in place. We are actively working on implementing an outlier protection mechanism, which will be applied at the consumer layer and tailored to individual use cases in the near future.

Breaking Down the Response

Below is an explanation of important sub-objects displayed in the JSON output:

topic_id

In this case, "1" represents the topic being queried. Topics define the context and rules for a particular inference.

naive_value

The naive value omits all forecast-implied inferences from the weighted average by setting their weights to zero. The naive network inference is used to quantify the contribution of the forecasting task to the network accuracy, which in turn sets the reward distribution between the inference and forecasting tasks.

combined_value

The combined value is an optimized inference that represents a collective intelligence approach, taking both naive submissions and forecast data into account.

If you only need a single value from Allora, for example to feed a data oracle, this is the value to use.
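For example, assuming the testnet endpoint and the response shape shown above, the combined value can be read like this:

// Sketch: read the single oracle value from a latest_network_inferences response.
// Endpoint and response shape follow the example above.
async function getCombinedValue() {
  const res = await fetch(
    'https://allora-api.testnet.allora.network/emissions/v7/latest_network_inferences/1',
    { headers: { accept: 'application/json' } }
  );
  const body = await res.json();
  // Values are returned as decimal strings to preserve precision.
  return parseFloat(body.network_inferences.combined_value);
}

getCombinedValue().then((value) => console.log(value)); // e.g. 2605.53...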

inferer_values

Workers in the network submit their inferences, each represented by an allo address. For example:

{
    "worker": "allo102ksu3kx57w0mrhkg37kvymmk2lgxqcan6u7yn",
    "value": "2611.01109296"
}

Each worker submits a value based on their own models. These individual submissions contribute to both the naive and combined values. The combined value gives higher weighting to more reliable workers, based on performance or other criteria.
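As a simplified illustration of this weighting (not the network's actual synthesis algorithm), the inferer_values can be combined with the inferer_weights from the response as follows:

// Simplified illustration: weight each worker's inference by its reported weight.
// The real on-chain synthesis is more involved; this only shows the idea.
function weightedAverage(values, weights) {
  const weightByWorker = new Map(
    weights.map(({ worker, weight }) => [worker, parseFloat(weight)])
  );
  let weightedSum = 0;
  let totalWeight = 0;
  for (const { worker, value } of values) {
    const w = weightByWorker.get(worker) ?? 0;
    weightedSum += w * parseFloat(value);
    totalWeight += w;
  }
  return weightedSum / totalWeight;
}

// Usage with the parsed sample response (body):
// weightedAverage(body.network_inferences.inferer_values, body.inferer_weights);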

one_out_inferer_values

These values simulate removing a single worker from the pool to see how the overall inference changes. This is a technique used to evaluate the impact of individual inferences on the combined result.
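Conceptually, each one-out value is the network inference recomputed with that worker excluded. The sketch below uses a plain average purely to illustrate the idea; the network applies its own weighting when it computes these values.

// Rough sketch of the one-out idea: recompute an aggregate with each worker removed.
// A plain average is used here only for illustration.
function oneOutValues(values) {
  return values.map(({ worker }, i) => {
    const rest = values.filter((_, j) => j !== i);
    const avg =
      rest.reduce((sum, v) => sum + parseFloat(v.value), 0) / rest.length;
    return { worker, value: avg };
  });
}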

forecast_implied_inferences

The forecast-implied inference uses forecasted losses and worker inferences to produce a predicted value, where each prediction is weighted by how accurately the forecasters predicted losses in previous time steps, or epochs.

inference_block_height

The chain block height at which the inference data was generated.

confidence_interval_raw_percentiles

Fixed percentiles used to generate the confidence intervals.

confidence_interval_values

Confidence intervals show the predicted range of outcomes based on worker inferences.
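Assuming the two arrays are aligned by index, as the sample response suggests, the percentiles and values can be paired up like this:

// Pair each raw percentile with its corresponding value.
// `networkResponse` is the parsed JSON body from the sample response above.
function confidenceIntervals(networkResponse) {
  return networkResponse.confidence_interval_raw_percentiles.map((p, i) => ({
    percentile: p,
    value: networkResponse.confidence_interval_values[i],
  }));
}
// e.g. [{ percentile: '2.28', value: '2492.1675618...' }, ...]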