Walkthrough: Build and Deploy Price Prediction Worker Node

How to build a node that predicts the future price of Ether


  1. Make sure you have read the documentation on how to build and deploy a worker node.
  2. Clone the coin-prediction-node repository once it is open sourced. It will serve as the base sample for your quick setup.

We will work from the repository you just cloned, explaining each part of the source code and making the changes required for your custom setup.

Setting up your Custom Prediction Node

  1. As explained in the previous guide, every worker node is a peer in the Allora Network, and each peer needs an identity. Generate an identity for your node by running the key-generation command provided in the repository, which will automatically create the necessary keys and IDs in the appropriate folders.

  2. Look into the docker-compose.yml to understand what is happening and change the values as needed. The docker-compose file consists of three Docker services:

    1. Inference Service (app.py): This Flask app exposes endpoints for generating inferences and for updating the model. When an endpoint is hit, the service interacts with the model to generate an inference or to update the model. This is the model server described in the previous guide: the gateway from external requests to the model. You can change the logic as your use case demands.
    2. Updater Service (update_app.py): This service hits the /update endpoint on the inference service to make sure the model state is updated when needed. You can also schedule it to run periodically, depending on your use case.
    3. Worker Service: This is the actual worker service, combining the allora-inference-base, the node function, and the custom main.py Python logic. The Allora chain makes requests to the head node, which broadcasts them to the workers. Each worker downloads the function from IPFS and runs it, which invokes main.py; main.py in turn sends a request to the /inference/<token> endpoint, channeling the request to your model server. The worker service is built from a separate Dockerfile called Dockerfile_b7s, which extends the allora-inference-base image.
      The BOOT_NODES variable is the address of the head node. It includes the head node's peerId so that the worker node knows which head node to listen to; when that head node publishes a request, the worker will attend to it.
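To make the inference service concrete, here is a minimal sketch of what an app.py could look like. It assumes a Flask app with the /inference/<token> and /update endpoints mentioned above; the placeholder prediction logic and the port are assumptions for illustration, not the repository's actual model code.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder model state; a real node would load a trained model here.
model_state = {"version": 0}

@app.route("/inference/<token>")
def inference(token):
    # A real implementation would run the model for the given token.
    # Here we return a fixed price in 18-decimal fixed-point notation.
    price_wei = str(2400 * 10**18)
    return jsonify({"value": price_wei})

@app.route("/update")
def update():
    # A real implementation would retrain or refresh the model state.
    model_state["version"] += 1
    return jsonify({"status": "updated", "version": model_state["version"]})

# Start with e.g.: flask --app app run --host 0.0.0.0 --port 8000
# (the port is an assumption; match whatever docker-compose.yml exposes)
```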
  3. The docker-compose services all communicate internally on the same network. This is not a requirement: you may decide not to use docker-compose and instead run the services on different servers. As long as you follow the component principles and each service can be reached, you have a worker node.

  4. If all of the above is done correctly, you can now run docker-compose build && docker-compose up, which will create all services from the images and bootstrap them until everything is up and running. Your worker node should then be running, listening for topics created by the appropriate head node, and responding according to the logic of the inference service and the data from the model.

    Important: We provide a dedicated setup for ARM64 devices to address specific issues encountered with the dependencies of the inference and update nodes. To ensure an optimal experience and seamless operation on ARM64 devices, we have developed a tailored setup process that you can check here.

  5. You can verify that your services are up by running docker-compose ps, and by running the test cURL request from the previous guide with the appropriate arguments.

Issue an Execution Request

After the node is running locally, it can be queried. Using cURL, issue the following HTTP request to the head node:

curl --location 'http://localhost:6000/api/v1/functions/execute' \
--header 'Accept: application/json, text/plain, */*' \
--header 'Content-Type: application/json;charset=UTF-8' \
--data '{
    "function_id": "bafybeigpiwl3o73zvvl6dxdqu7zqcub5mhg65jiky2xqb4rdhfmikswzqm",
    "method": "allora-inference-function.wasm",
    "parameters": null,
    "topic": "1",
    "config": {
        "env_vars": [
            {
                "name": "BLS_REQUEST_PATH",
                "value": "/api"
            },
            {
                "name": "ALLORA_ARG_PARAMS",
                "value": "ETH"
            }
        ],
        "number_of_nodes": -1,
        "timeout": 2
    }
}' | jq
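The same request body can be assembled programmatically. Below is a sketch that builds the payload shown above using only Python's standard library; the function name is hypothetical, and the actual POST to the head node's /api/v1/functions/execute endpoint is noted in a comment since it requires a running node.

```python
import json

def build_execute_request(topic: str, token: str) -> dict:
    """Build the payload for the head node's execute endpoint (hypothetical helper)."""
    return {
        "function_id": "bafybeigpiwl3o73zvvl6dxdqu7zqcub5mhg65jiky2xqb4rdhfmikswzqm",
        "method": "allora-inference-function.wasm",
        "parameters": None,
        "topic": topic,
        "config": {
            "env_vars": [
                {"name": "BLS_REQUEST_PATH", "value": "/api"},
                {"name": "ALLORA_ARG_PARAMS", "value": token},
            ],
            "number_of_nodes": -1,
            "timeout": 2,
        },
    }

payload = build_execute_request(topic="1", token="ETH")
body = json.dumps(payload)
# POST `body` to http://localhost:6000/api/v1/functions/execute
# with Content-Type: application/json;charset=UTF-8
```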

The result will look like this:

    "code": "200",
    "request_id": "ef48fee9-d2da-43a4-84c4-936c5e8272e7",
    "results": [
            "result": {
                "stdout": "{'value':'2400000000000000000000'}\n\n",
                "stderr": "",
                "exit_code": 0
            "peers": [
            "frequency": 100
    "cluster": {
        "peers": [

The results.result.stdout field contains the prediction output.
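Note that the stdout payload in the sample above is a Python-style dict string with single quotes rather than strict JSON, so a client can parse it with ast.literal_eval instead of json.loads. A sketch of extracting the predicted price, assuming the value is an 18-decimal fixed-point integer (wei-style), which is an interpretation, not something the response itself states:

```python
import ast

stdout = "{'value':'2400000000000000000000'}\n\n"

parsed = ast.literal_eval(stdout.strip())  # safely evaluate the dict literal
price_fixed = int(parsed["value"])
price_eth = price_fixed / 10**18  # scale down, assuming 18 decimal places
print(price_eth)  # 2400.0
```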

Deploying your Custom Prediction Node

To deploy your node to a remote production environment, you can deploy however you prefer, or follow our Kubernetes deployment guide, in which you:

  1. Add the universal-helm chart to the helm repo.
  2. Update the values.yaml file to suit your case.
  3. Install the universal helm chart, and it will automatically deploy the node to production with the provided values.
  4. Monitor your node in your Kubernetes cluster.