
Build and Deploy a Worker Node using Docker

This document outlines a setup where the worker node is supported by an inference server. Communication occurs through an endpoint, allowing the worker to request inferences from the server.

To build this setup, please follow these steps:

Prerequisites

Ensure you have the following installed on your machine:

  • Git
  • Go (version 1.16 or later)
  • Docker

Clone the allora-offchain-node Repository

Download the allora-offchain-node git repo:

git clone https://github.com/allora-network/allora-offchain-node
cd allora-offchain-node

Configure Your Environment

  1. Copy config.example.json and name the copy config.json.
  2. Open config.json and update the necessary fields inside the wallet sub-object and worker config with your specific values:

wallet Sub-object

  1. nodeRpc: The RPC URL for the network the node will be deployed on
  2. addressKeyName: The name you gave your wallet key when setting up your wallet
  3. addressRestoreMnemonic: The mnemonic that was output when you set up a new key
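
For example, a minimal wallet sub-object might look like the following (all values are placeholders to replace with your own; other wallet fields in config.example.json can be left at their defaults):

"wallet": {
    "addressKeyName": "my-worker-key",
    "addressRestoreMnemonic": "<your mnemonic words>",
    "nodeRpc": "<rpc-url-for-your-network>"
}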

worker Config

  1. topicId: The specific topic ID you created the worker for.
  2. InferenceEndpoint: The endpoint exposed by your inference server, which the worker calls to fetch inferences before submitting them to the network.
  3. Token: The token for the specific topic you are providing inferences for. The token needs to be exposed in the inference server endpoint for retrieval.
  • The Token variable is specific to the endpoint you expose in your main.py file. It is not related to any topic parameter.
⚠️ The worker config is an array of sub-objects, each representing a different topic ID. This structure allows you to manage multiple topic IDs, each within its own sub-object.

To deploy a worker that provides inferences for multiple topics, you can duplicate the existing sub-object and add it to the worker array. Update the topicId, InferenceEndpoint and Token fields with the appropriate values for each new topic:

"worker": [
      {
        "topicId": 1,
        "inferenceEntrypointName": "api-worker-reputer",
        "loopSeconds": 5,
        "parameters": {
          "InferenceEndpoint": "http://localhost:8000/inference/{Token}",
          "Token": "ETH"
        }
      },
      // worker providing inferences for topic ID 2
      {
        "topicId": 2, 
        "inferenceEntrypointName": "api-worker-reputer",
        "loopSeconds": 5,
        "parameters": {
          "InferenceEndpoint": "http://localhost:8000/inference/{Token}", // the specific endpoint providing inferences
          "Token": "ETH" // The token specified in the endpoint
        }
      }
    ],

Reputer Config

The config.example.json file that was copied and edited in the previous steps also contains a JSON object for configuring and deploying a reputer. To ignore the reputer and only deploy a worker, delete the reputer sub-object from the config.json file.

Create the Inference Server

Prepare the API Gateway

Ensure you have an API gateway or server that can accept API requests to call your model.

💡 The model in allora-offchain-node is barebones and outputs a random number. Follow the model built in basic-coin-prediction-node as an example of a full model that uses linear regression to provide an inference.

A full breakdown of the components needed to build the model is available here.

Server Responsibilities

  • Accept API requests from main.go.
  • Respond with the corresponding inference obtained from the model.

Inference Relay

Below is a sample structure of what your main.go, main.py and Dockerfile will look like.

main.go

allora-offchain-node comes preconfigured with a main.go file inside the adapter/api-worker-reputer folder.

The main.go file fetches the responses output by the Inference Endpoint, using the InferenceEndpoint URL and Token configured in the section above.
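
For illustration only, the request main.go makes is roughly equivalent to the following Python sketch (the URL template and token come from the worker config shown above; the real implementation is the Go adapter linked above):

import requests  # third-party HTTP client, used here purely for illustration

# Values taken from the worker config in config.json
inference_endpoint = "http://localhost:8000/inference/{Token}"
token = "ETH"

# Substitute the Token into the endpoint template, then fetch the inference
url = inference_endpoint.replace("{Token}", token)
response = requests.get(url)
response.raise_for_status()
print(response.text)  # the raw inference value returned by the server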

main.py

allora-offchain-node comes preconfigured with a Flask application that uses a main.py file to expose the Inference Endpoint.

The Flask application serves the request from main.go, which is routed to the get_inference function using the required argument (Token). Before proceeding, ensure that all necessary packages are listed in the requirements.txt file.

import random

from flask import Flask

app = Flask(__name__)

# A real deployment would import and call its model here instead of
# returning a random value.
@app.route('/inference/<token>')
def get_inference(token):
    random_float = str(random.uniform(0.0, 100.0))
    return random_float

if __name__ == '__main__':
    # Match the port used in InferenceEndpoint and the Dockerfile
    app.run(host='0.0.0.0', port=8000)
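
With this minimal server, requirements.txt only needs Flask (random is part of the Python standard library); a real model will add its own dependencies:

flask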

Dockerfile

allora-offchain-node includes a sample Dockerfile that can be used to deploy your model on port 8000.

FROM python:3.9-slim
 
RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
 
WORKDIR /app
 
COPY requirements.txt .
 
RUN pip install --no-cache-dir -r requirements.txt
 
COPY . .
 
EXPOSE 8000
 
CMD ["python", "main.py"]
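
If you want to sanity-check the image on its own before running the full compose setup, you can build and run it directly (the image tag inference-server is arbitrary):

docker build -t inference-server .
docker run -p 8000:8000 inference-server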

Running the Node

Now that the node is configured, let's deploy and register it to the network. To run the node, follow these steps:

Export Variables

Execute the following command from the root directory:

chmod +x init.config
./init.config 

This command automatically exports the necessary variables from the account you created. These variables are used by the offchain node; they are bundled with your provided config.json and passed to the node as environment variables.

💡 If you need to make changes to your config.json file after running the init.config command, rerun:

chmod +x init.config
./init.config 

before proceeding.

Request from Faucet

Copy your Allora address and request some tokens from the Allora Testnet Faucet to successfully register your worker in the next step.

Deploy the Node

docker compose up --build

Both the offchain node and the source services will be started. They communicate through endpoints resolved over Docker's internal DNS.
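
As a rough sketch of the shape such a compose file takes (the service names source and offchain_node match the ones seen in this document, but the build details here are illustrative, not the repo's actual values):

services:
  source:
    build: .            # the inference server image built from the Dockerfile above
    ports:
      - "8000:8000"
  offchain_node:
    build: .            # placeholder build context; the repo defines its own node service
    depends_on:
      - source
    # Docker's internal DNS resolves service names, so inside this network the
    # node can reach the inference server at http://source:8000 instead of localhost.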

If your node is working correctly, you should see it actively checking for the active worker nonce:

offchain_node    | {"level":"debug","topicId":1,"time":1723043600,"message":"Checking for latest open worker nonce on topic"}

A successful response from your Worker should display:

{"level":"debug","msg":"Send Worker Data to chain","txHash":<tx-hash>,"time":<timestamp>,"message":"Success"}

Congratulations! You've successfully deployed and registered your node on Allora.



Testing

You can test your local inference server by performing a GET request on http://localhost:8000/inference/<token>.

curl http://localhost:8000/inference/<token>
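
With the barebones model above, this returns a stringified random float between 0.0 and 100.0.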