Build and Deploy a Worker Node From Scratch

This guide walks you through building your worker node yourself, without the help of the allora-wkr CLI tool.

This document describes a setup where the worker node is supported by a side node that provides inferences (the Inference Server). The two communicate through an endpoint: the worker requests inferences from the Inference Server, which keeps the worker node itself ultra-light.

To build this setup, please follow these steps:

1. Inference Server

Ensure you have your API gateway, or any API server that can accept requests to call your model, ready. The goal of this server is to accept API requests from main.py, your custom Python script run by the node function (more details in the Node Function section below), and respond with the appropriate inference obtained from the model. The worker node then relays that inference to the head node, which in turn sends it to the chain validators to eventually be scored.

If you have a model running but do not yet have an inference server, you can write a Dockerfile that bundles a simple Flask application exposing an endpoint to access the model. Below is a sample structure of what your app.py and Dockerfile will look like. You can also see a working example in the coin-prediction source code once it is open-sourced.

app.py

Create a Flask application that imports the model module and calls its get_inference function (the name could be anything) with the argument passed in the API request. Make sure all needed packages (for this example, flask and gunicorn, plus whatever your model needs) are listed in requirements.txt.

from flask import Flask
from model import get_inference  # Importing the hypothetical model

app = Flask(__name__)

@app.route('/inference/<argument>')
def inference(argument):
    inference_data = get_inference(argument)
    return inference_data

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8000)
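
The app.py above assumes a model module that exposes a get_inference function. As an illustration only (your real model code will differ), a minimal placeholder model.py could look like this; the response shape matches what the main.py script later in this guide expects (a top-level "data" key):

model.py

def get_inference(argument):
    # Placeholder: compute or look up the inference for the given argument.
    # Replace this with a call into your actual model.
    return {"data": {"value": f"dummy-inference-for-{argument}"}}
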
Dockerfile

Next, create a Dockerfile like the one below, build it with docker build -t inference-server . and run it with docker run -p 8000:8000 inference-server

FROM python:3.8-slim

WORKDIR /app

COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

EXPOSE 8000

ENV NAME sample

# Run gunicorn when the container launches, binding app.py's Flask app to port 8000 (gunicorn must be listed in requirements.txt)
CMD ["gunicorn", "-b", ":8000", "app:app"]

You can test your local Inference Server by hitting http://localhost:8000/inference/<argument> (e.g., for an ETH prediction model, the argument value is ETH, so the URL is http://localhost:8000/inference/ETH).
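
For example, a quick check from Python against the container started above (assuming an ETH prediction model) could look like this:

import requests

# Query the local Inference Server started with docker run -p 8000:8000 inference-server
response = requests.get("http://localhost:8000/inference/ETH")
print(response.status_code, response.text)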

2. Node Function Python Script

To communicate with the inference, you need a Python script that serves as the entry point from the worker node to your Inference Server. You will write your custom logic in main.py, consisting of the logic to call the inference server plus any other processing needed to get the data ready for the head node. You can find a simple main.py below:

import requests
import sys
import json

def process(argument):
    url = f"http://localhost:8000/inference/{argument}"
    inference_response = requests.get(url)

    if inference_response.status_code == 200:
        inference = inference_response.json()
        # Your custom processing logic can be written here
        # Data must be provided in a `{"value":"<actual-value>"}` json format.
        response = json.dumps(inference['data'])
    else:
        response = json.dumps({"error": "Error providing inference"})

    print(response)


if __name__ == "__main__":
    try:
        argument = sys.argv[1]
        process(argument)
    except Exception as e:
        response = json.dumps({"error": str(e)})
        print(response)
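
Before bundling main.py into the worker image, you can sanity-check it the way the node function will invoke it: as a script that takes its argument on the command line (the sys.argv handling above) and prints the result to stdout. A minimal check, assuming the Inference Server from step 1 is running on localhost:8000, could be:

import subprocess

# Invoke main.py the way the node function does: one CLI argument, JSON printed to stdout
result = subprocess.run(["python", "main.py", "ETH"], capture_output=True, text=True)
print(result.stdout)  # expected shape: {"value": "<actual-value>"}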

3. Identity Generation

Node identity is determined by the node's private key. Each node on the network is also known as a peer, which has an ID and a private key. You can generate a node identity by running the following command, which mounts your current directory at /data and writes the generated keys (including priv.bin, referenced later by the --private-key flag) to ./keys:

docker run -it --entrypoint=bash -v ./:/data 696230526504.dkr.ecr.us-east-1.amazonaws.com/allora-inference-base:dev-latest -c "mkdir -p /data/keys && (cd /data/keys && allora-keys)"

4. Worker Node Main Dockerfile

Now that you have the node identity generated for your worker and your node function pulling data from your Inference Server, you must bundle your worker node with a Dockerfile so it can be tested as a whole. As stated earlier, this Dockerfile pulls the allora-inference-base image from the Allora Docker repository and combines it with your custom main.py. Below is an example of what your Dockerfile_node should look like:

# Pull base image
FROM --platform=linux/amd64 696230526504.dkr.ecr.us-east-1.amazonaws.com/allora-inference-base:dev-latest

# Copy requirements and install dependencies
COPY requirements.txt /app/
RUN pip3 install --requirement /app/requirements.txt

# Copy the main application file
COPY main.py /app/
    
CMD ["allora-node", "--role=worker", \
     "--peer-db=/data/peerdb", \
     "--function-db=/data/function-db", \
     "--runtime-path=/app/runtime", \
     "--runtime-cli=bls-runtime", \
     "--workspace=/data/workspace", \
     "--private-key=/data/keys/priv.bin", \
     "--log-level=debug", \
     "--port=9011", \
     "--topic=1", \
     "--boot-nodes=/dns4/head-0.staging-us-east-1.behindthecurtain.xyz/tcp/{head-port}/p2p/{head-id}"]


Note:

  1. The {head-id} and {head-port} values in the --boot-nodes address are the peer ID and p2p port of the head node; for the staging head nodes they are 12D3KooWRX78c84ko4ZNiDFE5i2d9QT4SPxBkpT6kjzxUotQ8sNR/9010 and 12D3KooWEvNL9wXM6dzusvRpA7qo98QdsUE52x4vHtv2sttc6Za7/9011
  2. The --boot-nodes flag can take more than one address, separated by commas

With this Dockerfile_node in your root directory, you can now run:

# Build your worker node image
docker build -f Dockerfile_node -t image-name .

# Run your newly built Docker image
docker run -d -v ./:/data --name container-name image-name

At this point, your worker node is set up. If all is OK, you'll see a line in the logs saying peer connected.

It should now be able to pick up published requests from the head node (for your specified topic) and return responses.

You can test by running the following curl command and checking whether your worker shows activity in its logs.

curl --location 'http://u:pw@head-address:head-port/api/v1/functions/execute' --header 'Accept: application/json, text/plain, */*' --header 'Content-Type: application/json;charset=UTF-8' --data '{
    "function_id": "bafybeigpiwl3o73zvvl6dxdqu7zqcub5mhg65jiky2xqb4rdhfmikswzqm",
    "method": "allora-inference-function.wasm",
    "parameters": null,
    "topic": "1",
    "config": {
        "env_vars": [
            {                              
                "name": "BLS_REQUEST_PATH",
                "value": "/api"
            },
            {                              
                "name": "ALLORA_ARG_PARAMS",
                "value": "ETH"
            }
        ],
        "number_of_nodes": -1,
        "timeout" : 2
    }
}' | jq
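
If you prefer to script the same check, here is a rough Python equivalent of the curl request above (u, pw, head-address, and head-port are placeholders, exactly as in the curl example):

import requests

# Same payload as the curl example; replace head-address/head-port and the u:pw credentials
url = "http://head-address:head-port/api/v1/functions/execute"
payload = {
    "function_id": "bafybeigpiwl3o73zvvl6dxdqu7zqcub5mhg65jiky2xqb4rdhfmikswzqm",
    "method": "allora-inference-function.wasm",
    "parameters": None,
    "topic": "1",
    "config": {
        "env_vars": [
            {"name": "BLS_REQUEST_PATH", "value": "/api"},
            {"name": "ALLORA_ARG_PARAMS", "value": "ETH"},
        ],
        "number_of_nodes": -1,
        "timeout": 2,
    },
}

response = requests.post(url, json=payload, auth=("u", "pw"))
print(response.status_code)
print(response.json())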

Note that while all the Dockerfiles in this guide run independently, you can also run them together as services in a Docker Compose file. In fact, this is advised, as you will see in the worker node source code once it is open-sourced.

Please contact us for personalized access to connect to a head node.

Deploying a Worker Node

Now that you have built and tested your worker node, your next goal is to deploy it to production, where it runs continuously. To do that, we use a Kubernetes cluster and the Upshot universal-helm chart. While you can deploy your node however you wish, you can follow these steps if you are not opinionated about deployment.

1. Build, Tag, and Push Your Docker Image

The first step toward deployment is pushing your Docker image to your preferred repository. The Universal-helm chart will use the pushed image to deploy the worker node to your Kubernetes cluster.

# Log in to your Docker repository, e.g. Docker Hub
docker login

# Tag your image
docker tag image-name:tag username/repository:tag

# Push the image to the repository
docker push username/repository:tag

2. Add Universal-helm to Helm Repository

On your Kubernetes cluster on your preferred cloud service, add the universal-helm chart repository:

helm repo add upshot https://upshot-tech.github.io/helm-charts

3. Create values.yaml File

Provide custom values in a values.yaml file.

statefulsets:
  - name: worker
    replicas: 1
    persistence:
      size: 1Gi
      storageClassName: gp2
      volumeMountPath: /data
    initContainers:
      - name: init-keys
        image: {image}:{tag}
        env:
          - name: APP_HOME
            value: "/data"
        workingDir: /data
        command:
          - /bin/sh
          - -c
          - |
            KEYS_PATH="${APP_HOME}/keys"

            if [ -d "$KEYS_PATH" ]; then
              echo "Keys exist"
            else
              echo "Generating New Node Identity"
              mkdir -p ${APP_HOME}/keys
              cd $KEYS_PATH
              /app/upshot-keys
            fi
        volumeMounts:
          - name: workers-data
            mountPath: /data
        securityContext:
          runAsUser: 1001

    containers:
      - name: worker
        image:
          repository: {image}
          tag: {tag}
        env:
          - name: APP_HOME
            value: "/data"
          - name: UPSHOT_API_TOKEN
            valueFrom:
              secretKeyRef:
                name: upshot-api-token
                key: UPSHOT_API_TOKEN
        workingDir: /data
        command:
          - /app/upshot-node
          - --role=worker
          - --peer-db=$(APP_HOME)/peer-database
          - --function-db=$(APP_HOME)/function-database
          - --runtime-path=/app/runtime
          - --runtime-cli=bls-runtime
          - --workspace=/tmp/node
          - --private-key=$(APP_HOME)/keys/priv.bin
          - --log-level=debug
          - --boot-nodes="/dns4/head-0.staging-us-east-1.behindthecurtain.xyz/tcp/{head-port}/p2p/{head-id}"
        ports:
          - name: p2p
            type: ClusterIP
            port: 9100
            protocol: TCP
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 256m
            memory: 512Mi
        startupProbe:
          tcpSocket:
            port: 9100
          periodSeconds: 10
          failureThreshold: 6
        livenessProbe:
          tcpSocket:
            port: 9100
global:
  serviceAccount:
    name: node
  securityContext:
    fsGroup: 1001
    runAsUser: 1001
    runAsGroup: 1001
    fsGroupChangePolicy: "Always"

Note: Please replace the placeholders {image}, {tag}, {head-port}, and {head-id} with their respective values. Also note that the worker container reads UPSHOT_API_TOKEN from a Kubernetes secret named upshot-api-token, which you should create in your cluster before installing the chart.

4. Install Helm Chart

helm install index-provider upshot/universal-helm -f values.yaml

If all of these steps are done correctly, your worker node should run successfully in your cloud cluster.