Lightchain

Run a Node.
Power AI. Earn $LCAI.

Join the decentralized intelligence layer. Operators run real AI workloads, help secure the network, and earn rewards for consistent uptime.

What is a Lightchain Node?

Unlike traditional blockchain nodes that only validate ledgers, Lightchain nodes execute real AI workloads. Powered by Proof of Intelligence (PoI), nodes run the AIVM to process training, inference, and optimization tasks that help secure the network.

Why Run a Node?

Earn $LCAI Rewards

Passive income for securing the decentralized AI economy.

Secure the Network

Contribute to a resilient, censorship-resistant infrastructure.

Build Unbiased AI

Enable open-source models to run without corporate gatekeepers.

Before you onboard

A production-grade onboarding page should qualify the operator quickly. These are the only requirements worth showing here; everything else should move to technical docs.

Wallet access

Secure private key management required for node identity and reward collection.

Model capacity

Adequate VRAM for high-performance LLM inference and distributed processing.

Reliable uptime

Rewards require 99.9% availability. Connectivity must be stable for peer synchronization.

Docs nearby

Keep API references open. Follow protocol updates via the developer portal.
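The 99.9% availability target above is stricter than it sounds. As a rough sanity check, the allowed downtime (0.1% of total time) works out to well under an hour per month:

```shell
# Downtime budget implied by 99.9% availability (0.1% offline time).
awk 'BEGIN { printf "per 30-day month: %.1f minutes\n", 30*24*60*0.001 }'  # 43.2 minutes
awk 'BEGIN { printf "per year: %.2f hours\n", 365*24*0.001 }'              # 8.76 hours
```

In other words, a single unattended reboot or network outage can consume most of a month's budget.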

Onboard in five steps

This is the happy path. The page should guide the operator from install to registration to launch, without exposing every protocol detail on the first screen.

Phase 00

Install prerequisites

Set up the runtime and tooling required before building or registering the worker.

  • Install Ollama to serve AI models locally on your machine.
  • Pull the model you plan to advertise before registration.
  • Optionally create a new wallet keystore using cast if you don't have one.
Install Ollama and pull model
curl -fsSL https://ollama.com/install.sh | sh
ollama pull llama3.2:3b
cast wallet new --keystore ./data --password <your-password>  # optional

Phase 01

Install the worker tools

Clone the repository and build the core binaries required to register your node and run the worker service.

  • Clone the lightchain-worker repository from source.
  • Build the CLI binary used for registration and key management.
  • Build the sidecar binary that serves inference jobs.
Download and build from source
git clone https://github.com/lightchain-protocol/lightchain-worker.git
cd lightchain-worker
go build -o lightchain-worker ./cmd/cli
go build -o lightchain-worker-sidecar ./cmd/sidecar

Phase 02

Prepare your worker

Configure your keystore path, set your password, and generate the worker key before registration.

  • Point the worker to your keystore file via environment variable.
  • Set the keystore password so the worker can unlock the key.
  • Generate the worker key before submitting registration.
Set env vars and create worker key
export WORKER_KEYSTORE_PATH=./data/keystore.json
export WORKER_KEYSTORE_PASSWORD=<your-password>
./lightchain-worker keygen

Phase 03

Register on the network

Declare the models you will serve and submit registration so the network recognizes your worker.

  • Set the models your worker will advertise to the network.
  • Register with the wallet that will operate the worker.
  • Confirm the worker appears as active before launch.
Set model and register
export SUPPORTED_MODELS=llama3.2:3b
./lightchain-worker register

Phase 04

Go live and monitor

Start the worker service only after registration succeeds and your model endpoint is reachable.

  • Point the worker at your local Ollama instance.
  • Launch the sidecar process that serves inference jobs.
  • Check worker status to verify it is healthy and active.
Start serving jobs
export OLLAMA_URL=http://localhost:11434
./lightchain-worker-sidecar
./lightchain-worker status  # run in a separate shell once the sidecar is up
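Before launching, it can help to fail fast if any variable from the earlier phases is missing. A minimal POSIX-sh sketch; the `require_env` helper is ours, not part of the worker tooling:

```shell
# require_env NAME...: return non-zero if any listed variable is unset or empty.
require_env() {
  for name in "$@"; do
    eval "val=\${$name:-}"
    if [ -z "$val" ]; then
      echo "missing required env var: $name" >&2
      return 1
    fi
  done
}

# Usage before starting the sidecar (variables from Phases 02-04):
# require_env WORKER_KEYSTORE_PATH WORKER_KEYSTORE_PASSWORD SUPPORTED_MODELS OLLAMA_URL
```

Running the check in the same shell that launches the sidecar catches the common failure mode of exporting variables in one terminal and starting the worker in another.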

Launch checklist

Operators should be able to answer these quickly before they go live.

I have installed the worker tools I need.

I can run both `lightchain-worker` and `lightchain-worker-sidecar` before I begin registration.

My wallet is funded and under my control.

No ambiguity over who operates the node, and no missing keystore access.

My advertised models are already available in my runtime.

The worker should not register models it cannot serve.

I have current network values from docs.

Contract addresses, stake expectations, and environment details should stay out of this page.

I can keep the worker online.

Production onboarding should encourage reliability, not test-lab behavior.
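One checklist item above, that advertised models must already be available in the runtime, is easy to script. A sketch; the `check_model` helper and the captured listing are ours, and in practice the second argument would be `$(ollama list)`:

```shell
# check_model MODEL LISTING: succeed only if MODEL appears in LISTING.
check_model() {
  printf '%s\n' "$2" | grep -q "$1"
}

# Example against a captured `ollama list`-style listing:
listing="NAME           ID      SIZE    MODIFIED
llama3.2:3b    abc123  2.0 GB  2 days ago"
check_model "llama3.2:3b" "$listing" && echo "advertised model is available"
```

Running this before `./lightchain-worker register` keeps the worker from advertising a model it cannot serve.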

Put the detail where it belongs

This page should stop after the operator understands the flow. The docs page should carry everything that changes often or needs precision.

Network & Environment

Keep current addresses, stake values, and full configuration reference in docs.

Deployment patterns

Docker, process managers, hosted runtimes, and multi-node production guidance belong there too.

Verification and troubleshooting

Status checks, metrics, failure cases, and recovery steps should not crowd the onboarding page.

Hardware requirements

Verify your hardware meets the minimum or recommended specs before proceeding.

Specification | Minimum            | Recommended
CPU           | 4 Cores (x86_64)   | 16 Cores (AMD/Intel)
RAM           | 16GB DDR4          | 64GB+ DDR5
Storage       | 512GB NVMe SSD     | 2TB NVMe Gen4
GPU           | 8GB VRAM (NVIDIA)  | 24GB+ VRAM (RTX 4090/A100)
Internet      | 100 Mbps Up/Down   | 1 Gbps Symmetric
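The GPU row is the one most operators trip on. A quick self-check, assuming an NVIDIA card and the standard `nvidia-smi` query flags; the `meets_min_vram` helper is ours:

```shell
# meets_min_vram MIB: succeed if total VRAM (in MiB) meets the 8 GB minimum.
meets_min_vram() {
  [ "$1" -ge 8192 ]
}

# Real usage on a machine with an NVIDIA GPU:
#   meets_min_vram "$(nvidia-smi --query-gpu=memory.total --format=csv,noheader,nounits | head -n1)"
meets_min_vram 24576 && echo "24 GB card clears the minimum"
```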
Frequently Asked Questions