Local Deployment Guide¶
Loom — Lightweight Orchestrated Operational Mesh
Overview¶
Loom can run as a local background service accessible on your machine and optionally published on your LAN. Three deployment methods are supported:
- Docker Compose — recommended for most users
- Native process manager — macOS (launchd) or Windows (NSSM)
- Kubernetes — see KUBERNETES.md
Docker Compose¶
The simplest way to run the full Loom stack locally.
Prerequisites¶
- Docker Desktop (Mac/Windows) or Docker Engine (Linux)
- Docker Compose v2+
Start¶
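Assuming the compose file sits at the repository root, the stack is brought up with the standard Compose command:

```shell
# Start all core services in the background
docker compose up -d
```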
This starts:
- NATS message bus (port 4222, monitoring on 8222)
- Valkey checkpoint store (port 6379)
- Workshop web UI (port 8080)
- Router deterministic task router
Access¶
- Workshop: http://localhost:8080
- NATS monitoring: http://localhost:8222
Add Workers¶
Workers connect to Ollama on the host for local LLM inference.
Uncomment the worker section in docker-compose.yml or add:
```yaml
worker-summarizer:
  build:
    context: .
    dockerfile: docker/Dockerfile.worker
  environment:
    - WORKER_CONFIG=configs/workers/summarizer.yaml
    - MODEL_TIER=local
    - NATS_URL=nats://nats:4222
    - OLLAMA_URL=http://host.docker.internal:11434
    # - LOOM_TRACE_CONTENT=1  # Enable prompt/completion logging in OTel spans
  depends_on:
    - nats
    - router
```
Tracing: Set `LOOM_TRACE_CONTENT=1` on any worker or orchestrator container to record prompt and completion text as OpenTelemetry span events. This is disabled by default to avoid storing sensitive data in your tracing backend.
Deploy Apps¶
Upload app ZIPs through the Workshop at http://localhost:8080/apps.
Deployed apps persist across container restarts via the loom-apps volume.
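The `loom-apps` volume can be located and inspected with standard Docker commands:

```shell
# Show the volume's mount point and metadata on the host
docker volume inspect loom-apps
```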
LAN Access¶
Bind the Workshop to all interfaces:
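One way to do this without changing Loom's own configuration is a Compose override that publishes the port on all interfaces. The service name `workshop` here is an assumption based on the stack description above; match it to the name used in your `docker-compose.yml`:

```yaml
# docker-compose.override.yml
services:
  workshop:
    ports:
      - "0.0.0.0:8080:8080"
```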
Then access from other devices at http://<your-ip>:8080.
With mDNS enabled (install `loom[mdns]`), the Workshop auto-advertises as loom-workshop on the LAN, discoverable via Bonjour/Avahi.
Native Process Manager — macOS¶
Run Loom as launchd background services that start on login.
Prerequisites¶
- Python 3.11+ with loom installed: `pip install "loom-ai[workshop]"`
- NATS server: `brew install nats-server` or Docker
Install¶
```bash
# Default (localhost only)
bash deploy/macos/install.sh

# LAN accessible
bash deploy/macos/install.sh 0.0.0.0
```
Services¶
| Service | Description | Log |
|---|---|---|
| com.loom.workshop | Workshop UI (port 8080) | ~/Library/Logs/loom/workshop.log |
| com.loom.router | Task router | ~/Library/Logs/loom/router.log |
Manage¶
```bash
# Check status
launchctl list | grep loom

# Stop workshop
launchctl stop com.loom.workshop

# Start workshop
launchctl start com.loom.workshop

# View logs
tail -f ~/Library/Logs/loom/workshop.log
```
Uninstall¶
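If the repository ships an uninstall script, prefer it; otherwise the launchd services can be removed by label (labels from the Services table above):

```shell
# Stop and deregister both services for the current user
launchctl remove com.loom.workshop
launchctl remove com.loom.router
```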
Native Process Manager — Windows¶
Run Loom as Windows services using NSSM.
Prerequisites¶
- Python 3.11+ with loom installed: `pip install "loom-ai[workshop]"`
- NSSM: `choco install nssm`
- NATS server: `choco install nats-server` or Docker
Install¶
```powershell
# Default (localhost only)
.\deploy\windows\install.ps1

# LAN accessible
.\deploy\windows\install.ps1 -Host "0.0.0.0"
```
Services¶
| Service | Description | Log |
|---|---|---|
| LoomWorkshop | Workshop UI (port 8080) | %LOCALAPPDATA%\loom\logs\workshop.log |
| LoomRouter | Task router | %LOCALAPPDATA%\loom\logs\router.log |
Manage¶
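Assuming the services were installed via NSSM as above, they can be managed with NSSM's own subcommands:

```powershell
# Check status
nssm status LoomWorkshop

# Stop / start the Workshop
nssm stop LoomWorkshop
nssm start LoomWorkshop

# Follow the log (PowerShell)
Get-Content -Wait "$env:LOCALAPPDATA\loom\logs\workshop.log"
```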
Uninstall¶
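In the absence of a dedicated uninstall script, NSSM can remove the services directly (service names from the table above):

```powershell
nssm stop LoomWorkshop
nssm remove LoomWorkshop confirm
nssm stop LoomRouter
nssm remove LoomRouter confirm
```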
mDNS / Bonjour Discovery¶
Install the optional mDNS dependency to auto-advertise Loom on your LAN:
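Based on the package name used elsewhere in this guide (`loom-ai`) and the extra referenced in the LAN Access section (`mdns`), the install is likely:

```shell
pip install "loom-ai[mdns]"
```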
When the Workshop starts, it automatically registers as a Bonjour service. Other devices on the network can discover it without knowing the IP address.
For headless deployments (no Workshop), use the standalone advertiser:
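The advertiser's exact entry point is not shown in this guide; the module path below is a hypothetical placeholder, so check the project's CLI help for the real invocation:

```shell
# HYPOTHETICAL entry point -- substitute the actual advertiser command
python -m loom.mdns
```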
Discovery from clients¶
macOS:
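Browsing with the built-in Bonjour tool `dns-sd` works out of the box; the service type `_http._tcp` is an assumption, so substitute the type Loom actually registers:

```shell
# Browse for HTTP services advertised on the local network
dns-sd -B _http._tcp local.
```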
Linux (Avahi):
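With Avahi's command-line browser (same service-type assumption as on macOS):

```shell
# -r resolves addresses/ports, -t terminates after the initial dump
avahi-browse -rt _http._tcp
```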
For Kubernetes deployment, see KUBERNETES.md. For app bundle deployment, see APP_DEPLOYMENT.md. For ITP analytical system setup (Baft + Framework), see baft/docs/SETUP.md.