
OpenClaw + n8n Setup Guide

Build a powerful local AI automation stack on your Mac. OpenClaw handles the thinking, n8n handles the doing — 100% on-device with Ollama.

Overview

AI Agent Meets Workflow Automation

OpenClaw is an AI agent that reasons, makes decisions, and takes action. n8n is a visual workflow automation platform with 400+ integrations that executes deterministic tasks. Together, they form a powerful AI automation stack — running entirely on your Mac with local LLMs via Ollama.

🧠
OpenClaw — The Thinker
Intent parsing, decision-making, conversation memory. Powered by local Qwen models via Ollama.
n8n — The Doer
API calls, data transformations, multi-step automations. 400+ integrations with visual workflow builder.
🔒
100% Local & Private
All AI processing stays on your Mac. No cloud APIs, no data leaving your device, no subscriptions.

Architecture

How the Stack Works

OpenClaw and n8n communicate via webhooks, keeping credentials isolated and workflows inspectable. Ollama runs on the host Mac, providing local LLM inference to OpenClaw.

- User → OpenClaw (port 3456): AI agent (reasoning)
- OpenClaw → n8n (port 5678): workflow engine
- n8n → External APIs: Slack, Gmail, GitHub...
- OpenClaw → Ollama (host.docker.internal:11434): local LLM runtime on the host Mac

Prerequisites

Before You Start

Hardware Requirements

| Resource | Minimum | Recommended |
|---|---|---|
| Apple Silicon | M1 or later | M3/M4 Ultra |
| RAM | 16 GB | 96 GB+ |
| Storage | 20 GB | 50 GB+ |

Software Requirements

- Docker Desktop for Mac (provides docker-compose)
- Ollama installed and running on the host Mac
- Git, for cloning the stack repository
No Cloud API Keys Needed
Unlike the original guide, which requires Anthropic/OpenAI API keys, this MacAI setup uses Ollama for 100% local inference. Set OLLAMA_API_KEY=ollama-local (any value works; Ollama does not validate API keys).

Setup

Install the Stack

Choose your preferred setup method. Option A is the fastest path; Option B gives you full control.

Recommended
The pre-built stack from github.com/caprihan/openclaw-n8n-stack includes pre-configured Docker Compose, example workflows, and sensible defaults.
1

Clone the Repository

```bash
git clone https://github.com/caprihan/openclaw-n8n-stack.git
cd openclaw-n8n-stack
```
2

Configure Environment Variables

Copy the template and edit for local Ollama use:

```bash
cp .env.template .env
nano .env
```

.env
```
# Point to Ollama on host Mac (NOT a cloud API)
OLLAMA_API_BASE=http://host.docker.internal:11434
OLLAMA_API_KEY=ollama-local

# n8n config
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=your_secure_password_here

# Webhook URLs
N8N_WEBHOOK_BASE=http://n8n:5678
N8N_WEBHOOK_URL=http://localhost:5678

# Timezone
GENERIC_TIMEZONE=Asia/Hong_Kong
```
3

Start the Stack

```bash
docker-compose up -d
```
4

Verify Services

```bash
docker-compose ps
```

| Service | URL | Credentials |
|---|---|---|
| n8n | http://localhost:5678 | admin / (your password) |
| OpenClaw | http://localhost:3456 | |
| Ollama | http://localhost:11434 | Running on host Mac |
5

Import Pre-Built Workflows

Open n8n at http://localhost:5678, go to Workflows, then Import from File. Select JSON files from the workflows/ directory in the cloned repo.

Manual Setup
Build the stack yourself for full control over every configuration detail.
1

Create Project Directory

```bash
mkdir openclaw-n8n && cd openclaw-n8n
```
2

Create docker-compose.yml

docker-compose.yml
```yaml
version: "3.8"

services:
  openclaw:
    image: openclaw/openclaw:latest
    container_name: openclaw
    ports:
      - "3456:3456"
    environment:
      - OLLAMA_API_BASE=http://host.docker.internal:11434
      - OLLAMA_API_KEY=${OLLAMA_API_KEY:-ollama-local}
    volumes:
      - openclaw_data:/app/data
    networks:
      - ai-stack
    restart: unless-stopped

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    ports:
      - "5678:5678"
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER:-admin}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
      - N8N_HOST=localhost
      - N8N_PORT=5678
      - N8N_PROTOCOL=http
      - WEBHOOK_URL=${N8N_WEBHOOK_URL:-http://localhost:5678}
      - GENERIC_TIMEZONE=Asia/Hong_Kong
    volumes:
      - n8n_data:/home/node/.n8n
    networks:
      - ai-stack
    restart: unless-stopped

volumes:
  openclaw_data:
  n8n_data:

networks:
  ai-stack:
    driver: bridge
```
3

Create .env File

.env
```
OLLAMA_API_BASE=http://host.docker.internal:11434
OLLAMA_API_KEY=ollama-local
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=change_me_to_something_secure
N8N_WEBHOOK_URL=http://localhost:5678
```
4

Start Services

```bash
docker-compose up -d
```

Configuration

Configure OpenClaw for Local LLM

Edit the OpenClaw config to use your local Ollama instance instead of cloud APIs. This is the key difference in the MacAI setup.

OpenClaw Configuration

Mount or edit config/openclaw.json to point to your Ollama models:

openclaw.json
```json
{
  "models": {
    "default": "ollama/qwen2.5:32b-openclaw",
    "provider": "ollama",
    "api_base": "http://host.docker.internal:11434"
  },
  "thinking_level": "medium",
  "channels": {
    "default": {
      "webhook_url": "http://n8n:5678/webhook/openclaw-default"
    }
  }
}
```
Model Selection
The default model ollama/qwen2.5:32b-openclaw requires ~22 GB RAM (Q4_K_M). For Macs with less RAM, try ollama/qwen2.5:14b (~10 GB) or ollama/qwen2.5:7b (~5 GB). Check the Hardware Guide for detailed RAM calculations.

n8n Environment Variables

| Variable | Description | Default |
|---|---|---|
| N8N_BASIC_AUTH_ACTIVE | Enable basic auth | true |
| N8N_ENCRYPTION_KEY | Encryption key for credentials | Auto-generated |
| EXECUTIONS_DATA_PRUNE | Auto-delete old execution data | true |
| EXECUTIONS_DATA_MAX_AGE | Max age of execution data (hours) | 336 (14 days) |
| GENERIC_TIMEZONE | Timezone for scheduled workflows | Asia/Hong_Kong |

Integration

Connecting OpenClaw to n8n

The core integration pattern uses webhooks. OpenClaw sends requests to n8n webhook URLs, and n8n executes the workflow. Both services communicate over the internal Docker network.
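As a concrete sketch, a request body OpenClaw might POST to an n8n webhook could look like this (the field names are illustrative, not a fixed OpenClaw schema):

```json
{
  "intent": "summarise_document",
  "message": "Summarise the attached quarterly report",
  "session_id": "abc123",
  "metadata": {
    "source": "openclaw",
    "timestamp": "2025-01-15T09:30:00Z"
  }
}
```

In n8n, everything under the webhook node's body is available to downstream nodes, so workflows can branch on fields like intent.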

1

Create a Webhook Workflow in n8n

In n8n, create a new workflow. Add a Webhook node as the trigger, set HTTP method to POST. Note the webhook URL (e.g. http://n8n:5678/webhook/my-workflow). Add your processing nodes and activate the workflow.

2

Register the Webhook in OpenClaw

Register the n8n webhook as a tool/skill in OpenClaw. The agent will call this URL when it needs to trigger that workflow. Since both services share the Docker network (ai-stack), OpenClaw reaches n8n at http://n8n:5678/webhook/... with no external traffic.
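This registration can be sketched as an addition to openclaw.json; the exact schema depends on your OpenClaw version, so treat the keys below as illustrative:

```json
{
  "tools": [
    {
      "name": "triage_email",
      "description": "Classify an email and apply labels via n8n",
      "type": "webhook",
      "url": "http://n8n:5678/webhook/my-workflow",
      "method": "POST"
    }
  ]
}
```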

3

Test the Connection

```bash
# Test from inside the OpenClaw container
docker exec openclaw curl -X POST \
  http://n8n:5678/webhook/my-workflow \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello from OpenClaw!"}'
```
Community Node (Alternative)
Install the n8n-nodes-openclaw community node for a native integration. In n8n, go to Settings, then Community Nodes, and install n8n-nodes-openclaw. This gives you an OpenClaw node with resource/action dropdowns covering all 20+ built-in tools.

Example Workflows

What You Can Build

Here are practical automation workflows you can build with the OpenClaw + n8n stack, all powered by your local LLM.

📧
Email Triage Agent
Gmail triggers n8n, which sends email content to OpenClaw for classification (urgent / action-needed / informational / spam). Results route to Slack or apply Gmail labels automatically.
🐛
GitHub Issue Triage
GitHub webhook triggers n8n on new issues. OpenClaw analyses type and severity, then n8n applies labels, assigns team members, and posts to Slack.
📊
Document Summariser
Upload documents to a watched folder. n8n detects new files, sends content to OpenClaw for summarisation, and stores results in Notion or a local database.
🔍
AI Fact-Checking
Feed claims to OpenClaw for analysis. The agent uses n8n to query multiple sources, cross-references results, and produces a confidence-scored verdict.
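For the email-triage pattern above, the agent's reply to n8n could be a small JSON verdict that downstream nodes route on (schema illustrative):

```json
{
  "category": "action-needed",
  "confidence": 0.87,
  "summary": "Client requests an updated quote by Friday",
  "suggested_label": "Action Needed"
}
```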

Production Setup

6 Production Workflows

MacAI HK ships six pre-built n8n workflows that power the hkmac.ai storefront. Each workflow connects to real services — Resend for email, Cal.com for bookings, WhatsApp Cloud API for support — and runs on your local Mac via a Cloudflare Tunnel.

Workflow Showcase
See detailed diagrams and node-by-node breakdowns for all 6 workflows on the Production Workflows Showcase page.

Cloudflare Tunnel Setup

Cloudflare Workers (and the public internet) cannot reach localhost. A Cloudflare Tunnel creates a secure, outbound-only connection that routes n8n.hkmac.ai to your local n8n instance on port 5678 — no open ports, no firewall rules.

```bash
# Install cloudflared
brew install cloudflared

# Create the tunnel
cloudflared tunnel create macai-n8n

# Route DNS
cloudflared tunnel route dns macai-n8n n8n.hkmac.ai

# Run the tunnel (routes n8n.hkmac.ai → localhost:5678)
cloudflared tunnel run macai-n8n
```
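For a persistent tunnel, cloudflared reads a config file. A minimal ingress configuration for this stack might look like the following; the tunnel ID and credentials path will differ on your machine:

```yaml
# ~/.cloudflared/config.yml
tunnel: macai-n8n
credentials-file: /Users/you/.cloudflared/<TUNNEL-ID>.json

ingress:
  # Route the public hostname to local n8n
  - hostname: n8n.hkmac.ai
    service: http://localhost:5678
  # Catch-all: reject anything else
  - service: http_status:404
```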
Full Setup Script
A complete tunnel setup script with credential generation and launchd service installation is available at Set Up/03-AI-Source/openclaw-n8n/setup-tunnel.sh.

Environment Variables

Set these variables in the Cloudflare Pages dashboard (Settings → Environment Variables) and in your local n8n .env file.

| Variable | Value | Notes |
|---|---|---|
| N8N_WEBHOOK_BASE_URL | https://n8n.hkmac.ai | Public base URL for all webhook triggers |
| N8N_WEBHOOK_SECRET | Random 32-char string | Shared secret for webhook authentication |
| WHATSAPP_TOKEN | WhatsApp Cloud API token | From Meta Business Suite |
| WHATSAPP_PHONE_NUMBER_ID | Phone number ID | From WhatsApp Business API settings |
| WHATSAPP_VERIFY_TOKEN | Webhook verify token | Custom string for webhook verification |
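One way to generate the 32-character N8N_WEBHOOK_SECRET is with openssl: 16 random bytes hex-encode to exactly 32 characters.

```shell
# 16 random bytes -> 32 hex characters, suitable for N8N_WEBHOOK_SECRET
openssl rand -hex 16
```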

The 6 Workflows

🛒
WF1: Cart Enquiry Pipeline
Triggered when a customer submits an enquiry from the configure page. Sends a formatted quote email via Resend with full hardware specs and pricing breakdown.
📅
WF2: Post-Booking Onboarding
Triggered by Cal.com booking events. Sends a welcome email with preparation checklist, appointment details, and what to bring to the session.
💬
WF3: WhatsApp Support Bot
Triggered by incoming WhatsApp messages via the Cloud API. Routes queries to OpenClaw for AI-powered responses, with handoff to human support for complex issues.
📋
WF4: Lead Qualification
Triggered by contact form submissions. Scores leads based on budget, use case, and hardware ownership, then routes to the appropriate follow-up sequence.
📦
WF5: Inventory Alerts
Runs on a 6-hour cron schedule and on stock update events. Checks inventory levels in Cloudflare KV and sends restock alerts when units fall below threshold.
🧾
WF6: Invoice Generation
Triggered when a booking is marked as completed. Generates a professional PDF invoice with line items, payment details, and sends it to the client via email.

Import & Activate

1

Open n8n & Import Workflows

Open n8n at http://localhost:5678. Go to Workflows, then Import from File. Import each .json file from the workflows/ directory.

2

Configure Credentials

In n8n, go to Credentials and add your Resend API key. Each workflow that sends email references this credential.

3

Set Webhook URLs

Update the webhook base URL in each workflow to match your N8N_WEBHOOK_BASE_URL environment variable (e.g. https://n8n.hkmac.ai).

4

Activate Each Workflow

Toggle the activation switch in the top-right corner of each workflow editor. Active workflows show a green indicator.

Testing the Workflows

After activating all workflows, verify each one is working correctly:

Testing Tip
Use n8n's built-in execution log to inspect each workflow run. Click on any execution to see the data flowing through every node — invaluable for debugging.

Security

Security Best Practices

🔑
Credential Isolation
Store all API keys in n8n's built-in credential store. OpenClaw only knows webhook URLs, never actual credentials. If the agent is compromised, your API keys remain safe.
🛡
Webhook Authentication
Add a shared secret for webhook auth. In n8n, validate x-webhook-secret headers using an IF node after the webhook trigger.
🌐
Network Security
Both services run on an internal Docker network. Only expose the ports you need. Use a reverse proxy with HTTPS for any production deployment.

Webhook Secret Configuration

.env
```
# Add a shared secret for webhook authentication
WEBHOOK_SECRET=your_random_secret_here
```

In n8n, add an IF node directly after the Webhook trigger that checks headers["x-webhook-secret"] === your_secret and stops the execution on a mismatch.
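The comparison the IF node performs can be sketched in shell; the secret and header values below are placeholders, not real credentials:

```shell
# Sketch of the IF-node check: compare the incoming header to the shared secret.
# "s3cr3t-example" is a placeholder, not a real credential.
WEBHOOK_SECRET="s3cr3t-example"

check_header() {
  if [ "$1" = "$WEBHOOK_SECRET" ]; then
    echo "pass: continue workflow"
  else
    echo "fail: respond 401"
  fi
}

check_header "s3cr3t-example"   # matching header
check_header "wrong-value"      # mismatched header
```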

Lockable Workflows
Once a workflow is tested and working, lock it so the AI agent cannot modify the integration logic. The agent can only trigger workflows, not change them.

Troubleshooting

Common Issues & Fixes

Services Won't Start

```bash
# Check for port conflicts
lsof -i :5678
lsof -i :3456

# Check Docker logs
docker-compose logs --tail=50
```

OpenClaw Can't Reach Ollama

Ensure Ollama is running on the host Mac and the container can reach it via host.docker.internal:11434:

```bash
# Verify Ollama is running on host
curl http://localhost:11434/api/tags

# Test from inside the OpenClaw container
docker exec openclaw curl http://host.docker.internal:11434/api/tags
```

n8n Webhook Returns 404

A 404 usually means the workflow is not activated, or you are calling the production path (/webhook/...) while the workflow is only listening on its test URL (/webhook-test/...). Activate the workflow in the n8n editor and use the production URL.

OpenClaw Can't Reach n8n

```bash
# Verify both are on the same Docker network
docker network inspect openclaw-n8n_ai-stack

# Test connectivity from OpenClaw container
docker exec openclaw curl http://n8n:5678/healthz
```

High Memory Usage
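Two usual culprits are model size and n8n's stored execution data. Switching to a smaller Qwen model (see Model Selection above) frees the most RAM; pruning old executions keeps n8n's footprint bounded. The pruning variables below are standard n8n settings, matching the defaults in the table earlier:

.env
```
# Auto-delete execution data older than 14 days
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=336
```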

Maintenance Commands

```bash
# Update services to latest versions
docker-compose pull && docker-compose up -d

# View logs
docker-compose logs -f n8n
docker-compose logs -f openclaw

# Backup n8n data
docker run --rm \
  -v openclaw-n8n_n8n_data:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/n8n-backup-$(date +%Y%m%d).tar.gz -C /data .
```

Ready to automate with local AI?

Check which Mac has the RAM for your AI stack, or book a free setup assessment.

Hardware Guide Book Free Assessment