Build a powerful local AI automation stack on your Mac. OpenClaw handles the thinking, n8n handles the doing — 100% on-device with Ollama.
Overview
OpenClaw is an AI agent that reasons, makes decisions, and takes action. n8n is a visual workflow automation platform with 400+ integrations that executes deterministic tasks. Together, they form a powerful AI automation stack — running entirely on your Mac with local LLMs via Ollama.
Architecture
OpenClaw and n8n communicate via webhooks, keeping credentials isolated and workflows inspectable. Ollama runs on the host Mac, providing local LLM inference to OpenClaw.
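A minimal compose sketch of this topology might look like the following; the image name for OpenClaw, the ports, and the environment variable name are assumptions for illustration, not the project's actual file:

```yaml
# Sketch: n8n + OpenClaw on a shared Docker network, Ollama on the host Mac.
services:
  n8n:
    image: n8nio/n8n
    ports:
      - "5678:5678"
    networks: [ai-stack]
  openclaw:
    image: openclaw/openclaw   # hypothetical image name
    ports:
      - "3456:3456"
    environment:
      # reach Ollama on the host Mac from inside the container
      - OLLAMA_BASE_URL=http://host.docker.internal:11434
    networks: [ai-stack]
networks:
  ai-stack: {}
```

Because both containers join `ai-stack`, OpenClaw can call n8n webhooks at `http://n8n:5678` without leaving the Docker network.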
Prerequisites
| Resource | Minimum | Recommended |
|---|---|---|
| Apple Silicon | M1 or later | M3/M4 Ultra |
| RAM | 16 GB | 96 GB+ |
| Storage | 20 GB | 50 GB+ |
Pull the recommended model:

```shell
ollama pull qwen2.5:32b
```

Set `OLLAMA_API_KEY=ollama-local` (any value works — Ollama does not validate API keys).

Setup
Choose your preferred setup method. Option A is the fastest path; Option B gives you full control.
Copy the template and edit for local Ollama use:
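The template itself is not reproduced here; as a sketch, the edited file for local Ollama use might contain entries like these (variable names other than OLLAMA_API_KEY are assumptions):

```shell
# Write a minimal .env for the local-Ollama setup.
# OLLAMA_API_KEY can be any value; Ollama does not validate it.
cat > .env <<'EOF'
OLLAMA_API_KEY=ollama-local
OLLAMA_BASE_URL=http://host.docker.internal:11434
N8N_BASIC_AUTH_ACTIVE=true
GENERIC_TIMEZONE=Asia/Hong_Kong
EOF
echo ".env written"
```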
Once the stack is up, the services are available at:

| Service | URL | Credentials |
|---|---|---|
| n8n | http://localhost:5678 | admin / (your password) |
| OpenClaw | http://localhost:3456 | — |
| Ollama | http://localhost:11434 | Running on host Mac |
Open n8n at http://localhost:5678, go to Workflows, then Import from File. Select JSON files from the workflows/ directory in the cloned repo.
Configuration
Edit the OpenClaw config to use your local Ollama instance instead of cloud APIs. This is the key difference in the MacAI setup.
Mount or edit config/openclaw.json to point to your Ollama models:
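For illustration, the relevant section might look like this; the key names are assumptions, not the project's actual schema:

```json
{
  "llm": {
    "provider": "ollama",
    "baseUrl": "http://host.docker.internal:11434",
    "apiKey": "ollama-local",
    "model": "ollama/qwen2.5:32b",
    "fallbackModel": "ollama/qwen2.5:14b"
  }
}
```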
With ollama/qwen2.5:32b, OpenClaw requires ~22 GB of RAM (Q4_K_M quantization). For Macs with less RAM, try ollama/qwen2.5:14b (~10 GB) or ollama/qwen2.5:7b (~5 GB). Check the Hardware Guide for detailed RAM calculations.

| Variable | Description | Default |
|---|---|---|
| N8N_BASIC_AUTH_ACTIVE | Enable basic auth | true |
| N8N_ENCRYPTION_KEY | Encryption key for credentials | Auto-generated |
| EXECUTIONS_DATA_PRUNE | Auto-delete old execution data | true |
| EXECUTIONS_DATA_MAX_AGE | Max age of execution data (hours) | 336 (14 days) |
| GENERIC_TIMEZONE | Timezone for scheduled workflows | Asia/Hong_Kong |
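Expressed as .env entries (the encryption-key value here is a placeholder; n8n auto-generates one if unset):

```shell
N8N_BASIC_AUTH_ACTIVE=true
N8N_ENCRYPTION_KEY=change-me-to-a-long-random-string
EXECUTIONS_DATA_PRUNE=true
# Max age of execution data, in hours (336 = 14 days)
EXECUTIONS_DATA_MAX_AGE=336
GENERIC_TIMEZONE=Asia/Hong_Kong
```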
Integration
The core integration pattern uses webhooks. OpenClaw sends requests to n8n webhook URLs, and n8n executes the workflow. Both services communicate over the internal Docker network.
In n8n, create a new workflow. Add a Webhook node as the trigger and set the HTTP method to POST. Note the webhook URL (e.g. http://n8n:5678/webhook/my-workflow). Add your processing nodes and activate the workflow.
Register the n8n webhook as a tool/skill in OpenClaw. The agent will call this URL when it needs to trigger that workflow. Since both services share the Docker network (ai-stack), OpenClaw reaches n8n at http://n8n:5678/webhook/... with no external traffic.
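From the host, the same pattern can be exercised with curl; the webhook path and payload below are illustrative, not part of the shipped workflows:

```shell
# Trigger the example workflow via its webhook.
# From the host use localhost:5678; from inside the ai-stack
# network, use http://n8n:5678 instead.
WEBHOOK_URL="${WEBHOOK_URL:-http://localhost:5678/webhook/my-workflow}"
payload='{"task":"summarize_inbox","requested_by":"openclaw"}'

curl -sf -X POST "$WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -d "$payload" \
  || echo "n8n not reachable at $WEBHOOK_URL"
```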
Install the n8n-nodes-openclaw community node for a native integration: in n8n, go to Settings, then Community Nodes, and install n8n-nodes-openclaw. This gives you an OpenClaw node with resource/action dropdowns covering all 20+ built-in tools.

Example Workflows
Here are practical automation workflows you can build with the OpenClaw + n8n stack, all powered by your local LLM.
Production Setup
MacAI HK ships six pre-built n8n workflows that power the hkmac.ai storefront. Each workflow connects to real services — Resend for email, Cal.com for bookings, WhatsApp Cloud API for support — and runs on your local Mac via a Cloudflare Tunnel.
Cloudflare Workers (and the public internet) cannot reach localhost. A Cloudflare Tunnel creates a secure, outbound-only connection that routes n8n.hkmac.ai to your local n8n instance on port 5678 — no open ports, no firewall rules.
Set up the tunnel by running /03-AI-Source/openclaw-n8n/setup-tunnel.sh.

Set these variables in the Cloudflare Pages dashboard (Settings → Environment Variables) and in your local n8n .env file.
| Variable | Value | Notes |
|---|---|---|
| N8N_WEBHOOK_BASE_URL | https://n8n.hkmac.ai | Public base URL for all webhook triggers |
| N8N_WEBHOOK_SECRET | Random 32-char string | Shared secret for webhook authentication |
| WHATSAPP_TOKEN | WhatsApp Cloud API token | From Meta Business Suite |
| WHATSAPP_PHONE_NUMBER_ID | Phone number ID | From WhatsApp Business API settings |
| WHATSAPP_VERIFY_TOKEN | Webhook verify token | Custom string for webhook verification |
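For N8N_WEBHOOK_SECRET, a random 32-character string can be generated with openssl (assuming it is installed, which it is on macOS by default):

```shell
# 16 random bytes, hex-encoded = 32 characters
secret=$(openssl rand -hex 16)
echo "N8N_WEBHOOK_SECRET=$secret"
```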
Open n8n at http://localhost:5678. Go to Workflows, then Import from File. Import each .json file from the workflows/ directory.
In n8n, go to Credentials and add your Resend API key. Each workflow that sends email references this credential.
Update the webhook base URL in each workflow to match your N8N_WEBHOOK_BASE_URL environment variable (e.g. https://n8n.hkmac.ai).
Toggle the activation switch in the top-right corner of each workflow editor. Active workflows show a green indicator.
After activating all workflows, verify each one is working correctly.
Security
In n8n webhook nodes, add an IF node after the webhook to validate the shared secret: `headers['x-webhook-secret'] === your_secret` (bracket notation is needed because the header name contains a hyphen).
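On the calling side, the secret travels as a request header. A sketch, with a placeholder webhook path and payload:

```shell
# Call a production webhook with the shared secret header.
# The path and payload are placeholders for your own workflow's.
curl -sf -X POST "https://n8n.hkmac.ai/webhook/contact-form" \
  -H "Content-Type: application/json" \
  -H "x-webhook-secret: $N8N_WEBHOOK_SECRET" \
  -d '{"email":"test@example.com"}' \
  || echo "request failed (check the secret and that the workflow is active)"
```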
Troubleshooting
Ensure Ollama is running on the host Mac and that the container can reach it at host.docker.internal:11434.
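A quick reachability check from the host (Ollama's /api/tags endpoint lists the installed models):

```shell
# Check Ollama from the host Mac; from inside a container,
# swap localhost for host.docker.internal.
curl -sf http://localhost:11434/api/tags \
  || echo "Ollama not reachable; start it with: ollama serve"
```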