Lesson 2 of 5
Setting Up the Dev Environment
Estimated time: 8 minutes
Before the agent can write a single line of code, it needs a secure workspace connected to your repository. In this lesson, you'll configure the sandbox, connect GitHub, and verify the agent can read your codebase and run your tests.
Environment Architecture
Your Chat       OpenClaw          Sandbox             GitHub
┌──────────┐    ┌──────────┐    ┌──────────────┐    ┌──────────┐
│ Task     │───>│ Agent    │───>│ Isolated     │───>│ Branch   │
│ request  │    │ Router   │    │ container    │    │ PR       │
└──────────┘    └──────────┘    │              │    │ CI/CD    │
                                │ - Clone repo │    └──────────┘
                                │ - Install    │
                                │   deps       │
                                │ - Run tests  │
                                │ - No network │
                                │   (except    │
                                │   GitHub)    │
                                └──────────────┘
Sandbox Is Non-Negotiable
The agent runs in an isolated container with limited permissions. This is a safety requirement, not an optimization. An AI writing and executing code without isolation is a security risk. Never run the agent directly on your machine.
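To make the isolation concrete, here is a rough docker-compose sketch of the kinds of constraints such a sandbox applies. This is illustrative only: OpenClaw manages the container for you, and the service name and exact settings here are assumptions.

```yaml
# Illustrative sketch of sandbox-style constraints (not OpenClaw's actual config)
services:
  agent-sandbox:
    image: openclaw/sandbox:node20
    network_mode: "none"      # no network; the real sandbox selectively allows GitHub
    read_only: true           # immutable base filesystem
    tmpfs:
      - /workspace            # writable scratch space for the repo clone
    deploy:
      resources:
        limits:
          cpus: "2"
          memory: 4G
```

The point is defense in depth: even if the agent writes a destructive command, the blast radius is a throwaway container with no network and no access to your machine.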
Connect GitHub
Generate a GitHub Personal Access Token (PAT) or install the OpenClaw GitHub App for your repository.
The GitHub App approach is more secure — it gets scoped permissions per repository instead of broad account access.
- Go to OpenClaw Dashboard > Integrations > GitHub
- Click Install GitHub App
- Select your organization or personal account
- Choose specific repositories to grant access to (do not select "All repositories")
- Approve the permissions:
  - Read: code, metadata
  - Write: pull requests, issues, checks
  - No admin access
openclaw integrations add github --method app --repo your-org/your-repo

Configure the Agent Sandbox
The sandbox is a Docker container where the agent clones your repo, installs dependencies, makes changes, and runs tests. Configure it to match your project's requirements.
sandbox:
  runtime: docker
  image: "openclaw/sandbox:node20"   # or python3.12, go1.22, rust
  resources:
    cpu: 2
    memory: "4GB"
    disk: "10GB"
  timeout: 600                       # Max 10 minutes per task
  network:
    # Whitelist only what's needed
    allowed_hosts:
      - "github.com"
      - "registry.npmjs.org"         # For npm install
      - "api.openai.com"             # If your project uses AI APIs
    block_all_other: true            # Block everything else
  setup_commands:
    - "npm install"                  # Or: pip install -r requirements.txt
    - "npm run build"                # Verify build works
  test_command: "npm test"
  lint_command: "npm run lint"
  build_command: "npm run build"

For example, a Next.js project using Prisma might look like this:

sandbox:
  image: "openclaw/sandbox:node20"
  setup_commands:
    - "npm ci"
    - "npx prisma generate"          # If using Prisma
  test_command: "npm test"
  lint_command: "npx eslint . --max-warnings 0"
  build_command: "npm run build"

Teach the Agent Your Codebase
The agent works best when it understands your project's patterns, conventions, and architecture. Create a context file.
project:
  name: "MyApp"
  description: "SaaS dashboard for analytics"
  language: "TypeScript"
  framework: "Next.js 15"
  styling: "Tailwind CSS + shadcn/ui"
  database: "PostgreSQL via Prisma"
  testing: "Vitest + Testing Library"

conventions:
  - "Use functional components with hooks, no class components"
  - "Colocate tests next to source files (*.test.ts)"
  - "Use named exports, not default exports"
  - "All API routes go in src/app/api/"
  - "Database queries go through Prisma client in src/lib/db.ts"
  - "Use server actions for mutations, not API routes"

important_files:
  - "src/lib/db.ts"            # Database client
  - "src/lib/auth.ts"          # Auth utilities
  - "src/types/index.ts"       # Shared types
  - "tailwind.config.ts"       # Design tokens

off_limits:
  - "src/lib/auth.ts"          # Agent should not modify auth
  - "prisma/schema.prisma"     # No schema changes without human approval
  - ".env*"                    # Never touch env files

The Context File Is Your Leverage
The better your context file, the better the agent's output. Spend 10 minutes writing clear conventions and it'll save you hours of code review corrections. Think of it as onboarding a new developer — what would you tell them on day one?
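To see how off_limits patterns behave, here is a small illustrative check using shell-style glob matching. This is a hypothetical helper, not OpenClaw's actual implementation; the function name and pattern list are assumptions drawn from the example above.

```python
from fnmatch import fnmatch

# Patterns from the example context file above
OFF_LIMITS = ["src/lib/auth.ts", "prisma/schema.prisma", ".env*"]

def is_off_limits(path: str, patterns=OFF_LIMITS) -> bool:
    """Return True if the path matches any off-limits glob pattern."""
    return any(fnmatch(path, pattern) for pattern in patterns)

print(is_off_limits("src/lib/auth.ts"))         # True: auth is protected
print(is_off_limits(".env.local"))              # True: matches ".env*"
print(is_off_limits("src/components/Nav.tsx"))  # False: fair game
```

Note that ".env*" covers .env, .env.local, and .env.production alike, which is exactly the behavior you want for secrets.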
Verify the Setup
Run a smoke test to confirm everything is connected and working.
You: Run a test task: add a comment to the top of README.md
     that says "Agent test — delete this line"

Bot: Starting task...

[1/5] Cloning repo into sandbox... done (12s)
[2/5] Installing dependencies... done (34s)
[3/5] Making changes... done (2s)
[4/5] Running tests... all passing (18s)
[5/5] Opening PR... done

PR #143: "test: verify agent setup"
https://github.com/your-org/your-repo/pull/143

Changes:
- Added comment to README.md (1 line)
- All 47 existing tests still passing
- Lint clean

Setup verified! You can close this PR without merging.

If any step fails, the bot will report the error and suggest fixes. Common issues:
- Clone fails — check GitHub permissions
- Install fails — verify the sandbox image matches your runtime
- Tests fail — ensure test_command matches your actual test runner
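Before handing the config to the agent, a quick sanity check can catch these gaps early. A minimal sketch, assuming the config has already been parsed into a dict; this is a hypothetical helper, not part of the OpenClaw CLI:

```python
def check_sandbox_config(config: dict) -> list[str]:
    """Return a list of warnings for common sandbox misconfigurations."""
    warnings = []
    sandbox = config.get("sandbox", {})
    if not sandbox.get("test_command"):
        warnings.append("test_command is missing; the agent cannot verify its changes")
    hosts = sandbox.get("network", {}).get("allowed_hosts", [])
    if "github.com" not in hosts:
        warnings.append("github.com is not in allowed_hosts; clone and PR steps will fail")
    if not sandbox.get("setup_commands"):
        warnings.append("no setup_commands; dependencies will not be installed")
    return warnings

# A config missing its test runner and GitHub access produces warnings
config = {"sandbox": {"network": {"allowed_hosts": ["registry.npmjs.org"]},
                      "setup_commands": ["npm ci"]}}
for warning in check_sandbox_config(config):
    print("WARN:", warning)
```

Running a check like this before the first task is cheaper than discovering a misconfiguration halfway through a clone.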
If your project uses a private registry (npm, Artifactory, etc.), add the credentials to the sandbox:
sandbox:
  secrets:
    - name: NPM_TOKEN
      source: openclaw_vault     # Stored encrypted in OpenClaw
  setup_commands:
    - "echo '//registry.npmjs.org/:_authToken=${NPM_TOKEN}' >> .npmrc"
    - "npm ci"

For monorepos using pnpm or yarn workspaces, adjust the image and commands accordingly.
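For instance, a pnpm workspace setup might look roughly like this. The image tag and commands are assumptions; match them to your toolchain.

```yaml
sandbox:
  image: "openclaw/sandbox:node20"
  setup_commands:
    - "corepack enable"                  # Enables the pnpm shim bundled with Node 20
    - "pnpm install --frozen-lockfile"   # Workspace-aware install from the lockfile
  test_command: "pnpm -r test"           # Run tests across all workspace packages
  lint_command: "pnpm -r lint"
  build_command: "pnpm -r build"
```

The -r (recursive) flag runs each script in every workspace package, so a change in one package still exercises the whole monorepo's checks.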
Why does the coding agent run in an isolated Docker container instead of directly on your machine?