Autonomous Coding

Lesson 3 of 5

Defining Coding Tasks

Estimated time: 10 minutes

The agent is set up and your sandbox is humming. Now comes the skill that separates mediocre results from impressive ones: writing good task descriptions. A vague prompt produces vague code. A precise task produces code you'd be proud to ship.

<Prerequisites items={["Agent sandbox configured and smoke test passing", "GitHub connected with PR permissions", "Familiarity with your project's codebase"]} />

The Task Anatomy

  Good Task Definition
  ┌──────────────────────────────────────────────┐
  │ WHAT:  Add dark mode toggle to settings page │
  │ WHERE: src/app/settings/page.tsx             │
  │ HOW:   CSS custom properties + localStorage  │
  │ TESTS: Toggle persists across page reload    │
  │ STYLE: Match existing Button component       │
  └──────────────────────────────────────────────┘
         │
         ▼ Agent produces
  ┌──────────────────────────────────────────────┐
  │ 4 files changed, 3 tests added, PR opened    │
  └──────────────────────────────────────────────┘

Every effective task follows this pattern: What + Where + Constraints + Acceptance Criteria.

Here's the pattern applied to a complete task: a search input for the site header that filters the course list.

Location: src/components/layout/Header.tsx

Behavior:

  • Debounced input (300ms) that filters the course list
  • Shows a dropdown with top 5 matching results
  • Pressing Enter navigates to /search?q=query
  • Pressing Escape closes the dropdown

Constraints:

  • Use the existing Input component from shadcn/ui
  • Match the existing header styling (dark bg, white text)
  • Don't modify the mobile hamburger menu

Tests:

  • Input renders and accepts text
  • Debounce waits 300ms before filtering
  • Escape key closes the dropdown
  • Enter key navigates to search page
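The debounce behavior this task specifies boils down to a few lines. Here is a minimal TypeScript sketch (illustrative, not the agent's actual output); the 300ms default matches the spec:

```typescript
// Minimal debounce helper: restart a timer on every call, and only
// invoke the wrapped function once calls stop for `delayMs`.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs = 300,
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer); // reset the wait on each call
    timer = setTimeout(() => fn(...args), delayMs);
  };
}
```

The filtering callback then runs once per pause in typing rather than once per keystroke, which is exactly what the "Debounce waits 300ms before filtering" test should verify.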

Bug reports follow the same pattern: symptom, location, expected behavior, and tests.

Bug: Events created in PST timezone show up 8 hours late for users in UTC. The issue is in src/lib/events.ts, where new Date() is used without timezone conversion.

Expected: Events display in the creator's local timezone regardless of the viewer's timezone.

Root cause hint: Look at the createEvent function — it stores Date objects without timezone info.

Tests: Add a test with events created in PST and viewed from UTC, EST, and JST timezones.
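One common fix for this class of bug (an assumption here, not necessarily what the course repo does) is to store a UTC timestamp alongside the creator's IANA timezone, then format with that zone. The names below are illustrative:

```typescript
// Store events as UTC plus the creator's timezone, so display is
// independent of the viewer's machine settings.
interface StoredEvent {
  startsAtUtc: string; // ISO 8601, always UTC (e.g. "2024-01-15T17:00:00Z")
  creatorTz: string;   // IANA zone, e.g. "America/Los_Angeles"
}

function displayTime(event: StoredEvent): string {
  return new Intl.DateTimeFormat("en-US", {
    timeZone: event.creatorTz, // render in the creator's zone, per the spec
    dateStyle: "medium",
    timeStyle: "short",
  }).format(new Date(event.startsAtUtc));
}
```

Because formatting takes the zone explicitly, the PST/UTC/EST/JST tests in the task all exercise the same code path.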

Contrast that with tasks that set the agent up to fail:

  • The vague wish. No specific changes, no location, no success criteria. The agent will guess — and probably guess wrong.
  • The mega-task. The scope is enormous. The agent works best on focused, single-responsibility tasks; break a request like this into ten smaller ones.
  • The hidden-decision one-liner, such as "add caching." Cache what? Where? Using what strategy? Redis? In-memory? What's the eviction policy? The agent needs specifics.

Match your task description detail to the complexity of the work.

Level 1: Simple (1-2 files)

A one-sentence description is usually enough

Level 2: Medium (3-8 files, 30-60 min agent time)

Specify location, behavior, and key constraints

  medium_tasks:
    - "Add pagination to /api/courses endpoint (20 per page)"
    - "Create a BookmarkButton that toggles via server action"
    - "Add form validation to the signup page (email, password 8+ chars)"
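To make the first medium task concrete, 20-per-page pagination can be sketched like this (function name and return shape are illustrative assumptions, not the repo's API):

```typescript
// Slice a list into pages, clamping the requested page into range.
function paginate<T>(items: T[], page: number, perPage = 20) {
  const totalPages = Math.max(1, Math.ceil(items.length / perPage));
  const current = Math.min(Math.max(1, page), totalPages); // clamp to [1, totalPages]
  const start = (current - 1) * perPage;
  return {
    items: items.slice(start, start + perPage),
    page: current,
    totalPages,
  };
}
```

A good task description for this endpoint would pin down exactly these decisions: the page size, what happens on an out-of-range page, and what metadata the response carries.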

Level 3: Complex (8+ files, needs detailed spec)

Full task description with architecture guidance

  complex_tasks:
    - "Implement course search with full-text index"
    - "Add Stripe checkout flow for premium courses"
    - "Build real-time collaborative editor with WebSocket"

For complex features, break the work into sequential tasks. Each task builds on the previous one.

  Step 1: Create the database schema
  - Add a "reviews" table with: id, course_id, user_id,
    rating (1-5), comment (text), created_at
  - Add a migration file
  - Add the Prisma model
  - Do NOT create any API routes yet
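The Prisma model for Step 1 might look something like this. It's a sketch inferred from the task description, not the repo's actual schema:

```prisma
model Review {
  id        String   @id @default(cuid())
  courseId  String
  userId    String
  rating    Int      // 1-5, validated at the API layer in Step 2
  comment   String
  createdAt DateTime @default(now())

  @@unique([courseId, userId]) // supports "one review per user per course"
}
```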

Bot: Task 1 complete! PR #144 adds the reviews schema. Migration and Prisma model ready. → github.com/your-org/your-repo/pull/144

You: Step 2: Create the API routes
  - POST /api/courses/[slug]/reviews (create review)
  - GET /api/courses/[slug]/reviews (list reviews, paginated)
  - Use the schema from PR #144 (merge it first)
  - Validate: rating 1-5, comment max 1000 chars
  - One review per user per course
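Step 2's validation rules are small enough to sketch directly. The names are illustrative; a real route handler would also check auth and the one-review-per-user rule:

```typescript
// Validate a review payload per the Step 2 constraints:
// rating is an integer 1-5, comment is at most 1000 characters.
function validateReview(input: { rating: number; comment: string }): string[] {
  const errors: string[] = [];
  if (!Number.isInteger(input.rating) || input.rating < 1 || input.rating > 5) {
    errors.push("rating must be an integer from 1 to 5");
  }
  if (input.comment.length > 1000) {
    errors.push("comment must be at most 1000 characters");
  }
  return errors; // empty array means the payload is valid
}
```

Spelling the rules out this precisely in the task is what lets the agent write the matching tests without guessing.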

Bot: Task 2 complete! PR #145 adds the review API routes. → github.com/your-org/your-repo/pull/145

You: Step 3: Build the UI
  - Add a ReviewList component below the course description
  - Add a ReviewForm component (star rating + text area)
  - Show average rating on CourseCard
  - Use existing Card and Button components from shadcn/ui
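The "average rating" piece of Step 3 is worth pinning down in the task too, since rounding and the no-reviews case are easy to get wrong. An illustrative helper, not the agent's output:

```typescript
// Average a list of ratings to one decimal place;
// return null when there are no reviews yet.
function averageRating(ratings: number[]): number | null {
  if (ratings.length === 0) return null;
  const sum = ratings.reduce((a, b) => a + b, 0);
  return Math.round((sum / ratings.length) * 10) / 10;
}
```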

Each step is focused enough for the agent to produce quality code, and you can review before moving to the next step.

The agent opened a PR. Now you review it like you'd review any colleague's code.

  Summary:
  - Added ReviewList component (src/components/course/)
  - Added ReviewForm with star rating (src/components/course/)
  - Updated CourseCard to show avg rating
  - 6 tests added, all passing
  - Build clean, lint clean

  Files changed: 5 (+342, -12)

You: The ReviewForm looks good but rename "handleSubmitReview" to "submitReview" to match our naming convention. Also add a loading state to the submit button.

Bot: Updated! Two commits added to PR #146:
  - "refactor: rename handleSubmitReview to submitReview"
  - "feat: add loading state to review submit button"
  Tests still passing.

Save time by creating reusable task templates:

  component:
    description: |
      Create a {name} component in src/components/{folder}/.
      Props: {props}
      Use existing shadcn/ui primitives where possible.
      Add Storybook story and unit tests.

Use templates in chat: "Use the crud_endpoint template for 'comments'."
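Under the hood, filling a template is just placeholder substitution. A toy sketch assuming {name}-style placeholders like the component template above; the agent's real templating mechanism is unspecified:

```typescript
// Replace {key} placeholders with values; unknown keys are left intact
// so missing variables are easy to spot in the rendered task.
function fillTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{(\w+)\}/g, (match, key) => vars[key] ?? match);
}
```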

When the agent produces incorrect code:

  1. Be specific in your feedback: "The query on line 42 doesn't filter by user_id" is better than "this is wrong"
  2. Don't start over — ask the agent to fix the specific issue in the same PR
  3. Update your context file — if the agent keeps making the same mistake, add a convention to agent-context.yaml
  4. Break down further — if a task consistently produces poor results, it's probably too complex. Split it.
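For example, a convention entry in agent-context.yaml might look like this (the field name is illustrative; follow whatever schema your context file already uses):

```yaml
conventions:
  - "Name event handlers by action (submitReview), not with a handle prefix"
  - "Keep components in src/components/<area>/ and colocate their tests"
```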