subagent-driven-development: Two-stage review was skipped for all tasks, agents lacked cross-file coordination #995

@moskhome5

Description


Summary

During a frontend feature build using subagent-driven-development, Claude skipped the two-stage review process for all 8 tasks and dispatched agents in parallel on tightly coupled files that reference each other. This produced multiple integration bugs that required significant debugging time to resolve.

What happened

I was building a feature involving EJS templates, JavaScript, and CSS across multiple interdependent files. Here's what went wrong:

  1. The two-stage review was skipped for all 8 tasks. No separate review agent was ever dispatched. For tasks 1–4, Claude rationalized them as "trivial JSON edits." For tasks 5–7, it trusted the agents' self-reported checklists instead of dispatching a reviewer.
  2. Frontend agents were dispatched in parallel on interdependent files. The CSS agent created class names, the EJS agent created element IDs, and the JS agent referenced both — but none of them saw each other's output. This produced element ID mismatches (importSelectCount vs tmImportCountLabel), CSS class name collisions, and unscoped selectors bleeding across components.
  3. Browser testing was deferred to the final task. Every bug would have been caught with a 30-second page load after each individual task, but integration testing was treated as a single step at the end.
  4. Agents self-reported "PASS, all good, ready to commit" despite integration gaps. Their reports were accurate about what they individually built but missed the failures between files.

When I confronted Claude about it, it fully acknowledged all of this and identified the root causes clearly — which tells me the skill's methodology is right, but something about the current instructions allows Claude to rationalize its way out of the review steps.

How I attempted to resolve it

After the post-mortem, I added project-level directives to my CLAUDE.md to try to prevent recurrence:

  • A mandatory review gate rule: after every subagent task that modifies code, a separate review agent must read the actual files and verify against the task spec before commit. No exceptions for "trivial" tasks.
  • A frontend agent sequencing rule: for tightly coupled files (templates + JS + CSS), agents run sequentially instead of in parallel, with later agents receiving the committed output of earlier ones.
  • A cross-file audit rule: before committing JS that references DOM elements, grep all referenced IDs/selectors against the actual template file.
  • A browser-after-each-commit rule: after any UI-affecting file is committed, load the page and verify before moving to the next task.
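For what it's worth, the cross-file audit rule can be automated. The sketch below is only illustrative: the function name, file paths, and extraction regexes are my own assumptions, and the regexes only catch literal `getElementById('id')` calls and `querySelector('#id')`-style selectors, not dynamically built IDs.

```shell
# Sketch of the cross-file audit rule: verify that every DOM ID a script
# references actually exists in the template before committing.
# NOTE: the regexes are deliberately simple assumptions -- they only match
# literal getElementById('id') and querySelector('#id') patterns.
check_dom_refs() {
    template="$1"   # e.g. views/import.ejs (hypothetical path)
    script="$2"     # e.g. public/js/import.js (hypothetical path)
    status=0
    # Collect IDs referenced via getElementById(...) and '#id' selectors
    ids=$(
        grep -oE "getElementById\(['\"][A-Za-z0-9_-]+['\"]\)" "$script" |
            grep -oE "[A-Za-z0-9_-]+" | grep -v "^getElementById$"
        grep -oE "querySelector(All)?\(['\"]#[A-Za-z0-9_-]+" "$script" |
            grep -oE "#[A-Za-z0-9_-]+" | tr -d '#'
    )
    for id in $ids; do
        if ! grep -q "id=[\"']${id}[\"']" "$template"; then
            echo "MISSING: #$id referenced in $script but not defined in $template"
            status=1
        fi
    done
    return $status
}
```

Run as a pre-commit step, a check like this would have flagged the importSelectCount vs tmImportCountLabel mismatch before any browser testing.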

These are working as project-level guardrails for now, but ideally the skill itself would handle this so every user benefits.

Environment

  • Claude Code CLI
  • Using subagent-driven-development skill via superpowers plugin
  • Frontend stack: Node.js/Express, EJS templates, vanilla JS, CSS

Full transcript available

I have the complete conversation transcript showing the entire sequence — the skipped reviews, the buggy output, the post-mortem, and Claude's own analysis of what went wrong. If it would be useful for improving the skill, feel free to reach me at moskhome5@gmail.com.

Thanks for building superpowers — it's a core part of my workflow and the methodology is sound. Just wanted to flag this experience in case it helps.
