The Publishing Pipeline Behind beach.io
We treat content publishing as a software engineering problem: git-driven, schema-validated, human-approved. Here is how the pipeline works and why it is designed the way it is.
beach.io runs on Hammer, Beach's static site builder. There is no CMS dashboard, no content editor UI, no editorial queue. Every post is a Markdown file in a git repository. The site is compiled by Hammer into a Build/ directory, committed alongside the source, and served by Forge from that compiled output.
This architecture is deliberate. It means the full content publishing stack is auditable, versioned, and deployable like any other piece of software. It also means you need a disciplined pipeline if you want publishing velocity without letting quality decay.
This post documents how that pipeline works today. We will update it when the workflow changes materially.
The starting point: a campaign plan, not a prompt
The pipeline begins with a plan.json file, not an AI model. Each content campaign has a plan that defines phases, ordered posts, scheduled dates, inter-post dependencies, and a brief for each piece: a hook (the core angle) and notes (research direction, positioning, which existing posts to reference).
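For illustration only — the real schema is internal, and every field name below is an assumption — a single post entry in the plan might look something like this:

```json
{
  "slug": "publishing-pipeline",
  "status": "planned",
  "scheduled": "2024-05-01",
  "depends_on": ["why-static-sites"],
  "hook": "Content publishing as a software engineering problem",
  "notes": "Reference the Hammer product page; link the two prior posts in this phase"
}
```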
The plan is authored by humans. It answers the editorial questions an AI cannot: what does this audience need to understand, in what order, and what specific angle do we want to take? The AI works within the brief, not instead of it.
A post is eligible to enter the pipeline when its status is "planned", its scheduled date is on or before today, and every post it depends on is already in "review" or "published". That ordering constraint is not incidental. We do not let an article that references a prior piece run before that piece is published. The dependency graph lives in the plan; the pipeline enforces it.
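The eligibility rule is simple enough to sketch in code. This is a minimal illustration, assuming hypothetical plan field names (`status`, `scheduled`, `depends_on`), not the pipeline's actual implementation:

```python
from datetime import date

def is_eligible(post, plan, today=None):
    """A post may enter the pipeline only if it is planned, due, and unblocked.

    Field names (status, scheduled, depends_on) are hypothetical; the real
    plan.json schema is internal.
    """
    today = today or date.today()
    if post["status"] != "planned":
        return False
    if date.fromisoformat(post["scheduled"]) > today:
        return False
    # Every dependency must already be in "review" or "published" state.
    by_slug = {p["slug"]: p for p in plan}
    return all(
        by_slug[dep]["status"] in ("review", "published")
        for dep in post.get("depends_on", [])
    )
```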
The pipeline
Once a post is selected, the pipeline runs in sequence.
Research. The agent reads the existing research.md for the campaign, then supplements it with targeted web research relevant to the specific hook. Findings are appended, not overwritten. The plan status moves to "researching".
Drafting. The agent reads the three most recent published posts before writing. This is not a style-matching exercise. It is a context pass: what has already been covered, which internal links are natural, what the reader already knows from this publication. The draft lands in the campaign drafts/ folder, not directly in content/posts/. It is a working artefact at this point, not a published one.
Quality gates. Before the draft can be promoted to content/posts/, it must clear two deterministic checks:
hammer check --strict --format json
hammer build --mode publish --format json
hammer check --strict validates the schema: required frontmatter fields are present, the category resolves to a known entry, the slug is kebab-case and URL-safe, and any declared image path exists on disk. These are binary pass/fail conditions. The build step compiles the full site and confirms the output is sound. Neither check cares about tone or prose quality. Both care about correctness.
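Those pass/fail conditions are easy to mirror in plain code. The following is not Hammer's implementation, just a sketch of the same class of deterministic checks, with a hypothetical required-field set:

```python
import os
import re

KEBAB_CASE = re.compile(r"^[a-z0-9]+(?:-[a-z0-9]+)*$")

def check_frontmatter(fm, known_categories, file_exists=os.path.exists):
    """Binary pass/fail checks in the spirit of `hammer check --strict` (a sketch)."""
    errors = []
    for field in ("title", "date", "category", "slug"):  # hypothetical required set
        if field not in fm:
            errors.append(f"missing required field: {field}")
    if fm.get("category") not in known_categories:
        errors.append(f"unknown category: {fm.get('category')!r}")
    if not KEBAB_CASE.match(fm.get("slug", "")):
        errors.append(f"slug not kebab-case/URL-safe: {fm.get('slug')!r}")
    if "image" in fm and not file_exists(fm["image"]):
        errors.append(f"declared image not on disk: {fm['image']}")
    return errors  # empty list means pass
```

Every check returns a definite answer; there is no scoring or judgment involved, which is the point of this layer.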
If either check fails, the pipeline attempts to self-correct up to three times. After three attempts it stops and reports the failure rather than deploying broken content. This is not an edge case guard. Schema errors, malformed frontmatter, and unresolved category references happen in practice. The gates catch them before they reach a URL.
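The retry-then-stop behaviour can be sketched as a small wrapper. The gate commands come straight from the post; the runner is injected so nothing here pretends to be the pipeline's real code, and in practice a failed attempt would be followed by an agent fix before the retry:

```python
import subprocess

GATES = [
    ["hammer", "check", "--strict", "--format", "json"],
    ["hammer", "build", "--mode", "publish", "--format", "json"],
]

def run_gates(run=lambda cmd: subprocess.run(cmd).returncode, max_attempts=3):
    """Run the QC gates with up to three self-correction attempts (a sketch).

    `run` executes one command and returns its exit code.
    """
    for attempt in range(max_attempts):
        if all(run(cmd) == 0 for cmd in GATES):
            return True   # both gates passed; safe to promote the draft
        # here the agent would attempt a fix before the next attempt
    return False          # stop and report rather than deploy broken content
```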
Staging. Once QC passes, the draft is promoted to content/posts/, a feature branch is cut from main (content/posts/[slug]), and source and Build/ output are committed together. The branch is merged to staging and pushed. A Forge redeploy of the staging site fires automatically.
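The git choreography is roughly the following sketch, under stated assumptions (branch name derived from the post slug, staging tracking the remote). The runner is injectable; this is an illustration, not the pipeline's actual tooling:

```python
import subprocess

def promote_to_staging(slug, run=lambda cmd: subprocess.run(cmd, check=True)):
    """Promote an approved draft to the staging branch (a sketch)."""
    branch = f"content/posts/{slug}"
    for cmd in [
        ["git", "checkout", "main"],
        ["git", "checkout", "-b", branch],         # feature branch cut from main
        ["git", "add", "content/posts", "Build"],  # source and compiled output together
        ["git", "commit", "-m", f"post: {slug}"],
        ["git", "checkout", "staging"],
        ["git", "merge", "--no-ff", branch],
        ["git", "push", "origin", "staging"],      # fires the Forge staging redeploy
    ]:
        run(cmd)
```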
The post is now live on staging. The pipeline stops here. The agent has no path to production.
The human gate
Merging staging to main requires a human git merge. This is structural, not a policy choice. There is no automated trigger, no approval queue that clears after a timer, no way for the agent to push to main directly. The staging branch is the boundary.
The staging review is not a rubber-stamp. The questions it answers are ones the QC gates deliberately do not try to answer: is the argument actually correct? Does it represent Beach's position accurately? Are there claims that need more specificity or a caveat? Does the internal linking serve a reader arriving at this post cold? These are editorial questions. They belong to a human.
If edits are needed, they go directly to the staging branch. When the post is approved, a human merges to main, pushes, and triggers the production Forge redeploy. The plan.json status moves to "published" after the deploy completes. Any posts depending on this piece become eligible for the next pipeline run.
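The status flip and its downstream effect can be sketched together. Field names here (`slug`, `status`, `depends_on`) are hypothetical stand-ins for whatever the internal plan schema actually uses:

```python
def mark_published(plan, slug):
    """Flip a post to "published" and return the slugs it unblocks (a sketch)."""
    by_slug = {p["slug"]: p for p in plan}
    by_slug[slug]["status"] = "published"
    done = {"review", "published"}
    return [
        p["slug"]
        for p in plan
        if p["status"] == "planned"
        and slug in p.get("depends_on", [])
        and all(by_slug[d]["status"] in done for d in p.get("depends_on", []))
    ]
```

A full eligibility pass would also check each unblocked post's scheduled date before actually running it.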
How the roles divide
The agent drafts. The build system validates. Humans approve. These are not interchangeable functions and the pipeline does not treat them as interchangeable.
The agent is capable of producing well-structured first drafts within a constrained brief. Given a specific hook, a set of research notes, context from the existing publication, and the content guidelines in the site's CLAUDE.md, it reliably produces workable copy. It is not capable of the editorial judgment that determines whether a post actually serves the reader, which is why the staging gate is hard and non-negotiable.
The deterministic QC layer exists because probabilistic output needs deterministic validation. A language model will produce correctly formatted frontmatter the vast majority of the time, and a broken slug or an unresolved category reference some fraction of the time. Catching that reliably requires a validator that does not make probabilistic judgments. hammer check --strict is that validator.
The pipeline also runs one post at a time, by design. Running it as a batch would be faster. It would also make it easier to let things accumulate in staging without proper review. One post per run means one human review per post. That is an intentional rate limit on the system.
Living documentation
This post describes the pipeline as it stands on publication. The tooling, gate structure, and agent behaviour will evolve. When those changes are material, we will update this post rather than publish a new one. The current state is always here.
If you are building a similar setup on Hammer and Forge, the Hammer product page covers the build system. Our Developer Tools playbooks cover the broader patterns for static-site content operations and AI-assisted publishing workflows.