Editing Novels Faster Episode 0: How to Use AI as an Extra Brain (and Should You?)

The Real Cost of Finishing a Novel – Edits and Revision

There is a moment every novelist recognizes: typing “The End” and realizing, almost immediately, that the work is not even close to finished. The draft exists, but the book does not yet. What follows is not a matter of polishing sentences or correcting typos, the so-called “line editing” that most people think of when discussing editing. It is the longer, quieter phase where the manuscript is tested for coherence, balance, and intent.

For long novels especially, revision becomes more expensive than drafting. Writing a first draft is additive, a slow stacking of words and scenes and chapters. Revision is comparative, with no clear pathway to follow. You are no longer creating something from nothing; you are holding dozens of decisions in mind at once, evaluating what stays, what moves, and what no longer earns its place, and searching for some system that lets you prioritize. That work compounds quickly, and in ways that are difficult to measure until you are already deep inside it and flailing.

This is the problem I ran into with my first novel, and I want to streamline the process for my second.

Long-Form Revision as a Scale Problem

As word count increases, the nature of editorial work changes. Local clarity is no longer sufficient. A scene can be effective on its own while undermining the larger structure. A character arc can feel emotionally coherent in isolation while drifting thematically across the manuscript. Rereading helps, but rereading alone does not surface patterns efficiently, especially when the manuscript exceeds what can be comfortably held in working memory. And I don’t just mean computer memory here. Humans struggle to see “the big picture” when a manuscript climbs above 50,000 words, let alone three or four times that!

At a certain scale, revision becomes an information management problem as much as an artistry and experience problem. You are not just asking whether something works, but how it works in relation to everything else. This is where fatigue enters the process, not because the work is unclear, but because it is too distributed. Many serious authors pay professionals to perform a developmental edit, at an average cost of 3.05 cents per word. The mathematically inclined among you will quickly realize that even a modest novel of 100,000 words will run upwards of $3,000.
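For the less mathematically inclined, here is that back-of-the-envelope math as a tiny Python sketch. The 3.05-cents-per-word figure is the average rate cited above; actual rates vary widely by editor, so treat the output as an estimate, not a quote.

```python
# Rough cost of a professional developmental edit at the
# average rate of 3.05 cents (USD $0.0305) per word.
RATE_PER_WORD = 0.0305  # USD per word (average; real rates vary)

def developmental_edit_cost(word_count: int) -> float:
    """Estimated developmental-edit cost in USD at the average rate."""
    return word_count * RATE_PER_WORD

# A modest 100,000-word novel:
print(f"${developmental_edit_cost(100_000):,.2f}")  # roughly $3,050
```

At 150,000 words, the same rate lands north of $4,500, which is why the cost scales so painfully for long manuscripts.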

The Calibration Novel: A Known Quantity

This project is grounded in a completed novel: Viral Agents. This is not an early draft or a work-in-progress used to justify experimentation. It is a manuscript that has already passed through the full editorial arc, from a genuinely rough first draft through multiple structural and developmental passes to a version that is query-ready.

Importantly, that first draft was exactly what it sounds like. It was written for momentum, not quality, and it was not cleaned up before revision began. No human beyond me has ever seen it. Every structural improvement, every pacing fix, every cut and consolidation happened the old-fashioned way, through rereading, note-taking, reverse outlining, and judgment (mine and others’). The entire editorial history exists, intact and remembered. This offers a powerful test case.

Because Viral Agents has already been revised end-to-end, it provides something unusually useful: an upper bound. It represents the time, effort, and cognitive load required to take a long novel from completion to readiness using exclusively human processes. There are no unknowns here. The decisions were made. The tradeoffs were faced. The work was paid for in attention and time, but it is now done.

Why This Makes the Experiment Honest

It is easy to claim efficiency gains when there is no reference point for what the work actually costs. Having already completed a full revision cycle by hand constrains this project in a productive way. It prevents retroactive justification and forces any claimed improvement to be measured against something real.

More importantly, it clarifies what should not be automated. There are decisions in the revision of Viral Agents that were slow because they required judgment, not because they required processing. Those decisions are not candidates for acceleration. Knowing where the time went, and why, is what makes it possible to ask better questions about where tools might help without pretending they can replace thinking.

The Question This Project Is Actually Asking

With that context in place, the scope of the project becomes narrow and concrete. The question is not whether AI can write a novel, or even whether it can suggest improvements to one already written. I am not interested in those questions in the least. In fact, they are directly contrary to my goals.

Let me say this as clearly as possible: I am not interested in using any form of AI to write for me or edit for me. That is not the objective.

The question is whether local language models, paired with retrieval, can reduce the time and exhaustion associated with structural revision without abandoning the editorial responsibility of the author.

The desired outcome is not better prose through LLM tools. It is faster access to insight and data, to which the author can apply their own sensibilities, judgments, and goals. The hope is not to eliminate rereading or decision-making, but to make rereading targeted and disciplined. Structural clarity, surfaced earlier and with less friction, is the metric that matters. Reduced exhaustion is not a side benefit; it is the goal.

The tool assists the human editor.

Authorship, Judgment, and Responsibility

In the original revision of Viral Agents, judgment lived entirely with me, the author. I determined what to cut, what to keep, and what to rewrite. That cannot change. Tools can surface patterns, but they cannot decide what those patterns mean or how they should be addressed. Taste, intent, and responsibility are not transferable to the AI overlords.

This project is explicitly designed to preserve that boundary. Analytical labor can be delegated. Judgment cannot. Any workflow that blurs that distinction is not useful here. The aim is not to diffuse authorship, but to protect it by reducing the cognitive noise that surrounds large-scale decision-making.

What Comes Next

Before any of that can happen, I have to build the infrastructure. The next phase of this project is procedural by necessity. It deals with tooling, friction, and the unglamorous work of turning a manuscript into something that can be interrogated at scale. Once I’ve done enough of that work to establish that the system can handle the kinds of tasks I want to throw at it, I’ll report back.

I do not anticipate a slam-dunk case. I expect it to be messy. I might abandon the whole effort if the toolbox proves insufficient.

Follow the blog for updates. I hope to deliver you an end-to-end look at the inception, tooling, execution, and realization of an AI-assisted process for performing developmental editing on a novel-length manuscript.
