AI-Driven Writing

Overview

Most people use AI for writing by dumping a vague prompt in and hoping something usable comes back. Sometimes it does. More often you get a wall of competent-sounding text that doesn’t say what you meant, structured in a way you wouldn’t have chosen, making points you didn’t ask for.

This playbook flips that. Instead of asking the AI to write and then fixing what it produces, you build the document in stages — thesis, then outline, then content — where you control the thinking and the AI does the prose. Each stage is a checkpoint where you decide whether to proceed, revise, or change direction.
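The stage-gated flow above can be sketched as a small pipeline. This is a minimal illustration, not a prescribed implementation: `call_model` and `checkpoint` are hypothetical stand-ins for whatever AI client and review step you actually use (here the model is stubbed so the flow runs on its own).

```python
# Sketch of the staged workflow: thesis -> outline -> draft, with a
# human checkpoint between stages so each output is approved before
# it feeds the next prompt.

def call_model(prompt: str) -> str:
    """Hypothetical model call; swap in your real AI client here."""
    return f"[model output for: {prompt[:40]}...]"

def checkpoint(stage: str, draft: str) -> str:
    """Pause for human review. This stub auto-approves; in practice
    you edit the text, approve it, or send the stage back with notes."""
    print(f"--- review {stage} before continuing ---")
    return draft  # the reviewed (possibly edited) version moves forward

def staged_write(topic: str) -> str:
    thesis = checkpoint("thesis", call_model(
        f"State a one-sentence thesis about {topic}."))
    outline = checkpoint("outline", call_model(
        f"Outline an article arguing: {thesis}"))
    draft = checkpoint("draft", call_model(
        f"Write prose for this outline:\n{outline}"))
    return draft
```

The point of the structure is that each stage's input is text you have already approved, so a wrong turn gets caught at the checkpoint instead of buried in finished prose.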

Generating Variations

Overview

Sometimes the hardest part of producing something isn’t the execution — it’s choosing a direction. You need a tagline, a pitch, an email subject line, a section heading, a way to frame a recommendation. You have a rough idea of what you want to say but not the best way to say it. So you write one version, stare at it, wonder if there’s something better, and either stick with it out of inertia or start second-guessing.

Rubric-Driven Assessment

Overview

You already know what good looks like. The problem is applying that judgment consistently — especially when you’re deep in the work and can’t see it clearly anymore.

This playbook is about getting your standards out of your head and into a rubric that an AI can apply on your behalf. It might be five bullet points. It might be a detailed scoring table. What matters is that it’s specific enough to produce useful feedback when someone other than you evaluates against it.
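One way to make a rubric "specific enough" is to treat it as data and ask for a verdict per criterion rather than an overall impression. The criteria below are illustrative placeholders, not recommendations; the prompt-assembly function is the part being sketched.

```python
# Sketch: a rubric as a plain list of criteria, assembled into a prompt
# that forces the evaluator to judge each item separately.

RUBRIC = [
    "Every claim is supported by a specific example or number.",
    "No paragraph runs longer than five sentences.",
    "The opening states the thesis within the first two sentences.",
    "Jargon is either defined or removed.",
    "The conclusion names a concrete next step.",
]

def build_review_prompt(draft: str, rubric: list[str]) -> str:
    """Ask for a pass/fail per criterion, with the offending passage
    quoted on each failure, instead of a single overall grade."""
    criteria = "\n".join(f"{i}. {item}" for i, item in enumerate(rubric, 1))
    return (
        "Evaluate the draft below against each criterion separately. "
        "For each one, answer pass or fail, and quote the offending "
        "passage if it fails.\n\n"
        f"Criteria:\n{criteria}\n\nDraft:\n{draft}"
    )
```

Itemized verdicts are what make the feedback actionable: "fails criterion 2, here is the paragraph" tells you what to fix, where a holistic "7/10" does not.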

Secondary AI Review

Overview

When you work with AI on a piece of writing, the output picks up patterns. You’ve seen them even if you haven’t named them. Filler phrases that sound confident but say nothing. Em-dashes used three times per paragraph. Lists where every item has exactly three sub-points. Hedging language that softens every claim. Transitions that all follow the same structure.

The problem is that the AI session you’ve been working in won’t catch these. It produced them. It thinks they’re fine. If you ask it to review its own work, it will mostly tell you it looks good, because within the context of that session, it does.
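The fix this implies is a second opinion from a session with no shared history. A rough sketch, with `Session` as a hypothetical stand-in for a chat client: the reviewer object is constructed fresh and receives only the finished text, never the conversation that produced it.

```python
# Sketch: a second-opinion pass in a clean context. The reviewing
# session starts empty, so it cannot rationalize choices it never saw.

class Session:
    """Hypothetical chat session; accumulates context like a real one."""
    def __init__(self):
        self.history: list[str] = []

    def send(self, message: str) -> str:
        self.history.append(message)
        return f"[reply #{len(self.history)}]"  # stubbed model reply

def second_opinion(draft: str) -> str:
    reviewer = Session()  # fresh context: it has not seen the drafting
    return reviewer.send(
        "Review this text for filler phrases, repeated sentence "
        "structures, and reflexive hedging. You did not write it; "
        "be blunt.\n\n" + draft
    )
```

Telling the reviewer it did not write the text matters less than the structural fact that it actually did not: with an empty history, the patterns of the drafting session are foreign to it and stand out.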