I Automated My Substack Research With Claude Code
I have a Substack called root cause where I write about infrastructure failures. Postmortems, outages, the kind of stuff where three reasonable decisions combine to take down half the internet. It’s fun to write, but I don’t do it as often as I’d like.
The problem isn’t the writing. The writing is the good part. The problem is everything before the writing.
Finding an incident worth covering means scanning a dozen engineering blogs, searching Hacker News, cross-referencing Reddit for insider context, building timelines from status pages. By the time I’ve done all that, I’m tired, and the weekend is gone.
So I built a thing.
The Setup
I’ve been using Claude Code for a while now—it’s basically become my default for any project where I need to actually build something rather than just talk about building something. The agentic stuff is what makes it useful. You tell it what you want, it writes files, runs scripts, searches the web, iterates until it works.
I figured: why not point that at my content problem?
The system I ended up with has four parts:
Discovery. Python scripts that scan RSS feeds from places like Cloudflare, Honeycomb, and Netflix. Basically anywhere engineers write candidly about failures. It also searches Hacker News for postmortem discussions. Everything goes into a candidate list. (There’s a rough sketch of this step right after this list.)
Ranking. Not every outage is interesting. I defined what makes something “root cause worthy”: emergence (multiple reasonable decisions creating chaos), irony (darkly funny coincidences), clear lessons, good narrative potential. Claude scores candidates and shows me the top 3.
Research. Once I pick an incident, it fetches the postmortem, grabs HN comments, builds a timeline, and saves everything to a folder. All the context I need, organized.
Writing. This is the weird part. I gave Claude my existing articles as examples. It generates draft angles like technical deep dive, narrative-first, and lessons-first. I can pick one, edit it, publish.
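To make the discovery and research plumbing concrete, here’s a minimal sketch of what those scripts can look like. This is not the repo’s actual code: it assumes the feedparser and requests packages and Hacker News’s public Algolia search API, and the feed URLs, search term, and output path are placeholders.

```python
# Minimal sketch of the discovery step (not the repo's actual code).
# Assumes `pip install feedparser requests`; feed URLs and file paths
# are placeholders, not what the real pipeline uses.
import json
import feedparser
import requests

FEEDS = [
    "https://blog.cloudflare.com/rss/",
    "https://netflixtechblog.com/feed",
]

def scan_feeds():
    # Pull recent posts from engineering blogs that publish postmortems.
    candidates = []
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            candidates.append({
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
                "source": url,
            })
    return candidates

def search_hn(query="postmortem"):
    # Hacker News search via the public Algolia API.
    resp = requests.get(
        "https://hn.algolia.com/api/v1/search",
        params={"query": query, "tags": "story"},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {
            "title": hit["title"],
            "link": hit.get("url") or "",
            "points": hit.get("points", 0),
            "hn_id": hit["objectID"],  # lets the research step fetch comments later
        }
        for hit in resp.json()["hits"]
    ]

def fetch_hn_thread(story_id):
    # Research step: grab the full comment tree for insider context.
    resp = requests.get(f"https://hn.algolia.com/api/v1/items/{story_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()  # nested dict; comments live under "children"

if __name__ == "__main__":
    with open("candidates.json", "w") as f:
        json.dump(scan_feeds() + search_hn(), f, indent=2)
```

The ranking and writing steps aren’t scripts in the same sense. The scripts gather and organize; Claude Code does the judgment calls on top.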
What Actually Surprised Me
The voice matching is better than I expected. I was skeptical that feeding it three articles would be enough, but it picked up on stuff I didn’t consciously realize I was doing. Starting paragraphs with “But.” Sentence fragments for emphasis. The phrase “This is insane” showing up exactly when systems behave insanely.
The drafts aren’t publish-ready, but they point in the right direction. Editing, not rewriting.
The other thing: the CLAUDE.md file matters way more than I thought. Claude Code looks for this file in your project and treats it as instructions. I put everything in there: article structure, voice guidelines, file locations, what to check before showing me a draft.
Get that file right and the whole system behaves differently. It’s basically a config file for AI behavior.
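For flavor, a stripped-down CLAUDE.md along these lines gets the idea across. The specifics below are illustrative, not copied from the real file:

```markdown
# CLAUDE.md (illustrative sketch, not the actual file)

## Project
Content pipeline for a Substack about infrastructure failures.
Candidates live in research/candidates.json; each incident gets
its own folder under research/.

## Voice
- Short sentences. Fragments for emphasis.
- It's fine to start a paragraph with "But."
- Say when something is insane.

## Before showing me a draft
- Check the timeline against the primary postmortem.
- Note where insider context came from (e.g. the HN thread).
- Offer three angles: technical deep dive, narrative-first, lessons-first.
```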
The Meta Thing
I built a content engine about infrastructure failures, and the lessons were the same ones I write about:
The value is in how the pieces interact, not any single component. Discovery feeds ranking feeds research feeds writing. Emergence.
Defaults shape everything downstream. The CLAUDE.md file determines behavior more than any individual prompt.
Automate the boring parts, not the interesting ones. I still decide what’s worth writing about. I still shape the narrative. The system just clears the path.
If You Want to Try It
The whole thing is on GitHub: https://github.com/rajjagirdar007/content-engine
Setup takes maybe 10 minutes. Customizing it with your own voice samples and your own scoring criteria takes a bit longer, but that’s the part that makes it yours.
I’m curious whether anyone else is building content pipelines with this. The interesting thing about Claude Code is that it stops being a chatbot and starts being infrastructure. Once you think about it that way, a lot of workflows open up.
The system currently has 9 incidents scored and ready to write about. Which means I have no excuse not to post more often.
We’ll see if that actually happens. Don’t worry, this one was not made by the engine lol.

