I keep seeing variations of this pitch everywhere:
“A bug report hits Jira at 3 AM? Your autonomous agent wakes up, reproduces it, writes the fix, and opens the PR before your alarm goes off.”
That’s from Zencoder’s marketing site. But it’s not just marketing anymore; developers are actually doing this. They’re using tools like Zencoder, Claude Code, OpenClaw, and others to delegate entire features or bug fixes to AI agents that run autonomously while they sleep.
The workflow is straightforward: assign a GitHub issue to the agent, let it work overnight in a sandboxed environment, wake up to a pull request with passing tests, review the diff, merge it.
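In rough terms, the plumbing behind that workflow looks something like the sketch below. The `agent` CLI is a hypothetical stand-in for whatever tool you’re delegating to; the `gh` calls mirror GitHub’s real CLI, though check the exact flags against your version.

```python
"""A minimal sketch of the overnight-agent workflow.

Assumptions: the `agent` CLI and the `agent-bot` login are hypothetical
stand-ins for Zencoder/Claude Code/OpenClaw-style tooling; the `gh`
commands mirror GitHub's real CLI, but verify flags for your version.
"""
import json
import subprocess

def run(cmd: list[str]) -> str:
    """Run a command and return stdout, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. Find issues assigned to the agent's bot account.
issues = json.loads(run(["gh", "issue", "list",
                         "--assignee", "agent-bot",   # hypothetical login
                         "--json", "number,title"]))

for issue in issues:
    branch = f"agent/issue-{issue['number']}"
    # 2. Let the agent work overnight in a sandboxed checkout (hypothetical CLI).
    run(["agent", "solve",
         "--issue", str(issue["number"]),
         "--branch", branch,
         "--sandbox"])
    # 3. Open a PR; the human reviews the diff in the morning.
    run(["gh", "pr", "create", "--head", branch, "--fill"])
```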
The tests pass. The code works. They ship it.
And here’s the part that keeps me up at night: many of them admit they don’t fully understand every implementation detail. They review for correctness and patterns, but not for line-by-line comprehension. And they’re shipping faster because of it.
I’ve been thinking about this for weeks. Because I can’t do that. Not yet.
My Workflow Isn’t There Yet
I use AI constantly. Claude Code and orchestrators like OpenClaw have transformed how I work. I can spin up features that would have taken days in a fraction of the time. The productivity gains are real.
But I still review everything. I still read the diffs. I still make sure I understand what went into the codebase before I approve it. I don’t merge code I haven’t checked.
And here’s the uncomfortable part: I’m starting to wonder if that makes me slow.
Because I’m watching people who don’t review everything. They’re shipping faster. They’re building more. And their code… works. Their products ship. Their companies grow.
The real question isn’t “can AI code?” anymore. It’s this: how much can I trust it before my caution becomes a liability?
The Job Changed Underneath Us
If you’ve been coding for more than a few years, you’ve felt it. The ground shifted. What we do day-to-day looks fundamentally different than it did even 18 months ago.
We used to write code. Now we orchestrate AI agents that write code.
We used to debug line by line. Now we describe the bug and let the agent fix it.
We used to refactor carefully. Now we prompt for a rewrite and evaluate the output.
The transformation isn’t subtle anymore. AI agent tools crossed some threshold. They’re not assistants anymore; they’re doing the actual work.
And here’s what nobody wants to say out loud: they’re often better at it than we are. Faster, definitely. More thorough in edge cases. Less prone to the lazy shortcuts we take when we’re tired.
The role of “software developer” didn’t disappear. It just became something else entirely. We’re now:
- Task organizers — Breaking down requirements into agent-friendly chunks
- Configuration engineers — Building rules, skills, and context for AI systems
- Quality evaluators — Reviewing output for correctness, not writing it ourselves
- Visibility champions — Advocating for our work because “just coding” is invisible now
If you’re a developer who just wants to code in peace, I have bad news for you: that job is disappearing.
The Pioneers Are Already There
I’m not imagining this shift. Some of the sharpest minds in our industry are actively building for a future where human code review might be optional, and their work reveals just how far the transformation has already gone.
Steve Yegge’s Gas Town: The Industrial Coding Factory
Steve Yegge—the legendary blogger who gave us classics like “Stevey’s Google Platforms Rant”—recently launched Gas Town, what he calls “a new take on the IDE for 2026.” It’s not an IDE in any traditional sense. It’s an orchestrator for running dozens of Claude Code instances simultaneously.
In his words: “Gas Town is an industrialized coding factory manned by superintelligent robot chimps.”
He describes an 8-stage evolution of the AI-assisted developer, from basic code completions all the way to running 10+ agents at once. If you’re not at stage 6 or 7? “You will not be able to use Gas Town. You aren’t ready yet.”
Yegge also created Beads, a persistent memory system for coding agents. Because when you’re running that many agents, you need infrastructure just to keep track of what they know. He’s effectively building Kubernetes for AI coders.
The pattern he describes: Prompt → Sleep → Evaluate → Keep or Toss. Work becomes “an uncountable substance that you sling around freely, like slopping shiny fish into wooden barrels at the docks.”
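In code, that pattern is something like the sketch below. Every name in it is a hypothetical placeholder for whatever orchestrator and evaluation harness you’d actually run; only the shape of the loop comes from Yegge’s description.

```python
import time

def prompt_sleep_evaluate(tasks, spawn_agent, evaluate, poll_seconds=300):
    """Prompt -> Sleep -> Evaluate -> Keep or Toss, sketched.

    `spawn_agent` and `evaluate` are hypothetical placeholders: the
    former starts an agent on a task and returns a future-like handle
    (with .done() and .result()); the latter decides whether the
    output is worth keeping.
    """
    handles = [spawn_agent(task) for task in tasks]        # Prompt
    kept, tossed = [], []
    while handles:
        time.sleep(poll_seconds)                           # Sleep
        finished = [h for h in handles if h.done()]
        handles = [h for h in handles if not h.done()]
        for h in finished:
            result = h.result()
            if evaluate(result):                           # Evaluate
                kept.append(result)                        # Keep
            else:
                tossed.append(result)                      # ...or Toss
    return kept, tossed
```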
That’s not coding as we knew it. That’s something else entirely. And Yegge’s doing it right now.
ThePrimeagen’s 99: The Neovim Purist’s Answer
On the other end of the spectrum, ThePrimeagen, known for his Neovim evangelism, launched 99, which he describes as “the AI agent that Neovim deserves.”
His philosophy is different from the full-orchestration approach. 99 is built for developers “who don’t have skill issues”: people who want AI integrated into their existing workflow, not replacing it. It’s about restricted AI interactions: fill in a function, handle a visual selection, stop when you want.
The interesting part? Even ThePrimeagen, arguably one of the most skilled traditional developers in the content space, is building AI tooling. He’s not fighting the wave; he’s figuring out how to ride it on his own terms.
These aren’t fringe experiments. These are serious developers building serious tools because they see what’s coming.
Caught in the Middle
So where does that leave people like me?
I see where this is going. I believe the future probably shifts heavily toward these “prompt and trust” workflows. The economics are too compelling. The speed advantages are too real. Companies will gravitate toward developers who ship faster, and right now, that means developers who trust AI more.
But I’m not there yet. And I’m honestly not sure if my reluctance is:
- Wisdom: A healthy caution born from experience with software that breaks in subtle ways
- Ego: An attachment to feeling like a “real programmer” who understands their code
- Fear: Discomfort with a world where my hard-won skills matter less
Maybe it’s all three.
The honest truth is: I don’t fully trust AI-generated code yet. Not because I’ve seen it fail catastrophically. But because the few times I have caught issues, they were subtle. The kind of bugs that pass tests but cause problems in production. The kind that make you question everything you didn’t review carefully.
And yet… those catches are getting rarer. The AI is getting better. Every month the code quality improves, the edge cases get handled more gracefully, the architecture decisions get more sensible.
At what point does my reviewing become theater? At what point am I just going through the motions because it feels wrong not to, even though I’m not actually catching anything?
I don’t have an answer.
The Uncomfortable Truths We’re Not Talking About
Whether you’re in the “trust fully” camp or the “review everything” camp, some things are happening to all of us.
Skill Atrophy is Real
I’ve noticed it in myself. There are language features I used to know cold that I now… don’t. Not because I forgot them entirely, but because I haven’t actually typed them as often. The AI does it.
When I do need to write code without AI assistance, I’m slower. Rustier. More likely to look things up that used to be automatic.
Is this a problem? I don’t know. Maybe these skills don’t matter anymore. Maybe they do. But either way, they’re fading.
The Carelessness Creep
Even when I review code, I’ve caught myself being less thorough than I used to be. When the AI can regenerate something in seconds, the stakes for any individual piece feel lower. Made a mistake? Trash it and re-prompt.
This creeping carelessness scares me. Not because every line needs to be perfect, but because I’ve noticed myself letting things slide that I wouldn’t have before. Small things. Subtle things. The kind of issues that compound.
Code Review Culture is Fragmenting
Here’s where it gets really weird: people are using AI to review code too.
On one hand, this is powerful. AI can catch patterns and issues that humans miss. But on the other hand… we’re now in a loop where AI writes code and AI reviews code and humans just glance at both and click approve.
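Sketched out, the loop we’re drifting into looks something like this. Every callable and attribute below is a hypothetical placeholder, not any real tool’s API; the point is the shape, not the code.

```python
def ship(ticket, generate_patch, ai_review, human_glance, merge):
    """The loop as it's drifting: AI writes, AI reviews, a human skims.

    Every name here is a hypothetical placeholder. Notice that no step
    in this loop builds durable human understanding of the code that
    actually lands.
    """
    while True:
        patch = generate_patch(ticket)                # AI writes the code
        review = ai_review(patch)                     # AI reviews the code
        if review.approved and human_glance(patch):   # human glances, clicks approve
            return merge(patch)
        # Otherwise, re-prompt with the reviewer's notes and try again.
        ticket = f"{ticket}\n\nReviewer notes: {review.notes}"
```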
What happens when we stop deeply understanding the systems we’re building? When code becomes something we orchestrate but don’t really know?
The Disposable Code Question
Maybe code quality just doesn’t matter the way it used to.
If AI can rewrite a module in 30 seconds, why spend hours making it elegant? Why obsess over structure, readability, maintainability? Just make it work. When it breaks or needs to change, throw it away and regenerate it.
This feels wrong to me. But I can’t articulate exactly why. Maybe I’m just attached to an old way of working. Maybe “good code” was always just a means to an end, and now there’s a faster means.
Maybe code doesn’t need to be human-readable anymore. Maybe it just needs to be AI-readable. And maybe that’s fine.
I genuinely don’t know.
The Paradox: Embrace or Fall Behind
Here’s the impossible choice: embrace AI coding fully and potentially lose something essential, or maintain your review discipline and risk becoming irrelevant.
If you don’t use these tools, you’re objectively slower than colleagues who trust more and verify less. You deliver less. You look less productive. In a world where companies are cutting costs and measuring output, that’s not sustainable.
But if you do embrace them fully, you risk becoming dependent. Losing skills. Becoming someone who can’t actually code anymore, just prompt.
I’m trying to find a middle path. Use AI heavily for generation, but maintain understanding. Trust but verify. It’s uncomfortable. It might not be sustainable. But it’s where I am right now.
And here’s the kicker. The job now requires something that coding alone never did: visibility.
It’s not enough to do good work. You have to be seen doing it. You have to advocate. Document. Communicate impact. Build relationships. The IC who just crushes code tickets in silence? That person is becoming a liability, because from the outside, it’s unclear what AI could or couldn’t do in their place.
What Even Is a Developer Anymore?
Here’s where I diverge from the panic.
A lot of developers are having an identity crisis because “AI can code now.” But honestly? I always thought being a developer was about more than writing code.
The job was never just about typing syntax. It was always about:
- Knowing what to build (strategic thinking)
- Knowing how to break it down (task decomposition)
- Knowing how to evaluate quality (critical judgment)
- Knowing how to communicate impact (visibility)
Coding was just the tool we used to execute on those skills. A really important tool, sure.
What’s happening now is that the tool is changing. The execution layer is being automated. But the thinking, the judgment, the strategy? That’s still ours.
If anything, AI is forcing the industry to acknowledge what good developers have always known: the hard part was never the syntax. It was figuring out what to build, why it matters, and whether it actually works.
The developers freaking out about “AI replacing us” are often the ones who built their identity entirely around code execution. And I get it, that was the most visible, most measurable part of the job. It’s what we practiced, what we got good at, what differentiated us.
But it was never the whole job. And now we’re being forced to reckon with that.
We’re the Canary in the Coal Mine
Software developers are experiencing this transformation first not because we’re special, but because we’re closest to the technology. We use AI to automate our own work. Of course it’s impacting us fastest.
But every profession is next.
The writer using AI to draft articles. The designer using AI to generate concepts. The analyst using AI to build models. The lawyer using AI to review contracts.
Everyone is about to face their own version of this identity crisis: “What is my job when AI can do the execution better than I can?”
We don’t have answers yet. We’re navigating in real-time, making it up as we go.
Navigating the Blur
I don’t have a clean conclusion here. No five-step framework for staying relevant. No confident prediction about where this goes.
What I do know:
The technology won’t slow down. We can’t wish our way back to a simpler time.
Skills still matter, but which ones matter is shifting. Deep technical knowledge isn’t obsolete, but it might not be sufficient anymore.
Intentionality is everything. We can drift into AI dependency without thinking, or we can engage with these tools critically, keeping what works, discarding what doesn’t, staying honest about trade-offs.
Trust isn’t binary. I’m somewhere on the spectrum between “review every line” and “ship without looking.” That’s probably where most of us are. And that’s okay, we’re all calibrating in real-time.
We’re figuring this out together. Nobody has it solved. The developers who seem most confident are probably just better at hiding their uncertainty.
The job is changing. Our identity as developers is changing with it. The only way through is acknowledging the discomfort and navigating it intentionally.
I still check the code. I still review the diffs. Maybe that makes me slow. Maybe that makes me careful. Maybe it’s just how I’m wired.
But I’m watching the people who don’t. And I’m taking notes.
We’re in uncharted territory.
And honestly? That’s both terrifying and kind of exciting.
What’s your experience with this shift? Are you in the “trust fully” camp, the “review everything” camp, or somewhere in between like me? I’d genuinely love to hear how other developers are navigating this.