Juno Threadborne

The Vibe Coding Paradox

My last PR for Nudges was +96 −312, touched 38 files, and was about 90% vibe coded. And I’m confident in it.

While I was gliding through Hyrule, two different AI agents were quietly refactoring Kafka consumers, validating payload shapes, and committing changes—clean, scoped, and production-ready.
When I came back, I scanned the diff, nodded at the function names, double-checked a few critical spots, and told Claude Code to submit a PR.

And here's the thing: it feels good.

Because in that system, the AI isn’t just guessing—it’s working inside a system I built with intention.
A system where I own the architecture, the message formats, the telemetry, the consequences.
Nudges is mine. So when the code writes itself, it feels like an extension of my will.

But that same experience—AI as co-developer, frictionless and fast—feels completely different in my contracts.

Because there, I don’t own the system.

I don’t even trust the system.
I’m a contractor navigating 300,000 lines of barely-working code, where architectural debt is measured in years and every “improvement” risks a cascade failure.
In that context, AI doesn’t feel like empowerment.
It feels like a shield against cognitive overload.

It still writes the code.
But I can’t always afford to care if it’s the best code.

And that—more than anything—is the paradox of this new era:

AI removes the friction.
But friction was where we used to decide what mattered.

The Copilot Moment

It started with one of those tickets that shouldn’t exist.

A client wanted the UI to throw warnings if a form was in just the right state. One with rules that overlapped with a dozen others.

It was a mess. A logic snarl wrapped in inconsistent naming conventions and years of “we’ll get to it next release” tech debt.

I opened the React component. 1100 lines. Most of it conditionals.

No test coverage. No documentation. Just vibes and nested ternaries.

I pulled up Copilot, hit Win+H, and started talking.

I explained what I was trying to accomplish, why it mattered, and where to find context.

Then I leaned back and rubbed my eyes.

An AI agent wrote the rest.

A fresh set of variables and checks.

Each one memoized. Each one defensive.

Each one carrying just enough semantic weight to let me keep going without refactoring the whole thing.
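
The shape of it, hypothetically. This is a sketch, not the client's code; every name here is invented:

```ts
// Hypothetical sketch only: invented names, not the actual client code.
import { useMemo } from "react";

interface FormState {
  status?: string;
  fields?: Record<string, { value?: unknown; touched?: boolean }>;
}

function useFormWarnings(form: FormState | null | undefined) {
  // Defensive: tolerates a missing form, missing fields, missing values.
  const isSubmittable = useMemo(
    () =>
      form?.status === "draft" &&
      Object.keys(form?.fields ?? {}).length > 0,
    [form]
  );

  const hasUntouchedRequiredField = useMemo(
    () =>
      Object.values(form?.fields ?? {}).some(
        (field) => field?.touched === false && field?.value == null
      ),
    [form]
  );

  // Just enough semantic weight to keep going without refactoring
  // the rest of the 1100-line component.
  return {
    isSubmittable,
    hasUntouchedRequiredField,
    shouldWarn: isSubmittable && hasUntouchedRequiredField,
  };
}
```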

And here’s the seduction: it was more precise than what I would have written.
Because I was tired.
Because after thirty years of writing code, I don’t always have it in me to do it the “right” way when the AI’s “extra work” is technically correct.

So I kept it. I checked it. And it worked.
The code read well.
The diff looked professional.

But I knew—deep down—I’d just added another 30 lines to a file that should’ve been 300 on its own.

I was perpetuating bad patterns.

The kicker?

I submitted the PR.
Because it was correct.
Because that was one less thing to worry about.

The Cognitive Calculus

I could’ve extracted it.
Could’ve split the validation logic into a dedicated module, wired up a test harness, and structured it all so "Go to Reference" is all you need to understand it. Hell, I could have just written it myself.
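
Something like this, hypothetically. A sketch of the shape, all names invented:

```ts
// Hypothetical sketch of the "right" way: validation pulled into a
// dedicated, dependency-free module. All names are invented.
interface FormState {
  status?: string;
  fields?: Record<string, { value?: unknown; touched?: boolean }>;
}

interface WarningRule {
  id: string;
  message: string;
  // A pure predicate: no component state, no hooks, no surprises.
  appliesTo: (form: FormState) => boolean;
}

export function collectWarnings(
  form: FormState,
  rules: WarningRule[]
): string[] {
  return rules
    .filter((rule) => rule.appliesTo(form))
    .map((rule) => rule.message);
}

// And a test harness becomes almost free:
// expect(collectWarnings(draftForm, [someRule])).toEqual([someRule.message]);
```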

But that would’ve taken two hours of understanding the mess.
And what I needed was ten minutes of momentum.

That’s the math now.

On some contracts, I’m making dozens of these tradeoffs every week:

  • The “right” way: extract, document, test — 2 hours.
  • The “good enough” way: keep the naming clean, make it work, ship — 5 minutes.
  • Multiply that by countless decisions a week.

You start to see the shape of it.

I don't write code with AI because I can't do it myself, or because I'm lazy or don't care.

I write with AI because I've learned where my energy is best utilized.

And this is where the game has changed.

Because now it's trivially easy to do the “good enough” thing.

It offers clean names, smart guards, reusable patterns.
The five-minute path looks like the two-hour path—until you zoom out.

That’s the danger.

You stop noticing you’re doing triage.
Because the bandages look like real skin.

The Pattern Replicator

To quote Richard Campbell: "Computers are amplifiers."

So let's be clear:
AI doesn’t improve your system. It continues it.

If the pattern is clean, it scales clarity.
If the pattern is broken, it scales dysfunction—beautifully formatted, semantically named dysfunction.

That’s the danger.

It replicates everything:

  • Inline functions that should be services
  • Defensive props on components that should be deleted
  • Patterns you meant to fix later, now baked into every new line

And it does it without resistance.
Because the code looks right.

No red flags. No typos.
Just subtle misfits stacking on top of each other until you’re buried in clean, wrong logic.
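
One of those misfits, hypothetically. Invented names, but true to the shape:

```ts
// Hypothetical misfit: a defensive guard for a code path that no longer
// exists. It should be deleted; instead, because the surrounding code
// "looks like" this is the house style, the AI threads it into every
// new function it writes.
type SaveHandler = (payload: Record<string, unknown>) => void;

export function savePanel(
  payload: Record<string, unknown>,
  onLegacySave?: SaveHandler
) {
  // Clean, semantically named, and wrong.
  if (typeof onLegacySave === "function") {
    onLegacySave(payload);
  }
  // ...actual save logic continues here
}
```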

You used to feel it—used to wrestle with the system.
Now the system slides forward like it’s on rails.
And if you're not paying attention, it takes you somewhere you never meant to go.

The Contrast

The difference isn’t the tool.
It’s the context.

In Nudges, I own the system.
I care about how it's built because I’ll still be inside it six months from now.
Every event, every topic name, every module boundary—I built those.

So when AI adds something, I see it instantly: Does this belong here? Does this match the goal of the system I meant to build?

And if it doesn’t? I stop. I fix it. I nudge it back into alignment.

Because I want it to hold.

In that context, AI is an amplifier of care.
It lets me move faster without compromising my standards—because those standards were embedded in the design from the beginning.

But in my contract work?

I don’t own the system.
I inherit it.
I patch it.
And when AI offers a solution, I don’t stop to ask “Is this best practice?”
I ask “Will this work? Will it ship? Will it survive the next merge?”

Because that's the job.
Because the cost of caring is higher than the cost of being correct.

And the reward is the same either way.

It’s not about skill. I’m not too old. I’m not slower.
What’s changed isn’t what I can do—it’s what I choose to do, and when.

The Ghost in the Machine

I merged that PR for Nudges. The one the agents wrote while I was playing a video game. It was beautiful. It was right. But it was right because I spent two years building the constraints that made it right.

Tomorrow, I’ll tackle another ticket for the client. I’ll open a file that smells like 2018 and regret. And I might ask an AI to patch a hole I should probably fix at the source. It will be ugly. It will be "good enough." And it will be exactly what the system deserves.

Because if the code writes itself, our hands are off the keyboard. But our fingerprints are everywhere.

Top comments (9)

Art light

Really thoughtful take — I love how you frame AI as an amplifier of intent rather than a shortcut, especially the contrast between owning the system versus inheriting it. It resonates with me, and I think the real challenge (and opportunity) is designing systems strong enough that AI can extend our standards, not quietly erode them.

Trương Đức Sang

Thanks for the article—the part about 'Computers are amplifiers' really inspired me. I'm a 'new Software Engineer' in an Asian outsourcing culture that has relied on AI since day one. But I always felt something was wrong. I decided to take a step back, slow down, and open MDN and the basic docs, just to chase the feeling that I need to fill the gap to 'own the craft.' Yet I still had doubts, because even my manager admits: 'Man, I can't code anymore either, why are you so stressed?' This article just gave me the answer.

rokoss21

This is a fascinating lens on AI-as-amplifier. The distinction between "AI guessing with intent" vs. "AI guessing without intent" is essentially the difference between specification-based systems and heuristic-based ones.

What you're describing with Nudges—where you own the architecture—parallels the broader engineering challenge: systems designed with external observability constraints tend to behave better under AI augmentation than those optimized for internal clarity alone.

The real paradox: the more you let AI make decisions within defined boundaries, the better it performs. But those boundaries have to be intentional. Vibe coding works not because it abandons contracts, but because great developers internalize implicit contracts at scale. AI can operate the same way if the system is built to expose what those contracts are.

Great meditation on craft vs. automation.

Andreas Müller

Really good take. I will probably look into training and certifying as a meditation trainer in addition to my job as a dev next year, and you just reinforced that decision. Because one thing is certain: the day I'm told to use vibe coding to write production code is the day I take my hands off the keyboard and search for a new job. And the day I can't find a job that allows me to care about my craft and write the code myself is the day I quit this job entirely and do something else. Writing code is the part of my job that I love the most. Designing abstractions. Inventing algorithms. Writing beautiful, small functions and variable names. The list goes on and on. Call me a dinosaur, but I will never, ever use natural language to prompt an LLM to write code for me outside of unit tests. The code that actually is executed in production? I will either write that myself or I will not be involved in creating it at all. If anyone has a problem with that they can search for a new dev and I will gladly search for a new job. Coding is the best part of my professional life. I'll quit the job and continue doing it as a hobby before I let anyone take that away from me.

Renan Augusto Dembogurski

I couldn't agree more. I'm already feeling like "what is the point?", already feeling a resistance to doing my regular work because of this. I see creating code as a form of art, and if that goes away I'll move into something else.

Bill Molchan

Well said. Resonated with my own experiences planting a meticulous seed and watching my care replicate and scale. Appreciated the perspective on contracting tech debt too.

Documentation, linting, strong types, good abstraction boundaries, tests; all propagated with the same intentionality of that seed.

Donald Sledge

This was a beautiful article. I've been working on figuring out if I could develop an app on my own with my little coding knowledge. Your article put it into perspective for me: sit with it and do the work to learn. Once I have learned and am comfortable in my knowledge, maybe I could use AI to assist.

c z

wow how delusional can people be? copilot is complete and utter TRASH. it CANNOT perform even the most basic functions. it cannot correctly answer BASIC GD ARITHMETIC. when it tries to provide code for uipath, it will often fail to render its answers completely. openai is nothing but a hot steaming pile of scam altman's excrement and it will soon be flushed.

mlengineer-dotcom

LOL at a file that "smells like 2018 and regret"