TheBitForge

If ChatGPT Writes Your Code, What Are You Getting Paid For?

I used Claude to write a function last week. Took thirty seconds. Would've taken me twenty minutes.

So what did I do for the other nineteen and a half?


When was the last time you wrote a for-loop from memory? When did you last implement quicksort without checking syntax? Do you remember what year you stopped memorizing standard library functions?
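
(If you're wondering, here's roughly the thing I mean: the kind of algorithm we all used to be able to type from memory. A minimal Python sketch, nothing clever.)

```python
# Quicksort from memory: the canonical list-comprehension version.
def quicksort(items):
    # Base case: zero or one element is already sorted.
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    # Sort the partitions recursively and stitch them back together.
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```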

We've been "cheating" forever. Google. Stack Overflow. That one GitHub repo you always copy-paste from. Your coworker's code from three years ago.

So why does AI feel different?


Here's what I think I'm actually paid for, and I want to know if you agree:

Knowing which problem to solve first?

Understanding why the CEO's "simple request" will break everything?

Explaining to the PM why "just add a button" means three weeks of refactoring?

Deciding not to build the thing?

Knowing when the 300-line function ChatGPT gave me is technically correct but architecturally wrong? (There's a sketch of what I mean just after these questions.)

Reviewing the PR at 4 p.m. and catching the bug that would've cost $40k?

Sitting in the incident channel at midnight taking responsibility?
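
To make the "technically correct but architecturally wrong" point concrete, here's a toy Python sketch (hypothetical code, not from any real PR; the table and names are made up). It runs, it produces the right report, and it still welds data access, business logic, and presentation into one function that every future change has to squeeze through:

```python
import sqlite3

# Technically correct: this runs and returns the right output.
# Architecturally wrong: one function owns I/O, business rules, and
# formatting, so nothing can be tested or changed in isolation.
def monthly_report(db_path: str) -> str:
    conn = sqlite3.connect(db_path)  # data access...
    rows = conn.execute(
        "SELECT name, amount FROM orders WHERE amount > 0"
    ).fetchall()
    conn.close()

    total = sum(amount for _, amount in rows)  # ...business logic...

    lines = [f"{name}: ${amount:.2f}" for name, amount in rows]
    lines.append(f"TOTAL: ${total:.2f}")  # ...and presentation.
    return "\n".join(lines)
```

Scale that shape up to 300 lines and it will still pass review on "does it work," which is exactly the trap.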


But here's the uncomfortable part: how much of your day is actually that?

How much is just... typing?

And if it's mostly typing, what happens when typing isn't the constraint anymore?


I don't think AI is replacing developers. But I think it's asking us a question we've been avoiding:

What were we really doing all along?

Were you solving problems, or were you translating solutions into syntax? Because only one of those is going away.


So I'm curious: when you use AI to write code, what are you doing while it types? What's happening in your head that the AI can't do?

And more importantly—is that the thing your company is paying for?

Top comments (31)

Vanni Daghini

From my perspective, problems are not solved by writing code.
They are solved earlier, by designing solutions that are strategic and long-term from the start.

Code is just the translation of that thinking.

AI can make this translation faster, but it cannot — and should not — think on your behalf.
It does not decide what is sustainable, what is risky, or what will have consequences six months from now.

For this reason, I don’t think the real question is whether we use AI to write code or not.
The real issue is whether we are willing to cooperate: with the context, with the system, and with the people who will come after us.

Without that cooperation, even perfect code — whether written by a human or by an AI — remains fragile.

Ali Nour Al Din

I agree

Fernando Fornieles

Programming has never been the main or most complicated thing; anyone can learn to program. The challenge has always been figuring out how to solve problems, with programming being the vehicle for implementing the solution.

This was true yesterday, it's true today, and it will continue to be true tomorrow. Whether or not we have AI involved.

So from my own perspective, coding is still as important as before. As for whether good programming matters or not... well, it matters about as much as laying bricks properly in a building. No matter how well you've designed the building or thought out the solution, if the bricks aren't laid correctly or the material isn't right, you'll have built a building very quickly, and one that's beautiful at first glance, but it will fall apart and need repairs almost from day one.

So I don't agree with those who claim that writing code won't be important anymore because of AI. Coding is still how you implement the solution to a problem, nothing more, nothing less.

Cesar Aguirre

"The challenge has always been figuring out how to solve problems, with programming being the vehicle for implementing the solution."

This! As long as clients don't know what they want, there's work! :)

Fernando Fornieles

Fact! xD

Aryan Choudhary

I must say, I find it rather intriguing that AI-generated code has become a topic of discussion in the developer community. It's almost as if we're witnessing a subtle shift in how we perceive our profession: from craftsman or artisan to a more utilitarian role where efficiency and speed are paramount. Nevertheless, I firmly believe the human element will continue to play a crucial role in software development, particularly in problem-solving and strategic decision-making.

Ali Farhat

Yes, the shift is real, from problem-solving to decision-making. It will all be replaced by AI eventually, but we're not quite there yet 😂

Aryan Choudhary

Real, I saw this too ^0^

Ingo Steinke, web developer

The thing is, AI rarely produces working code unless you request very straightforward, out-of-the-book boilerplate. So while gen AI comes up with an answer, code, or copy, anticipate its shortcomings: inconsistencies, hidden bugs, and a failure to question implicit, narrow constraints.

Razumovsky

Copying 2,000 lines of code wholesale is definitely bad practice, but you can use it as a draft; it cuts the time you'd otherwise spend purely on syntax and basic operations.
But you're right: either way, you need to double-check, even triple-check.

Shitij Bhatnagar

Couldn't agree more

Salaria Labs

We’re paid for judgment, not keystrokes. AI can write code — it can’t own consequences, tradeoffs, or accountability.

TheBitForge

Exactly. And I think what scares people is realizing how much of their day wasn't actually judgment. It's one thing to say "we're paid for decisions, not typing" — it's another to look at your calendar and realize half your meetings could've been emails, and half your code reviews were just catching syntax issues AI wouldn't make. The uncomfortable truth is that accountability only matters when something goes wrong. The rest of the time, we're just... there. And AI is forcing us to be honest about what "being there" actually means.

Shitij Bhatnagar

Well said. I feel that what the AI produces from the developer's prompts, and what the final outcome looks like, is actually a bigger reflection of the developer's skill and maturity than of the chatbot.

TheBitForge

Exactly. That's the part I keep coming back to—the prompt itself is the skill. What you ask for, what you leave out, when you stop the AI and rewrite from scratch because it doesn't feel right... that's all judgment. The chatbot doesn't know if it's building something maintainable or just technically correct. That gap between "it works" and "it works in six months when someone else touches it" is everything. And honestly? I think that gap is getting wider, not narrower, because now we can generate more code faster—which means more chances to make deeply embedded mistakes at scale.

Narnaiezzsshaa Truong

AI accelerates the typing layer.
I operate at the layer that cannot be automated:

• Intent
• Integrity
• Boundaries
• Restoration logic
• System coherence
• Long-term consequences
• Mythic-operational framing
• Transmission across eras

That’s the work companies actually pay for—even if they don’t always have the language for it.

TheBitForge

I think you've named something really important here that most conversations about AI miss entirely. "Intent" and "system coherence" especially—those aren't just abstract concepts, they're the difference between code that ships and code that lives in production for years. AI can generate a perfectly valid implementation, but it can't tell you whether that implementation respects the implicit contracts your system has been running on for the last five years. It doesn't know what broke last time someone "just refactored this one thing." That knowledge—that operational mythology you're carrying—that's irreplaceable, and honestly, I think it's what separates developers who survive AI from those who get replaced by it.

Mike Talbot ⭐

It kind of does know some of those things, doesn't it? I know when I use AI to write code in my codebase, it's respecting my conventions, it's using the patterns that we look for. If it doesn't, I tell it and it makes a rule for that and it doesn't make that mistake again.

Narnaiezzsshaa Truong

You're right that AI can learn pattern adherence—conventions, style guides, linting rules.

That's real and useful.

But coherence isn't pattern adherence.

Coherence is knowing why that one service can't be refactored even though it violates every convention in the repo. It's the restoration logic that says "if this breaks, here's what we rebuild first." It's the implicit contract between teams that was never documented because it predates everyone currently on the team.

It's the thing that breaks when someone "just refactors this one thing" and three downstream services fail silently for six hours.

You can correct an AI into following your conventions. You can't correct it into understanding the emotional weight of a system's history, the unwritten dependencies, or the restoration sequences that only exist in the collective memory of the people who've been paged at 3am.

That's not a rule violation. It's a missing ontology.

Narnaiezzsshaa Truong

That's the real work, isn't it—what survives refactors, rewrites, and regime shifts.

Cesar Aguirre

The bottleneck has never been typing.

It's:

  • waiting for a committee to approve changes
  • deciding what to build
  • derisking sprints
  • discussing how to slice a problem
  • coming up with testing plans
  • waiting for code reviews

That hasn't changed much with AI.

Nube Colectiva

AI helps to a certain extent. Then I start making mistakes, and that's when I use critical thinking. I use my knowledge and consult the official documentation for the languages or technologies I'm using.

Mohamed Ibrahim • Edited

You get paid because you tell ChatGPT, or any AI tool, what to do! That's all! If you're a good programmer, you'll tell it to do certain things to save you time; experience is key here. Inexperienced people will hit a wall as soon as the project gets a little bigger.

A good programmer here is one who considers performance, SEO, flexibility, future scalability, testing, the choice of tools for building the program, and so on. All of this is in your head, not in the artificial intelligence, but the AI does save a significant amount of time, just like existing libraries used to save you time. Here, it's even easier.

Collapse
 
vibhash_mishra_da1c559149 profile image
vibhash mishra

Even if ChatGPT writes your code, you are getting paid for:

  • Understanding the problem
  • Making the right decisions
  • Knowing what to ask
  • Reviewing & fixing
  • Context & responsibility
  • Integration & ownership
