I used Claude to write a function last week. Took thirty seconds. Would've taken me twenty minutes.
So what did I do for the other nineteen and a half?
When was the last time you wrote a for-loop from memory? When did you last implement quicksort without checking syntax? Do you remember what year you stopped memorizing standard library functions?
We've been "cheating" forever. Google. Stack Overflow. That one GitHub repo you always copy-paste from. Your coworker's code from three years ago.
So why does AI feel different?
Here's what I think I'm actually paid for, and I want to know if you agree:
Knowing which problem to solve first?
Understanding why the CEO's "simple request" will break everything?
Explaining to the PM why "just add a button" means three weeks of refactoring?
Deciding not to build the thing?
Knowing when the 300-line function ChatGPT gave me is technically correct but architecturally wrong?
Reviewing the PR at 4 p.m. and catching the bug that would've cost $40k?
Sitting in the incident channel at midnight taking responsibility?
But here's the uncomfortable part: how much of your day is actually that?
How much is just... typing?
And if it's mostly typing, what happens when typing isn't the constraint anymore?
I don't think AI is replacing developers. But I think it's asking us a question we've been avoiding:
What were we really doing all along?
Were you solving problems, or were you translating solutions into syntax? Because only one of those is going away.
So I'm curious: when you use AI to write code, what are you doing while it types? What's happening in your head that the AI can't do?
And more importantly—is that the thing your company is paying for?

Top comments (31)
From my perspective, problems are not solved by writing code.
They are solved earlier, by designing solutions that are already strategic and long-term in your mind.
Code is just the translation of that thinking.
AI can make this translation faster, but it cannot — and should not — think on your behalf.
It does not decide what is sustainable, what is risky, or what will have consequences six months from now.
For this reason, I don’t think the real question is whether we use AI to write code or not.
The real issue is whether we are willing to cooperate: with the context, with the system, and with the people who will come after us.
Without that cooperation, even perfect code — whether written by a human or by an AI — remains fragile.
I agree
Programming has never been the main or most complicated thing; anyone can learn to program. The challenge has always been figuring out how to solve problems, with programming being the vehicle for implementing the solution.
This was true yesterday, it's true today, and it will continue to be true tomorrow. Whether or not we have AI involved.
So from my own perspective, coding is still as important as before. As for whether good programming matters or not... well, it matters about as much as laying bricks properly in a building. No matter how well you've designed the building or thought out the solution, if the bricks aren't laid correctly or the material isn't right, you'll have built something very quickly that looks beautiful at first glance, but it will fall apart and need repairs almost from day one.
So I don't agree with those who claim that writing code won't be important anymore because of AI. Coding is still important as the way to implement the solution to a problem, nothing more, nothing less.
This! As long as clients don't know what they want, there's work! :)
Fact! xD
I must say, I find it rather intriguing that AI-generated code has become a topic of discussion in the developer community. It's almost as if we're witnessing a subtle shift in the way we perceive our profession - from being seen as a craftsman or artisan, to a more utilitarian role, where efficiency and speed are paramount. Nevertheless, I firmly believe that the human element will continue to play a crucial role in software development, particularly in problem-solving and strategic decision-making.
Yes, the shift is real, from problem-solving to decision-making. Maybe it will all be replaced by AI eventually, but we're not quite there yet 😂
Real, I saw this too ^0^
The thing is, AI rarely produces working code unless you request very straightforward, out-of-the-book boilerplate. So while gen AI comes up with an answer, code, or copy, anticipate its shortcomings: inconsistencies, hidden bugs, or a failure to question implicit, narrow constraints.
Copying 2,000 lines of code wholesale is definitely a bad practice, but you can use it as a draft - it cuts down the time you spend purely on syntax and basic operations.
But you're right anyway, you need to double, even triple check.
Couldn't agree more
We’re paid for judgment, not keystrokes. AI can write code — it can’t own consequences, tradeoffs, or accountability.
Exactly. And I think what scares people is realizing how much of their day wasn't actually judgment. It's one thing to say "we're paid for decisions, not typing" — it's another to look at your calendar and realize half your meetings could've been emails, and half your code reviews were just catching syntax issues AI wouldn't make. The uncomfortable truth is that accountability only matters when something goes wrong. The rest of the time, we're just... there. And AI is forcing us to be honest about what "being there" actually means.
Well said. I feel that what the AI produces from the developer's prompts, and what the final outcome ends up being, is actually a bigger reflection of the developer's skill and maturity than of the chatbot.
Exactly. That's the part I keep coming back to—the prompt itself is the skill. What you ask for, what you leave out, when you stop the AI and rewrite from scratch because it doesn't feel right... that's all judgment. The chatbot doesn't know if it's building something maintainable or just technically correct. That gap between "it works" and "it works in six months when someone else touches it" is everything. And honestly? I think that gap is getting wider, not narrower, because now we can generate more code faster—which means more chances to make deeply embedded mistakes at scale.
AI accelerates the typing layer.
I operate at the layer that cannot be automated:
• Intent
• Integrity
• Boundaries
• Restoration logic
• System coherence
• Long-term consequences
• Mythic-operational framing
• Transmission across eras
That’s the work companies actually pay for—even if they don’t always have the language for it.
I think you've named something really important here that most conversations about AI miss entirely. "Intent" and "system coherence" especially—those aren't just abstract concepts, they're the difference between code that ships and code that lives in production for years. AI can generate a perfectly valid implementation, but it can't tell you whether that implementation respects the implicit contracts your system has been running on for the last five years. It doesn't know what broke last time someone "just refactored this one thing." That knowledge—that operational mythology you're carrying—that's irreplaceable, and honestly, I think it's what separates developers who survive AI from those who get replaced by it.
It kind of does know some of those things, doesn't it? I know when I use AI to write code in my codebase, it's respecting my conventions, it's using the patterns that we look for. If it doesn't, I tell it and it makes a rule for that and it doesn't make that mistake again.
You're right that AI can learn pattern adherence—conventions, style guides, linting rules.
That's real and useful.
But coherence isn't pattern adherence.
Coherence is knowing why that one service can't be refactored even though it violates every convention in the repo. It's the restoration logic that says "if this breaks, here's what we rebuild first." It's the implicit contract between teams that was never documented because it predates everyone currently on the team.
It's the thing that breaks when someone "just refactors this one thing" and three downstream services fail silently for six hours.
You can correct an AI into following your conventions. You can't correct it into understanding the emotional weight of a system's history, the unwritten dependencies, or the restoration sequences that only exist in the collective memory of the people who've been paged at 3am.
That's not a rule violation. It's a missing ontology.
That's the real work, isn't it—what survives refactors, rewrites, and regime shifts.
The bottleneck has never been typing, and that hasn't changed much with AI.
AI helps to a certain extent. Then I start making mistakes, and that's when I use critical thinking. I use my knowledge and consult the official documentation for the languages or technologies I'm using.
You get paid because you tell ChatGPT or any AI tool what to do! That's all! If you're a good programmer, you'll tell it to do certain things to save you time; experience is key here. Inexperienced people will hit a wall as soon as the project gets a little bigger.
A good programmer here is one who considers performance, SEO, flexibility, future scalability, testing, the tools they choose to develop their program, etc. All of this is in your head, not in the AI, but the AI does save a significant amount of time, just like existing libraries used to save you time. But here, it's even easier.
Even if ChatGPT writes your code, you are getting paid for: