I glanced at my browser tabs this morning. Twenty tabs open. Every single one was either Claude or Gemini.
Then I remembered what my tabs used to look like just four years ago: Stack Overflow threads, MDN docs, blog posts, GitHub issues. The traditional developer toolkit.
The shift happened so gradually that I almost missed how fundamental it is.
The Old Way
Remember the workflow? You'd hit a problem, craft the perfect search query, wade through Stack Overflow answers from 2014, cross-reference three different blog posts, and eventually piece together a solution. Then you'd keep those tabs open for days because you might need them again.
Documentation was your bible. You'd spend hours reading API docs, trying to understand the examples, mentally mapping them to your specific use case.
The New Way
Now? I open a chat. I describe what I'm building. The conversation unfolds like pair programming with someone who's read every piece of documentation and every Stack Overflow thread ever written.
- Need to understand a complex API? Ask questions back and forth until it clicks
- Stuck on an architectural decision? Brainstorm trade-offs in real-time
- Can't remember that regex pattern? Get it explained and customized for your exact case
It's not just faster. It's fundamentally different.
Example: Last week I needed to implement rate limiting for my MCP server. Old way: I'd search "rate limiting node.js", read 5 articles, piece together a solution, test it, debug it. New way: I described my use case to Claude, we discussed trade-offs (token bucket vs sliding window), it generated an implementation with my specific edge cases handled, I reviewed and shipped.
Same outcome. Completely different process. The knowledge transfer happened through conversation, not documentation.
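For the curious, here's the shape of that kind of solution. This is a minimal token-bucket sketch, not my actual MCP server code - the class, names, and numbers are illustrative:

```typescript
// Minimal token-bucket rate limiter (illustrative sketch - names and
// limits are made up here, not taken from my actual server).
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,     // max burst size
    private readonly refillPerSec: number  // steady-state rate
  ) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  // Returns true if the request may proceed, false if it should be limited.
  tryConsume(cost = 1): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    // Refill in proportion to elapsed time, never exceeding capacity.
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSec
    );
    this.lastRefill = now;
    if (this.tokens < cost) return false;
    this.tokens -= cost;
    return true;
  }
}

// Usage: allow bursts of 10 requests, refilling 2 tokens per second.
const limiter = new TokenBucket(10, 2);
if (!limiter.tryConsume()) {
  // reject the request (e.g. return a rate-limit error to the client)
}
```

That's the classic trade-off we talked through: a token bucket forgives short bursts while capping sustained load, while a sliding window gives tighter, smoother limits at the cost of more bookkeeping.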
What We're Not Talking About
The development community is still processing this shift, and honestly, I'm not sure we're having the right conversations:
We talk about productivity gains. Sure, I ship faster. But that's not the interesting part.
We don't talk enough about how it's changing how we learn. I'm not memorizing syntax anymore. I'm learning concepts and patterns while the AI handles the implementation details. Is this better? Worse? Just different?
We're not discussing the dependency. My entire workflow now assumes I have access to these tools. What happens when I don't? Am I losing skills or just offloading the memorization to focus on higher-level thinking?
We're glossing over what we lost. Those 47 Stack Overflow tabs weren't just research - they connected you to the collective knowledge of thousands of developers who struggled with the same problem. Now my tabs are conversations with AI. I'm more productive, but I'm also more isolated. There's something philosophical there we haven't unpacked.
The Uncomfortable Questions
I love these tools. FPL Hub, the RAG systems I've built, my MCP servers - they've all been developed with AI assistance. I'm more productive than ever.
But sometimes I wonder:
- Am I becoming a better developer or just a better prompt engineer?
- When the AI explains something, am I really understanding it or just trusting it?
- What happens to the next generation of developers who never learn to read documentation the hard way?
Maybe It's Just Evolution
Every generation of developers faces this. Assembly programmers thought high-level languages would ruin programming. Java devs scoffed at frameworks that "did too much magic." Seniors today still debate whether junior devs should learn algorithms before frameworks.
Maybe AI assistance is just the next abstraction layer. Maybe fighting it is like insisting we should all write malloc() from scratch to "really understand memory management."
Or maybe it's different this time.
The Skills That Matter Now
Here's what I've noticed: the skills that matter are changing.
Less important:
- Memorizing syntax
- Knowing every edge case of a library
- Perfect recall of API methods
More important:
- Asking the right questions
- Evaluating if an AI-generated solution is actually good
- Understanding trade-offs and system design
- Knowing when to trust the AI and when to dig deeper
I can't tell if this is a net positive yet. But I know it's real.
What I Know For Sure
Looking at those tabs - all AI chats, zero traditional docs - I know one thing: the way we work has completely changed, and we're still figuring out what that means.
We're the first generation of developers living through this transition. We're writing the playbook in real-time. Some of us are thriving, some are skeptical, most of us are somewhere in between.
The tabs don't lie though. The shift is real. Whether we've fully processed it or not.
What's in your tabs right now? Still documentation? Or has the shift happened for you too?
Top comments (109)
I remember when I began learning the Linux OS back in 2012. I'd get stuck and go on Stack Overflow for answers. Sometimes I'd get the answer, sometimes I'd get 10 answers and sometimes I'd get attacked for how I phrased my question. I like things better now - how all of the AI models know most everything there is to know about Linux - and I know where they learned it from.
the stackoverflow gatekeeping was painful fr.
ai is more patient for sure. but we also lost something. those threads were built by thousands of people debugging together
now its just... me and claude. faster but lonelier somehow.
The "where they learned it from" sources are running dry. The bad docs, mentioned in another discussion, are still there. There are still forums and social media. We can still discuss and ask questions and we still do. However, if the shift towards private AI conversations continues, new AI generations will have less trustworthy source material to learn from, so their answers will likely deteriorate.
I remember times before StackOverflow (Experts Exchange and various forums, and even the good old USENET newsgroups) and before MDN (MSDN). I still have MDN tabs open, I still use StackOverflow, and I do use Claude, Perplexity and Gemini/Bard/Google AI mode, but apart from MDN, there is still no single source of truth that won't fail eventually.
Some day, using AI will become more expensive, and something new will emerge, hopefully better than all that we have right now. Critical thinking, and learning by doing will never go away though.
Right - critical thinking, learning by doing, and as you say, we're all talking and exchanging information with each other here on this forum. We used slide rules in 1971 in my electronics classes and studied electron tubes - calculators were just arriving on the scene. Now there are smartphones and wireless and I can run AI on my laptop - I've learned to roll with the tech.
fair point. tools change, fundamentals dont.
i think my worry is less about the tool and more about the shift from public knowledge bases to private conversations. but youre right that we adapt.
appreciate the perspective from someone whos actually lived through multiple tech waves 🙏
"knowledge running dry" is exactly it.
we're all optimizing our individual workflows while accidentally killing the commons. stackoverflow was annoying but it was PUBLIC. future devs could learn from our struggles. now all that reasoning is locked in private ai chats
and youre right about cost. when ai gets expensive or goes away, what do we have left? a bunch of people who forgot how to debug without it?
this feels like tragedy of the commons but for developer knowledge. no idea how to fix it but naming it is important
thanks for this perspective
I don't think private is the biggest problem; at least the AI companies could use private chats for training, but only with feedback and curation. If all AI chats were public, indexed and searchable, they would still contain all the failed steps with incomplete code and misleading suggestions: much harder to make sense of than StackOverflow with its strict voting system (and outdated answers, but that's often obvious and commented) or forums where senior members can edit titles and people often share their final solution and mark a topic as solved. Most AI chats are too conversational to become part of a new knowledge library.
fair point. ai chat transcripts would be as messy as my git history. technically a record but not actually useful 😂
i think youre right that the curation/voting system was the real magic of stackoverflow not just the public visibility. raw conversations without that filtering layer are just noise
maybe the real question is: how do we rebuild that curation mechanism for ai-era knowledge? bc right now we dont have it and individual productivity is masking that gap
really appreciate this back and forth btw. this is exactly the kind of thinking i was hoping for
I started learning Linux in 2012. When I got stuck, I'd go to Stack Overflow. Sometimes I'd find an answer. Sometimes I'd find ten. Sometimes I'd just get attacked for how I asked the question.
Now I ask Claude or Perplexity and get a clear, patient explanation—no judgment, no "this has been asked before," no downvotes.
And here's the thing: I know exactly where these models learned it all. From Stack Overflow. From the Arch Wiki. From all those forum threads where someone got roasted for not reading the manual first.
The communities that gatekept knowledge ended up training the tools that now give it away freely.
We haven't really sat with that yet.
The "dumb question" you were afraid to ask is now the safest one to ask
Hostile experts created the dataset for patient machines
My students will never know what it felt like to mass-open 30 tabs and pray
"hostile experts created the dataset for patient machines"
THIS. we built the knowledge commons through collective struggle, then got replaced by the nicer version of ourselves
your students will learn faster but differently. not sure if thats progress or just... change
either way this comment is 🔥
Fair point... we learnt and were happy being grilled on Stack Overflow, the criticism was brutal... still, Stack Overflow was the best spot for help back then
Fantastic article ...
Now, this just made me wonder: could AI write this ... ? I think the answer is an unqualified "NO", and that clearly shows where humans still have the edge, and will keep that edge for the foreseeable future ...
appreciate that 🙏
ai loves giving solutions. this is more just... processing out loud and not knowing the answers yet
glad it landed
This article explains why human-written texts are often still a lot better than AI-generated text:
dev.to/ujja/ai-confluence-docs-and...?
yes, ai writes to sound correct. humans write to figure things out
the messy thinking-out-loud is what people actually want to read
good link
Plus it has a tendency to be formal and abstract - which makes it hard to digest, coz it's difficult to relate to the abstractions ... and it tends to be repetitive as well - often you can spot AI-written text from a mile away, not because it's wrong or incorrect, but because it's got something "robotic" :-)
"robotic" is exactly it. too perfect, too structured, zero rough edges.
humans think out loud. ai writes reports.
people want the thinking not the summary.
appreciate you leob 🙏
Is this a challenge? ;)
No, it's just an observation ... :)
Code for thought. Even though AI is becoming the new norm, it still requires us to know exactly what we want. It's like a developer getting to understand what the client wants. Now developers are AI's clients. Excellent article.
100%. the bottleneck moved from implementation to articulation
asking the right questions matters more than having the right answers now
thanks.
This really hit close to home.
The part about losing the collective struggle of Stack Overflow threads resonated a lot. Those messy tabs weren’t just solutions — they were context, debate, and scars from other devs who had already been burned.
I feel more productive than ever with AI, but also strangely more alone in the problem-solving process. Less “community knowledge”, more private conversations.
I don’t think this makes us worse developers — but it does change what “good” means. Judgment, system thinking, and knowing when to push back on the AI feel more critical than raw recall ever was.
Curious to see how we teach newcomers in this world. Reading docs was painful, but it trained a kind of patience and skepticism that I’m not sure chats automatically build.
Great post — this is one of the conversations we should be having more openly.
this is it. "less community knowledge, more private conversations"
the productivity is real but so is the isolation. and youre right. judgment matters way more than recall now. which is probably good? but also we have no idea how to teach that systematically yet
the newcomer question keeps me up tbh. reading bad docs built skepticism by accident. ai gives you answers confidently. how do you learn to doubt?
thanks for adding to this. really good perspective. 👍
That’s such a good point about skepticism being “accidentally” trained by bad docs.
AI answers confidently by default, and without friction it’s easy to skip the doubt step. Maybe the new skill we need to teach isn’t how to find answers, but how to interrogate them — asking “what assumptions is this making?” and “when would this fail?”
Feels like we’re still early in figuring out how to pass that mindset to newcomers. Appreciate you pushing the conversation further.
"interrogate answers" is the perfect framing.
the old way: friction forced skepticism
the new way: we have to teach doubt explicitly
no clue how to do that at scale but naming it is step one i guess
appreciate the back and forth, this is exactly what i was hoping for with this post. 👍
Still using documentation and tutorials myself. Starting out, I do believe fundamentals remain important for a firm foundation that can be built on.
100%. fundamentals are even more important now imo. you need to know what good code looks like to catch when AI generates garbage.
what are you learning rn?
Currently working through the Responsive Web Design certification at freeCodeCamp but my first love is Python!
python gang 💪
fcc's structure is really good for building muscle memory. you thinking fullstack eventually or backend focused?
I aim to complete the freeCodeCamp Full Stack curriculum, though I’m more backend-leaning. Python's my favourite, so I'll focus there while still keeping up with the frontend basics.
makes sense. knowing enough frontend to not be completely lost is clutch even as a backend dev
fastapi is fire if you haven't checked it out yet
No, I haven't as of yet. Appreciate the heads-up though!
for sure! you'll prob run into it eventually, it's everywhere now
good luck with fcc 💪
I ditched Stack Overflow a long time ago, but when I use LLMs for documentation, I still check the original docs sometimes just to be 100% sure. I read an article somewhere where the author mentioned that the major LLMs sometimes "make up" stuff that does not exist 😂 that made me lose trust in AI altogether, so I still do have documentation in my tabs for sure
Agree, AI hallucination is a reality (and very similar to AI tools producing code that does not compile, attempting a solution without asking for a full understanding of the problem to be solved), and I still have links to Stack Overflow articles (like at the end of this article dev.to/shitij_bhatnagar_b6d1be72/s...) because I still end up referring to Stack Overflow once in a while :-)
good point about the compilation angle. that's actually a form of cheap verification that ben santora talks about in his work on AI coding risks.
when AI generates code that doesn't compile, you catch it immediately. when it generates code that compiles but has subtle logic bugs or security issues, you don't know until much later.
that's why "it compiles" is becoming a weaker signal of correctness in the AI era. we need additional verification layers
appreciate the link to your article too. will check it out.
Thanks for your comments and excellent point about the subtle and other shortfalls in AI output, though I do believe it will slowly improve / learn and get better.
Let’s see what the future brings
this is exactly the verification problem i keep coming back to.
you're doing what ben santora calls "keeping humans in the loop": using AI for speed but docs for verification. that's the right approach but it also means you're doing MORE cognitive work than either method alone.
the "making up stuff" problem (hallucinations) is why AI can't fully replace documentation. but here's what worries me: if everyone stops contributing to stack overflow / public docs because "AI is good enough", what happens when you need to verify?
you're being smart about trust. the question is whether the next generation will be as skeptical, or if they'll just accept whatever AI says confidently
appreciate you sharing your workflow here.
Exactly. I truly doubt the AI "reign" will last for a long period of time. What might happen is, AI keeps getting more irrelevant as time goes on if people stop learning and contributing and solely depend on it. Which might really affect the next generations to come. This is a really interesting topic you've written about, Daniel
you just described the feedback loop that keeps me up at night.
peter truchly said something similar yesterday. "if people stop contributing, the only entity left to build that knowledge base is AI itself" which leads to model collapse (AI training on AI-generated content, degrading over time).
you're coming at it from a different angle but arriving at the same conclusion. the AI "reign" is self-limiting because it's consuming the knowledge commons it depends on.
the generational piece is what really worries me. you and i learned the old way. we have skepticism built in. but juniors entering the field NOW? they'll trust AI by default because they never experienced the alternative.
i'm working on a follow-up article exploring exactly this. would love to cite your insight if that's cool. you articulated the collapse mechanism really clearly.
thanks for extending this thinking
That would be awesome! Would be looking forward to read the next article 💯
LLMs often struggle with troubleshooting software or games, skipping steps or making things up entirely. And when you point it out, they’ll confidently reply, “Yes, that menu was removed in an update,” as if that fixes the problem.
Exactly, they make up stuff and "apologise" when you point it out
My best comment on this would be a quote from Bob Dylan's 1964 song, titled "The Times They Are A-Changin'":
"Then you better start swimmin' / Or you'll sink like a stone / For the times they are a-changin'."
great quote. we're swimming for sure.
just not sure if were building something together or just... not drowning individually
appreciate you reading 🙏
I generally share the same point of view. AI is really convenient and can produce a clean, quick, usable answer right away. I still often search with Google, but the results can be overwhelming, ambiguous, or buried in long threads of failed attempts — which isn’t useless either. That said, we shouldn’t forget that AI models are trained on years of documentation, questions, and exploratory content… and future generations might not benefit from such rich source material. From my perspective, a good minimum would be using the solid answers we get from AI to build clean, useful wikis that are helpful both to us and to future AI systems. But the race for profit has become the norm, and it’s hard to break away from that.
exactly. "future generations might not benefit from such rich source material" . this is the knowledge collapse problem
we're all consuming the commons (stackoverflow, docs, wikis) through ai but not contributing back. eventually the well runs dry.
your wiki idea is interesting though. treat ai conversations as raw material, then curate/publish the good stuff. rebuilds the public knowledge layer.
no idea how to make that happen at scale but its better than just... private chats forever.
appreciate this perspective 🙏
Personally, I found Stack Overflow a significant distraction, frequently wrong, misguided, and with conflicting advice on the things I needed most. A vital part of my work, but a frustration. AI removes that for me; my understanding is much improved, though I get fed up with the writing style, which jars with me.
The lack of such sources of information for new developers will not matter if future development is mostly done by AI. Scary thought. It probably wouldn't invent half the architecture that I use, designed to optimise for multiple teams working on the same code base, but even that won't matter if the teams are all coordinating agents.
I fear Stack Overflow, dev.to etc are like manuals on how to look after your horse, when the world is soon going to be driving Fords.
the horse/ford analogy is provocative and honestly might be right.
but here's what worries me about "AI does all the development": who verifies the AI's architecture decisions? in domains with cheap verification (does it compile? does it run?) AI is probably fine. but system architecture, scaling patterns, team coordination - those have expensive verification. you only know you're wrong when production falls over months later.
your point about "optimising for multiple teams working on the same codebase" - AI wouldn't invent that because it's learned from individual problem-solving, not organizational design. and if we stop doing that thinking publicly, future AI can't learn it either.
maybe the real question isn't "will AI replace developers" but "what level of the stack are we operating at?" if we're just implementing features, yeah, probably automated.
if we're designing systems for human organizations... maybe not?
though your coordinating agents point is chilling. if the teams ARE agents, then organizational design becomes compiler design. whole different game
not sure if i'm optimistic or terrified. probably both.
appreciate the pushback though. this is exactly the kind of uncomfortable question we should be asking.