DEV Community

Daniel Nwaneri

We're Creating a Knowledge Collapse and No One's Talking About It

"Hostile experts created the dataset for patient machines."

That line, from a comment by Vinicius Fagundes on my last article, won't leave my head.

Stack Overflow's traffic collapsed 78% in two years. Everyone's celebrating that AI finally killed the gatekeepers. But here's what we're not asking:

If we all stop contributing to public knowledge bases, what does the next generation of AI even train on?

We might be optimizing ourselves into a knowledge dead-end.

The Data We're Ignoring

Stack Overflow went from 200,000 questions per month at its peak to under 50,000 by late 2025. That's not a dip. That's a collapse.

Meanwhile, 84% of developers now use AI tools in their workflow, up from 76% just a year ago. Among professional developers, 51% use AI daily.

The shift is real. The speed is undeniable. But here's the uncomfortable part: 52% of ChatGPT's answers to Stack Overflow questions are incorrect.

The irony is brutal:

  • AI trained on Stack Overflow
  • Developers replaced Stack Overflow with AI
  • Stack Overflow dies from lack of new content
  • Future AI has... what, exactly?

The Wikipedia Problem

Here's something nobody's complaining about loudly enough: Wikipedia sometimes doesn't even appear on the first page of Google results anymore.

Let that sink in. The largest collaborative knowledge project in human history - free, community-curated, constantly updated, with 60+ million articles - is getting buried by AI-generated summaries and SEO-optimized content farms.

Google would rather show you an AI-generated answer panel (trained on Wikipedia) than send you to Wikipedia itself. The thing that created the knowledge gets pushed down. The thing that consumed the knowledge gets prioritized.

This is the loop closing in real-time:

  1. Humans build Wikipedia collaboratively
  2. AI trains on Wikipedia
  3. Google prioritizes AI summaries over Wikipedia
  4. People stop going to Wikipedia
  5. Wikipedia gets fewer contributions
  6. AI trains on... what, exactly?

We're not just moving from public to private knowledge. We're actively burying the public knowledge that still exists.

Stack Overflow isn't dying because it's bad. Wikipedia isn't disappearing because it's irrelevant. They're dying because AI companies extracted their value, repackaged it, and now we can't even find the originals.

The commons didn't just lose contributors. It lost visibility.

What We Actually Lost

PEACEBINFLOW captured something crucial:

"We didn't just swap Stack Overflow for chat, we swapped navigation for conversation."

Stack Overflow threads had timestamps, edits, disagreement, evolution. You could see how understanding changed as frameworks matured. Someone's answer from 2014 would get updated comments in 2020 when the approach became deprecated.

AI chats? Stateless. Every conversation starts from zero. No institutional memory. No visible evolution.

I can ask Claude the same question you asked yesterday, and neither of us will ever know we're solving the same problem. That's not efficiency. That's redundancy at scale.

As Amir put it:

"Those tabs were context, debate, and scars from other devs who had already been burned."

We traded communal struggle for what Ali-Funk perfectly named: "efficient isolation."

The Skills We're Not Teaching

Amir nailed something that's been bothering me:

"AI answers confidently by default, and without friction it's easy to skip the doubt step. Maybe the new skill we need to teach isn't how to find answers, but how to interrogate them."

The old way:
Bad docs forced skepticism accidentally. You got burned, so you learned to doubt. Friction built judgment naturally.

The new way:
AI is patient and confident. No friction. No forced skepticism. How do you teach doubt when there's nothing pushing back?

We used to learn to verify because Stack Overflow answers were often wrong or outdated. Now AI gives us wrong answers confidently, and we... trust them? Because the experience is smooth?

The Economics of Abundance

Doogal Simpson reframed the problem economically:

"We are trading the friction of search for the discipline of editing.
The challenge now isn't generating the code, but having the guts to
reject the 'Kitchen Sink' solutions the AI offers."

Old economy: Scarcity forced simplicity

Finding answers was expensive, so we valued minimal solutions.

New economy: Abundance requires discipline

AI generates overengineered solutions by default. The skill is knowing what to DELETE, not what to ADD.

This connects to Mohammad Aman's warning about stratification: those who develop the discipline to reject complexity become irreplaceable. Those who accept whatever AI generates become replaceable.

The commons didn't just lose knowledge. It lost the forcing function that taught us to keep things simple.

The Solver vs Judge Problem

Ben Santora has been testing AI models with logic puzzles designed to reveal reasoning weaknesses. His finding: most LLMs are "solvers" optimized for helpfulness over correctness.

When you give a solver an impossible puzzle, it tries to "fix" it to give you an answer. When you give a judge the same puzzle, it calls out the impossibility.

As Ben explained in our exchange:

"Knowledge collapse happens when solver output is recycled without a strong, independent judging layer to validate it. The risk is not in AI writing content; it comes from AI becoming its own authority."

This matters for knowledge collapse: if solver models (helpful but sometimes wrong) are the ones generating content that gets recycled into training data, we're not just getting model collapse - we're getting a specific type of collapse.

Confident wrongness compounds. And it compounds confidently.
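To make that judging layer concrete, here's a minimal sketch (the ask() helper is a stand-in for whichever model call you use; the names and the acceptance rule are illustrative, not a prescribed API). The judge pass is prompted to attack the answer rather than polish it, and anything it flags goes to a human.

```python
# Minimal solver/judge split. `ask` is a placeholder for whatever model call you
# use; the structure is the point: the judge hunts for flaws instead of answering.

def solve_with_judge(question: str, ask) -> dict:
    answer = ask(f"Answer as directly as you can:\n{question}")

    verdict = ask(
        "You are reviewing an answer, not improving it.\n"
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n"
        "List concrete reasons this could be wrong. If the question is impossible "
        "or underspecified, say so. If you find nothing, reply exactly: no issues found"
    )

    return {
        "answer": answer,
        "verdict": verdict,
        # Anything the judge flags gets routed to a human instead of being trusted.
        "needs_human_review": "no issues found" not in verdict.lower(),
    }
```

Even this thin layer flips the default from "accept" to "justify", which is the opposite of how most of us use these tools today.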

The Verification Problem

Ben pointed out something crucial: some domains have built-in verification, others don't.

Cheap verification domains:

  • Code that compiles (Rust's strict compiler catches errors)
  • Bash scripts (either they run or they don't)
  • Math (verifiable proof)
  • APIs (test the endpoint, get immediate feedback)

Expensive verification domains:

  • System architecture ("is this the right approach?")
  • Best practices ("should we use microservices?")
  • Performance optimization ("will this scale?")
  • Security patterns ("is this safe?")

Here's the problem: AI solvers sound equally confident in both domains.

But in expensive verification domains, you won't know you're wrong until months later when the system falls over in production. By then, the confident wrong answer is already in blog posts, copied to Stack Overflow, referenced in documentation.

And the next AI trains on that.
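To see how lopsided this is, here's what cheap verification looks like in code. The paginate helper and its off-by-one bug are invented for illustration: a five-second test exposes the confidently wrong suggestion. There is no five-second test for "should we use microservices?"

```python
# Cheap verification: a wrong, AI-suggested helper fails a trivial test in seconds.
# (paginate and its bug are illustrative, not taken from any real codebase.)

def paginate(items, page, page_size):
    start = page * page_size + 1  # confident suggestion, silently skips one item per page
    return items[start:start + page_size]

def test_first_page_starts_at_item_zero():
    assert paginate(list(range(10)), page=0, page_size=3) == [0, 1, 2]

if __name__ == "__main__":
    test_first_page_starts_at_item_zero()  # fails immediately: got [1, 2, 3]
```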

The Confident Wrongness Problem

Maame Afua and Richard Pascoe highlighted something worse than simple hallucination:

When AI gets caught being wrong, it doesn't admit error - it generates plausible explanations for why it was "actually right."

Example:

You: "Click the Settings menu"
AI: "Go to File > Settings"
You: "There's no Settings under File"
AI: "Oh yes, that menu was removed in version 3.2"
[You check - Settings was never under File]

This is worse than hallucination because it makes you doubt your own observations. "Wait, did I miss an update? Am I using the wrong version?"

Maame developed a verification workflow: use AI for speed, but check documentation to verify. She's doing MORE cognitive work than either method alone.

This is the verification tax. And it only works if the documentation still exists.

The Tragedy of the Commons

This is where it gets uncomfortable.

Individually, we're all more productive. I build faster with Claude than I ever did with Stack Overflow tabs. You probably do too.

But collectively? We're killing the knowledge commons.

The old feedback loop:

Problem → Public discussion → Solution → Archived for others

The new feedback loop:

Problem → Private AI chat → Solution → Lost forever

Ingo Steinke pointed out something I hadn't considered: even if AI companies train on our private chats, raw conversations are noise without curation.

Stack Overflow had voting. Accepted answers. Comment threads that refined understanding over time. That curation layer was the actual magic, not just the public visibility.

Making all AI chats public wouldn't help. We'd just have a giant pile of messy conversations with no way to know what's good.

Pascal CESCATO warned:

"Future generations might not benefit from such rich source material... we shouldn't forget that AI models are trained on years of documentation, questions, and exploratory content."

We're consuming the commons (Stack Overflow, Wikipedia, documentation) through AI but not contributing back. Eventually the well runs dry.

We're Feeling Guilty About the Wrong Thing

A commenter said: "I've been living with this guilty conscience for some time, relying on AI instead of doing it the old way."

I get it. I feel it too sometimes. Like we're cheating, somehow.

But I think we're feeling guilty about the wrong thing.

The problem isn't using AI. The tools are incredible. They make us faster, more productive, able to tackle problems we couldn't before.

The problem is using AI privately while the public knowledge base dies.

We've replaced "struggle publicly on Stack Overflow" with "solve privately with Claude." Individually optimal. Collectively destructive.

The guilt we feel? That's our instinct telling us something's off. Not because we're using new tools, but because we've stopped contributing to the commons.

One Possible Path Forward

Ali-Funk wrote about using AI as a "virtual mentor" while transitioning from IT Ops to Cloud Security Architect. But here's what he's doing differently:

He uses AI heavily:

  • Simulates senior architect feedback
  • Challenges his technical designs
  • Helps him think strategically

But he also:

  • Publishes his insights publicly on dev.to
  • Verifies AI output against official AWS docs
  • Messages real people in his network for validation
  • Has a rule: "Never implement what you can't explain to a non-techie"

As he put it in the comments:

"AI isn't artificial intelligence. It's a text generator connected to a library. You can't blindly trust AI... It's about using AI as a compass, not as an autopilot."

This might be the model: Use AI to accelerate learning, but publish the reasoning paths. Your private conversation becomes public knowledge. The messy AI dialogue becomes clean documentation that others can learn from.

It's not "stop using AI" - it's "use AI then contribute back."

The question isn't whether to use these tools. It's whether we can use them in ways that rebuild the commons instead of just consuming it.

Model Collapse

Peter Truchly raised the real nightmare scenario:

"I just hope that conversation data is used for training, otherwise the only entity left to build that knowledge base is AI itself."

Think about what happens:

  1. AI trains on human knowledge (Stack Overflow, docs, forums)
  2. Humans stop creating public knowledge (we use AI instead)
  3. New problems emerge (new frameworks, new patterns)
  4. AI trains on... AI-generated solutions to those problems
  5. Garbage in, garbage out, but at scale

This is model collapse. And we're speedrunning toward it while celebrating productivity gains.
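Here's a toy numerical sketch of that loop (an illustration of the statistical mechanism, not a simulation of any real model). Each generation is fit only to the most typical outputs of the previous one, the way recycled AI content skews toward high-confidence, middle-of-the-road answers, and the diversity of what it can produce shrinks every round.

```python
# Toy illustration of model collapse: each new "model" is fit only to the most
# typical outputs of the previous generation, and output diversity shrinks.

import random
import statistics

def next_generation(samples, keep=0.9):
    mu = statistics.mean(samples)
    # Keep only the most typical 90% of outputs, mimicking curation toward
    # confident, middle-of-the-road answers.
    kept = sorted(samples, key=lambda x: abs(x - mu))[: int(len(samples) * keep)]
    new_mu, new_sigma = statistics.mean(kept), statistics.stdev(kept)
    return [random.gauss(new_mu, new_sigma) for _ in range(len(samples))]

data = [random.gauss(0, 1) for _ in range(1000)]  # generation 0: "human" data, spread ~1.0
for gen in range(1, 11):
    data = next_generation(data)
    print(f"generation {gen}: spread = {statistics.stdev(data):.3f}")
# The spread drops every generation; the rare, unusual answers disappear first.
```

The numbers are made up; the mechanism is the point. Once the commons stops feeding in outliers, the tails go first.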

GitHub is scraped constantly. Every public repo becomes training data. If people are using solver models to write code, pushing to GitHub, and that code trains the next generation of models... we're creating a feedback loop where confidence compounds regardless of correctness.

The domains with cheap verification stay healthy (the compiler catches it). The domains with expensive verification degrade silently.

The Corporate Consolidation Problem

webketje raised something I hadn't fully addressed:

"By using AI, you opt out of sharing your knowledge with the broader community
in a publicly accessible space and consolidate power in the hands of corporate
monopolists. They WILL enshittify their services."

This is uncomfortable but true.

We're not just moving from public to private knowledge. We're moving from
commons to capital.

Stack Overflow was community-driven. Wikipedia is foundation-run. Documentation is open source. These were the knowledge commons - imperfect, often hostile, but fundamentally open rather than locked inside one vendor's product.

Now we're consolidating around:

  • OpenAI (ChatGPT) - $157B valuation
  • Anthropic (Claude) - $60B valuation
  • Google (Gemini) - Alphabet's future

They own the models. They own the training data. They set the prices.

And as every platform teaches us: they WILL enshittify once we're dependent.

Remember when:

  • Twitter was free and open? Now it's X.
  • Google search was clean? Now it's ads and AI.
  • Reddit was community-first? Now it's IPO-driven.

The pattern is clear: Build user dependency → Extract maximum value → Users have nowhere else to go.

What happens when Claude costs $100/month? When ChatGPT paywalls advanced features? When Gemini requires Google Workspace Enterprise?

We'll pay. Because by then, we won't remember how to read documentation.

At least Stack Overflow never threatened to raise prices or cut off API access.

Sidebar: The Constraint Problem

Ben Santora argues that AI-assisted coding requires strong constraints - compilers that force errors to surface early, rather than permissive environments that let bad code slip through.

The same principle applies to knowledge: Stack Overflow's voting system was a constraint. Peer review was a constraint. Community curation was a constraint.

AI chats have no constraints. Every answer sounds equally confident, whether it's right or catastrophically wrong. And when there's no forcing function to catch the error...

The Uncomfortable Counter-Argument

Mike Talbot pushed back hard on my nostalgia:

"I fear Stack Overflow, dev.to etc are like manuals on how to look after your horse, when the world is soon going to be driving Fords."

Ouch. But maybe he's right?

Maybe we're not losing something valuable. Maybe we're watching a once-essential skill set become obsolete. Just like:

  • Assembly programmers → High-level languages
  • Manual memory management → Garbage collection
  • Physical servers → Cloud infrastructure
  • Horse care manuals → Auto repair guides

Each generation thought they were losing something essential. Each generation was partially right.

But here's where the analogy breaks down: horses didn't build the knowledge base that cars trained on. Developers did.

If AI replaces developers, and future AI trains on AI output... who builds the knowledge base for the NEXT paradigm shift?

The horses couldn't invent cars. But developers invented AI. If we stop thinking publicly about hard problems (system design, organizational architecture, scaling patterns), does AI even have the data to make the next leap?

Or do we hit a ceiling where AI can maintain existing patterns but can't invent new ones?

I don't know. But "we're the horses" is the most unsettling framing I've heard yet.

What We Actually Need

I don't have clean answers. But here are questions worth asking:

Can we build Stack Overflow for the AI age?

Troels asked: "Perhaps our next 'Stack Overflow for the AI age' is yet to come. Perhaps it will be even better for us."

I really hope so. But what would that even look like?

From Stack Overflow (the good parts):

  • Public by default
  • Community curation (voting, accepted answers)
  • Searchable and discoverable
  • Evolves as frameworks change

From AI conversations (the good parts):

  • Patient explanation
  • Adapts to your context
  • Iterative dialogue
  • No judgment for asking "dumb" questions

What it can't be:

  • Just AI chat logs (too noisy)
  • Just curated AI answers (loses the reasoning)
  • Just documentation (loses the conversation)

Maybe it's something like: AI helps you solve the problem, then you publish the reasoning path - not just the solution - in a searchable, community-curated space.

Your messy conversation becomes clean documentation. Your private learning becomes public knowledge.

Should we treat AI conversations as artifacts?

When you solve something novel with AI, should you publish that conversation? Create new public spaces for AI-era knowledge? Find a curation mechanism that actually works?

Pascal suggested: "Using the solid answers we get from AI to build clean, useful wikis that are helpful both to us and to future AI systems."

This might be the direction. Not abandoning AI, but creating feedback loops from private AI conversations back to public knowledge bases.

How do we teach interrogation as a core skill?

Make "doubting AI" explicit in how we teach development. Build skepticism into the workflow. Stop treating AI confidence as correctness.

As Ben put it: "The human must always be in the loop - always and forever."
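One way to build that doubt step into the workflow is to make it mechanical. This is a sketch under assumptions, not a standard tool: a git commit-msg hook that refuses commits marked as AI-assisted unless someone records how the change was verified. The trailer names are made up; the forcing function is the point.

```python
# Hypothetical commit-msg hook: block AI-assisted commits that carry no note
# about how they were verified. Trailer names are illustrative, not a standard.

import sys

def check_commit_message(path: str) -> int:
    with open(path, encoding="utf-8") as f:
        msg = f.read()

    ai_assisted = "AI-Assisted: yes" in msg
    verified = "Verified-By:" in msg  # e.g. "Verified-By: ran tests + read the diff"

    if ai_assisted and not verified:
        print("Commit is marked AI-Assisted but has no Verified-By trailer.")
        print("Say how you checked it (tests, docs, manual run) before it lands.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(check_commit_message(sys.argv[1]))
```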

The Uncomfortable Truth

We're not just changing how we code. We're changing how knowledge compounds.

Stack Overflow was annoying. The gatekeeping was real. The "marked as duplicate" culture was hostile. As Vinicius perfectly captured:

"I started learning Linux in 2012. Sometimes I'd find an answer on Stack Overflow. Sometimes I'd get attacked for how I asked the question. Now I ask Claude and get a clear, patient explanation. The communities that gatekept knowledge ended up training the tools that now give it away freely."

Hostile experts created the dataset for patient machines.

But Stack Overflow was PUBLIC. Searchable. Evolvable. Future developers could learn from our struggles.

Now we're all having the same conversations in private. Solving the same problems independently. Building individual speed at the cost of collective memory.

Sophia Devy said it best:

"We're mid-paradigm shift and don't have the language for it yet."

That's exactly where we are. Somewhere between the old way dying and the new way emerging. We don't know if this is progress or just... change.

But the current trajectory doesn't work long-term.

If knowledge stays private, understanding stops compounding. And if understanding stops compounding, we're not building on each other anymore.

We're just... parallel processing.


Huge thanks to everyone who commented on my last article. This piece is basically a synthesis of your insights. Special shoutout to Vinicius, Ben, Ingo, Amir, PEACEBINFLOW, Pascal, Mike, Troels, Sophia, Ali, Maame, webketje, Doogal and Peter for sharpening this thinking.

What's your take? Are we headed for knowledge collapse, or am I overthinking this? Drop a comment - let's keep building understanding publicly.

Top comments (151)

Richard Pascoe

To step outside of coding for a moment - I recently read that we’re already at the point where nearly 50% of all internet traffic is AI-generated. Take a second and let that sink in.

There’s a real risk here. If AI is allowed to endlessly consume information without boundaries, we eventually end up with an internet that’s mostly feeding on itself - AI trained on AI-generated content, over and over again. A snake eating its own tail.

No matter where you land in the AI debate - whether you’re excited by the commercial potential or worried about the environmental cost - it’s hard to ignore what’s at stake. The loss of human creativity doesn’t just change how we use the internet; it hollows it out from the inside.

At that point, we don’t just lose originality. We risk losing the internet as something alive.

Ingo Steinke, web developer

The "dead internet" is only accelerated (exponentially, however) by LLM-based generative AI, but there were real people producing sloppy spam content before AI took their jobs. Algorithms lured people into hate speech spirals and recommendation rabbit holes to maximise clicks and engagement before AI already.

Maybe that's not a risk at all, while still a waste of resources, if we focus and filter. There are millions of bad books that I don't need to read, millions of bad coffee shops that I'll never visit. Millions of questions that I could ask AI but never will.

We won't lose the internet as something alive, we'll have to reinvent and rediscover the good aspects we loved about Web 1 (originality, imperfection, USENET, what else?) and Web 2.0 (instant interaction, user generated content and social media platforms before everything went too commercial) and maybe even Web3 (the ideas of decentralization, independence and forgery-proof, not necessarily built with crypto and blockchain though) and the discussions like this one about AI, DEV, StackOverflow, Wikipedia and how to continue collaborating as developers committed to finding facts and best practices.

Richard Pascoe

True enough, Ingo, and I really appreciate your take on this. I suppose it could be said that spammy "human" content was easy to recognise and ignore compared to AI-generated material. However, that doesn't take away from any of your points - which are well made.

The fact of the matter is that nothing has been set in stone, as of yet. We can decide how we use AI, as much as we can decide what we wish the internet to look like.

For myself, I've realised that if I honestly feel a certain way about the current situation then I should support Wikipedia beyond a donation, the same way I should become a member of the EFF. I'm already starting to migrate away from the algorithm-led nature of Big Tech as much as I can.

Daniel Nwaneri

this is it. concern into action

wikipedia support, EFF membership, migrating from big tech. concrete, not hand-wringing.

im doing similar. writing publicly on dev.to instead of private notes, publishing OSS, documenting reasoning not just solutions

maybe individual answer. conscious choice to contribute back even when less efficient than AI privately.

commons survives if enough people make that choice.

appreciate you actually doing something

Richard Pascoe

Exactly this, Daniel. If you have strong feelings about something, do you sit back or do you take that first small step - with the hope it leads to another, and another?

This isn't a one-size-fits-all solution but if you feel strongly enough to want to do something then do something - for your own peace of mind if nothing else.

Daniel Nwaneri

exactly. action beats paralysis.

writing these articles is my version. documenting reasoning publicly instead of keeping it private.

small steps compound if enough people take them.

appreciate you being vocal about this. other people reading might not comment, but seeing someone actually commit to action (wikipedia, EFF, fediverse migration) makes it feel possible instead of just theoretical. leadership by example.

Daniel Nwaneri

you're right that this might be a first world problem when the world has bigger issues. but i'd argue: developer knowledge infrastructure affects ALL software, including systems that DO address real world problems.

bad AI-generated code in healthcare systems? financial infrastructure? critical infrastructure? knowledge collapse has real-world consequences.

your point about SO already having flaws is fair. outdated answers, reputation bias. but those are curation problems we COULD fix. model collapse from AI training on AI is systemic.

love the "reinvent best of web 1/2/3" vision. decentralized knowledge commons without crypto overhead. public reasoning without gatekeeping.

maybe thats the answer - new platforms designed for AI era that keep web 1 authenticity with web 2 collaboration

what would that look like practically?

Richard Pascoe

Part of what draws me to the Fediverse is more than privacy - I wonder if moving away from major social media platforms and Big Tech toward a more decentralized internet could help us recapture the internet of old. I don’t know, but I’d love to see.

Daniel Nwaneri

fediverse makes sense for this. decentralized, community-owned, no algorithmic manipulation.

wondering if we need something similar for developer knowledge. not just social but structured Q&A on federated servers.

imagine. local instances for specific communities (rust, cloudflare, etc) that federate for discovery but each community controls moderation/curation

might solve both problems. keep knowledge public while avoiding single corporate owner who can enshittify

have you seen any attempts at this? or is it purely theoretical still?

Richard Pascoe

Well, nothing that I am aware of myself. It does seem to be a possible solution though, I agree. What I have read recently are opinions on how Stack Overflow could have avoided such a steep drop in traffic - the inclusion of a beginner-friendly Q&A section that didn't have to be included in the overall "tome of knowledge", or that they could have moved more towards a Wikipedia style format.

Whether you have personally hit a brick wall with Stack Overflow or not, it can often appear as a hostile environment to many developers starting their journey. I was going to post about Stack Overflow this morning but felt it overlapped with some of the points you had already raised, so I posted a discussion piece on Godot Engine instead. I do plan to publish it early next week though - keep a look out for it!

Daniel Nwaneri

love that youre writing the SO piece. this is exactly how knowledge should compound. you build on what im exploring, add your angle, community learns more.

the beginner-friendly section idea is smart. SO's problem wasnt just hostility, it was mixing "canonical reference" with "help newbie debug." different goals, same platform.

publish your piece and tag me. id love to see your take on what SO could have done differently.

also if you find any attempts at federated dev knowledge platforms, let me know. feels like the right direction but needs someone to actually build it.

Richard Pascoe

Will do, Daniel. No problem!

In regard to the Fediverse, I wonder if Mastodon servers such as Fosstodon could help foster a knowledge sharing platform? Just a thought...

Daniel Nwaneri

fosstodon is interesting starting point. already has dev community and fediverse architecture.

challenge: mastodon optimized for conversation not curation. no voting, no accepted answers, search is weak.

but maybe that's solvable? build Q&A layer on top of activitypub protocol?

imagine: mastodon for discussion + separate fediverse app for structured Q&A that federates with mastodon. best of both

someone probably needs to just BUILD this. open source, activitypub-based, community-owned stackoverflow alternative.

feels more realistic than hoping SO reforms or waiting for corporate solution.

you thinking about building or just observing?

Richard Pascoe

Observing and, potentially, supporting would be more accurate with my current knowledge base.

Of course, you're right about Mastodon being optimised for conversation over curation but the ActivityPub layer itself could be part of the solution perhaps?

Either way, it's within servers like Fosstodon where the knowledge resides - experienced people passionate about open source. Maybe they just need an alternative platform to Stack Overflow to share that same experience?

Daniel Nwaneri

been thinking hard about this since you mentioned it.

im going to build it. or at least start.

technical path is clear. activitypub Q&A server, voting layer, federates with mastodon. open source from day one.

going to write the spec as article, then build minimal prototype. if fosstodon community is interested, we iterate together.

appreciate you pushing on this. sometimes you need someone to say "this should exist" before you realize youre the one to build it.

ill keep you posted. might need your help rallying the fosstodon folks when its ready.

Richard Pascoe

Sounds like a plan, Daniel. I'm sure other DEV members would be willing to lend a hand to the project too! Will do - best of luck!

Daniel Nwaneri

50% AI-generated internet traffic is terrifying for training implications.

stack overflow (78% drop), wikipedia buried. that's the visible part. but most of it is invisible. content farms, synthetic responses.

your phrase "internet as something alive" hits hard. alive because humans are messy, opinionated, creative. AI smooths that into... efficient noise?

scariest part. we wont know when we cross from "mostly human" to "mostly AI"

already happening. most people dont see it.

Cesar Kohl

This is the end of the world as we know it. The inevitable future is the immediate discredit of all digital content.

"How can I trust this specific knowledge that is being shared is truly reasonable? Does it really make sense? Who wrote this? Who is this person? What are his/her credentials?"
The answer is, go figure out for yourself.

And then, once again, I'll need to go back to Wikipedia, books written before 2022, and double check everything.

IMO, AI, just like social media feeds, decreased human progress.

Richard Pascoe

You’re absolutely right, Cesar - we’re seeing this play out across every sphere: technology, politics, healthcare, and beyond. It certainly makes the three years’ worth of technical guides sitting on my external hard drive feel a lot less like a waste of time… though, in the end, time will tell!

Outside of the AI debate, it’s disheartening to see how the internet itself has changed - it now seems to thrive on constant contrarianism, which is why spaces like DEV remain so valuable.

Affable Shamik

True. And your words describe it exactly as it should be. We're lucky to have devs like you.

Richard Pascoe

Thank you for your lovely comment, Affable. It means a lot!

I understand AI - even in its current form - could end up being a useful tool, but the way it is being leveraged with such a huge dose of FOMO is disappointing at best and utterly depressing at worst - particularly with concerns over LLM training going largely unheard and the resulting slop very much unchallenged.

Affable Shamik

About all you said, about us not questioning AI and using it without any constraints, it really felt like someone talking about me. Till now I couldn't understand why I felt lost, because I let AI models guide me wherever they liked. After 2 months of writing code that the LLM guided me towards, implementing it without instinctively questioning it, it eventually broke, and worst of all, I asked the same LLM "what now" again ........

How does someone like me develop systems thinking? I am very compulsive in the sense that I always question if what I did was the best solution to a problem. Take a silly example: classes vs functions for some problem. I get stuck around this and continue making optimizations, which makes it more difficult for me to question an AI model and thus depend on it more and more. If I could get some insights, I'd really be grateful.

Richard Pascoe

I think @maame-codes put it best in a comment on one of her posts: for senior developers, AI acts as a force multiplier because they’re well-positioned to spot errors in AI-generated code - the so-called “hallucinations.”

For junior developers, or those just starting out, a lot of people in the field are realising that it’s still best to build strong foundations first. Learn and understand the basics before leaning on AI as a tool. Without that, there’s a real risk of never fully understanding what you’ve “vibe coded.”

That said, there are plenty of folks here on DEV who can offer much deeper technical insight than I can — but I hope this perspective helps, even in a small way.

Daniel Nwaneri

breakthrough moment. recognizing the trap is step one.

"let AI guide 2 months → broke → asked same LLM" = Below the API.

for "classes vs functions" paralysis:

dont ask AI to decide. ask AI for tradeoffs, YOU decide based on context.

ujja's approach: treat AI like confident junior. helpful but needs review

systems thinking builds through:

  • maintain your own code 6mo later (feel pain)
  • ask "what makes this wrong?" not "does this work?"
  • trust your questioning over AI confidence

start small: this week, deliberately choose differently than AI suggests once. understand why. build the muscle

youre already questioning. thats the foundation.

Richard Pascoe

Couldn't have said it better myself, Daniel!

Affable Shamik

Means a lot. Thank you.

leob

Fair points - in my opinion what we REALLY need to (MUST !) keep are (1) Wikipedia and (2) Stackoverflow ...

"Everyone's celebrating that AI finally killed the gatekeepers" - that's a funny statement, what exactly is "everyone" (?) celebrating - the "demise" of some 'cocky' know-it-all people on Stackoverflow?

I've heard people complaining about that, but it's not something that has ever bothered me ...

Daniel Nwaneri

fair pushback on "everyone" . youre right thats overstated.

what i meant. theres a vocal contingent celebrating SO decline as karma for the "marked as duplicate" culture. but youre right that not everyone had negative experiences.

the gatekeeping thing wasnt my main point though. whether SO was hostile
or helpful, the real issue is. if it dies (78% traffic drop is real data), what replaces it?

private AI chats dont have the same properties. searchable, evolvable, publicly curated. thats the loss im worried about, not the personality of the answerers.

curious.you say we MUST keep wikipedia + SO. how do we do that when AI makes contributing feel redundant? genuine question

Ingo Steinke, web developer

StackOverflow would eventually experience another kind of knowledge collapse due to outdated information occupying the top answer spots and answers by long standing seniors getting upvotes just because of their reputation. The gatekeeping was an effective spam filter, and it made me draft numerous questions that I never posted because I found the answer myself while refining a minimal reproducible example. But StackOverflow's (and other communities') gatekeeping also made a lot of valuable data get discarded just because people had other priorities than making an effort to solve their issues in public.

leob

I've never had any negative experiences on SO, maybe it also depends on people's attitude? People who say:

"a vocal contingent celebrating SO decline as karma"

are peevish, resentful and bear a narrow-minded grudge :-)

Your point about the value and necessity of original content (SO and Wikipedia, and much more) is spot on ... I hope (and honestly I expect) that SO and Wikipedia (and similar community-driven sources) will survive!

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

ha fair. the people celebrating SO decline are probably louder than they are numerous.

youre right that attitude matters. respectful questions got better SO treatment. but the reputation (deserved or not) scared people away.

your optimism is interesting though. what makes you think SO/wikipedia survive when 78% traffic drop is real?

maybe people who value these platforms keep contributing even as casuals move to AI? quality over quantity?

id love to be wrong about collapse trajectory.

Thread Thread
 
leob profile image
leob

I guess it's the fact that there's still 22 percent left? Yeah and maybe "quality over quantity" - the "hard core" people won't walk away ...

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

interesting take. maybe the 22% who stayed are the actual contributors and the 78% who left were just consumers?

if thats true it could work. wikipedia survives on tiny fraction of editors while millions read.

but heres the problem. even hardcore contributors need NEW questions to answer. if juniors are asking AI instead of posting on SO, where do the questions come from?

and without fresh questions, do experienced devs stick around? or does it become an archive instead of a living knowledge base?

Curious. can a platform survive on just the hardcore 22% if the pipeline of new questions dries up?

Thread Thread
 
leob profile image
leob

Well your concerns seem valid ... I don't know if the smaller "volume" will be enough for SO to survive, but I certainly hope so!

Next breakthrough for AI would be if it can "invent" something by itself, pose new questions on SO, autonomously write blog posts or create other content, instead of only cleverly regurgitating and recombining what's been fed to it ...

I guess that would be what they call "AGI" (artificial general intelligence), and actually that's when it might get really scary for us humans, so let's be careful what we wish for ;-)

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

the AGI question is the real fork.

scenario 1: AI stays sophisticated recombinator. knowledge collapse poisons training data. we're screwed.

scenario 2: AI achieves invention. knowledge collapse irrelevant but... humans might be too?

uncle bob said "AI cant hold big picture or understand architecture." maybe invention REQUIRES that.

but if AI gets there... yeah, scary.

betting on "AGI will save us" feels risky when we're already seeing collapse.

Thread Thread
 
leob profile image
leob

Correct analysis - but what's the solution? Are the "AI big boys" (big tech) actually (explicitly) aiming for AGI - which would have "(super)human" capabilities? I think that would really be a bridge too far - governments might need to step in (not counting on Trump obviously, lol) ...

Thread Thread
 
dannwaneri profile image
Daniel Nwaneri

big tech explicitly aims for AGI. openai's mission, anthropic's charter, deepmind's goal

solution by timeline:

short: preserve commons deliberately. platforms rewarding public reasoning not just answers.

mid: regulatory guardrails on training data. EU might require disclosure if training on AI content. US wont.

long: if AGI emerges, irrelevant. if not, need intact commons.

maintain commons as insurance while hoping AGI makes it unnecessary.

imperfect but better than assuming AGI solves everything.

Maame Afua A. P. Fordjour

I’ve noticed that the friction of a broken script or a confusing doc is actually what forces me to understand the 'why.' When an AI gives a confident, polished answer, it’s tempting to skip that doubt step entirely. Developing that judging layer you mentioned feels like the most important thing I can focus on right now. Great follow-up piece!

Daniel Nwaneri

this is it exactly.

friction teaches the "why" accidentally. smooth AI answers skip straight to "what" and we miss the foundation.

the fact that you're consciously building that judging layer puts you ahead of most devs who just optimize for speed without realizing what they're losing.

curious. when you catch AI being confidently wrong now, does it make you more skeptical of future answers? or do you still have to fight the temptation to trust it?

Maame Afua A. P. Fordjour

To be honest I will never trust any AI tool a 100%, I personally think if you know & understand what you are doing, it's a great online assistant (that's when you are able to tell when it makes mistakes and not follow it blindly..) but aside that, depending on it a 100% is scary and would definitely cause more harm than good in the long run for anyone's personal growth

Daniel Nwaneri

this is the core tension.

how do you GET to "know & understand what youre doing" if AI is your primary learning tool?

experienced devs like john h (comments) use AI well because they already have context. they can verify. juniors starting today dont have that foundation.

stack overflow forced skepticism through pain. AI doesnt. so can we teach "healthy doubt of AI" explicitly? or does it require the hard-won experience you already have?

might be the real divide. learned before AI vs learned with AI.

Maame Afua A. P. Fordjour

That's why I personally don't use AI as a primary learning tool (I accompany it with accredited resources after I have some vast knowledge), because it could always give you the wrong information, I usually just read books on topics I am learning. So after I have an idea of what I am doing, then I can use ai as an assistant / more or less a 'super search engine'. Personally, I learned the hard way of learning things the old school way (reading actual books and accredited online resources that have been written by developers & people with years of experience). That is helping me more in my learning journey than solely depending on ai to do the work for me. Because the moment ai goes downhill , those who depended FULLY on it will have zero value... these are my personal views on the topic in general :)

Daniel Nwaneri

this is the model that works.

foundation first (books, docs) → then AI as assistant. not the other way around.

the problem. juniors today see everyone using AI and skip straight to it. they never build the foundation that lets you verify.

youre doing it right because you learned the hard way. question. can we teach juniors your approach? or does it require getting burned first?

if verification skills require pain to learn, we're in trouble.

Maame Afua A. P. Fordjour

To be honest, I am still learning myself (junior level), but I got loads of advice from some really good developers who have been through the old school system (without AI). So I have been following their advice in doing so, and it has helped my personal growth because I am able to understand the technical aspects of most things now, as compared to using AI. I think everyone just needs to do what would help their personal growth, since we all learn in different ways :)

Daniel Nwaneri

wait. youre a JUNIOR but learned from devs who came up without AI.

so its not experienced vs junior. its mentored vs unmentored.

youre inheriting their verification habits. thats the transmission mechanism.

scary question. in 5 years when most seniors also learned with AI, who teaches juniors to be skeptical?

right now theres enough pre-AI devs to mentor. that window is closing.

youre lucky you found good mentors.

Maame Afua A. P. Fordjour

Mentorship is so important to me in my learning journey and I appreciate my mentors a lot

Ingo Steinke, web developer

We'll probably look back at the 2010s and early 2020s as the golden age of knowledge and open data unless we manage to change society's course. But maybe that's a temporary first world problem: knowledge curation might recover after a massive collapse of quality, and the real world problems aren't how to find the right words and details but rather taking action in society and politics, stopping war and terror and helping people beyond our digital bubble.

Thanks for your thoughtful article. While I'd like to see AI fail due to model collapse, I should better hope that we can somehow fix its inherent flaws and that the next generations will know how to use AI and when to distrust it, just like nobody would flee a cinema screaming in fear when a steam locomotive approaches the camera in black and white, or panic when a fictitious audio book about a martian invasion plays on the radio.

Fernando Fornieles

Brilliant! I wrote about this some months ago but you have explained it with much more detail.
dev.to/nandofm/ai-the-danger-of-en...

What we will get in the end is rotten knowledge, because it won't be fed with new and fresh ideas.

Daniel Nwaneri

just read yours. "entropy in knowledge" perfect framing. same conclusion, different angles.

"rotten knowledge" = "knowledge collapse" - same mechanism

appreciate generosity on execution. feels like building toward something

since you published months ago, seen any solution attempts? platforms preserving public knowledge? or just more acceleration?

would love to collaborate exploring this further.

Fernando Fornieles

To be honest I only see acceleration. Maybe we need some kind of Foundation (like Asimov's) and/or a place where genuine content can be created and discussed, the fediverse perhaps? AI generated content is everywhere, I'm not optimistic.

Daniel Nwaneri

exactly where im heading.

richard (in comments) committed to help build federated Q&A on activitypub.
same conclusion you reached.

asimovs foundation perfect metaphor. preserve knowledge through dark age. but BUILD it not hope for it.

next 2 articles. what stays "above the API" when AI codes, then building federated stackoverflow.

youre right. waiting for platforms to fix themselves = pessimism justified.

but if we BUILD alternative...

want to be involved? need people who've thought about this beyond hype cycle.

Fernando Fornieles

I recently closed my private social media accounts and moved to the fediverse. Apart from that I'm building my own cloud server at home with Nextcloud on a Raspberry Pi. These are my little actions to avoid the "enshittification" of the Internet. Not too much, because family also deserves my time, but at least I'm doing what I can.

In any case, your idea seems interesting, not sure if I could contribute but I would like to know about the idea/project :-)

Daniel Nwaneri

this is exactly the kind of builder we need.

youre not just talking. youre DOING (fediverse migration, nextcloud, raspberry pi infrastructure).

the federated Q&A idea: activitypub-based stackoverflow alternative. questions/answers federate across instances. community-owned, open source.

richard committed to help. now you. that's enough to start.

going to write the spec as next article (after "above the API" piece). then build prototype weekend after.

can you join a small group chat to sketch architecture? just richard, you, me for now. keep it tight until we have working prototype

family time matters. this is volunteer/passion project, not job. we build what we can when we can.


spO0q

AI changes the way we search and make our way to maintainable code bases and sustainable knowledge.

You can't take what it says at face value. You'll likely fail if you do, but that's kinda the same with existing human misinformation.

I'll keep the skeptical approach, regardless of the source.

A bigger issue, though, as you frame it, could be the death of various ecosystems.

It's like AI platforms, which are more or less the platforms of the unstoppable Tech giants, are reproducing the same mistakes with bigger weapons: the classic "sawing off our own branch."

Daniel Nwaneri

that image perfectly captures it. sawing off our own branch.

skepticism works for misinformation (human or AI). but "death of ecosystems" is bigger threat.

SO dying isnt just "less accurate".its loss of platform where collective refinement happened.

tech giants consolidating. own models, training data, deployment. replacing public commons with private capital.

perfect visual. mind if i use in follow-up about building alternatives?

spO0q

thanks for asking, realized this visual was probably not free to use, but the idea remains valid ^^.

Ben Sinclair

"But here's what we're not asking"

We're definitely asking that. We've been talking about it for a good couple of years by this point.
The problem is that the AI hype machine steamrolls everything. Too many people don't care, and will never care.

ujja

Great article. Really resonates. My approach is kind of zero-trust reasoning. I start by assuming any answer, AI or human, could be wrong. From there, I interrogate, verify, and cross-check before I act on it. It’s a bit more work upfront, but it’s the only way I’ve found to use AI safely without amplifying confident wrongness.

Feels like the key skill going forward isn't just how to find answers but how to doubt them intelligently.

Daniel Nwaneri

"zero-trust reasoning" is perfect framework.

this is what ben santora calls keeping human as "judge". assume AI could be wrong, verify actively.

the "doubt intelligently" skill is key. paranoid rejection, but informed skepticism.

question. how did you develop this? mentorship? getting burned by AI errors? or just natural disposition?

curious because if this skill requires pain to learn, we're in trouble.

ujja

Honestly I learned it the hard way 😅
Mostly through dev work where I trusted AI a bit too much, moved fast, and only realized days later that a core assumption was wrong. By then it was already baked into the design and logic, so I had to scrap big chunks and start over.
That kind of experience changes how you think.
After a few rounds of that, you stop asking does this sound right and start asking what would make this wrong. It is not about distrusting AI, just treating it like a very confident junior dev. Super helpful, but needs review.
I do not think pain is required, but without some kind of feedback loop like wasted time or broken builds, it is hard to internalize. AI removes friction, so people skip verification until the cost shows up later.
So yeah, not paranoia. Just learned skepticism.

Daniel Nwaneri

the mechanism exactly.

learned through pain but lesson was feedback loop not pain itself.

"what would make this wrong" vs "does this sound right" = solver to judge shift.

"confident junior dev" perfect framing

juniors today wont get burned because AI removes friction. cost shows later (production). by then someone elses problem.

how teach "learned skepticism" explicitly? build friction back in? make review mandatory? wait for burns?

article 3 territory. practical verification skills.

appreciate learning path share

Julien Avezou

Thanks for sharing this article. Got me thinking on a lot of topics. How we are losing our authenticity in the way we communicate as we regurgitate the same knowledge sources. Over time there need to be more choices of models, in both their design and their source. Too many models are trained at a corporate level. We also need models trained by governments to counteract incentives and produce richness in alternatives. The Swiss produced their national government model recently and it looks promising.

Daniel Nwaneri

hadnt considered model diversity as defense against homogenization.

if all models train on corporate data with profit incentives, we get value convergence not just output convergence.

swiss government model interesting. public infrastructure, different optimization.

but question. does government AI solve knowledge collapse or just diversify AI layer? still need humans contributing novel experiences.

maybe government models + federated knowledge platforms. public AI on public knowledge, both community-owned.

Julien Avezou

Agreed. I think one way is through different incentive structures: more people would be inclined and/or nudged to contribute novel experiences if it were framed differently. I find your suggestions in the last part interesting to consider.

Daniel Nwaneri

incentive framing is key.

SO worked because reputation. what makes someone publish AI reasoning when private is faster?

government models might change default. contributing becomes civic act not just personal branding.

"your tax dollars fund this AI, help train it"

different motivation than corporate reputation.

exploring in next piece. sustainable commons incentives.

Julien Avezou

exactly yes. looking forward to the next piece

Charan Koppuravuri

I've watched this play out: AI excels at "solver" tasks (code gen) but fails "judge" roles without human scars from production failures. The real loss? No visible reasoning chains showing why solutions evolve — outdated SO answers had timestamps/debates; AI chats reset to zero.

Mitigation Strategies:

Publish reasoning paths: AI draft → verify → post full prompt chains + rejections on dev.to/GitHub Discussions. Turns private wins into evolvable docs.

Build verification rituals: Always ask "What makes this wrong?" post-AI. For architecture/security (expensive verification), mandate peer review before commit.

Federated Q&A platforms: ActivityPub-based SO alternatives—community Q&A federates across instances, dodging corporate enshittification while keeping curation.

Simple Team Fix:

Add to CONTRIBUTING.md: "AI-generated? Include rejection reasons + human verification." Forces judgment layer, feeds clean data back.

This preserves the commons without rejecting AI productivity. The human judge stays essential—let's rebuild curation around that. Thoughts on federated platforms?

Richard Pascoe

Couldn’t have said it better myself, Charan. The erosion of actual reasoning is a huge part of the problem - especially when so much online interaction is already being generated by AI.

I was reading just yesterday about how open-source projects are really starting to suffer. AI-generated contributions are often rejected due to errors, which ends up consuming a lot of maintainers’ time. But the bigger issue is retention: many of these contributors aren’t sticking around. They get the green square on their GitHub contribution graph and move on.

As a result, a lot of projects are seeing a real drop-off in people who actually stay, learn the codebase, and contribute meaningfully over time.

Charan Koppuravuri

Spot on — the maintainer bottleneck is brutal. Recent data shows AI-generated PRs spiking churn (code reverted <2 weeks) while dropping reuse, turning repos into "itinerant contributor" graveyards.

Core issue: AI lacks project context, submits plausible-but-breaking changes. Maintainers drown in noise; real contributors bail when interaction feels AI-faked.

Practical mitigations:

  1. Repos adopt "AI-Generated?" labels + mandatory human review checklists (e.g., "Context verified? Tests pass edge cases?").
  2. Tools like GitClear flag churn-prone PRs pre-merge.
  3. Contributor tiers: Verified humans get priority lanes.

This preserves signal. Seen projects implementing successfully!

Richard Pascoe

I was about to write a reply along the lines of, “Oh, well, at least OpenAI and the rest are making money - right?!” complete with a healthy dose of sarcasm.

But even a cursory bit of research shows that the only company actually profiting from the AI bubble is Nvidia - for fairly obvious reasons.

OpenAI itself isn’t profitable. It’s spending vastly more on R&D, computing infrastructure, and staff than it earns, and the prevailing assumption is that it will continue posting losses until at least 2028.

Anthropic may reach break-even sometime around 2027 or 2028. Microsoft doesn’t break out AI "profits" separately, and whatever profit Google is making from AI largely comes from folding it into existing, already-profitable products rather than from AI as a standalone business.

The reason I mention any of this is that, in the race toward an eventual unicorn payday, AI development has been left with remarkably few boundaries - and as a result, undeniably useful efforts like open-source projects are suffering immeasurably.
