
David Aronchick


Emergence vs. Engineering: The Industry Just Bet Against the God Model

On Monday, OpenAI, Anthropic, Google, Microsoft, and AWS jointly donated their agent infrastructure to the Linux Foundation. If any of them actually believed a single model would achieve AGI in 2-3 years, this would be the dumbest move in corporate history.

You don't standardize the plumbing when you're about to build God.

The Agentic AI Foundation launched with three projects: Anthropic's Model Context Protocol (MCP) for connectivity, Block's goose for execution, and OpenAI's AGENTS.md for instructions. Together they form a complete stack for building composable AI systems: many specialized tools working through standard interfaces.

This isn't a technical footnote. It IS a recognition that no one is going to be able to do it all themselves.

For many MANY years, we've tried to engineer general intelligence from first principles. The results are impressive but bounded. This week, you could argue, the AI industry quietly bet on a different approach: letting intelligence emerge from simpler components.
The Physics Problem
Tim Dettmers published "Why AGI Will Not Happen" the day after the MCP announcement. His argument is remarkably clear.

"Computation isn't abstract. It happens in silicon, constrained by the speed of light, thermodynamics, and the square-cube law. Moving global information to local neighborhoods scales quadratically with distance. Memory becomes more expensive relative to compute as transistors shrink. "If you want to produce 10 exaflops on a chip, you can do that easily," Dettmers writes, "but you will not be able to service it with memory."

GPUs maxed out their performance-per-dollar around 2018. The gains since then came from one-off features: 16-bit precision, Tensor Cores, HBM, 8-bit quantization, 4-bit inference. Those tricks are exhausted. Dettmers estimates maybe one or two more years of meaningful scaling improvements before we hit the wall.

The transformer architecture itself is already near physically optimal. There doesn't appear (BUT I HAVE BEEN WRONG MANY TIMES BEFORE) to be a clever redesign waiting in the wings to unlock another order of magnitude.

Superintelligence? Fantasy. Recursive self-improvement still obeys scaling laws. An AI improving itself faces the same diminishing returns as engineers improving it externally. You're filling gaps in capability, not extending the frontier.

If you can't engineer your way to general intelligence through scale, what's the alternative?

The same thing that produced intelligence in nature: emergence.
More Is Different
In 1972, physicist Philip Anderson published "More is Different" in Science. It became one of the most cited papers in complexity research and helped establish the Santa Fe Institute.

Anderson's argument was profound: reductionism doesn't imply constructionism. You can break a system down into its fundamental parts, but knowing those parts doesn't let you predict or reconstruct the behavior of the whole. "At each new level of complexity," he wrote, "entirely new properties appear."

Consciousness isn't hiding in neurons. Traffic patterns don't exist in individual cars. The economy isn't a property of any single transaction. These phenomena emerge from interactions between simpler components, and they can't be predicted or engineered from first principles.

This isn't mysticism. It's how complex systems actually work.

The Santa Fe Institute defines emergence as "properties at one scale that are not present at another scale." Complex adaptive systems share common features: many agents, each intelligent and adaptive within their domain, none possessing complete information about the whole. Global patterns arise from local interactions without central control.

You don't engineer emergence. You create conditions for it.
The Ant Colony Test
Deborah Gordon at Stanford has spent decades studying ant colonies. Her description of individual ants is memorable: "I probably wouldn't hire them."

And yet collectively, ants build complex nests, find food sources efficiently, coordinate defense, and adapt to changing environments. Zero central control. The queen doesn't manage; she reproduces. As Gordon puts it, "Tasks allocate workers, rather than a manager allocating tasks to workers."

The mechanism is stigmergy: coordination through environmental modification. Ants leave pheromone trails that influence other ants' behavior. Simple rules at the individual level (follow strong trails, lay pheromone when successful) produce sophisticated collective intelligence.
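Here's a toy version of that loop, just to show how little machinery it takes. Everything in it is an invented illustration, not Gordon's model: two paths, pheromone that strengthens on success and evaporates over time, and ants that follow stronger trails more often.

```python
import random

# Toy stigmergy loop (an illustration, not Gordon's model): ants choose a path in
# proportion to its pheromone level, successful trips lay more pheromone, and the
# trails evaporate a little each round. No ant knows which path is "best".
pheromone = {"short": 1.0, "long": 1.0}
success_rate = {"short": 0.9, "long": 0.4}   # assumed odds a trip on each path finds food

for _ in range(200):
    total = sum(pheromone.values())
    path = random.choices(list(pheromone),
                          weights=[pheromone[p] / total for p in pheromone])[0]
    if random.random() < success_rate[path]:
        pheromone[path] += 1.0               # local rule: reinforce what just worked
    for p in pheromone:
        pheromone[p] *= 0.98                 # evaporation keeps the colony adaptable

print(pheromone)  # the colony converges on the short path with no central planner
```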

Gordon draws the parallel explicitly: "In many ways, understanding the behavior of ant colonies could teach us about the way billions of relatively simple neurons work together in our brains."

The brain follows the same pattern. Neurons aren't conscious. They fire or don't fire based on local inputs. Consciousness emerges from billions of these simple interactions. There's no central "intelligence unit" directing traffic, no homunculus watching the show.

Decentralized control. Simple rules. Local interactions producing global behavior. Resilience through redundancy. Adaptation without central planning.

This is how nature builds intelligence. Not by engineering a god, but by enabling a swarm.
The Pattern Repeats
The internet works the same way.

David Clark's 1988 paper on DARPA's design philosophy reveals remarkably minimal assumptions: the network can transport a datagram with reasonable, not perfect, reliability. That's it. Everything else emerges from endpoints following simple protocols.

TCP/IP split responsibility deliberately. Keep IP simple and flexible. Push complexity to the edges. "Fate-sharing" keeps connection state at the endpoints that care about it: intelligent endpoints, dumb pipes. The result: a decentralized system that scaled beyond anyone's imagination and survives failures that would destroy centralized alternatives.
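A minimal sketch of that division of labor, under an assumed 30% packet loss: the "network" below only tries, and reliability is reconstructed entirely at the endpoint.

```python
import random

# Minimal end-to-end sketch: the "network" drops ~30% of datagrams (assumed), and
# reliability lives entirely at the endpoint, which just retries until delivery.
def flaky_network(datagram, loss_rate=0.3):
    return datagram if random.random() > loss_rate else None

def reliable_send(datagram, max_retries=10):
    for attempt in range(1, max_retries + 1):
        if flaky_network(datagram) is not None:
            return attempt                    # delivered; the endpoint did the work
    raise TimeoutError("network never delivered the datagram")

print(reliable_send({"seq": 1, "payload": "hello"}), "attempt(s)")
```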

Unix philosophy follows the same template. Ken Thompson and Doug McIlroy: "Make each program do one thing well. Expect the output of every program to become the input to another." Small tools, standard interfaces, emergent capability from composition.

Nobody said "let's build one giant program that does everything." That was the mainframe mentality, and it lost.

I watched this pattern win with Kubernetes. We didn't build bigger VMs. We built smaller containers with standard interfaces and let orchestration handle the complexity. The sophisticated behavior emerged from composition, not from engineering a monolith.
What MCP Actually Means
The MCP donation makes sense through this lens.

With 97 million monthly SDK downloads and adoption by Claude, ChatGPT, Gemini, Microsoft Copilot, Cursor, and VS Code, MCP has become TCP/IP for AI agents: the standard protocol for connecting models to tools, data, and services.
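Concretely, connecting a model to a tool over MCP looks roughly like this. The sketch uses the official Python SDK's FastMCP helper as I understand its current surface (pip install mcp); treat the exact names as assumptions and check the project docs.

```python
# Sketch of an MCP tool server using the official Python SDK (pip install mcp).
# FastMCP and the decorator below reflect my reading of the SDK; verify against the docs.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather-demo")

@mcp.tool()
def get_forecast(city: str) -> str:
    """Return a canned forecast; a real server would call a weather API here."""
    return f"Forecast for {city}: sunny and mild"

if __name__ == "__main__":
    mcp.run()  # speaks the protocol over stdio; any MCP-capable client can discover and call the tool
```

The point isn't the weather tool. It's that the same small server works unchanged with Claude, ChatGPT, Cursor, or anything else that speaks the protocol.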

David Soria Parra, MCP's lead maintainer: "The main goal is to have enough adoption in the world that it's the de facto standard."

Nick Cooper from OpenAI: "We need multiple protocols to negotiate, communicate, and work together to deliver value for people, and that sort of openness and communication is why it's not ever going to be one provider, one host, one company."

Read that again. OpenAI's own engineer saying it's not ever going to be one company.

When your fiercest competitors agree on a protocol, they're hedging. They're building for a world where no single system wins. They're betting on emergence over engineering.
The Honest Assessment
This doesn't mean AI won't be transformative. It means the path isn't "scale until AGI."

It's: build composable tools, let emergence do the heavy lifting.

Dettmers contrasts the US "winner-take-all" philosophy (betting everything on frontier models) with China's "economic diffusion" approach, integrating AI capabilities throughout the economy. The diffusion strategy doesn't require AGI. It requires useful, composable tools that produce emergent value when combined.

The MCP ecosystem is infrastructure for exactly this. Specialized agents handling narrow tasks, connected through standard protocols, producing collective intelligence that no individual component possesses.
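In code, that composition can be as mundane as narrow workers behind a shared interface, with the interesting behavior living in how they're wired together. A hypothetical sketch, with every agent name invented for illustration:

```python
from typing import Callable

# Hypothetical composition sketch -- every agent name here is invented. Each "agent"
# is a narrow callable behind one shared interface; the collective result comes from
# how they're chained, not from any single component understanding the whole task.
AGENTS: dict[str, Callable[[str], str]] = {
    "clean":     lambda text: " ".join(text.split()),
    "summarize": lambda text: text if len(text) <= 80 else text[:77] + "...",
    "translate": lambda text: f"[fr] {text}",   # stand-in for a real translation agent
}

def run_pipeline(steps: list[str], payload: str) -> str:
    for step in steps:
        payload = AGENTS[step](payload)          # each agent only sees its own step
    return payload

print(run_pipeline(["clean", "summarize", "translate"], "  hello   agentic   world  "))
```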

Ant colonies. Neural networks. The internet. Unix. Kubernetes. Now AI agents.

The pattern keeps winning because it's how complexity actually works.
The Kicker
More than fifty years ago, Philip Anderson argued that you can't construct complexity from simple parts through pure engineering. Emergence requires different tools, different thinking. You don't build intelligence; you create conditions for it to arise.

This week, the AI industry admitted he was right.

When OpenAI, Anthropic, Google, Microsoft, and AWS all agree on something, pay attention. They're not building for a world where one model solves everything. They're building for emergence.

The god model was always a fantasy.

The swarm is real.


Originally published at Distributed Thoughts.
