_Technical satire_
You'd almost want to believe it. One fine morning, Symfony announces its "AI" module, and the whole ecosystem shivers as if the framework had just discovered quantum gravity. But very quickly, scratching beneath the polish, you realize you're not witnessing a technological revolution... but a makeover operation.
A school bus repainted white, decorated with three NASA stickers, and presented as a space shuttle.
Welcome to "Symfony AI," or the subtle art of pretending to be modern.
1. AI Integration Cosplay Style: Fake Chic on Real Emptiness
The AI component offers a ChatModelInterface perfectly DI-friendly, perfectly Symfony. But behind it, what's really there? A nicely wrapped HTTP request, and an object instantiation to make you believe magic is happening.
No serious streaming, no parallelism, no fine-grained token management at high cadence. Just a layer of architectural polish that transforms a simple API call into a sacred ritual.
It's technical cosplay: you dress up as an astronaut, but you stay in the backyard.
Streaming: When the 1980s Tires Explode
In the real world of AI, an LLM takes time to respond — sometimes 10, 20, 30 seconds. So we use streaming (Server-Sent Events) to display words one by one, giving the illusion of fluidity.
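The SSE wire format itself is tiny: each token goes out as a `data:` frame terminated by a blank line. A minimal sketch as a Python generator (the `[DONE]` sentinel is the OpenAI-style convention; the token list is made up for illustration):

```python
def sse_frames(tokens):
    """Wrap each generated token in a Server-Sent Events frame: 'data: <payload>\\n\\n'."""
    for tok in tokens:
        yield f"data: {tok}\n\n"
    # OpenAI-style end-of-stream sentinel so the client knows to stop listening
    yield "data: [DONE]\n\n"

# The browser's EventSource (or a fetch reader) renders these one by one,
# showing "The answer is 42" word by word instead of after 20 seconds of silence.
frames = list(sse_frames(["The", " answer", " is", " 42"]))
```

The format is trivial; the hard part is what the paragraphs below describe: keeping a worker available while the frames trickle out.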
In Python (FastAPI):
- Native, asynchronous streaming
- One worker can handle 100+ simultaneous connections without breaking a sweat
- While OpenAI generates the response, the worker is free to process other requests
- Non-blocking architecture: everything is fluid
In Symfony (classic PHP-FPM):
- Making proper streaming work is already a pain
- Each streaming connection monopolizes one complete PHP worker
- If 50 users are streaming a response simultaneously, your 50 PHP workers are all frozen, patiently waiting for OpenAI to deign to send back a token
- Meanwhile? Your site doesn't respond anymore. Other visitors wait. Monitoring goes haywire.
- This is textbook worker starvation: all your workers are alive but useless, blocking on I/O while your queue fills up and users time out.
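The starvation math can be simulated in a few lines of stdlib Python. The 0.1 s "LLM call", the pool of 4 and the 8 requests are made-up stand-ins for a 20-second completion, a PHP-FPM pool, and a burst of users:

```python
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor

LLM_DELAY = 0.1   # stand-in for a 10-30 s LLM response
REQUESTS = 8      # simultaneous streaming users
POOL_SIZE = 4     # stand-in for a fixed PHP-FPM worker pool

def blocking_llm_call(_):
    time.sleep(LLM_DELAY)   # the worker is frozen here, exactly like PHP-FPM on I/O

async def non_blocking_llm_call():
    await asyncio.sleep(LLM_DELAY)  # the event loop keeps serving other requests

def sync_pool_wall_time() -> float:
    # 8 requests through 4 workers: requests queue up in waves of waiting
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
        list(pool.map(blocking_llm_call, range(REQUESTS)))
    return time.perf_counter() - start

async def async_wall_time() -> float:
    # a single event loop interleaves all 8 waits concurrently
    start = time.perf_counter()
    await asyncio.gather(*(non_blocking_llm_call() for _ in range(REQUESTS)))
    return time.perf_counter() - start

sync_t = sync_pool_wall_time()            # about 2 x LLM_DELAY: two full waves
async_t = asyncio.run(async_wall_time())  # about 1 x LLM_DELAY, regardless of load
```

Scale `LLM_DELAY` to 20 seconds and `REQUESTS` to 50 and the sync pool's wall time is the queue your other visitors are sitting in.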
The school bus doesn't just have NASA stickers. It also has 1980s tires that explode as soon as you exceed 30 mph.
That's when you understand that synchronous PHP architecture was never designed for this. You can apply as much polish as you want, the foundation remains unsuitable.
2. Doctrine: A Ferrari with a Lawnmower Engine
Modern RAG relies on vector operations: cosine distances, ANN indexes, millions of points in memory. Doctrine, on the other hand, relies on PHP object hydration designed in 2009 for SQL relations.
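For scale, the math is trivial per pair but brutal in aggregate. Here is a pure-Python sketch of cosine distance, the operation a vector database runs millions of times in optimized native code, and the operation an ORM hydration pipeline was never built to touch:

```python
import math

def cosine_distance(a: list[float], b: list[float]) -> float:
    """1 - cosine similarity: 0.0 means same direction, 1.0 means orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def brute_force_search(query, corpus, k=3):
    """What RAG without an ANN index degenerates to: score every vector, sort them all."""
    scored = sorted(range(len(corpus)), key=lambda i: cosine_distance(query, corpus[i]))
    return scored[:k]
```

One query against 50,000 documents of 1536 dimensions pushes roughly 77 million multiply-adds through that inner loop; ANN indexes (HNSW and friends) exist precisely so you never score the whole corpus.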
But let's be honest: even for standard plowing — your everyday SELECT * FROM user WHERE active = 1 — Doctrine consumes like an ogre.
The Hidden Cost of "Simple CRUD"
Forced hydration:
It manufactures complete PHP objects with all the machinery (EventManager, UnitOfWork, lazy-loading proxies) just to display three fields in a JSON.
Memory footprint:
50,000 rows? The PHP process takes 400MB and the garbage collector screams. This isn't data management, it's helium inflation.
Subtle N+1:
Even senior devs forget a fetch="EAGER" and suddenly your page makes 47 SQL queries to list users. Doctrine doesn't protect you from yourself, it amplifies your mistakes.
DQL Overhead:
The DQL parser + SQL generator + result set mapping to transform SQL into objects... it's molecular gastronomy to make a sandwich. You wanted SELECT id, name FROM user? Doctrine offers you a ballet of 800 lines of internal code.
The Real Metaphor
You can announce the same power on paper — "millions of entries management, elegant abstraction" — but Doctrine isn't even a robust farm tractor.
It's a garden micro-tractor, with 25 HP, meant to plow flowerpots (your 200-line admin CRUD), that we're trying to pass off as intensive farming equipment.
And here, in AI, we're asking this micro-tractor to plow 50 hectares of 1536D vectors continuously.
Result?
- It melts its clutch (PHP segfault)
- It blows its tires (disk swap activated, server on its knees)
- The driver (the DBA) has to call for help at 3 AM
The metaphor "Ferrari with tractor engine" was already too flattering.
It's a Ferrari with a Honda lawnmower engine.
You can't race the 24 Hours of Le Mans with a block that was designed to mow the lawn.
A Concrete Example That Kills
Let's take a basic RAG chatbot: 50,000 documents, OpenAI embeddings (1536 dimensions), semantic search.
With Qdrant (or Pinecone, or Weaviate):
- Latency: 20-50ms
- RAM: ~2GB for 50k vectors
- Scale: linear up to several million vectors
With Symfony AI + Doctrine:
- Doctrine tries to hydrate thousands of PHP objects to calculate cosine distances
- MySQL (or PostgreSQL) does a full table scan on an embedding column stored as JSON or BLOB
- Latency: 3-8 seconds for a simple query
- RAM: the PHP process explodes to 512MB, then 1GB, then timeout
- The DBA receives an alert at 3 AM and resigns by email
And the worst part? Even if the dev adds a vector index (pgvector on PostgreSQL, for example), Doctrine doesn't know how to generate the specific search operator like pgvector's <->.
The dev has two options:
- Write raw SQL with NativeQuery → the ORM is useless; we just added three layers of abstraction to... write SQL by hand
- Use Doctrine's QueryBuilder → which will generate a slow and inefficient query, completely ignoring the vector index
The abstraction isn't just slow. It's useless. Worse: it's dangerous, because it gives the illusion that you're doing things properly while sabotaging performance.
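To make that concrete, here is roughly what the dev ends up writing by hand once pgvector is in place: a sketch with made-up table and column names, a psycopg-style placeholder, and the `<->` (L2 distance) operator the abstraction cannot express:

```python
def pgvector_knn_sql(table: str = "document", column: str = "embedding", k: int = 5) -> str:
    """Build the raw nearest-neighbor query the ORM can't generate for you."""
    # `<->` is pgvector's L2-distance operator; an ivfflat or hnsw index on
    # `column` is only used when the operator appears in ORDER BY like this.
    return (
        f"SELECT id, content "
        f"FROM {table} "
        f"ORDER BY {column} <-> %(query_vec)s "
        f"LIMIT {k}"
    )

sql = pgvector_knn_sql()
```

At which point the ORM's remaining contribution to the feature is the connection handle.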
It's a Ferrari with a lawnmower engine: it looks impressive on the brochure, but try exceeding 20 mph.
3. Economic Incoherence: Doing AI with a Tool Built for Yesterday's Problems
Using Symfony to do AI is like using COBOL to make a website in 2025.
Technically possible? Yes, absolutely.
Has someone already done it? Probably, in some basement of the Finance Ministry.
Is it a good idea? No. Never. Under no circumstances.
The Real Economic Question
Facing a RAG project, an average company has two options:
Efficient option:
Two Python devs → FastAPI + Qdrant → robust prototype in two weeks → scales to 10M vectors with 2 servers → controlled cost, performance delivered.
Symfony option:
We try to fit embeddings into Doctrine → six months of refactoring → a budget equivalent to a country house → performance that makes a 200-line Python script smile → scales to 100k documents maximum before everything collapses.
It's not a question of Symfony devs' competence. It's a question of tool unsuitable for the problem.
Symfony AI is a solution for those who want to do AI without ever approaching AI. For those who prefer to pay six months of consulting rather than three weeks of Python training.
4. The Rubber Belt Against the Metal Chain
The rubber belt (Symfony AI) is exactly what we put in place of the metal chain (an AI-native architecture).
Why did the automotive industry replace chains with belts?
- Cost: a belt costs less to produce — like avoiding training a Python team or hiring an ML engineer.
- Silence: it makes less noise — no organizational friction, no questioning of the historical stack.
- Lightweight: it lightens — we don't change anything about hosting, we stay on a shared server that does what it can.
- Planned obsolescence: a belt is replaced regularly — exactly like these Symfony AI refactorings that come back every X months.
The problem? A belt breaks cleanly. No sign, no warning. It gives out. Brutally.
And when the Symfony AI belt breaks:
- embeddings explode the RAM of an OVH shared server
- Doctrine latency makes the chatbot timeout in production
- a "simple" RAG must handle 100k documents and MySQL triggers a 12-second full table scan
- the application becomes unavailable
- emergency committee improvised around a PowerPoint
... it's engine failure: valves in pistons, project to rewrite, budget to double.
The metal chain (Python + vector DB + AI-designed architecture), it makes noise at first, it's expensive to install, but it lasts 300,000 km. It's made to withstand.
With Symfony AI, we replaced a durable solution with a disposable one, to save 15% at startup and lose 85% later.
This is exactly the French IT department economy: preferring a controlled and predictable expense (changing the belt every 60,000 km) to an initial investment that guarantees survival (the chain).
5. Conclusion: Modernity Tailored to Reassure, Not to Advance
Symfony AI isn't dangerous, nor useless. It's simply cosmetic: an elegant way to tell teams "don't you dare change your stack."
It's makeup on an unsuitable architecture. A yellow school bus, solid but slow, to which we stick "AI ready," "Vector search inside" and two metallic stickers.
From afar, it shines. Up close, you still see traces of the old "Municipal Service" logo.
The illusion doesn't go into orbit, even with NASA stickers.
It's AI for those who are afraid of AI. A stagecoach disguised as a spaceship. Ceremonial modernity.
And in a world evolving at the speed of AI, it's funnier than it is serious.
Top comments (22)
This was a sharp and entertaining read — the metaphors land hard, but they make the architectural gaps very easy to understand. I especially like how you connect Symfony AI’s design choices to real-world performance and economic trade-offs, not just theory. Curious to see how you’d envision a pragmatic bridge for Symfony teams who know this but still need to ship something workable short-term.
Thanks Art! Glad the metaphors landed. You hit the nail on the head: it’s about choosing the right role for the tool.
For a pragmatic bridge, I’d suggest a "Coordinator vs. Engine" strategy:
1. The "Short-Term" Lane (The Prototype):
Use Symfony AI to ship a POC in days. It’s genuinely fast for simple, internal tools where 3-second latency and high RAM usage don't matter yet.
Treat this code as disposable.
2. The "Long-Term" Lane (The Architecture):
Use Symfony for what it’s world-class at: Auth, Security, Routing, and complex business logic.
Offload the "heavy lifting" to specialized workers (Python/FastAPI) and native vector stores (Qdrant, Milvus).
Use Symfony AI purely as the Interface Layer to coordinate these services, not to run the vector math itself.
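A minimal sketch of that boundary in Python (all names here are hypothetical; in practice the Symfony side would make the same call over HTTP to the FastAPI worker):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Hit:
    doc_id: str
    score: float

class VectorEngine(Protocol):
    """The specialized service (FastAPI + Qdrant) hidden behind a narrow interface."""
    def top_k(self, query: str, k: int) -> list[Hit]: ...

class Coordinator:
    """The Symfony role: validate, authorize, delegate, shape the response.
    No vector math lives here."""

    def __init__(self, engine: VectorEngine):
        self.engine = engine

    def answer(self, query: str) -> dict:
        if not query.strip():
            return {"error": "empty query"}
        hits = self.engine.top_k(query, k=3)
        return {"query": query, "sources": [h.doc_id for h in hits]}
```

Swap the in-process engine for an HTTP client and the PHP side never touches an embedding.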
The trap is believing the marketing that says the bus can eventually go to orbit. If you start with Symfony, start with the intent to decouple the "AI engine" the moment you need to scale or stream tokens to more than 10 users at once.
Don’t fight the framework’s DNA—use it as a coordinator, not the engine.
This is the nuance I missed in the post!
I know satire and nuance are no bedfellows.
From another comment I gathered that you audited the setup you described in the post, but did you audit other setups?
It feels like you blindfolded Symfony AI and gave it a knife, while you gave the Python setup a sniper rifle.
David, you’re right: Satire is a spotlight, not a flashlight. Its job is to illuminate the structural hole in the boat, not to provide a 50-page manual on how to patch it with Swoole and custom bridges.
On the "Knife vs. Sniper Rifle" metaphor: I didn't give Symfony AI a knife; it came with one. The "knife" is the blocking I/O of PHP-FPM and the overhead of an ORM-first ecosystem. Python has a "sniper rifle" because it spent years integrating NumPy, PyTorch, and native async I/O into its DNA. Comparing them isn't being "blind"; it's acknowledging that for AI workloads, one is a specialized tool and the other is a multi-purpose tool trying to adapt.
On the Audits: Yes, I have audited "modern" setups (Swoole, FrankenPHP, standalone components). They are impressive engineering feats. But here is the pragmatic reality: 95% of the Symfony market isn't there. Most companies have a standard monolith, standard workers, and a team that knows Doctrine, not event loops. My post targets the marketing promise made to that 95%—the ones who think they can go to orbit just by installing a new symfony/ai package without changing their entire infrastructure.
About the Nuance: The nuance was in the "NASA stickers" metaphor. You can make the bus faster, you can give it a better engine, but if the goal is "Industrial RAG in orbit," the bus is simply the wrong vehicle.
Using Symfony as a coordinator (Auth, Business Logic, Routing) is where it excels. Using it as the AI engine is where the "knife" starts to show its limits.
I’m glad we reached this point of the debate. It proves that the "nuance" is exactly what teams need to hear before they commit their budget to the wrong stack.
My question was not whether you audited the things separately; the question was whether you audited Symfony AI with other setups. That is the basis of my knife-vs-rifle comparison.
Where do you get that number?
Knowing the community from events, people who use Symfony are well aware of the limits of the solutions they use.
If that wasn't the case FrankenPHP wouldn't even exist.
I'm sure there is marketing that promises a quick fix solution. But I see that more as a failure to see through the marketing.
When solutions are presented you should kick the tires as hard as possible to see when they explode.
David, I love that we’re still at it. But let’s look at the logic here:
1. The "FrankenPHP" Trap: You say FrankenPHP exists because the community knows the limits. Precisely! If you need to swap the entire web server and PHP execution model to make it viable, then you’re proving my point: the standard Symfony stack is the "knife" in an AI "sniper" fight. My article is about that standard stack—the one 95% of the market uses on their standard VPS or PaaS.
2. The 95%: You ask where I get that number? Look at any hosting provider stats (AWS, DigitalOcean, Heroku). The vast majority of PHP apps run on standard FPM/Nginx. High-perf setups like FrankenPHP or Swoole are the 1% elite, not the "Symfony experience" sold to the masses.
3. Kicking the Tires: You say developers should "kick the tires" of marketing promises. That is exactly what my article does. I kicked the tires of the "AI-ready" marketing, and they didn't just pop—they evaporated. Blaming users for "failing to see through marketing" while defending the tool that uses that marketing is a bit of a paradox, don't you think?
4. The Audit: I didn't give Python a sniper rifle. Python is the sniper rifle because of 20 years of C-level data science integration. Symfony AI is a very elegant administrative layer, but putting a scope on a school bus doesn't make it a sniper rifle—it just makes it a bus that can see further while it’s still stuck in traffic.
I think we’ve reached the "agree to disagree" point. I’ll keep kicking the tires, and you keep building the specialized engines to make the bus fly!
The vast majority of PHP sites are WordPress, sadly. So that is the wrong metric to judge the use of "modern" PHP setups.
And even if they run FPM/Nginx you don't know how they are structured internally. They can run FrankenPHP/Swoole setups on internal networks.
Marketing is a part of every product that needs to be sold.
In software development we need to be more careful because it is not something temporary like food or perfume. That is why developers should have hands on experience with a tool before they implement it.
So no I don't think it a paradox, it is reality.
My comments are less about the component at the moment.
I react to the mischaracterizations you write.
Throughout your comments it feels like you have a low opinion of PHP developers.
David, let’s address the "personal" turn of this conversation, as it’s the most important point.
1. On the "Low Opinion" of PHP Developers: It’s actually the opposite. I have such a high opinion of PHP developers that I believe they deserve better than marketing-driven architecture. If I didn't care about this community, I wouldn't have spent 15 years writing tutorials on NGINX, PHP-FPM, and Bit Bashing to help us move past the bottlenecks of the past.
2. On WordPress and the "95%": Even if we exclude WordPress, the enterprise Symfony/Laravel world is still overwhelmingly dominated by standard FPM setups on managed PaaS or standard VPS. Speculating about "internal Swoole setups" doesn't change the reality of what is being marketed to the average dev team. If a tool requires a specialized, non-standard runner to be viable for AI, then it isn't "AI-ready" in its standard form.
3. On Marketing vs. Reality: You say developers should have hands-on experience before implementing. My article is that hands-on audit. I am sharing the results of "kicking the tires" so that others don't have to crash the bus to find out it won't fly.
4. The Nuance: I don’t hate Symfony. I hate seeing a great community being steered toward an architectural dead-end because of a "me-too" feature rush.
I’m glad we had this debate. It’s exactly the kind of "contrary" conversation that prevents us from becoming the "dinosaurs" I was already warning about in 2011.
Respect for the passion, David. I'll see you on the next technical challenge!
I understand your goal is to make people aware of the most performant tools, and I'm all for it.
The thing is there are plenty of applications that don't need the maximum performance.
If every application needed the maximum performance, everybody would be still writing assembly.
Using PHP is a conscious decision to sacrifice some performance for readability and more speedy development.
If performance is high on the list I wouldn't go with a script language.
The reason I think you have a low opinion is that you make statements that make the community look like a herd of sheep.
For as long as I've been in the community, I have met very smart people who made the decision to write PHP code. But they also know when not to write PHP code.
I didn't speculate about internal Swoole setups; I talked to people who had done it even before I was aware of the tool.
I don't understand why you don't see that a bad setup is not a fair basis for judging a solution.
It is like wanting to build a sandcastle at the waterline and blaming the shovel for the failure.
Build it a bit further and then you might discover the sand is too dry, go towards water again and there you find the perfect sand to build a castle.
Just like the castle there is an optimal way of working with the solution for the case that people are facing. There is not one solution that is the best in every situation.
Hahaha, I’m not building anything in Symfony or PHP, but this was genuinely a fun read 😄
It also confirmed my gut feeling - these days, when it comes to AI, a Python-based stack just feels like the most natural and practical choice.
Haha thanks Sylwia! Sometimes the best confirmation comes from people outside the ecosystem looking in.
Python for AI isn't about fanboyism — it's just the path of least resistance when the entire toolchain, community, and libraries are already there.
Glad you enjoyed the bus metaphors! 🚌🚀
Python, the language, is inherently slower than PHP for task execution. What makes it efficient for AI jobs is its ecosystem of packages. What Symfony AI does is add an ecosystem of packages for building AI solutions, and it does so by abstracting systems (vector stores, embedding models); in doing so it tends to stay at the common denominator.
It's unfair to say Doctrine (ORM) is the promoted solution for Symfony AI just because it is the de facto standard for building Symfony applications. The list of existing vector stores is proof that they are all integrated at the same level. Symfony is promoting standards and driving innovation; you are not stuck on a single stack they decided is best for you. Instead, everything is open and moving.
Appreciate the nuance, Jérôme. You’re absolutely right — this is essentially a debate between ecosystem fit and architectural/performance fit. I think both perspectives have been explored pretty thoroughly here. Thanks for the thoughtful contribution.
This is hilarious,😂 but also so accurate. Symfony AI really does feel like slapping NASA stickers on a school bus, looks cool at first, but still stuck in the slow lane. The Doctrine analogy is perfect too. Trying to use it for AI is like racing with a lawnmower engine. You nailed the gap between the illusion of modernity and the reality of performance.
Thanks! The metaphors basically wrote themselves once I started auditing production Symfony AI setups.
The hardest part was keeping it satirical without being unfair — but when you see a real project trying to do RAG with Doctrine hydration, the lawnmower engine comparison becomes... generous. 😅
Glad it resonated!
It seems you haven't scratched deep enough to see how the solution works.
I suggest to do proper research before writing about a topic.
Thanks for the technical feedback, David.
You're right on the details: yes, asynchronicity exists in PHP (Swoole, FrankenPHP), and yes, the Store component can talk to Qdrant. But your argument actually confirms my exact point: we're talking about two different worlds.
It seems you are overestimating the capacity of Python and FastAPI when you try to compare it with PHP and Symfony.
The beauty of Symfony components is that they are standalone. So you don't need to use the Symfony framework if you don't want. You can write your custom worker code and use one or more components to save time.
If you think it is only theory, then I can only assume you never worked on exciting projects in PHP.
David, I see you’re leaning into the "No True Scotsman" fallacy with the "exciting projects" remark, but let’s stick to the engineering reality.
1. The Async Gap: Saying FastAPI and Symfony follow the same request-response model is like saying a glider and a jet follow the same laws of physics. Technically true, but the concurrency model is worlds apart. In standard PHP-FPM, a worker is deadlocked while waiting for a 20-second LLM stream. In FastAPI, that worker handles dozens of other tasks. Bolting on Swoole doesn't make Symfony "AI-ready"; it makes it a custom async engine wearing a Symfony hat.
2. The Abstraction Tax: Look at the Qdrant bridge implementation:
Agent, Toolbox, Platform, Store, Indexer, Vectorizer... This is exactly the "NASA stickers on a school bus" problem. You're building a cathedral of use statements for a simple vector lookup. In AI, agility and speed to production are key. Adding three layers of abstraction as a "middleman" is an architectural tax, not a feature.
3. Data-First DNA: Python doesn't win because of a library. It wins because its entire stack (NumPy, PyTorch) handles high-dimensional vectors at the C level with zero overhead. PHP handles them as memory-heavy hash maps. Trying to do massive RAG in PHP is like doing carpentry with a Swiss Army knife: sure, you can do it, but why would you?
4. On Theory vs. Practice: My "research" comes from auditing "exciting" PHP projects that hit a wall the moment the scale went from "Chatbot for 10 users" to "Industrial RAG". Using the right tool for the job isn't dogmatic; it's pragmatic.
I love Symfony for what it does best, but advocating it for AI workloads because "it's technically possible" ignores the economic and architectural reality of production systems.
I think you mix components with the framework. It is not because it is a Symfony component you need to use the framework. It is possible to create a Swoole server with the Symfony AI components, no bolts needed.
PHP-FPM is a moot point when using Swoole.
I assume you referring to this. The components give you the option to select multiple AI solutions and multiple database solutions, this is going to require more abstractions than using a single AI solution and a single database.
I'm not saying it is the best solution in all cases, but you can see it as an example and trim the parts the application doesn't need if you want to make it as lean as possible.
I'm not going to deny Python data libraries are way more mature than PHP libraries.
While I can see cases where going to Python is a necessity. There is a lot of distance between low use chatbot and industrial RAG. And for some of those in between cases Symfony AI can be a solution.
I don't think Symfony AI is pretending it can handle every AI case. I'm sure Python and fastAPI can't handle all cases. Script languages are not the most performant languages.
The biggest problem I have that you just plain dismiss the component as a possible solution.
David, I appreciate that we're converging on the core technical points. You acknowledge Python's maturity advantage, and I'll acknowledge that Symfony AI can work in certain narrow cases.
Here's where we still diverge:
On the "just use Swoole" argument:
You're right that Swoole makes PHP-FPM irrelevant — if you control infrastructure and team expertise. But that's my exact point: if the solution requires departing from standard Symfony deployment, then "Symfony AI" becomes "Swoole + some Symfony components," not the integrated experience being marketed.
On standalone components:
Your suggestion to "trim the parts the application doesn't need" actually reinforces my satire. If the optimal approach is to bypass the framework and cherry-pick components, what's the value proposition vs. using tools purpose-built for AI workloads?
On dismissiveness:
I don't dismiss Symfony AI as technically possible — I critique it as economically and architecturally suboptimal for production AI at scale. There's a difference between "it can work" and "it should be your first choice."
For a low-stakes chatbot with an existing Symfony team? Sure, it's defensible. For anything with serious scale ambitions or performance requirements? The architectural compromises add up fast.
We probably agree more than we disagree at this point. The debate was valuable. 🤝
It is possible to use the component with Swoole when the application needs concurrency and asynchronicity. But you can use it with their framework too.
That is how all their components are marketed. That is why they add a "Who is using the component" section to many of their components documentation.
I see the choice of AI and databases as flexibility. But I was reacting to the sentence; You’re building a cathedral of use statements for a simple vector lookup.
And I wanted to acknowledge that there are cases where the abstractions are not needed. And then cherry-picking could be the better solution than to write code from scratch.
You prove over and over again your dismissal of the component.
I don't believe that any tool should be your first choice in all cases. Every case is a balance between time, expertise, and requirements.
I do think the component has more potential than you give it credit.