DEV Community

🧠Maybe I Just Do Not Get It!

Mak Sò on December 02, 2025

I have been working with AI for a while now. Not as a tourist. Not as a weekend hacker. Deep in it. Shipping things, wiring models into products, ...
Ingo Steinke, web developer

Companies used probabilistic algorithms long before their current LLM "AI" level. In her 2018 textbook Hello World: How to be Human in the Age of the Machine, mathematician Hannah Fry discusses the concepts, advantages, and risks, and their practical use in medicine, policing, and finance, as well as satnav and chess-playing machines, from the 1990s up to just before the current AI hype. In Unmasking AI, Dr. Joy Buolamwini shares her journey from enthusiasm to disappointment after face recognition algorithms repeatedly failed to recognize her face.

20th-century science fiction literature also discussed autonomous, agentic automation and its ethical dilemmas at length, coming up with the Laws of Robotics and the idea of a Minority Report. And then there is Skynet, frequently mentioned in DEV discussions and Meme Mondays.

The European Union has already established a legal Artificial Intelligence Act, so time is running out for developers and managers who are still naive, or pretending to be, as they jump on the bandwagon of praising and trusting AI like it's some kind of Deus Ex Machina.

Hopefully, articles and concepts like yours will help establish limits and put decades of software engineering and development knowledge to use.

Mak Sò

Thank you for such a thoughtful comment.
You are completely right that none of this started with LLMs. We already had decades of experience with probabilistic systems, safety-critical software, and the social impact of biased models, long before the current chat-based hype.
What worries me is not "new math" but the combination of scale, delegation, and marketing. We took stochastic systems, wrapped them in a friendly chat UI, called them "intelligent," and then quietly started routing real decisions through them without doing the slow and boring governance work that older disciplines were forced to do.

I am also following the AI Act with a lot of interest. Regulation can set boundaries, but it is on us as builders to translate that into real control surfaces, traceability, and "no by default" paths in our systems, not just checklists.
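To make "no by default" concrete: here is a hypothetical sketch of a deny-by-default gate for agent actions. Everything here (action names, payload fields) is illustrative, not taken from any specific framework; the point is only that an action runs when it is explicitly allowlisted and validated, and is refused otherwise.

```python
# Deny-by-default control surface: an agent action executes only if it is
# explicitly allowlisted AND its payload passes a validation check.
# Everything else is refused and recorded for human review.
# All names below are illustrative.

ALLOWED_ACTIONS = {
    "send_draft_email": lambda p: len(p.get("body", "")) < 2000,
    "read_ticket": lambda p: isinstance(p.get("ticket_id"), int),
}

audit_log = []  # traceability: every decision, allowed or not, is logged

def execute(action, payload):
    check = ALLOWED_ACTIONS.get(action)
    if check is None or not check(payload):
        audit_log.append(("refused", action, payload))
        return {"status": "refused", "reason": "not allowlisted or invalid payload"}
    audit_log.append(("allowed", action, payload))
    return {"status": "ok"}  # the real side effect would happen here
```

The design choice is that the default branch is a refusal: an unknown action is never "probably fine", it simply does not run.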

If pieces like this one help reconnect current AI enthusiasm with that older body of knowledge that you mention, then it already feels worth writing them. Really appreciate you bringing this context into the discussion.

Daniel Hasler

Dear Marco
Thank you so much for your very insightful article.
Working in a highly regulated industry, I often find myself in the same position as you, feeling like the 'party pooper' in a room full of enthusiasts. So reading your thoughts and convictions, and realizing that "I'm not alone", is really encouraging.
Deterministic logic is not dead; combined with the "new world" it can achieve great things, but we need to find out how to combine the two in the appropriate way. And that is probably use-case dependent and requires clever humans, for now at least 🙄
Thanks again for this great article Marco.

leob

Great article, but I think the solution is to not let AI run your business, but to let AI write the software code that runs your business - and that software (code) is deterministic, not stochastic ...

Right, or not?

Mak Sò

This makes sense, but it is not really the reality, or at least not the full picture.
AI is great at writing code, and with the right human in the loop that can be a good choice.

The thing is that many AI engineers I speak with nowadays are already using LLM based agents to solve real business problems and to operate entire pipelines, not just to generate code.

What scares me is that this is happening in high impact areas, from customer support to banking and healthcare. If all we had were simple wrapper apps, there would not be much to worry about. The risk comes from AI quietly moving from “tool that writes code” to “system that is effectively running parts of the business”.

leob

You're right - ONLY using AI to write code (code which does not run AI/LLM models), that's not what's happening in the "real world", that train has left the station ...

So, I suppose we'll need a way ('framework'?) to manage those "AI risks", because I would tend to agree with you that there are risks ... on the other hand, humans can make mistakes too, so what's the difference - maybe it's that humans can be held accountable, and AI can't?

Mak Sò

Technically speaking, when a human makes a mistake at time T1, it is still the same human at Tn when you ask why that decision was made. The internal wiring of the brain has continuity, and at least in principle you can inspect the context, motives, and reasoning that led to the error. That is what we call introspection.

With current AI systems it is different. We work with probabilistic models that generate tokens by sampling. The model that produced an answer at T1 is not a stable agent you can later ask at Tn “why did you do that?” It cannot look at its own weights and activations and give you a causal story of how that specific decision emerged.

So yes, humans make mistakes, but they can explain them and update in a relatively consistent way. Today’s AI systems are large black boxes deployed at scale, without real introspection, accountability, or a controlled learning loop. That is exactly why we need frameworks for explainable and traceable AI, not just bigger models.
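As an illustration of the traceability side of that argument, here is a minimal sketch of a decision-trace record: since the model cannot explain itself at Tn, the system around it has to capture, at T1, everything needed to reproduce and audit the call. All field names here are assumptions for illustration, not any real framework's API.

```python
# Minimal decision-trace record: capture the exact inputs that produced
# an output, so the call can be replayed and audited later even though
# the model itself has no introspection. Illustrative sketch only.
import dataclasses
import hashlib
import json

@dataclasses.dataclass
class DecisionTrace:
    model_id: str   # exact model version / checkpoint identifier
    prompt: str     # full prompt as sent
    params: dict    # sampling settings, e.g. temperature and seed
    output: str     # what the model returned

    def fingerprint(self) -> str:
        # Stable hash over the whole record, usable as an audit reference.
        blob = json.dumps(dataclasses.asdict(self), sort_keys=True)
        return hashlib.sha256(blob.encode()).hexdigest()

trace = DecisionTrace(
    model_id="local-slm-v1",
    prompt="Approve refund for order 123?",
    params={"temperature": 0, "seed": 7},
    output="yes",
)
```

With temperature 0 and a fixed seed recorded, the decision is at least replayable, which is the closest a sampled system gets to a causal story.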

leob

Interesting! So, "AI has no continuity" ... I wonder if it would be possible to add that capability - to create an "AI person", to put it like that ... but at the same time, I realize that that sounds pretty scary :-)

Mak Sò

The rise of Skynet? It would be cool and scary at the same time :)

CapeStart

It seems that many people mistake fluent output for true dependability. In the end, code, permissions, and blast radius limits are the real guardrails, not prompts. Until these models become much more predictable, it seems like the only sensible course of action is to use LLMs as copilots with strict boundaries.
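One way to read "blast radius limits" in code is a hard cap on side effects per run that no prompt can talk its way around. This is a hedged sketch with illustrative names, not a reference to any existing library:

```python
# Blast-radius limiter: cap how many write operations an agent may
# perform in a single run, enforced in code rather than in the prompt.
# Illustrative sketch; names are made up for this example.
class BlastRadius:
    def __init__(self, max_writes: int = 3):
        self.max_writes = max_writes
        self.writes = 0

    def guard_write(self, fn, *args):
        """Run a side-effecting callable only while budget remains."""
        if self.writes >= self.max_writes:
            raise PermissionError("write budget exhausted for this run")
        self.writes += 1
        return fn(*args)
```

Because the limit lives outside the model, a fluent but wrong chain of reasoning still cannot exceed it.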

Vladimir

Interesting thoughts, and they resonate strongly with me. Indeed: prompt -> result is an unreliable and difficult-to-measure solution. I also believe that complex processes (something more than the todo list app) necessarily require orchestration with strict rules, format-based logistical control, clearly verifiable inputs and outputs outside the LLM, and mandatory human in the loop. LLM is a fantastic opportunity, but also an unreliable colleague, as you put it absolutely accurately. I will study your project, I think it will be interesting.
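A minimal sketch of "clearly verifiable inputs and outputs outside the LLM" with a human-in-the-loop fallback (the schema and names are illustrative assumptions): the model's raw text is parsed and type-checked deterministically, and anything that fails goes to a human queue instead of straight into the pipeline.

```python
# Output gate outside the LLM: parse the model's raw text as JSON and
# check required fields and types; invalid output is routed to a human
# queue rather than into the pipeline. Illustrative sketch.
import json

REQUIRED = {"intent": str, "confidence": float}
human_queue = []  # human-in-the-loop fallback for unverifiable output

def gate(raw_output: str):
    try:
        data = json.loads(raw_output)
        for key, expected_type in REQUIRED.items():
            if not isinstance(data[key], expected_type):
                raise ValueError(f"bad type for {key}")
    except (ValueError, KeyError, TypeError):
        human_queue.append(raw_output)
        return None
    return data
```

The validation is format-based and deterministic, so the unreliable colleague's work is always checked by rules it cannot influence.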

Mak Sò

Thanks, @riffi .
I am glad my post helped someone feel a bit less alone in these noisy, busy times.

I have a question for you though, how do you handle it yourself?
I often find myself getting so frustrated that I start to really doubt my own vision. I feel like I am becoming a dinosaur who keeps pushing people to slow down when everything around us is about going faster.

Then I remember that I am not a dinosaur at all, I am actually an explorer moving carefully through an unexplored jungle 🙂

Vladimir

In my opinion, it is important to remain independent here and keep a cool head. On the one hand, independent of the neural network hype: do not run around with bulging eyes shouting at all your subordinates, "You are not using LLMs, you are mammoths, you are wasting time." On the other hand, don't be a hostage of the old school, saying, "LLMs are complete nonsense, I'd rather do it myself."

As far as I understand, neural network hype is present in your environment. That's bad. We all remember the articles about companies hiring expensive seniors to clean up the spaghetti code produced by pure vibe coding.

In my environment, on the contrary, my colleagues don't even try LLMs for development at all. And if they do, it's at the level of "I'll write a prompt in the chat and get an answer": there is no system, and no real benefit.

It's good that there are people with analytical thinking who are neither fooled by the hype nor stuck in the past.

That's why I signed up for dev.to - to find like-minded people with whom you can discuss, build, and systematize the development of today.

To summarize: no, you are not alone, you have critical thinking.

Vladimir

I looked at your OrKA-reasoning. I have a question: does it invoke models on a request-response basis? Can terminal agents like Codex and Claude Code be used?

Mak Sò

Right now, it is focused on simple LLM chat responses using local providers like Ollama or LM Studio, as my goal is to prove that well-orchestrated SLMs running locally can outperform online services.

Andrew Eddie

I am counting down the days to when I actually don't have to worry about the code or configuration anymore. I'm more than comfortable with the fact that my job is trending towards being more conductor than composer, though the line blurs.

My hope is that we are less than 1,000 days away from this reality. In the meantime, though, yeah, this stuff, this wrestling, matters. But aren't these crazy times, when I'm only half-serious when I quip "Prompting! That is so April 2025!" :)

Art light

I completely resonate with the concerns you've raised. While the idea of autonomous AI agents is enticing, the reality is that relying solely on prompt-based control without rigorous safeguards can lead to significant risks. It's essential that we incorporate proper governance, validation, and human oversight into these systems to ensure that autonomy doesn’t translate into unmanageable authority.