I Put an LLM in My Browser and Now It Writes My Commit Messages. The Results Were… Unexpected😭✨

Sylwia Laskowska on November 27, 2025

Hi folks! Not long ago I posted an article about the funniest and weirdest commit messages from my projects (If You Think YOUR Commit Messages Are ...
 
Adam - The Developer

That's awesomeeeeee, I'm definitely giving this a try

Sylwia Laskowska

Yay!! So happy to hear that!
If you end up experimenting with it, I’d love to see what you build ✨

Pascal CESCATO

Refreshing, I had a good laugh! That said, your approach is very interesting... and shows the limitations of in-browser LLMs.
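For readers wondering what "in-browser LLM" means in practice, here's a minimal sketch using the @mlc-ai/web-llm package (one common option, not necessarily what the article used; the model ID is an assumption, pick any from WebLLM's prebuilt list):

```ts
// Minimal in-browser LLM sketch with @mlc-ai/web-llm (runs on WebGPU).
// The model ID is an assumption - choose one from WebLLM's prebuilt model list.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC", {
  initProgressCallback: (p) => console.log(p.text), // download/compile progress
});

const reply = await engine.chat.completions.create({
  messages: [
    { role: "system", content: "Write a one-line conventional commit message." },
    { role: "user", content: "diff: rename getUser to fetchUser in api.ts" },
  ],
});

console.log(reply.choices[0].message.content);
```

Everything above runs client-side: the weights are downloaded once, cached by the browser, and executed on the user's GPU, which is exactly where the size and quality limitations come from.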

Sylwia Laskowska

Pascal, thank you! 😄💛
And yeah, 100% agree - it’s a fun little toy, but production-ready? Ehhh… not yet.
I still can’t imagine a serious use case for in-browser LLMs at this scale.

BUT running your own model locally on a proper server?
Now that’s tempting 😅🔥
If you’ve got the hardware… the possibilities start looking a lot more realistic.
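For anyone curious, the "proper server" version can be surprisingly small. A minimal sketch, assuming an Ollama instance on its default port (the model tag is an assumption, use whatever you've pulled):

```ts
// Minimal sketch: ask a locally hosted model (here via Ollama's REST API on
// its default port 11434) to draft a commit message. Model tag is an assumption.
const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen2.5-coder:7b",
    prompt: "Write a one-line commit message for: fix null check in parser",
    stream: false, // one JSON response instead of a token stream
  }),
});

const { response } = await res.json();
console.log(response.trim());
```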

Pascal CESCATO

You're right — and I feel the same: with a code-oriented model like DeepSeek Coder 2 Lite, things can already get surprisingly usable. But below ~7B parameters, it still feels more like a clever experiment than something you’d rely on.
Running a stronger model locally, though… that’s where it finally starts to make sense. It opens the door to more realistic workflows, even if we're not quite there yet.

Sylwia Laskowska

Absolutely - I feel exactly the same.
Once you hit that ~7B+ range with something code-tuned (DeepSeek Coder Lite, Qwen Coder, etc.), it suddenly goes from “fun experiment” to “okay wait… this is actually usable.”

Below that, yeah… it’s still more of a playground than something you’d trust in a real workflow 😅

But running a stronger model locally?
THAT’S the moment it starts getting exciting.
It really feels like we’re one hardware upgrade away from some genuinely practical, private, on-device dev tools - not quite there yet, but sooo close.
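A sketch of what that on-device dev tool could look like, assuming the same local Ollama endpoint as above and a code-tuned ~7B model pulled locally (the file name and model tag are placeholders):

```ts
// suggest-commit.ts - hypothetical helper: feed the staged diff to a local
// code-tuned model and print its suggested commit message. Nothing leaves
// the machine. Assumes Ollama is running; adjust the model tag to your setup.
import { execSync } from "node:child_process";

const diff = execSync("git diff --staged", { encoding: "utf8" });
if (!diff.trim()) {
  console.error("No staged changes.");
  process.exit(1);
}

const res = await fetch("http://localhost:11434/api/generate", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen2.5-coder:7b",
    prompt: `Write one conventional commit message for this diff:\n\n${diff}`,
    stream: false,
  }),
});

const { response } = await res.json();
console.log(response.trim()); // paste into: git commit -m "..."
```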

Pascal CESCATO

Exactly — that “almost there” feeling is really striking. The gap between a tiny browser model and a 7B+ local model is huge, and it shows how close we’re getting to genuinely useful on-device tools.
What surprises me the most is how fast that threshold is moving. A year ago, none of this felt realistic — now it’s right at the edge of being practical. It’s a fascinating moment to watch.

Sylwia Laskowska

Pascal, exactly! The speed of this shift is insane - a year ago none of this felt even remotely practical, and now we’re basically at the edge of “yeah, this could actually work.” 🤯

It reminds me of a friend who ran a model fully locally - no network at all - and the model kept insisting “I’m contacting the server now.” 😂
Sir… you are the server.

This whole on-device LLM era is getting fascinating really fast 😅

Pascal CESCATO

That’s exactly it — the models still behave as if some mysterious backend is doing the work, even when they’re literally running on our GPU. The “Sir… you are the server” moment is priceless 😄

What really surprises me is how quickly local setups are becoming viable. A year ago this felt like sci-fi, and now we’re basically a small hardware bump away from doing all of this comfortably at home. It’s a fun era to watch unfold.

Sylwia Laskowska

Exactly - the “mysterious backend” hallucination always cracks me up 😂
It’s like the models can’t emotionally accept that they’re running on our sad little GPUs. 😅

Benjamin Nguyen

nice!

Sylwia Laskowska

Hehe glad you liked it! 😅✨

Benjamin Nguyen

I follow you on GitHub!

Sylwia Laskowska

Aww thank you, Benjamin! 😄💛
That honestly means a lot - hope you enjoy whatever chaos I build next 😂✨

Benjamin Nguyen

No problem :)

Benjamin Nguyen

You should see some of my earliest repos :)

Sylwia Laskowska

Haha I will!!!

Benjamin Nguyen

:)

Laurina Ayarah

Definitely trying this. I was at Google DevFest last weekend, and someone talked about this too... I'm definitely trying this!

Sylwia Laskowska

Yesss!! That makes me so happy!
Browser LLMs are such a fun rabbit hole - let me know what you build! 🤖✨

Tam ⚛️

Quite elaborate and well detailed 🤝

Sylwia Laskowska

Aww thank you! 😄💛 Happy it landed well!

John Kenny

I tried running LLMs once. Failed badly 🤣