Sylwia Laskowska

I Put an LLM in My Browser and Now It Writes My Commit Messages. The Results Were… Unexpected😭✨

Hi folks! Not long ago I posted an article about the funniest and weirdest commit messages from my projects (If You Think YOUR Commit Messages Are Bad, Just Wait…).
The post itself was cool, but — as usual — your comments were pure gold.

A few people asked…
👉 “Why write commit messages yourself when AI can do it?”

And then I thought: SAY. NO. MORE.
And since I'm a frontend developer, of course:

Let’s do it in JavaScript! 😎

A hot topic at frontend conferences lately is running LLMs directly in the browser. No server, no tokens, no payments, no sending your code anywhere.
The main player here is Transformers.js, Hugging Face's library for running Transformer models (small LLMs included) directly in the browser on top of ONNX Runtime.

And since I’ve been wanting to play with it for months… now I had the perfect excuse.
The result?
👉 in two evenings, I built a prototype app
👉 (I promise, tomorrow I’ll finally turn on Netflix like a normal human)


🚀 TL;DR

Repo here: https://github.com/sylwia-lask/friendly-commit-messages
Feel free to play with it, use it, learn from it… or make PRs, because this is more of a POC than production-ready 😂


🛠 How does it work?

The idea was simple:

  1. You paste a git diff / code snippet
  2. The model analyzes the changes
  3. It generates a commit message
  4. All locally in the browser
  5. No asking an API for permission to exist

Sounds beautiful, right? And honestly?
It was really fun. But… not without some adventures 🤣


🤖 Choosing the model — a.k.a. “do I even know what I’m looking for?”

Most tutorials show super simple cases — e.g., a model that completes sentences.
But I needed:

  • code understanding
  • inferring intent
  • generating commit messages
  • compatibility with Transformers.js + ONNX (otherwise the model won’t run in the browser!)

The first problem:
👉 I couldn’t find a list of models that actually work with Transformers.js.

If this ever happens to you — here’s the link right away:
The List Of Free Models (the Hugging Face model hub, filtered for browser-friendly models)

Note:
not every model runs in the browser!
You need to filter by:

  • support for transformers.js
  • ONNX format (best for browser)
  • pipeline tag text-generation / chat-completion

I eventually chose:
👉 onnx-community/Qwen2.5-Coder-0.5B-Instruct

Why?

  • it’s small → fast in the browser
  • trained on code → commit messages are basically code reasoning
  • works with Transformers.js out of the box (quick sketch below)
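
Here's that quick sketch: loading the model and asking it for a commit message with Transformers.js. The prompt wording, the `diff` variable, and the generation options are placeholders, not the exact ones from the repo.

```js
import { pipeline } from '@huggingface/transformers';

// Download the model once; the browser caches the weights for later visits
const generator = await pipeline(
  'text-generation',
  'onnx-community/Qwen2.5-Coder-0.5B-Instruct'
);

// Chat-style input: a system instruction plus the pasted diff
const messages = [
  { role: 'system', content: 'Write one short, imperative git commit message for this diff.' },
  { role: 'user', content: diff }, // whatever the user pasted into the textarea
];

const output = await generator(messages, { max_new_tokens: 64 });
console.log(output[0].generated_text.at(-1).content); // the assistant's reply
```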

But remember:
this is NOT GPT-5 or Gemini-3, just a tiny model.

And you can tell 😅


🧪 Examples

✔ When I paste proper code → I get a reasonable commit message (maybe far from perfect, but well, that's the effect of just two evenings of coding 😎)


✔ When I paste broken code → I get the prompt-defined response “That's not even a code!!!”


❌ When I asked about the weather in Brussels…
The model happily responded 🤣


Small LLMs be like.

Moral of the story:
👉 In projects like this, the hardest part is the prompt + model selection, not the actual coding.
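
For the curious, the non-code guard is pure prompt engineering. The real prompt lives in the repo; a hypothetical sketch of the idea looks roughly like this:

```js
// Hypothetical system prompt, not the exact one from the repo
const SYSTEM_PROMPT = [
  'You are a commit message generator.',
  'The user pastes a git diff or a code snippet.',
  'Reply with a single short, imperative commit message and nothing else.',
  `If the input is not code, reply exactly: "That's not even a code!!!"`,
].join('\n');

const messages = [
  { role: 'system', content: SYSTEM_PROMPT },
  { role: 'user', content: userInput }, // raw text from the textarea
];
```

As the Brussels example shows, a 0.5B model will still happily ignore the rules sometimes, but spelling out the exact fallback sentence is what makes the broken-code case behave.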


⚙️ Performance — or “why is my UI dying?”

This was a funny discovery.

The model loads only once.
Cool.

But:
👉 inference blocks the main thread
👉 React doesn’t have time to render “Generating…”
👉 The UI looks like nothing is happening

I could have thrown in a hack like:

```js
setTimeout(() => runModel(), 0)
```

but…
👉 Don’t do that. It just masks the real issue.

The real solution:
👉 move the model to a Web Worker

Transformers.js works beautifully in workers.
100% recommended.
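
For reference, here's a minimal sketch of that setup, assuming a bundler such as Vite that supports module workers. The file name `commit-worker.js` and the message shape are mine, not the repo's.

```js
// commit-worker.js: the model lives here, off the main thread
import { pipeline } from '@huggingface/transformers';

let generatorPromise = null;

self.addEventListener('message', async (event) => {
  // Lazily create the pipeline on the first request, then reuse it
  generatorPromise ??= pipeline(
    'text-generation',
    'onnx-community/Qwen2.5-Coder-0.5B-Instruct'
  );
  const generator = await generatorPromise;

  const output = await generator(event.data.messages, { max_new_tokens: 64 });
  self.postMessage({ text: output[0].generated_text.at(-1).content });
});
```

```js
// Main thread (e.g. inside a React component): the UI keeps rendering while the worker thinks
const worker = new Worker(new URL('./commit-worker.js', import.meta.url), {
  type: 'module',
});

worker.onmessage = (e) => setCommitMessage(e.data.text); // e.g. a useState setter
worker.postMessage({ messages }); // kick off generation
```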


🎁 The final result

What I ended up with:

  • lightweight UI in React + Tailwind
  • Transformers.js + ONNX running in the browser
  • a Web Worker hosting the model
  • a prompt that detects non-code inputs
  • a commit message generator that works fully offline once the model is downloaded (!)

Plus of course:
🥚 a little easter egg — I couldn’t resist adding canned commit messages like:

  • “initial commit”
  • “do the needful”
  • “it finally works I guess”

🎓 Lessons learned

  • LLMs in the browser = super fun, but:

    • the models are small
    • prompt engineering matters A LOT
    • model selection is half the battle
  • Web Workers are a MUST if you don’t want UI freezes

  • Transformers.js is genuinely well made

  • You can build full, local AI tools without any backend at all!


💬 What do you think?

  • Have you ever tried running LLMs in the browser?
  • Do you have any favorite ONNX models?
  • Or maybe you want a version of this app with:

    • answer streaming?
    • model selection?
    • multiple commit message suggestions?

Let me know 💜

Repo again:
https://github.com/sylwia-lask/friendly-commit-messages


🦄 That’s it — thanks for reading!

And remember:
commit messages don’t have to be perfect — you just need a cute little local AI to generate them.

Top comments (23)

Adam - The Developer

That's awesomeeeeee, I'm definitely giving this a try

Sylwia Laskowska

Yay!! So happy to hear that!
If you end up experimenting with it, I’d love to see what you build ✨

Pascal CESCATO

Refreshing, I had a good laugh! That said, your approach is very interesting... and shows the limitations of in-browser LLM.

Sylwia Laskowska

Pascal, thank you! 😄💛
And yeah, 100% agree - it’s a fun little toy, but production-ready? Ehhh… not yet.
I still can’t imagine a serious use case for in-browser LLMs at this scale.

BUT running your own model locally on a proper server?
Now that’s tempting 😅🔥
If you’ve got the hardware… the possibilities start looking a lot more realistic.

Pascal CESCATO

You're right — and I feel the same: with a code-oriented model like DeepSeek Coder 2 Lite, things can already get surprisingly usable. But below ~7B parameters, it still feels more like a clever experiment than something you’d rely on.
Running a stronger model locally, though… that’s where it finally starts to make sense. It opens the door to more realistic workflows, even if we're not quite there yet.

Sylwia Laskowska

Absolutely - I feel exactly the same.
Once you hit that ~7B+ range with something code-tuned (DeepSeek Coder Lite, Qwen Coder, etc.), it suddenly goes from “fun experiment” to “okay wait… this is actually usable.”

Below that, yeah… it’s still more of a playground than something you’d trust in a real workflow 😅

But running a stronger model locally?
THAT’S the moment it starts getting exciting.
It really feels like we’re one hardware upgrade away from some genuinely practical, private, on-device dev tools - not quite there yet, but sooo close.

Pascal CESCATO

Exactly — that “almost there” feeling is really striking. The gap between a tiny browser model and a 7B+ local model is huge, and it shows how close we’re getting to genuinely useful on-device tools.
What surprises me the most is how fast that threshold is moving. A year ago, none of this felt realistic — now it’s right at the edge of being practical. It’s a fascinating moment to watch.

Sylwia Laskowska

Pascal, exactly! The speed of this shift is insane - a year ago none of this felt even remotely practical, and now we’re basically at the edge of “yeah, this could actually work.” 🤯

It reminds me of a friend who ran a model fully locally - no network at all -
and the model kept insisting “I’m contacting the server now.” 😂
Sir… you are the server.

This whole on-device LLM era is getting fascinating really fast 😅

Pascal CESCATO

That’s exactly it — the models still behave as if some mysterious backend is doing the work, even when they’re literally running on our GPU. The “Sir… you are the server” moment is priceless 😄

What really surprises me is how quickly local setups are becoming viable. A year ago this felt like sci-fi, and now we’re basically a small hardware bump away from doing all of this comfortably at home. It’s a fun era to watch unfold.

Sylwia Laskowska

Exactly - the “mysterious backend” hallucination always cracks me up 😂
It’s like the models can’t emotionally accept that they’re running on our sad little GPUs. 😅

Benjamin Nguyen

nice!

Sylwia Laskowska

Hehe glad you liked it! 😅✨

Benjamin Nguyen

I follow you on github!

Sylwia Laskowska

Aww thank you, Benjamin! 😄💛
That honestly means a lot - hope you enjoy whatever chaos I build next 😂✨

Benjamin Nguyen

No problem :)

Benjamin Nguyen

You should see some of my earliest repo :)

Sylwia Laskowska

Haha I will!!!

Benjamin Nguyen

:)

Laurina Ayarah

Definitely trying this. I was at Google DevFest last weekend, and someone talked about this too... I'm definitely trying this!

Sylwia Laskowska

Yesss!! That makes me so happy!
Browser LLMs are such a fun rabbit hole - let me know what you build! 🤖✨

Tam ⚛️

Quite elaborate and well detailed 🤝

Sylwia Laskowska

Aww thank you! 😄💛 Happy it landed well!

John Kenny

I tried running LLMs once, failed badly 🤣
