Hi folks! Not long ago I posted an article about the funniest and weirdest commit messages from my projects (If You Think YOUR Commit Messages Are ...
That's awesomeeeeee, I'm definitely giving this a try
Yay!! So happy to hear that!
If you end up experimenting with it, I’d love to see what you build ✨
Refreshing, I had a good laugh! That said, your approach is very interesting... and shows the limitations of in-browser LLMs.
Pascal, thank you! 😄💛
And yeah, 100% agree - it’s a fun little toy, but production-ready? Ehhh… not yet.
I still can’t imagine a serious use case for in-browser LLMs at this scale.
BUT running your own model locally on a proper server?
Now that’s tempting 😅🔥
If you’ve got the hardware… the possibilities start looking a lot more realistic.
You're right — and I feel the same: with a code-oriented model like DeepSeek Coder 2 Lite, things can already get surprisingly usable. But below ~7B parameters, it still feels more like a clever experiment than something you’d rely on.
Running a stronger model locally, though… that’s where it finally starts to make sense. It opens the door to more realistic workflows, even if we're not quite there yet.
Absolutely - I feel exactly the same.
Once you hit that ~7B+ range with something code-tuned (DeepSeek Coder Lite, Qwen Coder, etc.), it suddenly goes from “fun experiment” to “okay wait… this is actually usable.”
Below that, yeah… it’s still more of a playground than something you’d trust in a real workflow 😅
But running a stronger model locally?
THAT’S the moment it starts getting exciting.
It really feels like we’re one hardware upgrade away from some genuinely practical, private, on-device dev tools - not quite there yet, but sooo close.
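If anyone wants to poke at the "stronger model locally" idea, here's a minimal sketch of what I mean, assuming you have Ollama running on its default port and a code-tuned model already pulled (the `deepseek-coder-v2` tag below is just an example placeholder, use whatever you have installed):

```ts
// Minimal sketch, not production code. Assumes Ollama is running locally on its
// default port (11434) and that a code-tuned model has already been pulled;
// the "deepseek-coder-v2" tag is just an example placeholder.
async function askLocalModel(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-coder-v2", // swap in whichever model you have installed
      prompt,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response; // Ollama puts the full completion in the "response" field
}

askLocalModel("Explain what a JavaScript Promise is in two sentences.")
  .then(console.log)
  .catch(console.error);
```

Nothing fancy, it just sends a prompt to the local server and prints the reply, but that's the same loop you'd wrap a real on-device dev tool around.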
Exactly — that “almost there” feeling is really striking. The gap between a tiny browser model and a 7B+ local model is huge, and it shows how close we’re getting to genuinely useful on-device tools.
What surprises me the most is how fast that threshold is moving. A year ago, none of this felt realistic — now it’s right at the edge of being practical. It’s a fascinating moment to watch.
Pascal, exactly! The speed of this shift is insane - a year ago none of this felt even remotely practical, and now we’re basically at the edge of “yeah, this could actually work.” 🤯
It reminds me of a friend who ran a model fully locally - no network at all -
and the model kept insisting “I’m contacting the server now.” 😂
Sir… you are the server.
This whole on-device LLM era is getting fascinating really fast 😅
That’s exactly it — the models still behave as if some mysterious backend is doing the work, even when they’re literally running on our GPU. The “Sir… you are the server” moment is priceless 😄
What really surprises me is how quickly local setups are becoming viable. A year ago this felt like sci-fi, and now we’re basically a small hardware bump away from doing all of this comfortably at home. It’s a fun era to watch unfold.
Exactly - the “mysterious backend” hallucination always cracks me up 😂
It’s like the models can’t emotionally accept that they’re running on our sad little GPUs. 😅
nice!
Hehe glad you liked it! 😅✨
I follow you on GitHub!
Aww thank you, Benjamin! 😄💛
That honestly means a lot - hope you enjoy whatever chaos I build next 😂✨
No problem :)
You should see some of my earliest repos :)
Haha I will!!!
:)
I was at Google DevFest last weekend, and someone talked about this too... I'm definitely trying this!
Yesss!! That makes me so happy!
Browser LLMs are such a fun rabbit hole - let me know what you build! 🤖✨
Quite elaborate and well detailed 🤝
Aww thank you! 😄💛 Happy it landed well!
I tried running LLMs once, failed badly 🤣