Pascal CESCATO

When DEV.to Stats Aren't Enough: Building My Own Memory

One Tuesday morning at 9:14 AM, my six-month-old article got 37 views in 20 minutes. DEV.to's dashboard just said "+37 views". No context. No cause. No pattern.

I wanted to know why. Was it a comment from someone influential? A share somewhere? A title change from weeks ago finally paying off? The platform couldn't tell me. So I decided to steal my own data.

Not to optimize. Not to perform. But to understand how my articles actually live over time.

The Starting Point

I started with devto-analytics-pro by @gnomeman4201 — a solid foundation for collecting basic metrics. But I wanted more: a temporal vision, a memory that could tell the story of an article over time.

First step: store everything in a database. Not once, but every 4 to 6 hours. Automatically.

Why this frequency? Because with daily snapshots, you miss the fine variations. You miss what happens between noon and 6 PM. You smooth everything out. But with this frequency, suddenly, you see the breathing. You see when an article wakes up, when it falls asleep, when something revives it.

What I Discovered by Looking at My Data

The first thing the data taught me is that I didn't know my own articles as well as I thought.

For example, I discovered that a simple like from an active DEV community member can change everything. Not a spectacular reaction, just a like. But it's enough for DEV.to to feature the article. And then the views climb. Not dramatically, but distinctly. Without regular data tracking, this phenomenon would have escaped me completely.

I also saw that a title change can triple visibility. Same content, same tags, same structure. Just a reformulated title. And suddenly, the exposure curve takes off again. It's not a "shock" — it's a lesson. A lesson you can only learn by watching temporal evolution, not by consulting a cumulative total.

Another discovery: some articles I thought were "dead" continue to bring readers six months after publication. Not many, but regularly. Two views per day, three comments per week. A discreet but real life. Without history, I would never have known they were still breathing.

And then there are the strange rhythms. My latest article on Cloud Run: 15 views at once on January 11 at 11 AM. Then nothing for 24 hours. Then 10 views on the 13th at 7 AM. Then silence. Then 12 views on the 15th at 11 AM. Then 10 more views on the 17th at 7 AM. Like jerky breathing. Without this collection every 4 hours, I would have only seen a total: "139 views in a week". With it, I see an article that lives in spurts, waking up at specific moments, then going back to sleep.

What the Tool Revealed About Me

By looking at my own data, I understood things my intuition didn't tell me.

The tool automatically classified my articles into four categories: "Tech Expertise", "Human & Career", "Culture & Agile", and "Free Exploration". I didn't choose these categories — the content analysis made them emerge.

And there, surprise:

Free Exploration    ████████ 7.3% engagement
Culture & Agile     ███ 2.5% engagement
Tech Expertise      ███ 2.6% engagement

My "Free Exploration" articles — the freest, most personal ones — generate almost three times more engagement than technical pieces. These texts only reach 211 people on average, but these 211 people react, comment, discuss.

Respiration, for example: 460 views, 8.7% engagement. An article about burnout, writing, breathing. Nothing technical. Just a personal reflection. And it's the one that creates the most conversation.

My "Culture & Agile" articles bring more visibility: 819 views on average, but only 2.5% engagement. "Actually Agile: Against Performance Theater": 2154 views, 4% engagement. It reaches many people, but the engagement is shallower.

A revealing detail: "Actually Agile" generated 29 comments over 33 days. "Respiration" generated 10 comments over 3 days. The first created a conversation that stretched over time. The second created a concentrated explosion, then silence. Two types of engagement, two different rhythms.

So I wrote three more "Free Exploration" pieces the following month. Not because I was chasing engagement, but because I finally understood what kind of writing created real conversations.
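The engagement percentages above are just interactions divided by views. A minimal sketch of the calculation (the function and the exact reaction/comment split are my assumptions, not necessarily the tool's formula):

```python
def engagement_rate(views: int, reactions: int, comments: int) -> float:
    """Share of views that turned into a reaction or a comment, as a percentage."""
    if views == 0:
        return 0.0
    return round(100 * (reactions + comments) / views, 1)

# "Respiration": 460 views and roughly 40 interactions
print(engagement_rate(460, 30, 10))  # → 8.7
```

The interesting part is never one rate in isolation, but comparing rates across categories of articles.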

Reading Times (Or: Who Really Reads?)

The tool also collects a metric that DEV.to provides but that nobody really looks at: cumulative reading time.

And here, surprising things turn up.

My "Cloud Run Bill" article: on January 17, 25 views, 480 seconds of reading. That's 19 seconds average per view. The article is 5 minutes of reading. Conclusion: most people didn't read it. They scrolled, saw the title, maybe looked at the first paragraph, then left.

But on January 16: 15 views, 729 seconds of reading. That's 48 seconds average. Still not 5 minutes, but significantly more. These 15 people actually read part of the article.

And on January 15: 22 views, 30 seconds of reading. 1.4 seconds per view. These people didn't even open the article. They just saw the title in their feed.
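The per-view figures above are simple divisions of cumulative reading time by views for the day. A sketch (the function is mine, not part of the tool):

```python
def avg_read_seconds(total_seconds: int, views: int) -> float:
    """Average reading time per view; 0 when there were no views."""
    return total_seconds / views if views else 0.0

print(avg_read_seconds(480, 25))  # Jan 17: 19.2 s per view, title-skimmers
print(avg_read_seconds(729, 15))  # Jan 16: 48.6 s per view, partial reads
print(avg_read_seconds(30, 22))   # Jan 15: ~1.4 s per view, feed impressions only
```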

What this metric reveals is that "views" means nothing. Some views are real readings. Others are lightning-fast passes. Others are click errors.

If I only look at total views (139), I think: "Not bad for a week."
If I look at reading times, I think: "In reality, maybe 30 to 40 people actually read the article."

And that completely changes the perspective.

What Tags Reveal (And What They Hide)

The tool also analyzes performance by tag. And again, there are surprises.

The "performance" tag: 5 articles, 3028 total views, 606 views on average. It's my most visible tag.

The "devjournal" tag: 1 single article, 460 views, but 8.7% engagement. It's "Respiration". A unique article, unclassifiable, unlike anything else I've written.

The "scrum" tag: 1 article, 2154 views, 4% engagement. It's "Actually Agile". The most viewed, but not the most engaging.

What these numbers say is that my most personal articles reach fewer people but create more conversation. My most "professional" articles reach more people but engage less deeply.

And that's exactly the kind of lesson you can only draw by crossing multiple dimensions: views, engagement, tags, temporality. A single metric tells nothing. It's the relationship between metrics that makes sense emerge.

Loyal Readers (Or: Who Really Comes Back?)

The tool also analyzes comments. Not just their number, but who comments, on how many articles, with what regularity, over what duration.

And there, we discover something that DEV.to stats don't show: who your real readers are. Not those who pass once and disappear, but those who come back.

In my case:

  • One reader commented on 9 different articles, over a period of 86 days. 38 comments total, 261 characters average. This isn't someone who says "Nice post!" and leaves. This is someone who really reads, thinks, discusses.
  • Three other readers commented on 3 articles each, over periods of 27, 33, and 58 days. They come back. Not systematically, but regularly.

What these numbers reveal is that I have a small core of loyal readers. Not thousands of followers, not tens of thousands of views. But a dozen people who really read what I write, who come back, who engage in conversation.

And that, for me, is worth a thousand times more than 10,000 views from people who skim and move on.

When Do Comments Arrive?

Another discovery: comment timing.

When I publish an article, 39.8% of comments arrive in the first 24 hours. Then there's a secondary peak between 24 and 72 hours (27.8%). Then it slows down: 10.4% between 3 and 7 days, 6.9% between 1 and 4 weeks.

But — and this is where it gets interesting — 15% of comments arrive more than a month after publication.

That means my articles continue to create conversations long after they come out. Not massively, but constantly. A comment here, another there, three weeks later, two months later. People who stumble upon an old text, read it, have something to say.

Without this temporal analysis of comments, I would never have known that my articles had this long, discreet life.
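Bucketing comments by their delay after publication is the whole trick here. A sketch mirroring the breakdown above (the bucket labels and boundaries are my reading of the percentages, not the tool's exact code):

```python
from datetime import datetime, timedelta

# Bucket boundaries matching the breakdown above (labels are mine)
BUCKETS = [
    (timedelta(hours=24), "first 24h"),
    (timedelta(hours=72), "24-72h"),
    (timedelta(days=7), "3-7 days"),
    (timedelta(days=28), "1-4 weeks"),
]

def timing_bucket(published: datetime, commented: datetime) -> str:
    """Classify a comment by how long after publication it arrived."""
    delay = commented - published
    for limit, label in BUCKETS:
        if delay <= limit:
            return label
    return "1 month+"

pub = datetime(2026, 1, 10, 18, 0)
print(timing_bucket(pub, datetime(2026, 1, 11, 9, 0)))  # → first 24h
print(timing_bucket(pub, datetime(2026, 3, 1, 9, 0)))   # → 1 month+
```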

(Note in passing: the tool also detects spam. "Lost your crypto? Don't panic!" on an article about CVs. Sometimes, data also tells the absurdities of the web.)

Real Analytics Isn't About Counting. It's About Storytelling.

When you collect data regularly, you no longer see totals. You see trajectories. Rhythms. Moments when something happens.

Let's take a concrete example. My article "How I Cut My Cloud Run Bill by 96%". If I only look at DEV.to stats, I see: "139 views in 7 days". That's all.

But if I look at my timeline collected every 4 hours:

January 10, 7 PM: 14 views (article published 1 hour before)
January 11, 11 AM: +15 views at once (peak)
January 11, 3 PM: +1 view
January 11-12: +1 to +5 views every 4 hours (slow growth)
January 13, 7 AM: +10 views (second peak)
January 13-14: complete stagnation (0 views for 24h)
January 15, 7 AM: +10 views (awakening)
January 15, 11 AM: +12 views (peak)
January 15-16: back to calm (+1 to +2 views)
January 17: +10 views in morning, +10 views at 11 AM, +5 views at 3 PM (last surge)
January 18: complete silence

You see the difference?

Raw data says: "139 views".

The timeline tells: "This article lived in waves. A first peak at publication, then slow growth, then three abrupt awakenings on the 13th, 15th, and 17th of January, always in the morning. Then, silence. The article fell asleep."

And now, I can ask real questions: why these morning peaks? Did someone share the article in a morning newsletter? Does DEV.to have a recommendation logic that works in waves?

Without this fine memory, I would only see a number. With it, I see a story.
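A timeline like this falls out of the snapshot history almost for free: subtract each snapshot's view count from the previous one. A sketch against the kind of schema the tool plausibly uses (table and column names here are my assumptions):

```python
import sqlite3

def view_deltas(con: sqlite3.Connection, article_id: int) -> list:
    """(timestamp, views gained since previous snapshot) for one article."""
    rows = con.execute(
        "SELECT collected_at, views FROM snapshots "
        "WHERE article_id = ? ORDER BY collected_at",
        (article_id,),
    ).fetchall()
    return [(rows[i][0], rows[i][1] - rows[i - 1][1]) for i in range(1, len(rows))]

# Tiny in-memory demo with three synthetic snapshots
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE snapshots (article_id INTEGER, collected_at TEXT, views INTEGER)")
con.executemany(
    "INSERT INTO snapshots VALUES (?, ?, ?)",
    [(1, "2026-01-10T19:00", 14), (1, "2026-01-11T11:00", 29), (1, "2026-01-11T15:00", 30)],
)
print(view_deltas(con, 1))  # → [('2026-01-11T11:00', 15), ('2026-01-11T15:00', 1)]
```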

What I Did With These Discoveries

Nothing spectacular. I didn't change my way of writing. I didn't set up a content strategy. I didn't start writing for the numbers.

But I understood what resonates. I understood that my most personal texts create more conversations, even if they reach fewer people. I understood that certain technical subjects continue to be useful long after their publication. I understood that changing a title can revive an article, but it's not a recipe — it's a possibility.

And above all, I understood that my articles live in time. Not when I publish them. But in their trajectory.

The Uncomfortable Truths

Building this tool also forced me to face things I didn't want to see.

Some articles I was proud of are genuinely dead. Not sleeping. Dead. Zero views for weeks. No comments. No reactions. Just silence. I kept thinking "maybe they need time to find their audience". The data said: no, they're just not interesting to anyone.

I also discovered that some of my "high view" articles were inflated by my own bugs. One article showed 2500 hours of reading time over a week. Impressive, right? Except when you do the math: that's 104 days of continuous reading compressed into 7 days. Impossible. Turned out to be a SQL query error — I was doing a SUM on a field that already contained cumulative totals. The real reading time was closer to 57 hours. Still good, but not magical. And embarrassing: I was impressed by my own coding mistake.

And the hardest truth: most people don't read. They skim. They see the title in their feed, click, scroll for 3 seconds, leave. The "view" counts as engagement for DEV.to's algorithm, but it's not a real read. It's just... noise.

Without this tool, I could have lived in comfortable illusions. With it, I had to face reality: writing into the void is real, inflated metrics are real (even when you inflate them yourself by accident), and most "engagement" is shallow.

But strangely, that made me feel better. Because now I know which articles genuinely connect with people. And those few real connections matter more than any vanity metric.

Why Other Authors Might Want This

You don't need it if you write from time to time, without seeking to understand how your texts are received. DEV.to stats are more than enough.

But if you want to know:

  • Why certain articles "take off" and others don't
  • If a title change really had an effect
  • How your texts live over time
  • If your old articles continue to bring readers
  • What topics really trigger conversations

Then you need a memory. A tool that observes, not a tool that counts.

Why DEV.to Can't Do This (And Why That's Normal)

DEV.to is a platform, not an analytics tool. Its role is to give you a quick overview: how many views, how many reactions, how many comments. That's already a lot.

But a platform can't indefinitely store the detailed history of every author. It would be an enormous burden, for marginal use. Most authors don't need to know exactly what time an article took off on March 14th.

I do.

Not to "perform". Not to optimize. But because I want to understand what's happening. Because I am — and remain — an observer. Someone who likes to watch how things evolve over time, how an article lives, how a conversation develops.

That's why I built this tool. For me first. To understand my own texts, my own trajectories.

My Stance: Observer, Not Strategist

I don't optimize. I don't chase metrics. I don't compare myself to others.

I observe. I watch how my words live over time. I see what creates real conversations versus what just accumulates views.

This tool is an observation instrument. Not a strategy tool. Not a growth hack. Just a way to see what happens when you write honestly and let time reveal the patterns.

Conclusion: What Numbers Don't Say

DEV.to shows the data. This tool shows the story.

The data says: "This article has 139 views."
The story says: "This article lived in waves. A first peak at publication, then three abrupt awakenings on January 13, 15, and 17, always in the morning, then silence."

The data says: "Your 'Agile' articles have 819 views on average."
The story says: "Your 'Agile' articles reach wide but engage little (2.5%). Your 'Free Exploration' articles reach 211 people but engage three times more (7.3%)."

The data says: "Respiration has 460 views."
The story says: "Respiration has 8.7% engagement — your best ratio — because it's the text where you opened up the most."

And sometimes, the story is much more interesting than the total views.

If you too want to see the secret life of your words, steal your data and listen. The rest is just noise.


Technical Annex: How It Works

The tool runs on my machine automatically every 4 hours via cron, calling devto_tracker.py --collect.

At each collection:

  1. Query DEV.to's API for all articles and metrics
  2. Store a complete snapshot in SQLite: views, reactions, comments, reading time
  3. Detect changes: modified titles, added tags, deleted articles
  4. Record events: "Staff Pick" detected, view spikes >3x average
  5. Collect and analyze comments (length, timing, author)
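The core of steps 1 and 2 is small. A sketch (the endpoint is DEV.to's authenticated articles API; the schema and function names are my simplification, and the real tool stores more fields):

```python
import json
import sqlite3
import urllib.request

API = "https://dev.to/api/articles/me/all"  # DEV.to authenticated articles endpoint

SCHEMA = """CREATE TABLE IF NOT EXISTS snapshots (
    article_id   INTEGER,
    collected_at TEXT DEFAULT CURRENT_TIMESTAMP,
    views        INTEGER,
    reactions    INTEGER,
    comments     INTEGER
)"""

def fetch_articles(api_key: str) -> list:
    """Step 1: pull the author's articles with their current counters."""
    req = urllib.request.Request(API + "?per_page=100", headers={"api-key": api_key})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def store_snapshot(con: sqlite3.Connection, articles: list) -> int:
    """Step 2: append one row per article; history accumulates across runs."""
    con.execute(SCHEMA)
    for a in articles:
        con.execute(
            "INSERT INTO snapshots (article_id, views, reactions, comments) "
            "VALUES (?, ?, ?, ?)",
            (a["id"], a["page_views_count"],
             a["public_reactions_count"], a["comments_count"]),
        )
    con.commit()
    return len(articles)
```

Run every few hours, an append-only table like this is the entire "memory": each row is one heartbeat.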

SQLite is the heart of the system. Not MongoDB, not PostgreSQL — just SQLite. For 20-30 articles with snapshots every 4 hours, it's more than enough. A single file, easy to back up, easy to query.

Analysis scripts:

dashboard.py — Overview: most viewed articles, engagement rate, performance by tag, author DNA

comment_analyzer.py --full-report — Comment analysis: who comments, when, on how many articles, with what depth

traffic_analytics.py --article ID — Precise timeline: views per day, reading time, reactions

seismograph.py — Correlation detection: title change → view spike, influential comment → exposure boost

Each script queries the same database with a different question. Simple Python. No complicated frameworks. Just scripts that read SQLite and display results in the terminal.
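As an example of the "different question" idea, a spike detector in the seismograph's spirit might look like this (a sketch of the >3x-average rule mentioned above, not the actual seismograph.py logic):

```python
def view_spikes(deltas: list, factor: float = 3.0) -> list:
    """Indices of collection intervals whose view delta exceeds factor x the mean."""
    positive = [d for d in deltas if d > 0]
    if not positive:
        return []
    mean = sum(positive) / len(positive)
    return [i for i, d in enumerate(deltas) if d > factor * mean]

# Quiet drift with one surge: only the surge is flagged
print(view_spikes([1, 2, 30, 1, 0, 1]))  # → [2]
```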

Installation (3 Steps)

The code is on GitHub: github.com/pcescato/devto_stats

# 1. Clone
git clone https://github.com/pcescato/devto_stats.git
cd devto_stats

# 2. Install dependencies
pip install requests python-dotenv

# 3. Configure API key
cp .env.example .env
# Edit .env, add your DEV.to API key
# (get it at https://dev.to/settings/extensions)

First collection:

# Initialize database
python3 devto_tracker.py --init

# Collect first data
python3 devto_tracker.py --collect

Automate (recommended):

chmod +x setup_automation.sh
./setup_automation.sh

This script will create a cron wrapper and offer different collection frequencies (2x/day, 4x/day, 6x/day).
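If you'd rather wire cron by hand instead of running the setup script, a 4-hourly entry might look like this (the paths are illustrative, adjust to your install):

```shell
# crontab -e: collect every 4 hours, append output to a log
0 */4 * * * cd /path/to/devto_stats && python3 devto_tracker.py --collect >> collect.log 2>&1
```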

After a few days: trends emerge.
After a few weeks: complete timelines.
After a few months: a real memory of your texts.

It's not a fancy dashboard. It's not a Web interface with animated graphics. It's a command-line tool for those who want to understand, not impress.


🚀 What Happened Next: From Prototype to Production

Update (February 2026): Two weeks after building this SQLite prototype, I took on a new challenge: migrate the entire system to production-grade cloud infrastructure using GitHub Copilot CLI as an execution engine.

The approach: Treat AI as an "execution engine" — I defined architectural constraints and business logic, Copilot handled implementation.

The result: A live platform deployed in 30 hours of actual work:
✅ PostgreSQL 18 with monthly partitioning (7,000+ records)
✅ Authentik SSO with role-based access control
✅ Real-time sync workers with advisory locking
✅ Production-grade security (forward auth proxy)

The seismograph you read about above? It's now live, tracking article "pulses" in real time.

👉 Read the full migration story: From Local SQLite Scripts to a Cloud Platform with GitHub Copilot CLI

Try the live demo:
🌐 Dashboard: https://streamlit.weeklydigest.me
🔐 Credentials: judge / Github~Challenge/2k26


Top comments (42)

Sylwia Laskowska

But wow, this is a great tool!
As for those articles people scroll through instead of actually reading: I always think that if something is genuinely read by 30–40 people, that’s already a successful article. That’s a full room at a meetup or a workshop, after all!

Pascal CESCATO

Exactly! That's the shift this tool gave me. Before, I'd see "139 views" and think "meh, not great". Now I see "maybe 35 actual reads" and think "that's a packed room of people who chose to spend 5 minutes with my words".

The room analogy is perfect. Would I rather speak to 1000 people scrolling their phones, or 35 people genuinely listening? The answer became obvious once I could see the difference.

Thanks for getting it :)

Cesar Aguirre

"Conclusion: most people didn't read it. They scrolled, saw the title, maybe looked at the first paragraph, then left."

That's not only on dev.to. That's pretty much everywhere online. I recently finished the book Smart Brevity, and that was its #1 lesson: adapt to how people read by writing shorter, clearer pieces.

Pascal CESCATO

Interesting, but my data shows the opposite. My longest articles (>10 min) actually get slightly more engagement than my shortest ones. And my most personal pieces (6-10 min) generate 7.3% engagement vs 2.6% for short technical posts.

Most people skim, yes. But the people who read longer pieces engage much more deeply. I'd rather write for 30-40 real readers than optimize for 1000 skimmers.

Smart brevity works for corporate comms. For conversation-driven writing? Depth beats brevity.

Ashley Childress

This is really awesome! I built a small static mirror in GitHub if you or anyone else is interested in boosting AI searchability on dev posts. I've found it helps improve traffic from outside sources. I'm curious to put the two together and see if it's increasing readability at all. Thanks!

Pascal CESCATO

Thanks for reaching out, Ashley! Your tool for boosting AI searchability sounds like a great addition. Since I’m currently focused on tracking the data Dev.to doesn't provide, I’d be interested in seeing if we can combine our approaches. Adding a lightweight tracking component to your static mirror could be the perfect way to actually measure that boost in readability you mentioned.

Pascal Thormeier

Thank you so much for sharing this thing! As a data geek myself, I can't wait to try this, seems like I have weekend plans now.

Need to play around a bit with a RaspberryPi, maybe I'll even build a small dashboard with this. I could imagine that certain companies posting on here would also love this as a web-accessible dashboard and insight tool.

Pascal CESCATO

Perfect use case for a Raspberry Pi! SQLite + Python + cron runs great on minimal hardware.

If you build a dashboard, share it — I've kept mine terminal-based on purpose, but I'm curious what others will do with it.

The real value is for individual authors who want to understand their writing, not companies optimizing metrics. But open source means people can take it wherever they want.

alptekin I.

hi, thanks for this interesting post and the tool.
The results are quite interesting, indeed.

Pascal CESCATO

Thanks! The uncomfortable truths were the most valuable part — turns out observation beats optimization.

Cesar Aguirre

Thanks for sharing Pascal. You have the numbers to prove my assumption that a good headline is where 80% of the work lies when writing.

Pascal CESCATO

Interesting take! The data actually shows headlines get attention, but content drives engagement. I've had great headlines with shallow engagement (clicks but no reads) and mediocre headlines with deep conversations.

Headline opens the door. Content keeps people in the room. Both matter, different reasons.

Viorel PETCU

Thank you for doing this, I can now cross it off my list. Ever since I did my own version of this, some years ago, I always wanted to bring the stats into the Terminal (where I spend a lot of time). I'll give this a try, but if I don't like it I'll make a "competing product" 😁

Pascal CESCATO

Thanks! I'm curious what "your own version" was — the linked article is about app monitoring (Sentry/Prometheus), which is a different beast than DEV.to analytics. Or did you mean a different project?

Either way, build a "competing product" if this doesn't fit your needs — that's exactly how good tools happen.

Viorel PETCU

True, I had a different target at that time, but your approach got me curious. Also cool that you take the time to interact with comments. 👍

Nadeem Zia

Good work

Sophia Devy

Such a fascinating journey into understanding your articles beyond the surface-level stats. The idea of tracking temporal data and seeing the story behind each article’s life is brilliant. It’s not just about views, but about recognizing engagement rhythms, subtle shifts, and long-tail conversations. This tool is a great reminder that real insights come from patterns, not just numbers.

Pascal CESCATO

Thanks! The shift from "numbers" to "patterns" changed everything.

The uncomfortable truth: most engagement is shallow. But the small percentage that's real creates the actual value — loyal readers who think about what you wrote and come back.

The tool just made those patterns visible through the noise of totals.

Richard Pascoe

Fantastic tool with an in-depth post to explain how and why it came about - great stuff!

I'm in a lucky situation at the moment that the majority of my posts are about my learning journey, so have extra value to me. If any one else reads them and gets inspired - so much the better! As I progress though, I'm sure there will be a shift in my writing, so this gave me some great insights into what to expect and why.

Brilliant work, Pascal - well done!

Pascal CESCATO

Thank you! And I love your approach — writing for your learning journey first means you're already writing for the right reasons.

What's interesting is that when your writing shifts (and it will), this kind of tool helps you see how it shifts, not just that it shifted. You'll see the moment when your "learning journey" posts start attracting different readers, or when a more polished piece gets more views but less real engagement.

It's not about optimizing. It's about staying aware of what you're actually creating, even as it evolves.

Good luck with the journey — and if you end up using the tool, I'd be curious to hear what patterns emerge for you!

Richard Pascoe

Thanks for the lovely reply, Pascal. When the time comes, I will be giving your tool a look and will get back to you!

Pascal CESCATO

Looking forward to hearing about it! The learning journey → established voice shift is always interesting to observe.

Some comments may only be visible to logged-in visitors. Sign in to view all comments. Some comments have been hidden by the post's author - find out more