Napoleon's Life Lessons on Hacker News
On Saturday, May 31, 2025 11:06:36 PM UTC, I submitted a link to the How often does my IP address change post on this blog to the famed “Hacker News” web site.
On Saturday, July 26, 2025 2:25:36 PM UTC, I submitted another link, this time to the 10 Lessons from Napoleon post on this blog to the same web site.
At the end of May, Hacker News had just featured a post from Matt Sayar, someone in Colorado Springs with CenturyLink fiber who figured out how often his home's IP address changed. His post got 3 upvotes and 6 comments; mine got 3 upvotes and no comments.
Hacker News often features Life Lessons From Napoleon posts, so I thought I would try my luck with a meta take on that category of post. It didn't do so well: two upvotes, and one comment from a disappointed reader who expected more original content.

Here’s the number of HTTP GET requests for How often does my home’s IP address change. The first GET arrived at 2025-05-31T23:06:38Z, 2 seconds after “Hacker News” showed it at 2025-05-31T23:06:36Z. That GET had a user agent of “Drakma/2.0.10”, a Lisp HTTP client said to run on FreeBSD. I lost a log file, so I only have about 54 minutes of GET requests, but my web server still saw 338 GET requests.
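The per-minute counts behind a chart like this can be pulled straight from the log with awk. This is only a sketch: it assumes an Apache-style combined log where the timestamp is field 4, and it runs on made-up sample lines standing in for httpd.log.

```shell
#!/usr/bin/env bash
# Tally GET requests per minute from an Apache-style combined log.
# These sample lines are invented; the real input is httpd.log.
cat > sample.log <<'EOF'
198.51.100.7 - - [31/May/2025:23:06:38 +0000] "GET /posts/ip-address/ HTTP/1.1" 200 5120 "-" "Drakma/2.0.10"
203.0.113.9 - - [31/May/2025:23:06:59 +0000] "GET /posts/ip-address/ HTTP/1.1" 200 5120 "-" "Mozilla/5.0"
203.0.113.9 - - [31/May/2025:23:07:02 +0000] "GET /css/style.css HTTP/1.1" 200 812 "-" "Mozilla/5.0"
EOF
awk '$6 == "\"GET" {
    # Field 4 looks like [31/May/2025:23:06:38 -- keep day through minute.
    minute = substr($4, 2, 17)
    count[minute]++
}
END { for (m in count) print count[m], m }' sample.log | sort -k2
```

On the sample above this prints 2 requests for the 23:06 minute and 1 for 23:07.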

Above, the number of HTTP GET requests for 10 Lessons from Napoleon, for each minute from 14:25Z to 15:48Z. The first request after “Hacker News” showed the link was again from “Drakma/2.0.10”, at 14:25:42Z, 6 short seconds after the link was posted. The first GET with a news.ycombinator.com referrer rolled in at 14:30:34Z, almost exactly 5 minutes after the link showed up on “Hacker News”. Between 14:25:42 and 14:30:34, 62 GET requests came in, a few of them human (see below), but many with user agent strings like “drakma”, “undici”, “Embed PHP library”, “node”, “Ruby”, and “Slackbot-LinkExpanding” that openly indicate a program doing automatic GET requests. All told, there were 318 GET requests for my Napoleon’s Life Lessons page on 2025-07-26.
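A quick way to see which user agents dominate a window like that: split each combined-log line on double quotes, which makes the user agent the next-to-last field. The sample lines below are invented.

```shell
#!/usr/bin/env bash
# Count requests per user agent string. With -F'"' the user agent
# is the next-to-last field of a combined log line.
cat > agents.log <<'EOF'
198.51.100.7 - - [26/Jul/2025:14:25:42 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "Drakma/2.0.10"
192.0.2.5 - - [26/Jul/2025:14:26:01 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "undici"
192.0.2.6 - - [26/Jul/2025:14:26:30 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "undici"
EOF
awk -F'"' '{ ua[$(NF-1)]++ } END { for (u in ua) print ua[u], u }' agents.log | sort -rn
```

The most frequent agents come out on top; on the sample, “undici” with 2 requests.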
Winnowing out humans’ requests
I’ll use the 10 Lessons from Napoleon post as an example. I did similar things, and got similar results, on the How often post.
I decided that a human clicking the “Hacker News” link would cause their browser to do a GET of /posts/napoleans-lessons/, and then an immediate GET of /css/style.css, which the HTML references in its <head> section.
#!/usr/bin/env bash
# Keep GETs of the post itself, plus GETs of the style sheet
# whose referrer is the post.
grep -E 'GET (/posts/napoleans-lessons/|/css/style\.css.*napoleans-lessons)' httpd.log > gets.log
# A post GET followed by a style sheet GET from the same IP looks human.
awk '
/GET \/posts\/napoleans-lessons\// { ip = $1 }
/style\.css/                       { print ip; ip = "" }
' gets.log | sed '/^$/d' | sort | uniq > human.ips
# Pull the full log lines for those human IP addresses.
grep 'GET /posts/napoleans-lessons/' httpd.log | grep -F -f human.ips > human.gets
Note the regular expression I used to print lines representing HTTP GET of the Napoleon’s Lessons post and the HTTP GET of the CSS style sheet for my blog’s theme. I require a GET of the CSS style sheet to have a referrer of Napoleon’s Lessons. This might weed out a few humans viewing the web page, but it also removes obviously inauthentic behavior. Something running at 204.44.192.31 did a GET every 10 minutes from 2025-07-26T14:51:01 to 2025-07-27T04:11:48. Requiring a referrer on CSS style sheets eliminated that weirdo.
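One way to catch a metronomic client like that one is to compute the gap between successive requests from each IP address. This sketch does same-day arithmetic only, to stay short, and runs on made-up lines imitating the 10-minute visitor.

```shell
#!/usr/bin/env bash
# Print the gap, in seconds, between successive requests from the
# same IP. A client that always shows the same gap is on a timer.
# Sample lines imitate the every-10-minutes visitor.
cat > clock.log <<'EOF'
204.44.192.31 - - [26/Jul/2025:14:51:01 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "Mozilla/5.0"
204.44.192.31 - - [26/Jul/2025:15:01:01 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "Mozilla/5.0"
204.44.192.31 - - [26/Jul/2025:15:11:01 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "Mozilla/5.0"
EOF
awk '{
    # Field 4 looks like [26/Jul/2025:14:51:01 -- grab HH, MM, SS.
    split(substr($4, 2), t, /[\/:]/)
    secs = t[4] * 3600 + t[5] * 60 + t[6]
    if ($1 in last) print $1, secs - last[$1]
    last[$1] = secs
}' clock.log
```

Every gap the sample prints is exactly 600 seconds, which is the tell.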
I found 43 human GET requests under these criteria. There’s a reasonable mix of user agents: 8 Linux, 5 Android, 7 iPhone, 9 Mac, 12 Windows. “Hacker News” attracts a fairly sophisticated crowd, so 15-20% Linux seems about right.
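The platform tally is a matter of grepping for the substrings browsers put in their user agent strings. A sketch on invented sample lines (the real input would be human.gets from the script above); note that Android user agents also contain “Linux”, so Linux is counted as Linux-but-not-Android.

```shell
#!/usr/bin/env bash
# Rough OS tally from browser user agent strings.
# Sample lines stand in for human.gets.
cat > human.gets <<'EOF'
192.0.2.1 - - [26/Jul/2025:14:31:00 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0"
192.0.2.2 - - [26/Jul/2025:14:32:00 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "Mozilla/5.0 (Linux; Android 14) Chrome/126.0 Mobile"
192.0.2.3 - - [26/Jul/2025:14:33:00 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "Mozilla/5.0 (iPhone; CPU iPhone OS 17_5 like Mac OS X) Safari/604.1"
192.0.2.4 - - [26/Jul/2025:14:34:00 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) Safari/605.1.15"
192.0.2.5 - - [26/Jul/2025:14:35:00 +0000] "GET /posts/napoleans-lessons/ HTTP/1.1" 200 4096 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/126.0"
EOF
# Android UAs also say "Linux", so exclude them from the Linux count.
printf 'Linux\t%s\n'   "$(grep 'Linux' human.gets | grep -cv 'Android')"
printf 'Android\t%s\n' "$(grep -c 'Android' human.gets)"
printf 'iPhone\t%s\n'  "$(grep -c 'iPhone' human.gets)"
printf 'Mac\t%s\n'     "$(grep -c 'Macintosh' human.gets)"
printf 'Windows\t%s\n' "$(grep -c 'Windows' human.gets)"
```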
Non-human requests
Perhaps I had some luck with my mostly-ignored “Hacker News” posts. I was able to see a surge in traffic, and I was able to determine that only about 40-50 people clicked the “Hacker News” links to look at my articles. I could see a shaggy pattern of non-human users.
I believe that at least 80% of the requests came from programs of some sort for both “Hacker News” posts. I’ve read a lot of Apache combined log files over the years, and this is the first time I’ve seen a lot of these bot user agents. A comprehensive list of what’s weird or new would be tedious, but here are some highlights:
- Drakma, a “Common Lisp HTTP client”, hit my web server less than 10 seconds after “Hacker News” posted a link.
- 34 or 35 different Mastodon instances requested each URL.
- LinkedIn asks for the URL.
- A variety of RSS readers show up.
- “AI” company scrapers show up: “ChatGPT-User”, “GPTBot”, “MyAINewsBo”.
- A “SlackBot”, a “DiscordBot”, an IRC related bot, and a “TelegramBot” appear. There must be various chat channels dedicated to “Hacker News” feeds.
- Pinterest looks like it has a bot watching “Hacker News” as well.
- 48 of the non-human request IP addresses were common between the two time spans.
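An overlap count like that is easy to reproduce with comm(1), which reports lines common to two sorted files. The file names and addresses below are made up.

```shell
#!/usr/bin/env bash
# Count IP addresses that appear in both events' bot-IP lists.
# comm needs sorted input; these lists are invented examples.
printf '%s\n' 1.1.1.1 2.2.2.2 3.3.3.3 | sort > may.ips
printf '%s\n' 2.2.2.2 3.3.3.3 4.4.4.4 | sort > july.ips
comm -12 may.ips july.ips | wc -l
```

`comm -12` suppresses the lines unique to each file, leaving only the intersection, so the sample prints 2.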
Maybe the worst thing about the bots is that the majority of their requests carry user agent strings that look just like regular, human-driven browser user agent strings. Since whoever runs these bots tries to conceal their activity, I have to assume there’s at least some bad intent.
How did Fathom Analytics do?
Although I have closed my Fathom Analytics account, I downloaded all the raw data Fathom provides. Fathom gives you a CSV file called “Pages.csv” that has an hour-granularity timestamp, a hostname, a path, and a couple of counts, “views” and “unique views”. Fathom recorded a total of 36 views for the “IP Address Changes” post, and 31 views for the “Napoleon’s Life Lessons” post. These numbers are a good deal lower than my count of human users. I’m not surprised that the bots don’t show up much: Fathom’s numbers depend on an HTTP client executing a small JavaScript program. What I am surprised by is human-initiated requests, from regular old browsers, not showing up. Am I seeing humans who read “Hacker News” blocking a lot of Fathom’s analytics, one way or another? The other alternative is that Fathom puts more criteria on what they record as “views” than mere “HTTP GET for some URL”. Do they do some bot detection?
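For what it’s worth, totting up Fathom’s per-hour counts for one path is a one-liner. I’m guessing at the column order of Pages.csv here (timestamp, hostname, pathname, views, unique views), so check the header row of the real export; the data rows below are invented.

```shell
#!/usr/bin/env bash
# Sum per-hour view counts for a single path in a Fathom-style CSV.
# Column order is an assumption -- verify against the real header.
cat > Pages.csv <<'EOF'
timestamp,hostname,pathname,views,uniques
2025-07-26 14:00:00,example.com,/posts/napoleans-lessons/,12,10
2025-07-26 15:00:00,example.com,/posts/napoleans-lessons/,19,17
2025-07-26 15:00:00,example.com,/posts/other/,3,3
EOF
awk -F',' '$3 == "/posts/napoleans-lessons/" { v += $4 } END { print v }' Pages.csv
```

The sample sums to 31 views for the Napoleon path, ignoring the other page.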