How Usenet Backbones Work

Most people pick a Usenet provider based on price and retention numbers. But the backbone infrastructure behind the provider is what actually determines your experience. This is a look under the hood at how that infrastructure works.


Wat is een backbone?

A Usenet backbone is the infrastructure that receives, stores, indexes, and serves Usenet articles. It's what sits between the global Usenet network and the end user. When you connect to your provider and download an article, that article is sitting on a backbone somewhere.

Some providers operate their own backbone. They own the servers, run the storage, manage the peering connections, and control the entire article lifecycle from ingestion to delivery. These are called Tier-1 providers.

Other providers don't own any backbone infrastructure. They lease access to a Tier-1 operator's servers and sell that access under their own brand name. These are resellers. The reseller's website, pricing, and support are their own, but the articles you download come from someone else's hardware.

The distinction matters because providers on the same backbone carry the same articles. If an article gets removed from one, it's gone from all of them. If an article was never ingested by that backbone in the first place, no reseller on it will have it. Our independence page covers the business implications. This page covers the technical ones.

The Life of a Usenet Article

Understanding backbones starts with understanding what happens to an article from the moment it's posted to the moment you download it.

Step 1: Posting

A user composes an article (text post or encoded binary file) in their newsreader and hits "post." The newsreader connects to the user's provider via NNTP and sends the article using the POST or IHAVE command. The provider's front-end server receives it, assigns a Message-ID if one wasn't provided, and stores it locally.
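The Message-ID assignment mentioned above can be sketched in a few lines. This is an illustrative toy, not NewsDemon's actual front-end code; the function name, field names, and the `news.example.com` host are all assumptions.

```python
import time
import uuid

def accept_post(headers: dict, host: str = "news.example.com") -> dict:
    """Sketch of a front-end accepting a POST: assign a Message-ID
    if the client didn't supply one (hypothetical helper)."""
    if "Message-ID" not in headers:
        # Message-IDs must be globally unique; a random token plus
        # the accepting host's name is one common construction.
        headers["Message-ID"] = f"<{uuid.uuid4().hex}@{host}>"
    # Stamp a Date header if the client omitted that too.
    headers.setdefault(
        "Date", time.strftime("%a, %d %b %Y %H:%M:%S +0000", time.gmtime())
    )
    return headers
```

A client-supplied Message-ID is kept as-is; only missing ones are generated, which matters because the Message-ID is the deduplication key used throughout the rest of the article's life.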

Step 2: Ingestion and Indexing

The backbone's ingestion system takes the new article and writes it to spool storage. The article gets indexed by its Message-ID, newsgroup(s), date, and headers. This index is what allows your newsreader to search and retrieve articles later. The speed and quality of this indexing system is one of the things that separates a good backbone from a mediocre one.
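A minimal sketch of such an index, keyed the way the text describes: by Message-ID, with a secondary lookup per newsgroup. Real spool software (Diablo, Cyclone, or proprietary systems) is far more involved; the class and method names here are made up for illustration.

```python
from collections import defaultdict

class SpoolIndex:
    """Toy index over spool storage: Message-ID -> location,
    plus a per-newsgroup list for browsing and search."""

    def __init__(self):
        self.by_id = {}                    # Message-ID -> spool location
        self.by_group = defaultdict(list)  # newsgroup -> [Message-ID, ...]

    def ingest(self, message_id: str, groups: list, location: str) -> bool:
        # The Message-ID doubles as the dedup key: refuse repeats.
        if message_id in self.by_id:
            return False
        self.by_id[message_id] = location
        # Crossposted articles are indexed under every group listed.
        for g in groups:
            self.by_group[g].append(message_id)
        return True
```

The same index answers both "do I already have this article?" (during peering) and "where on the spool is it?" (during retrieval), which is why indexing quality shapes backbone performance so directly.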

Step 3: Peering (Propagation)

The backbone offers the new article to its peering partners. Each partner checks its own index to see if it already has an article with that Message-ID. If it doesn't, it accepts the article and stores it on its own spool. If it already has it, it rejects the offer. This exchange happens continuously, thousands of times per second, between every pair of peering backbones in the network. Our peering deep-dive covers the technical details of how this negotiation works at the protocol level.

Step 4: Retrieval

When you search for or download an article, your newsreader sends a request to your provider's front-end server. The front-end looks up the article in the index, locates it on the spool storage, and streams it back to you over your NNTP connection. If SSL is enabled (and it should be), this entire exchange is encrypted.
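The retrieval path reduces to two lookups: index first, then spool. A hedged sketch, with made-up data structures standing in for the real index and storage layers (the `220`/`430` numbers are standard NNTP response codes):

```python
def serve_article(index: dict, spool: dict, message_id: str) -> str:
    """Sketch of a front-end handling an article request.
    `index` maps Message-ID -> spool key; `spool` maps key -> body.
    Both are stand-ins for the real distributed systems."""
    location = index.get(message_id)
    if location is None:
        # NNTP's "no such article" response.
        return "430 no such article"
    # In reality this is streamed over the (ideally SSL-wrapped)
    # NNTP connection rather than returned as one string.
    return "220 article follows\r\n" + spool[location]
```
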

Step 5: Expiration

At some point, an article ages past the backbone's retention window. The spool software marks it for expiration and eventually frees the storage. At NewsDemon, that window currently stands at 5,695+ days and grows by one day every day. Our AI-driven filtering system also removes junk, spam, and sporge before it ever reaches the spool, which is how we keep the archive clean and storage efficient.
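Expiration itself is conceptually simple: anything posted before the cutoff date is eligible for removal. A sketch under the assumption that the index records each article's posting date:

```python
import datetime

RETENTION_DAYS = 5695  # the current window mentioned above; it grows daily

def expire(articles: dict, today: datetime.date) -> dict:
    """Return the articles still inside the retention window.
    `articles` maps Message-ID -> posting date (illustrative shape)."""
    cutoff = today - datetime.timedelta(days=RETENTION_DAYS)
    return {mid: posted for mid, posted in articles.items() if posted >= cutoff}
```

Real spool software batches this work and frees storage lazily rather than scanning everything, but the cutoff logic is the same.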

Front-End vs. Back-End Servers

A Usenet backbone is not a single machine. It's a distributed system with distinct layers, each handling different parts of the workload.

Front-End Servers (Transit/Reader Servers)

These are what your newsreader connects to. They handle NNTP authentication (your username and password), manage your connections, process your search and download requests, and stream articles back to you. Front-end servers are optimized for concurrent connections and throughput. A busy backbone might have dozens of front-end servers load-balanced behind a single DNS address like news.newsdemon.com.

NewsDemon operates front-end servers in three geographic regions: US East (Virginia), US West (California), and EU (Netherlands). This gives you a nearby connection point wherever you are, which reduces latency and improves download speed.

Back-End Servers (Spool Storage)

These hold the actual article data. The spool is where the petabytes live. Modern backbones use tiered storage: fast NVMe drives for the most recent and most-requested articles (the "hot" tier), and high-density spinning disks for older content (the "cold" tier). Some operators also use tape archives for very old content. NewsDemon uses NVMe spool sets that deliver sub-3ms article latency on hot content.

The spool software is the brains of the operation. It decides where to write new articles, how to index them for fast retrieval, when to move articles from hot to cold storage, and when to expire them. Different backbone operators run different spool software. Some use open-source solutions like Diablo or Cyclone. Others run proprietary systems they've built in-house. The spool software is the single biggest differentiator in backbone performance, and it's the thing you never see as an end user.

Feeder Servers (Peering)

A separate layer handles the exchange of articles with peering partners. Feeder servers run persistent connections to other backbones and negotiate the constant flow of new articles in both directions. A busy feeder server might process millions of article offers per hour. We cover this in detail on the peering page.

Spool Storage: Where the Articles Live

The spool is the core of any backbone. It's the physical storage system that holds every article the backbone has ingested and hasn't yet expired. At the scale of a modern Usenet backbone processing ~500TB of new content per day, spool engineering is non-trivial.

Scale

A backbone with 5,000+ days of retention is storing a staggering amount of data. The exact number depends on how much of the incoming feed the operator keeps (some filter aggressively, others store almost everything) and how deduplication is handled. NewsDemon uses AI-driven filtering to strip out junk and duplicates before articles hit the spool, which keeps our storage cleaner and more efficient than a "store everything" approach.

Tiered Storage

Not all articles need the same access speed. A post from yesterday gets requested thousands of times. A post from 2015 gets requested once a month. Backbone operators take advantage of this by tiering storage. Recent and popular articles go on fast NVMe or SSD drives. Older articles move to high-density spinning disks optimized for sequential reads. The transition between tiers is managed by the spool software based on access patterns and age.

NewsDemon's NVMe spool sets deliver sub-3ms retrieval latency on hot articles. Cold articles on spinning disk are slower but still perfectly functional for download. The user experience is that new stuff downloads at full speed and old stuff downloads a bit slower, but it's all there.
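A tiering decision of the kind described above can be reduced to a small policy function. The thresholds below are invented for illustration; real spool software tunes these continuously against observed access patterns.

```python
def choose_tier(age_days: int, requests_last_week: int) -> str:
    """Toy tiering policy: recent or frequently requested articles
    stay on NVMe ("hot"), everything else moves to spinning disk
    ("cold"). The 30-day and 100-request thresholds are assumptions."""
    if age_days <= 30 or requests_last_week >= 100:
        return "hot"
    return "cold"
```

Note that popularity can keep an old article hot: a 2015 post that suddenly gets requested heavily would be promoted back to the fast tier.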

Spool Software

The software that manages the spool is arguably the most important piece of a backbone's technology stack. It handles article writes (from ingestion), reads (from user requests), indexing (by Message-ID, newsgroup, date), tiering (moving articles between storage layers), and expiration (removing articles past the retention window). Popular open-source options include Diablo (older, battle-tested) and Cyclone (newer, faster on modern hardware). Some backbone operators, including some of the largest, run entirely proprietary spool systems tuned to their specific hardware and workloads.

Peering: How Articles Spread

No single backbone sees every article posted to Usenet. Articles spread between backbones through peering, a system of bilateral article exchange agreements between operators.

The process works roughly like this: Backbone A receives a new article (either from a user posting directly or from another peering partner). Backbone A checks whether Backbone B already has this article by sending a batch of Message-IDs. Backbone B responds with which ones it wants. Backbone A sends the requested articles. Backbone B does the same in the other direction. This happens continuously across dozens of peering relationships simultaneously.
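The batch negotiation above can be sketched as two steps: the partner checks its index and asks only for what it lacks, then the articles are transferred and indexed. This mirrors the CHECK/TAKETHIS pattern in spirit, though the function names and data structures are invented for illustration:

```python
def negotiate_batch(offered_ids: list, partner_index: set) -> list:
    """Backbone B's side of the handshake: given a batch of offered
    Message-IDs, request only the ones not already in its index."""
    return [mid for mid in offered_ids if mid not in partner_index]

def transfer(wanted: list, articles: dict, partner_spool: dict,
             partner_index: set) -> None:
    """Backbone A sends the requested articles; B stores and indexes
    them so future offers of the same IDs are rejected."""
    for mid in wanted:
        partner_spool[mid] = articles[mid]
        partner_index.add(mid)
```

Because every accepted article lands in the partner's index, the same article offered later by a third backbone is refused, which is how the network avoids storing duplicates at scale.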

Peering agreements vary. Some are full-feed (both sides exchange everything). Some are partial (only specific newsgroup hierarchies). Some are one-directional. The quality and breadth of a backbone's peering relationships directly affects its article inventory. A backbone with strong peering across many partners will have a more complete article pool than one with limited peering.
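A partial-feed agreement amounts to a prefix filter on newsgroup names. A minimal sketch, assuming hierarchies are expressed as top-level prefixes like `comp` or `alt.binaries`:

```python
def in_feed(newsgroup: str, hierarchies: list) -> bool:
    """Partial-feed filter sketch: accept a group only if it falls
    under one of the agreed hierarchies (so "comp" matches both
    "comp" itself and "comp.lang.python")."""
    return any(
        newsgroup == h or newsgroup.startswith(h + ".")
        for h in hierarchies
    )
```

A feeder applying this filter simply declines offers for articles whose newsgroups all fall outside the agreed hierarchies, which is one of the ways otherwise well-peered backbones end up with different article pools.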

This is also why independent backbones carry different articles. Even with good peering, no two backbones have identical content. Articles get posted to one backbone and may not propagate to another due to timing, filtering, or selective peering. That difference is what makes pairing providers on different backbones useful. For the full technical breakdown, see our peering deep-dive.

Backbone Operators vs. Resellers

A backbone operator (Tier-1 provider) runs its own infrastructure: front-end servers, spool storage, feeder servers, peering connections. It controls every part of the article lifecycle.

A reseller doesn't own backbone infrastructure. It purchases access to a Tier-1 operator's servers, usually through a wholesale arrangement, and sells that access under its own brand. The reseller handles marketing, billing, and support, but the articles come from someone else's spool.

From a user's perspective, the experience with a reseller can be perfectly fine. The speed, retention, and completion you see are the backbone's, delivered through the reseller's branding. The issue arises when you're comparing providers and don't realize that two (or five, or ten) of them are all reselling the same backbone. They'll have different prices and different websites, but identical article pools.

The question "which backbone is this provider on?" is more useful than "how many days of retention does this provider claim?" Our independence page covers the ownership angle, and our provider selection guide walks through the full decision framework.

What NewsDemon Runs

We operate our own backbone across three server regions. The setup is straightforward but expensive to build and maintain.

Front-end servers in US East (Virginia), US West (California), and EU (Netherlands), load-balanced behind news.newsdemon.com. Every connection is 256-bit SSL encrypted. 50 simultaneous connections per account.

NVMe spool storage for hot articles (sub-3ms retrieval). High-density spinning disk for cold storage. AI-driven filtering removes spam, junk, and sporge before articles hit the spool. Article deduplication keeps the archive clean.

Peering connections with multiple upstream feeds. We also recovered a large collection of articles from magnetic tape archives going back over 20 years. These articles were never migrated by other operators and are exclusive to our spool.

Independently owned. K&L Technologies, Inc. No parent corporation, no shared ownership with other Usenet brands, no external entity making content or pricing decisions.

Go Deeper

How Usenet Peering Works

The technical details of article exchange between backbones. IHAVE/CHECK/TAKETHIS commands, streaming feeds, and why peering quality matters.

Read →

Usenet Retention Explained

What retention really means, why the numbers are misleading, and how our tape archive recovery goes beyond the day count.

Read →

What "Independent" Actually Means

The business side of backbone ownership. Why running your own servers isn't the same as being independently owned.

Read →

How to Choose a Provider

An honest buyer's guide. What matters, what doesn't, and what affiliate review sites leave out.

Read →

Frequently Asked Questions

What is a Usenet backbone?
The core server infrastructure that receives, stores, and distributes Usenet articles. It includes front-end servers (user connections), back-end spool storage (article data), and feeder servers (peering with other backbones).
How does peering work?
Backbone operators exchange article feeds with each other. When one backbone receives a new article, it offers it to its peering partners. Each partner checks its own index and accepts articles it doesn't already have. This continuous exchange is how articles spread across the Usenet network.
What's the difference between Tier-1 and a reseller?
A Tier-1 provider operates its own backbone. A reseller leases access to someone else's backbone and sells it under its own brand. Resellers on the same backbone carry identical articles. Different branding, same content.
What is spool storage?
The physical storage system where articles live. Modern backbones use tiered spools: fast NVMe drives for recent/popular articles, high-density spinning disks for older content. The spool software manages indexing, retrieval, tiering, and expiration.
Why does backbone independence matter?
Providers on the same backbone carry the same articles. If one is missing an article, they all are. An independent backbone has its own article pool. That's why pairing providers on different backbones gives better coverage than pairing two resellers of the same one.

See What an Independent Backbone Looks Like

NewsDemon runs its own infrastructure across 3 server regions. 5,695+ days retention, NVMe spools, exclusive tape archive content. Plans from $3/month.

View Plans