Current Daily Feed Size
A Tier-1 Usenet backbone in 2026 ingests approximately 400 to 600 terabytes of new articles per day. The exact number fluctuates based on posting volume, but the average sits around 500TB daily. NewsDemon ingests at the high end of this range across all major newsgroup hierarchies.
To put that in perspective: 500TB per day is roughly 5.8 gigabytes per second, continuously, 24 hours a day. That volume would fill a consumer 20TB hard drive approximately every 58 minutes.
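The feed-rate arithmetic above is easy to verify. A minimal sketch, using decimal units (1 TB = 10^12 bytes) as storage vendors do:

```python
# Back-of-envelope check of the feed-rate figures above.
TB = 10**12
GB = 10**9

daily_feed_bytes = 500 * TB          # ~500 TB of new articles per day
seconds_per_day = 24 * 60 * 60

rate_gb_per_sec = daily_feed_bytes / seconds_per_day / GB
print(f"sustained rate: {rate_gb_per_sec:.2f} GB/s")          # ~5.79 GB/s

drive_tb = 20
minutes_to_fill = drive_tb * TB / (daily_feed_bytes / seconds_per_day) / 60
print(f"20TB drive fills in: {minutes_to_fill:.0f} minutes")  # ~58 minutes
```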
What Makes Up the Feed
Binary articles (95%+ of volume)
The overwhelming majority of the Usenet feed by volume is binary data posted to alt.binaries.* groups. Binary articles are files that have been encoded into text format using yEnc and split into article-sized segments. A single large file can generate thousands of individual articles. This is where the volume comes from.
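To see why a single file generates so many articles, consider a rough segment-count estimate. This sketch assumes ~750 KB of payload per segment and ~2% yEnc encoding overhead, which are typical published figures, not NewsDemon-specific values:

```python
import math

# Rough estimate of how many Usenet articles one binary file produces.
# segment_payload (~750 KB) and yenc_overhead (~2%) are typical values,
# assumed for illustration.
def article_count(file_size_bytes, segment_payload=750_000, yenc_overhead=0.02):
    encoded_size = file_size_bytes * (1 + yenc_overhead)
    return math.ceil(encoded_size / segment_payload)

# A 50 GB file becomes tens of thousands of individual articles.
print(article_count(50 * 10**9))  # 68000
```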
Text articles (under 5% of volume)
Discussion posts in text newsgroups (comp.*, sci.*, rec.*, soc.*, alt.*, and others) make up a tiny fraction of the daily feed by data volume. A text article is typically a few kilobytes. A binary article segment is typically 750KB to 1MB. There are more text articles by count than most people realize, but they are dwarfed by binaries in terms of storage.
Spam, sporge, and noise
A meaningful percentage of the raw feed is junk: spam, sporge (mass-posted fake articles), malformed headers, duplicates, and other noise. Good providers filter this out before it reaches their spool, which is why smart ingestion matters more than raw ingestion volume. NewsDemon runs an AI-driven filtering system that removes noise while keeping legitimate content.
How the Feed Has Grown
The Usenet feed has grown by orders of magnitude each decade since the 1990s:
| Era | Approximate Daily Feed | What Changed |
|---|---|---|
| Early 1990s | Megabytes/day | Text-only; academic and tech communities |
| Late 1990s | Low single-digit GB/day | UUEncode binaries; ISPs offering Usenet |
| Early 2000s | 10-50 GB/day | yEnc encoding (2001) cuts encoding overhead from roughly 40% to under 2%; NZB format emerges |
| Mid 2000s | 100-500 GB/day | Broadband adoption; retention race begins |
| Early 2010s | 1-5 TB/day | Broadband maturity; larger average article sizes; automation tools |
| Late 2010s | 10-50 TB/day | Higher-capacity uploads; increased posting volume |
| 2020s | 100-600 TB/day | Faster global bandwidth; larger average file sizes; more active posting |
The growth is driven by two things: the average size of files posted to binary newsgroups keeps increasing as encoding formats and storage capacity evolve, and global upload bandwidth keeps getting faster, allowing posters to upload more volume per day.
Why Feed Size Matters for Providers
Storage requirements
At 500TB per day, keeping one year of retention requires approximately 180 petabytes of storage, before accounting for redundancy and replication. NewsDemon maintains over 5,695 days of retention across three server regions. The storage engineering behind this is substantial and is one of the key differentiators between providers. More on this on our spool software page.
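The retention arithmetic can be checked directly. The 3x figure below assumes each region holds a full copy of the spool, per the three-region replication described in this article; actual redundancy schemes vary:

```python
TB = 10**12
PB = 10**15

daily_feed = 500 * TB  # current approximate daily feed

# One year of retention at the current feed rate.
one_year_pb = 365 * daily_feed / PB
print(f"one year of retention: {one_year_pb} PB")          # 182.5 PB

# Assuming a full independent copy in each of three regions:
print(f"with 3x replication: {3 * one_year_pb} PB")        # 547.5 PB
```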
Ingestion infrastructure
Accepting 500TB per day requires dedicated ingestion servers with high-bandwidth peering connections, fast disk arrays for write throughput, and software that can deduplicate, filter, index, and store articles without falling behind. If ingestion falls behind the feed, articles get missed, and completion rate drops.
Bandwidth costs
The raw bandwidth required to receive the full feed from multiple peers, plus serve it to thousands of simultaneous readers, is a significant operating expense. This is one of the reasons many Usenet brands are resellers rather than backbone operators: building and maintaining the infrastructure to handle this volume is expensive and technically demanding.
This is why backbone independence matters. Providers that operate their own backbone control their ingestion pipeline end-to-end. Resellers depend on their upstream backbone to ingest the full feed without gaps. If the upstream misses articles, every reseller on that backbone misses them too. NewsDemon runs its own independent backbone with direct peering to other Tier-1 operators.
Why Feed Size Matters for Users
As a user, you do not interact with the feed directly. But the feed size affects your experience in several ways:
Completion rate. A provider that cannot keep up with the feed drops articles. Those dropped articles become missing segments in your downloads. High ingestion capacity is a prerequisite for high completion.
Retention depth. The bigger the daily feed, the more storage is needed per day of retention. Providers that offer deep retention at current feed sizes are making a much larger storage investment than providers that offered the same retention number five years ago when the daily feed was smaller.
Speed. During peak posting hours, the feed can spike well above average. Providers with robust ingestion infrastructure handle these spikes without affecting reader performance. Underpowered infrastructure means your download speed dips when the feed is heavy.
Article availability. The daily feed is the starting point for everything your provider stores. Unique articles that are posted once and never reposted must be ingested the first time or they are lost. Providers with aggressive, well-peered ingestion systems capture more of the feed than those with slower or poorly connected systems.
How NewsDemon Handles the Feed
NewsDemon ingests approximately 500TB of new articles per day across all major newsgroup hierarchies through direct Tier-1 peering connections.
Ingestion pipeline: Dedicated feeder servers accept articles from multiple peers simultaneously, deduplicate by Message-ID, and write to spool storage in real time. Our AI-driven filtering system removes spam and noise before articles reach the spool, keeping our archive clean and searchable.
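The deduplication step works because every Usenet article carries a globally unique Message-ID header. A minimal sketch of the logic (real news servers such as INN use a disk-backed history database rather than an in-memory set; `store_in_spool` is a placeholder):

```python
# Minimal sketch of Message-ID deduplication during ingestion.
seen = set()  # real servers use a disk-backed history index

def store_in_spool(message_id, body):
    pass  # placeholder for the spool write path

def offer_article(message_id, body):
    """Accept an article from a peer unless we already hold it."""
    if message_id in seen:
        return False              # duplicate: another peer sent it first
    seen.add(message_id)
    store_in_spool(message_id, body)
    return True

# The same article arriving from two peers is stored only once.
print(offer_article("<abc@example.com>", b"..."))  # True
print(offer_article("<abc@example.com>", b"..."))  # False
```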
Three-region replication: Every article is replicated across US East, US West, and EU (Netherlands). The full feed is ingested at each region independently for redundancy.
NVMe spool for recent articles: The most recent portion of the feed goes to NVMe storage with sub-3ms retrieval latency. Older articles move to high-density storage optimized for sequential reads. Details on our server architecture page.
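Routing reads between tiers by article age can be sketched as follows. The 30-day cutoff is an illustrative assumption; the actual tier boundary is not stated here:

```python
from datetime import datetime, timedelta, timezone

# Illustrative read-path routing for a tiered spool: recent articles are
# served from NVMe, older ones from high-density storage. NVME_WINDOW is
# an assumed value, not the actual tier boundary.
NVME_WINDOW = timedelta(days=30)

def pick_tier(posted_at, now):
    return "nvme" if now - posted_at <= NVME_WINDOW else "high-density"

now = datetime(2026, 1, 1, tzinfo=timezone.utc)
print(pick_tier(datetime(2025, 12, 20, tzinfo=timezone.utc), now))  # nvme
print(pick_tier(datetime(2020, 1, 1, tzinfo=timezone.utc), now))    # high-density
```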
Result: 99%+ completion rate across the full 5,695+ day retention window, plus exclusive tape archive content that predates our current spool.
500TB Ingested Daily. 99%+ Completion. Three Regions.
Independent backbone, full feed ingestion, NVMe spool storage, 5,695+ days retention. Plans from $3/month.
View Plans