Usenet Feed Size: How Much Data Flows Through Usenet Every Day

The Usenet feed is the continuous stream of new articles flowing into servers worldwide. In 2026, that stream is measured in hundreds of terabytes per day. This page breaks down the numbers, what drives them, and why they matter.


Current Daily Feed Size

A Tier-1 Usenet backbone in 2026 ingests approximately 400 to 600 terabytes of new articles per day. The exact number fluctuates based on posting volume, but the average sits around 500TB daily. NewsDemon ingests at the high end of this range across all major newsgroup hierarchies.

At a glance: ~500TB of new articles per day, 110K+ active newsgroups, and an average of ~21TB per hour.

To put that in perspective: 500TB per day is roughly 5.8 gigabytes per second, continuously, 24 hours a day. That volume would fill a consumer 20TB hard drive approximately every 58 minutes.
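Those conversions are easy to verify. A minimal back-of-the-envelope sketch in Python, using the 500TB/day average and the 20TB consumer drive from the example above:

```python
# Back-of-the-envelope check of the feed-rate figures above.
DAILY_FEED_TB = 500   # average daily feed cited on this page
DRIVE_TB = 20         # consumer hard drive from the example

bytes_per_day = DAILY_FEED_TB * 1e12
gb_per_second = bytes_per_day / 86_400 / 1e9          # 86,400 seconds per day
minutes_to_fill = DRIVE_TB / DAILY_FEED_TB * 24 * 60

print(f"sustained rate: {gb_per_second:.1f} GB/s")              # ~5.8 GB/s
print(f"20TB drive fills every {minutes_to_fill:.0f} minutes")  # ~58 minutes
```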

What Makes Up the Feed

Binary articles (95%+ of volume)

The overwhelming majority of the Usenet feed by volume is binary data posted to alt.binaries.* groups. Binary articles are files that have been encoded into text format using yEnc and split into article-sized segments. A single large file can generate thousands of individual articles. This is where the volume comes from.
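To see how a single upload fans out into thousands of articles, here is a minimal sketch. The 5GB file size and 750KB segment size are illustrative assumptions (posters choose their own segment sizes), not fixed protocol values:

```python
import math

file_size_bytes = 5 * 1024**3   # hypothetical 5GiB upload
segment_bytes = 750 * 1024      # assumed per-article segment size
yenc_overhead = 1.02            # yEnc adds roughly 1-2% to the payload

encoded_bytes = file_size_bytes * yenc_overhead
articles = math.ceil(encoded_bytes / segment_bytes)
print(f"{articles:,} articles for one file")  # ~7,100 articles
```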

Text articles (under 5% of volume)

Discussion posts in text newsgroups (comp.*, sci.*, rec.*, soc.*, alt.*, and others) make up a tiny fraction of the daily feed by data volume. A text article is typically a few kilobytes. A binary article segment is typically 750KB to 1MB. There are more text articles by count than most people realize, but they are dwarfed by binaries in terms of storage.
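A rough worked example makes the imbalance concrete. The article counts below are invented purely for illustration; the per-article sizes come from the paragraph above:

```python
# Assumed daily counts, chosen only to illustrate the ratio.
text_count, text_kb = 2_000_000, 4        # text posts at ~4KB each
binary_count, binary_kb = 1_000_000, 800  # binary segments at ~800KB each

text_volume = text_count * text_kb
binary_volume = binary_count * binary_kb
share = text_volume / (text_volume + binary_volume)
print(f"text share of volume: {share:.2%}")  # ~0.99%
```

Even with text articles outnumbering binary segments two to one, text accounts for roughly one percent of the bytes.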

Spam, sporge, and noise

A meaningful percentage of the raw feed is junk: spam, sporge (mass-posted fake articles), malformed headers, duplicates, and other noise. Good providers filter this out before it reaches their spool, which is why smart ingestion matters more than raw ingestion volume. NewsDemon runs an AI-driven filtering system that removes noise while keeping legitimate content.

How the Feed Has Grown

The Usenet feed has grown roughly 10x every decade since the 1990s:

Era | Approximate Daily Feed | What Changed
Early 1990s | Megabytes/day | Text-only; academic and tech communities
Late 1990s | Low single-digit GB/day | uuencode binaries; ISPs offering Usenet
Early 2000s | 10-50 GB/day | yEnc encoding (2001) cuts encoding overhead from roughly 35% to 1-2%; NZB format emerges
Mid 2000s | 100-500 GB/day | Broadband adoption; retention race begins
Early 2010s | 1-5 TB/day | Broadband maturity; larger average article sizes; automation tools
Late 2010s | 10-50 TB/day | Higher-capacity uploads; increased posting volume
2020s | 100-600 TB/day | Faster global bandwidth; larger average file sizes; more active posting

The growth is driven by two things: the average size of files posted to binary newsgroups keeps increasing as encoding formats and storage capacity evolve, and global upload bandwidth keeps getting faster, allowing posters to upload more volume per day.
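The yEnc transition in the table above is worth quantifying. A minimal sketch comparing on-the-wire sizes, using the commonly cited overhead figures of roughly 35% for uuencode and roughly 2% for yEnc:

```python
payload_mb = 100                # any binary payload
uuencode_overhead = 0.35        # uuencode era: ~33-40% extra bytes
yenc_overhead = 0.02            # yEnc: ~1-2% extra bytes

print(f"uuencode: {payload_mb * (1 + uuencode_overhead):.0f} MB on the wire")  # 135 MB
print(f"yEnc:     {payload_mb * (1 + yenc_overhead):.0f} MB on the wire")      # 102 MB
```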

Why Feed Size Matters for Providers

Storage requirements

At 500TB per day, keeping one year of retention requires approximately 180 petabytes of storage, before accounting for redundancy and replication. NewsDemon maintains 5,695+ days of retention across three server regions. The storage engineering behind this is substantial and is one of the key differentiators between providers. More on this in our spool software page.
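The arithmetic behind that figure is simple multiplication. The sketch below uses the 500TB/day average and, for the replicated total, assumes three full regional copies as described later on this page:

```python
DAILY_FEED_TB = 500
RETENTION_DAYS = 365
REGIONS = 3   # full independent copy per region (see replication section)

one_copy_pb = DAILY_FEED_TB * RETENTION_DAYS / 1000
print(f"one year, one copy: {one_copy_pb:.0f} PB")               # ~183 PB
print(f"replicated x{REGIONS}: {one_copy_pb * REGIONS:.0f} PB")  # ~548 PB
```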

Ingestion infrastructure

Accepting 500TB per day requires dedicated ingestion servers with high-bandwidth peering connections, fast disk arrays for write throughput, and software that can deduplicate, filter, index, and store articles without falling behind. If ingestion falls behind the feed, articles get missed, and completion rate drops.
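As an illustration of one step in that pipeline, here is a minimal sketch of Message-ID deduplication, the check that stops the same article (offered by several peers at once) from being written twice. It is a generic sketch with a hypothetical write_to_spool helper, not any provider's actual feeder software:

```python
# Generic sketch: dedupe incoming articles by Message-ID before
# they hit the spool. Production feeders use disk-backed history
# databases rather than an in-memory set.
seen_message_ids: set[str] = set()

def write_to_spool(article: dict) -> None:
    ...  # hypothetical: append body to spool storage, update indexes

def ingest(article: dict) -> bool:
    """Return True if the article was stored, False if a duplicate."""
    msg_id = article["message_id"]
    if msg_id in seen_message_ids:
        return False              # already received from another peer
    seen_message_ids.add(msg_id)
    write_to_spool(article)
    return True
```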

Bandwidth costs

The raw bandwidth required to receive the full feed from multiple peers, plus serve it to thousands of simultaneous readers, is a significant operating expense. This is one of the reasons many Usenet brands are resellers rather than backbone operators: building and maintaining the infrastructure to handle this volume is expensive and technically demanding.

This is why backbone independence matters. Providers that operate their own backbone control their ingestion pipeline end-to-end. Resellers depend on their upstream backbone to ingest the full feed without gaps. If the upstream misses articles, every reseller on that backbone misses them too. NewsDemon runs its own independent backbone with direct peering to other Tier-1 operators.

Why Feed Size Matters for Users

As a user, you do not interact with the feed directly. But the feed size affects your experience in several ways:

Completion rate. A provider that cannot keep up with the feed drops articles. Those dropped articles become missing segments in your downloads. High ingestion capacity is a prerequisite for high completion.

Retention depth. The bigger the daily feed, the more storage is needed per day of retention. Providers that offer deep retention at current feed sizes are making a much larger storage investment than providers that offered the same retention number five years ago when the daily feed was smaller.

Speed. During peak posting hours, the feed can spike well above average. Providers with robust ingestion infrastructure handle these spikes without affecting reader performance. Underpowered infrastructure means your download speed dips when the feed is heavy.

Article availability. The daily feed is the starting point for everything your provider stores. Unique articles that are posted once and never reposted must be ingested the first time or they are lost. Providers with aggressive, well-peered ingestion systems capture more of the feed than those with slower or poorly connected systems.

How NewsDemon Handles the Feed

NewsDemon ingests approximately 500TB of new articles per day across all major newsgroup hierarchies through direct Tier-1 peering connections.

Ingestion pipeline: Dedicated feeder servers accept articles from multiple peers simultaneously, deduplicate by Message-ID, and write to spool storage in real time. Our AI-driven filtering system removes spam and noise before articles reach the spool, keeping our archive clean and searchable.

Three-region replication: Every article is replicated across US East, US West, and EU (Netherlands). The full feed is ingested at each region independently for redundancy.

NVMe spool for recent articles: The most recent portion of the feed goes to NVMe storage with sub-3ms retrieval latency. Older articles move to high-density storage optimized for sequential reads. Details on our server architecture page.

Result: 99%+ completion rate across the full 5,695+ day retention window, plus exclusive tape archive content that predates our current spool.

Frequently Asked Questions

How much data is posted to Usenet every day?
Approximately 400 to 600 terabytes per day in 2026, averaging around 500TB. The exact amount fluctuates daily based on posting activity. Over 95% of this volume is binary content in alt.binaries.* newsgroups.
Has the Usenet feed always been this large?
No. The feed has grown roughly 10x per decade. In the early 2000s, it was tens of gigabytes per day. The growth is driven by increasing average file sizes and faster global upload speeds.
Does every Usenet provider ingest the full feed?
Only Tier-1 backbone operators ingest the full feed directly. Many providers are resellers that connect to a backbone operated by someone else. They carry whatever their upstream ingested. If the upstream missed articles, the reseller misses them too.
Why does feed size affect my downloads?
Your provider must ingest an article before it can serve it to you. If the daily feed overwhelms the provider's infrastructure and articles get dropped during ingestion, those articles become permanently missing from that provider's spool. This directly impacts completion rate.
How much storage does one year of retention require?
At 500TB per day, one year of retention requires roughly 180 petabytes of raw storage, before replication and redundancy. Multi-year retention at current feed volumes requires substantial infrastructure investment, which is why most resellers depend on a backbone operator rather than building their own.

500TB Ingested Daily. 99%+ Completion. Three Regions.

Independent backbone, full feed ingestion, NVMe spool storage, 5,695+ days retention. Plans from $3/month.

View Plans