NNTP pipelining is now fully deployed across every NewsDemon server region: news.newsdemon.com (US East), uswest.newsdemon.com (US West), and eu.newsdemon.com (EU). If you have ever felt like your Usenet downloads should be faster than they actually are, pipelining is probably what you were missing.

What Is NNTP Pipelining?

Traditional NNTP works in a strict back-and-forth pattern. Your client sends a command, waits for the server to respond, sends the next command, waits again, and so on. Each article in your download is a separate little conversation:

Client: ARTICLE <message-id-1>
Server: [sends article 1]
Client: ARTICLE <message-id-2>
Server: [sends article 2]
Client: ARTICLE <message-id-3>
Server: [sends article 3]

That back-and-forth adds a round-trip delay between every single article. On a fast local connection it barely registers. On a higher-latency connection, say an 80ms round trip from the UK to a US server, every article costs you roughly 80ms of pure waiting before the response even starts to arrive. Multiply that by tens of thousands of articles in a large download, and suddenly a huge chunk of your download time is literally just light traveling through fiber.
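The cost of that per-article wait is easy to ballpark. A back-of-the-envelope sketch in Python, assuming an 80ms round trip and a 20,000-article download (both numbers are illustrative, not measurements):

```python
RTT = 0.080          # assumed UK-to-US round trip, in seconds
ARTICLES = 20_000    # assumed article count for a large download

# Without pipelining, every article pays one full round trip
# before any payload moves.
dead_time = ARTICLES * RTT
print(f"{dead_time:.0f} s ({dead_time / 60:.0f} min) spent idle")  # → 1600 s (27 min) spent idle
```

That is nearly half an hour of wall-clock time in which no article data is flowing at all, regardless of how fast your line is.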

Pipelining throws out the strict request-response pattern. Your client sends multiple requests back to back without waiting for each individual reply:

Client: ARTICLE <id-1>
Client: ARTICLE <id-2>
Client: ARTICLE <id-3>
Client: ARTICLE <id-4>
Server: [streams article 1]
Server: [streams article 2]
Server: [streams article 3]
Server: [streams article 4]

The server processes the requests as they come in and streams the responses back in order. The connection stays saturated. You stop paying a latency penalty on every article.
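At the wire level, the batching itself is simple. A toy sketch of what a pipelining client sends, using hypothetical message IDs (a real client also has to parse the multi-line responses as they stream back, which is the harder part):

```python
def build_pipeline(message_ids):
    """One buffer holding every ARTICLE command, sent in a single write
    instead of one write per round trip."""
    return b"".join(f"ARTICLE {mid}\r\n".encode("ascii") for mid in message_ids)

batch = build_pipeline(["<id-1>", "<id-2>", "<id-3>"])
# sock.sendall(batch)  # then read three responses back, in the same order
```

Because NNTP responses come back in request order, the client knows exactly which article each response belongs to without any extra bookkeeping.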

Why This Matters

Here is the part that surprises most people: bandwidth is not usually what limits Usenet download speeds. Latency is. If you have a gigabit internet connection but every article requires a 60ms round trip before the server starts sending data, each individual connection tops out well under 100 Mbps, no matter how fat your pipe is.
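You can put rough numbers on that cap. A sketch assuming a 60ms round trip, a gigabit link, and articles of about 750 KB (all illustrative figures):

```python
ARTICLE_BITS = 750_000 * 8        # assumed ~750 KB article
LINK_BPS = 1_000_000_000          # gigabit link
RTT = 0.060                       # 60 ms round trip

transfer = ARTICLE_BITS / LINK_BPS       # ~6 ms actually on the wire
per_article = RTT + transfer             # ~66 ms including the wait
throughput_mbps = ARTICLE_BITS / per_article / 1e6
print(f"{throughput_mbps:.0f} Mbps per connection")  # → 91 Mbps per connection
```

Only 6 of every 66 milliseconds move data; the other 60 are spent waiting, which is why the per-connection ceiling sits an order of magnitude below the line rate.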

This is why the classic advice has always been to use a lot of simultaneous connections. Fifty connections means fifty parallel conversations, and while one connection is waiting for its round trip, the other forty-nine are moving data. It works, but it is a workaround for a protocol limitation.

Pipelining fixes the underlying problem. Each connection becomes dramatically more efficient, which means you can saturate your link with fewer connections, or saturate it better with the same number.
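To see why fewer connections suffice, take the same kind of back-of-the-envelope figure: with a 60ms round trip, each non-pipelined connection caps out near 91 Mbps (an illustrative number, not a benchmark). The connection count needed to fill a gigabit link then works out to:

```python
import math

LINK_MBPS = 1000       # gigabit link
PER_CONN_MBPS = 91     # assumed latency-limited cap per non-pipelined connection

needed = math.ceil(LINK_MBPS / PER_CONN_MBPS)
print(f"{needed} connections to saturate without pipelining")  # → 11 connections
```

With pipelining, each connection can stay close to the line rate on its own, so a handful of connections do the work that previously took a dozen or more.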

Who Benefits the Most?

Anyone with a fast internet connection who is geographically far from their Usenet server. Specific cases:

  • European customers hitting US servers for retention reasons. Transatlantic round trips average 80-120ms, which is a lot of dead time per article.
  • West Coast US customers hitting East Coast servers (or vice versa), which adds roughly 60-80ms of round-trip time per article.
  • Customers on gigabit or multi-gigabit residential connections who have never been able to saturate their link.
  • International customers in Asia, Oceania, and South America where round trips can exceed 200ms.

If you already live a few hops from one of our server farms, you will see less dramatic improvement because there was less latency to cut in the first place. But everyone benefits at least a little.

What About Customers Close to Our Servers?

Even on low-latency connections, pipelining reduces overhead. You are still eliminating the tiny delay between articles, and that adds up when you are downloading at high speeds. Think of it like this: a 5ms round trip sounds trivial, but at line-rate speeds you are processing articles fast enough that 5ms gaps add real idle time on the wire. Pipelining closes those gaps.
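Those gaps are easy to quantify. A sketch assuming articles of about 768 KB on a gigabit line with a 5ms pause between articles (illustrative numbers):

```python
ARTICLE_BITS = 768_000 * 8      # assumed ~768 KB article
LINK_BPS = 1_000_000_000        # gigabit line rate
GAP = 0.005                     # 5 ms idle pause between articles

transfer = ARTICLE_BITS / LINK_BPS           # ~6.1 ms actually on the wire
utilization = transfer / (transfer + GAP)    # fraction of time the link is busy
print(f"{utilization:.0%} link utilization with per-article gaps")  # → 55%
```

At those speeds the transfer and the gap are nearly the same length, so a "trivial" 5ms pause costs almost half the link.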

We also recommend all customers take advantage of our geo-DNS routing. When you connect to news.newsdemon.com, you are automatically routed to the nearest server farm. Combined with pipelining, you get both minimum latency and maximum efficiency.

Do You Need to Do Anything?

Probably not. Most modern Usenet clients already support pipelining and will negotiate it automatically when they connect to a pipelining-capable server. You do not need to change any configuration, install anything, or check a box. It just works.

Clients with strong pipelining support include SABnzbd, NZBGet, NewsBin Pro, and NewsLeecher. If you are using one of these, you are probably already benefiting. If you are using an older or more unusual client, check its settings for something labeled "pipelining," "command pipelining," or "server pipelining" and make sure it is enabled.

Can I Reduce My Connection Count Now?

Maybe. The historical advice of using 20-50 connections was largely a workaround for the latency problem pipelining solves. With pipelining enabled, some customers find they can drop to 10-20 connections and still saturate their link, which is easier on both your router and our servers.

That said, there is no penalty to keeping your connection count high. All NewsDemon plans include 50 simultaneous SSL connections, and you are welcome to use all of them. If downloads are working well at your current setting, there is no reason to change.

Real-World Impact

In our internal testing, we saw the biggest improvements on long-distance connections where latency was the bottleneck. Specifically:

  • A European customer on a gigabit fiber line, previously pulling around 500 Mbps from our US East servers, can now saturate the full gigabit.
  • Transcontinental US connections (East to West Coast) see noticeable improvements in peak throughput, particularly on multi-hundred-megabit residential lines.
  • Customers on low-latency connections see smaller but still measurable gains, generally in the 5 to 15 percent range.

Your results will depend on your distance from the server, your client software, your connection count, and the size distribution of the articles you are downloading. But everyone should see at least some improvement with no configuration changes at all.

Technical Notes

For the curious, a few specifics:

  • Supported commands: ARTICLE, HEAD, BODY, STAT, GROUP, LIST, and others can all be pipelined.
  • Pipeline depth: Our servers accept deep pipelines, so clients that aggressively queue requests will see the most benefit.
  • Backward compatibility: Non-pipelining clients continue to work exactly as before. Nothing changes for them.
  • Interaction with SSL: Pipelining works over SSL/TLS connections with no extra overhead. Combined with our recently deployed post-quantum key exchange, you get both speed and future-proof encryption on every connection.

Why We Did This

The honest answer is that pipelining has been available as an NNTP protocol feature for a long time, and most Usenet servers never bothered to implement it well. We have been working on our server infrastructure steadily for the last couple of years, and this was one of the items on the list that really moves the needle for a lot of customers. It required changes to how our spool software handles request queues and response ordering, but once we got it working in testing, the results made the effort worthwhile.

This is part of a broader push to make sure NewsDemon's infrastructure is not just the longest-running independent Usenet backbone but also the best-performing. If there is something else you would like to see us work on, our support chat is open 24/7 and we read every message.

Your Usenet just got faster. No configuration required, no price change, no catch.