The 1 Million Concurrent Connections Stress Test: Nginx vs. LiteSpeed on a Single NVMe Server

We pushed a single iDatam NVMe dedicated server to the absolute limit. Discover the exact Linux kernel tuning, DB optimization, and hard data from our 1-million concurrent user benchmark comparing Nginx and LiteSpeed.

Every system administrator, DevOps engineer, and backend developer knows the feeling. It starts with a sudden alert in your monitoring dashboard. CPU usage spikes. RAM maxes out. Your database connection pool catches fire. You’ve just hit the front page of Reddit, your product went viral on TikTok, or your Black Friday sale was a little too successful.

The industry’s default advice for surviving a massive traffic tsunami is to build a complex, auto-scaling cloud cluster behind expensive load balancers. But scaling horizontally introduces massive complexity, synchronization issues, and astronomical cloud egress fees.

This raises the question: If you have a properly tuned, brutally powerful bare-metal server, how much traffic can it actually handle alone? To answer this, we decided to tackle the modern equivalent of the C10K problem: the C1M Problem. We took a single iDatam dedicated server equipped with Gen4 NVMe storage and subjected it to an extreme load test, pushing it to 1 million concurrent connections.

To make it interesting, we turned it into a heavyweight title fight between two of the most popular high-performance web servers on the planet: Nginx (the reigning open-source champion) vs. LiteSpeed Enterprise (the high-concurrency challenger).

Here is the masterclass on how we tuned the Linux kernel, optimized the application stack, and gathered the hard data on exactly when a server breaks.

The Contenders and the Hardware Beast

Before we look at the software, we need to acknowledge the physics of the C1M problem. A single TCP connection consumes a few kilobytes of kernel memory, so a million idle connections require roughly 4GB to 10GB of RAM just to maintain the sockets. Add in TLS handshakes, PHP execution, and database queries, and you need a monster of a machine.

The iDatam Bare-Metal Benchmark Node:
  • CPU: Dual AMD EPYC 9004 Series (128 Cores / 256 Threads Total)

  • RAM: 512GB DDR5 ECC (4800 MT/s)

  • Storage: 4x 4TB Enterprise PCIe Gen 4 NVMe SSDs in RAID 10 (Hardware Controller)

  • Network: 100Gbps Unmetered Uplink

  • OS: Ubuntu 24.04 LTS

The Application Stack:

We deployed a standard dynamic PHP/MySQL application—a customized WordPress installation loaded with WooCommerce, representing a heavy, database-driven e-commerce site.

  • Contender A: Nginx 1.24 + PHP 8.3 (PHP-FPM) + FastCGI Micro-caching

  • Contender B: LiteSpeed Enterprise 6.1 + PHP 8.3 (LSAPI) + LSCache

The Masterclass: Tuning Linux for 1 Million Connections

Out of the box, standard Linux distributions are built for general-purpose computing. If you attempt to send 1 million connections to a default Ubuntu installation, it will start refusing connections at roughly 65,000 due to ephemeral port exhaustion and file descriptor limits.

To survive this benchmark, we had to reach deep into the Linux kernel and retune its networking stack. Here are the exact tweaks we made.
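
Before touching anything, it is worth seeing how low the stock defaults actually are. A quick inspection sketch (the exact numbers vary by distribution and kernel release):

# Inspect the stock limits before tuning
ulimit -n                             # per-process open file descriptor limit (often 1024)
cat /proc/sys/fs/file-max             # system-wide file descriptor ceiling
sysctl net.ipv4.ip_local_port_range   # usable ephemeral port range
sysctl net.core.somaxconn             # pending connection queue size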

1. Bypassing the File Descriptor Limit

In Linux, "everything is a file." Every open TCP socket is treated as a file descriptor, and the default per-process limit (typically 1024) is far too low for high-traffic environments. We edited /etc/sysctl.conf and /etc/security/limits.conf to drastically raise the ceiling.

# /etc/sysctl.conf
fs.file-max = 2000000
fs.nr_open = 2000000

We also updated the security limits so the web server processes (running as www-data or nobody) were actually allowed to open that many file descriptors.

# /etc/security/limits.conf
* soft nofile 2000000
* hard nofile 2000000
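
One caveat: /etc/security/limits.conf only governs PAM login sessions. Services launched by systemd (which is how Nginx, LiteSpeed, and PHP-FPM run on Ubuntu 24.04) need the limit raised in the unit itself. A minimal sketch, assuming the stock nginx.service unit name:

# /etc/systemd/system/nginx.service.d/override.conf (created with "systemctl edit nginx")
[Service]
LimitNOFILE=2000000

Run systemctl daemon-reload and restart the service for the new ceiling to take effect.
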
2. TCP Stack Optimization (The Magic Sauce)

When a connection closes, it enters a TIME_WAIT state to ensure all delayed packets are received. During a massive traffic spike, these ghost connections will eat up all 65,535 available local ports, locking out new users. We had to optimize the TCP stack in sysctl.conf to recycle ports and handle massive connection queues.

# Expand the local port range to the absolute maximum
net.ipv4.ip_local_port_range = 1024 65535

# Allow the kernel to reuse TIME_WAIT sockets for new connections
net.ipv4.tcp_tw_reuse = 1

# Drastically increase the maximum connection queue size (default is 128)
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535

# Reduce the time the kernel holds onto FIN-WAIT sockets
net.ipv4.tcp_fin_timeout = 15

# Increase the maximum memory allocated to TCP buffers
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
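
Once these values are in place, they can be loaded without a reboot. A quick sketch, assuming everything above was written to /etc/sysctl.conf:

# Reload kernel parameters from /etc/sysctl.conf
sysctl -p
# (use "sysctl --system" to also pick up drop-ins under /etc/sysctl.d/)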

Database and PHP Tuning

A web server is only as fast as its slowest component. For dynamic sites, that’s MySQL and PHP.

  • Aggressive Micro-caching: We configured both Nginx (FastCGI cache) and LiteSpeed (LSCache) to cache the dynamic PHP homepage output in RAM as static content for exactly 10 seconds. This means that no matter how many users hit the page per second, PHP and the database only have to regenerate it once every 10 seconds.

  • MySQL Buffer Pool and Connections: We allocated 256GB of RAM to innodb_buffer_pool_size so the entire database lived in memory, and we capped max_connections at 5,000 to prevent MySQL from suffocating under context-switching overhead. Both settings are sketched in the config snippets below.
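
As a reference point, here are minimal sketches of those two pieces of tuning. They illustrate the settings described above rather than reproduce the exact production files; paths and cache zone sizes are placeholders. First, the Nginx FastCGI micro-cache, assuming a RAM-backed cache directory (LiteSpeed's equivalent is configured through LSCache instead):

# /etc/nginx/conf.d/microcache.conf -- illustrative sketch only
fastcgi_cache_path /dev/shm/nginx_cache levels=1:2 keys_zone=microcache:512m inactive=60s;

# inside the site's server { } block:
location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php8.3-fpm.sock;
    fastcgi_cache microcache;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";
    fastcgi_cache_valid 200 10s;        # hold rendered pages for 10 seconds
    fastcgi_cache_use_stale updating;   # serve the stale copy while one request refreshes it
}

And the two MySQL values from the bullet above, expressed in my.cnf terms:

# /etc/mysql/mysql.conf.d/tuning.cnf -- illustrative sketch only
[mysqld]
innodb_buffer_pool_size = 256G
max_connections         = 5000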

The Methodology: Firing the Virtual Laser

You cannot generate 1 million concurrent connections from a single laptop. A lone client machine exhausts its roughly 64,000 ephemeral ports (and its own file descriptor limit) long before reaching the target.

To execute this test, we deployed a distributed fleet of 20 high-compute virtual machines running Grafana's K6 load-testing tool. The K6 cluster was programmed to execute a ramping load profile (sketched in code after the phase list below):

  • Phase 1: Ramp to 100,000 concurrent virtual users (VUs) over 5 minutes.

  • Phase 2: Ramp to 500,000 concurrent VUs over 10 minutes.

  • Phase 3: Push to 1,000,000 concurrent VUs, hold for 3 minutes, or until the server drops 5% of connections (The Breaking Point).
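
A minimal sketch of that ramping profile as a K6 script. The stage targets mirror the phases above; the endpoint URL, per-VU pacing, and the Phase 3 ramp duration are placeholders, and in practice each of the 20 load machines would run a proportional share of these targets:

import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  stages: [
    { duration: '5m', target: 100000 },    // Phase 1: ramp to 100k VUs
    { duration: '10m', target: 500000 },   // Phase 2: ramp to 500k VUs
    { duration: '5m', target: 1000000 },   // Phase 3: push to 1M VUs (ramp duration assumed)
    { duration: '3m', target: 1000000 },   // hold at 1M until the breaking point
  ],
};

export default function () {
  http.get('https://shop.example.com/');   // placeholder target URL
  sleep(1);                                // keep each VU's connection alive between requests
}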

The Benchmark Results: Nginx vs. LiteSpeed

With the kernel tuned, the NVMe drives humming, and the 100Gbps network pipe wide open, we launched the test. Here is how the two titans of web hosting performed under apocalyptic load.

Phase 1: 100,000 Concurrent Users (The Warm-Up)

At 100k users, both web servers laughed at the load. Thanks to the massive compute power of the dual AMD EPYC processors and aggressive RAM caching, neither server broke a sweat.

  • Nginx: Average TTFB (Time to First Byte): 28ms. CPU Load: 8%. RAM Usage: 18GB.

  • LiteSpeed: Average TTFB: 24ms. CPU Load: 5%. RAM Usage: 14GB.

Verdict: A tie. At this level, proper caching and NVMe storage completely neutralize the traffic.

Phase 2: 500,000 Concurrent Users (The Sweat)

At half a million simultaneous users, the physics of network traffic began to show. The server was processing millions of packets per second. The interrupts on the network interface cards (NICs) were hammering the CPU.

  • Nginx: Average TTFB climbed to 145ms. CPU Load hit 62%. Nginx's worker processes began consuming massive amounts of RAM (115GB) to manage the state of the FastCGI caching layer. Occasional latency spikes of 500ms were observed as Linux garbage-collected the TIME_WAIT sockets.

  • LiteSpeed: Average TTFB stayed remarkably low at 58ms. CPU Load was 41%. LiteSpeed's event-driven architecture and highly integrated LSCache module proved vastly more efficient at memory management, using only 65GB of RAM.

Verdict: LiteSpeed pulls ahead. Nginx’s architecture requires PHP-FPM to act as a separate process, and the IPC (Inter-Process Communication) overhead between Nginx and PHP-FPM starts creating a bottleneck.

Phase 3: 1,000,000 Concurrent Users (The Breaking Point)

As the K6 cluster crossed the 800,000 user mark, the sirens went off. This is where architectures fracture.

  • Nginx (Failure at 840,000): Nginx fought valiantly, but at roughly 840,000 concurrent users, the server reached its breaking point. TTFB skyrocketed to over 3 seconds. The PHP-FPM worker pool was completely exhausted, resulting in cascading 502 Bad Gateway and 504 Gateway Timeout errors. RAM usage pegged at 410GB, causing the server to dip into swap memory on the NVMe drives. We terminated the Nginx test to prevent a hard kernel panic.

  • LiteSpeed (Survival at 1,000,000): LiteSpeed reached the 1 million concurrent user milestone. It was not a perfectly smooth ride—average TTFB rose to 820ms (sluggish, but technically still online). CPU usage was pinned at 96%, and RAM hovered dangerously at 480GB. However, thanks to the LSAPI (LiteSpeed SAPI), the communication between the web server and PHP was highly optimized, preventing the 502 gateway errors that killed Nginx. The server maintained a 99.1% success rate on HTTP 200 responses.

The "Citeable" Asset: The Web Server Survivability Curve

For the developers and sysadmins debating their infrastructure stack, here is the hard data summarized. Feel free to copy, share, and cite this survivability matrix when planning your next deployment.

Concurrent Users    Nginx + PHP-FPM (TTFB / CPU / Status)    LiteSpeed + LSAPI (TTFB / CPU / Status)
10,000              12ms / 2% / Ultra-Fast                   10ms / 1% / Ultra-Fast
100,000             28ms / 8% / Stable                       24ms / 5% / Stable
250,000             65ms / 28% / Stable                      40ms / 18% / Stable
500,000             145ms / 62% / Sluggish                   58ms / 41% / Stable
850,000             FAIL (502 Errors) / 100% / Offline       210ms / 78% / Sluggish
1,000,000           N/A                                      820ms / 96% / Surviving

Note: Tests conducted on a single iDatam Dual AMD EPYC server with Gen4 NVMe and 100Gbps network. Dynamic PHP pages were micro-cached for 10 seconds.

Analysis: Why Did LiteSpeed Defeat Nginx?

Nginx is incredible. It powers an enormous share of the highest-traffic websites on earth. But in a brutal, extreme-concurrency environment running dynamic PHP applications, LiteSpeed has a distinct architectural advantage.

  1. The PHP Bottleneck: Nginx acts as a reverse proxy to PHP-FPM. Every time a PHP script executes, Nginx has to talk to PHP-FPM over a local socket. At 1 million concurrent connections, this socket communication becomes a massive CPU bottleneck. LiteSpeed uses LSAPI, which is built directly into the server. It largely bypasses this overhead, allowing PHP processes to spin up and down at significantly lower CPU cost.

  2. Cache Integration: Nginx uses FastCGI Cache, which works well, but it relies on the file system (even if that file system is mapped to RAM via tmpfs). LiteSpeed’s LSCache is native to the server core. It serves cached dynamic content almost identically to how it serves static HTML files, bypassing PHP and the database entirely with unparalleled efficiency.

  3. Event-Driven Supremacy: While both servers are event-driven (unlike the old Apache process-per-request model), LiteSpeed handles memory allocation for massive concurrent connections slightly better, preventing the catastrophic memory ballooning that ultimately killed Nginx in our test.

The Verdict: You Probably Don't Need a Cloud Cluster

This stress test proves a critical point about modern web architecture. The tech industry has been brainwashed into thinking that surviving a viral traffic spike requires spinning up a sprawling, 20-node Kubernetes cluster on AWS or Google Cloud, complete with expensive load balancers and terrifying variable billing.

That is simply not true.

A single, properly tuned iDatam bare-metal dedicated server, equipped with enterprise NVMe storage and dual AMD EPYC processors, successfully sustained 1,000,000 concurrent users.

If you are a SaaS founder, a high-traffic media publisher, or an e-commerce giant preparing for Black Friday, the most cost-effective, high-performance solution isn't horizontal cloud scaling. It's vertical bare-metal scaling combined with aggressive software optimization.

  • If you have a skilled sysadmin team: Deploy an iDatam NVMe dedicated server, tune your Linux kernel precisely as we detailed above, set up aggressive micro-caching, and watch your single machine shrug off traffic that would melt a standard cloud setup.

  • If you want enterprise performance out of the box: Pair an iDatam bare-metal node with a LiteSpeed Enterprise license. You will achieve maximum concurrency without having to spend weeks tweaking FastCGI buffers and PHP-FPM worker pools.

Discover iDatam Dedicated Server Locations

iDatam servers are available around the world, providing diverse options for hosting websites. Each region offers unique advantages, making it easier to choose a location that best suits your specific hosting needs.