How Bandwidth-Delay Product (BDP) Works
The Bandwidth-Delay Product represents the maximum amount of data that can be "in flight" on a network connection at any given moment. Understanding BDP is crucial for optimizing TCP performance, especially on high-bandwidth or high-latency links.
BDP Formula
BDP (bytes) = Bandwidth (bits/sec) × RTT (seconds) ÷ 8

Example: A 1 Gbps link with 100 ms RTT has BDP = 1,000,000,000 × 0.1 ÷ 8 = 12.5 MB.
For TCP to fully utilize available bandwidth, the receive window must be at least as large as the BDP. If the window is smaller, the sender must wait for acknowledgments before transmitting more data, leaving bandwidth unused.
Understanding TCP Window Scaling
The original TCP specification uses a 16-bit window field, limiting the maximum window size to 64 KB. Window scaling, introduced in RFC 1323 and now specified by RFC 7323, supports the much larger windows that modern high-speed networks need.
| Scale Factor | Maximum Window | Use Case |
|---|---|---|
| 0 (no scaling) | 64 KB | Low-bandwidth legacy systems |
| 4 | 1 MB | Most LAN connections |
| 7 | 8 MB | High-speed WAN links |
| 10 | 64 MB | 10 Gbps+ datacenter links |
| 14 (maximum) | 1 GB | Extreme high-bandwidth paths |
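The table follows the pattern maximum window = 65,535 × 2^scale, so the minimum scale factor for a given BDP can be derived directly; a sketch (assuming the standard 65,535-byte base window):

```python
import math

def min_window_scale(bdp_bytes: float, base_window: int = 65_535) -> int:
    """Smallest RFC 7323 shift count whose scaled window covers the BDP (capped at 14)."""
    if bdp_bytes <= base_window:
        return 0
    return min(math.ceil(math.log2(bdp_bytes / base_window)), 14)

print(min_window_scale(1_720_000))   # 5  (the GEO satellite example below)
print(min_window_scale(21_900_000))  # 9  (the transcontinental WAN example)
```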
MTU vs MSS: Understanding the Difference
MTU (Maximum Transmission Unit) and MSS (Maximum Segment Size) are related but distinct concepts that affect TCP performance differently.
MTU (Maximum Transmission Unit)
- Layer 2 (Data Link) concept
- Includes all headers
- Standard Ethernet: 1500 bytes
- Jumbo frames: 9000 bytes
MSS (Maximum Segment Size)
- Layer 4 (TCP) concept
- TCP payload only
- MSS = MTU - IP header - TCP header
- Typical: 1460 bytes (IPv4), 1440 (IPv6)
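The subtraction in the bullets above can be sketched as follows (`tcp_options` is an illustrative parameter for extras such as the 12-byte timestamp option):

```python
def mss(mtu: int, ipv6: bool = False, tcp_options: int = 0) -> int:
    """TCP payload per segment: MTU minus the IP header and the TCP header (plus options)."""
    ip_header = 40 if ipv6 else 20   # IPv6 base header is 40 bytes
    tcp_header = 20 + tcp_options    # 20-byte minimum TCP header
    return mtu - ip_header - tcp_header

print(mss(1500))                  # 1460 (standard Ethernet, IPv4)
print(mss(1500, ipv6=True))       # 1440
print(mss(1500, tcp_options=12))  # 1448 (IPv4 with TCP timestamps)
```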
Buffer Sizing Recommendations by Use Case
| Scenario | Recommended Buffer | Notes |
|---|---|---|
| Desktop/Client | 2× BDP | Balance between performance and memory |
| Web Server | 1-1.5× BDP | Many connections, conserve memory |
| Bulk Transfer | 4× BDP | Large file transfers, backups |
| Satellite Links | 4× BDP + 25% | High latency, variable conditions |
| Datacenter | 1× BDP | Low latency, high bandwidth |
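As a sketch, the table's multipliers could be encoded like this (the scenario keys and the `recommended_buffer` helper are illustrative, not part of any real API):

```python
# Multipliers from the table above; "satellite" folds in the extra 25%.
MULTIPLIER = {
    "desktop": 2.0,
    "web_server": 1.5,
    "bulk_transfer": 4.0,
    "satellite": 4.0 * 1.25,
    "datacenter": 1.0,
}

def recommended_buffer(bdp_bytes: float, scenario: str) -> int:
    """Buffer size in bytes for a scenario, per the sizing table."""
    return int(bdp_bytes * MULTIPLIER[scenario])

print(recommended_buffer(12_500_000, "bulk_transfer"))  # 50000000 (4x a 12.5 MB BDP)
```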
Real-World Network Scenarios
Satellite Internet (GEO)
25 Mbps, 550ms RTT: BDP = 1.72 MB. Requires window scaling factor of 5+. BBR congestion control recommended due to high latency and variable packet loss.
Transcontinental WAN (US to Asia)
1 Gbps, 175ms RTT: BDP = 21.9 MB. Default buffers are insufficient. Configure tcp_rmem/tcp_wmem max to at least 44 MB for optimal performance.
Datacenter (Same Region)
10 Gbps, 3ms RTT: BDP = 3.75 MB. Jumbo frames (9000 MTU) recommended. CUBIC congestion control works well in this low-latency environment.
Citations & References
- RFC 7323: "TCP Extensions for High Performance" - defines window scaling, timestamps, and PAWS.
- RFC 5681: "TCP Congestion Control" - specifies slow start, congestion avoidance, fast retransmit, and fast recovery.
- RFC 6349: "Framework for TCP Throughput Testing" - methodology for testing TCP performance.
- Mathis, M., et al. "The Macroscopic Behavior of the TCP Congestion Avoidance Algorithm" - foundation for TCP throughput modeling under packet loss.
- Google, "BBR: Congestion-Based Congestion Control" - BBR algorithm design and implementation.
- Linux kernel documentation - Linux TCP buffer configuration.
Frequently Asked Questions
Common questions about the TCP Window Size Calculator
How is the Bandwidth-Delay Product calculated?
The Bandwidth-Delay Product (BDP) is calculated by multiplying bandwidth by round-trip time: BDP = Bandwidth × RTT. For example, a 1 Gbps link with 100ms RTT has a BDP of 1,000,000,000 × 0.1 / 8 = 12.5 MB. This represents the maximum amount of data 'in flight' on the network at any moment, and your TCP window size should at least match this value to fully utilize the available bandwidth.
What is the difference between MTU and MSS?
MTU (Maximum Transmission Unit) is the largest packet size a network can transmit, typically 1500 bytes for Ethernet. MSS (Maximum Segment Size) is the maximum TCP payload size, calculated as: MSS = MTU - IP Header (20 bytes for IPv4, 40 for IPv6) - TCP Header (20 bytes minimum + options). With standard Ethernet, IPv4, and TCP timestamps enabled, MSS = 1500 - 20 - 20 - 12 = 1448 bytes.
How large should TCP buffers be for a WAN link?
For WAN links, your TCP receive buffer should be at least equal to the Bandwidth-Delay Product (BDP). For high-latency links (100ms+ RTT), multiply BDP by 2-4x for optimal performance. For example, a 100 Mbps link with 200ms RTT has a 2.5 MB BDP, so it needs at least 2.5 MB buffers. The calculator generates OS-specific commands for Linux (sysctl), Windows (netsh), and macOS.
When is TCP window scaling needed?
TCP window scaling (RFC 7323) is needed when the Bandwidth-Delay Product exceeds 64 KB (65,535 bytes), which is the maximum value in the standard 16-bit TCP window field. Any modern high-speed or long-distance link requires window scaling. The scale factor is negotiated during the TCP handshake and allows windows up to 1 GB. Most operating systems enable this by default, but it's important to verify on older systems.
Which congestion control algorithm should I use?
For high-latency WAN links and satellite connections, BBR (Bottleneck Bandwidth and RTT) generally outperforms loss-based algorithms. For datacenter and low-latency environments, CUBIC (the Linux default) works well. For networks with high packet loss, BBR is more resilient. The calculator recommends algorithms based on your specific network characteristics.
How does packet loss limit TCP throughput?
The Mathis formula estimates maximum throughput under packet loss: Throughput ≈ (MSS / RTT) × (1.22 / √loss_rate). With 1% packet loss, a link with 1500-byte MSS and 100ms RTT is limited to about 1.46 Mbps regardless of available bandwidth. This makes reducing packet loss critical for WAN performance. BBR congestion control handles loss better than traditional algorithms like CUBIC or Reno.
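A quick check of the Mathis arithmetic, as a sketch:

```python
import math

def mathis_throughput_bps(mss_bytes: int, rtt_seconds: float, loss_rate: float) -> float:
    """Mathis et al. upper bound in bits/sec: (MSS/RTT) * 1.22 / sqrt(p)."""
    return (mss_bytes * 8 / rtt_seconds) * 1.22 / math.sqrt(loss_rate)

# 1500-byte MSS, 100 ms RTT, 1% loss -> about 1.46 Mbps
print(round(mathis_throughput_bps(1500, 0.100, 0.01)))
```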
Which sysctl parameters control Linux TCP buffers?
Linux TCP buffers are controlled by three sets of sysctl parameters: net.core.rmem_max/wmem_max (maximum buffer size), net.ipv4.tcp_rmem/tcp_wmem (minimum, default, and maximum per-socket sizes), and net.core.netdev_max_backlog (packet queue size). For high-performance WAN links, set rmem_max/wmem_max to at least 2× your BDP, and the maximum field of tcp_rmem/tcp_wmem to the same value.
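As an illustration (not an official tool), that guidance could be turned into sysctl lines; the min/default fields shown here are placeholder values, not recommendations:

```python
def linux_tcp_sysctls(bdp_bytes: int) -> list[str]:
    """Sketch: sysctl settings sized to 2x BDP, per the guidance above."""
    buf = 2 * bdp_bytes
    return [
        f"net.core.rmem_max = {buf}",
        f"net.core.wmem_max = {buf}",
        f"net.ipv4.tcp_rmem = 4096 131072 {buf}",  # min/default are illustrative
        f"net.ipv4.tcp_wmem = 4096 131072 {buf}",
    ]

# Transcontinental WAN example: BDP ~ 21.9 MB, so max buffers ~ 44 MB
for line in linux_tcp_sysctls(21_875_000):
    print(line)
```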
Why is my TCP throughput lower than my bandwidth?
TCP throughput is limited by the smallest of three factors: available bandwidth, TCP window size ÷ RTT, and the Mathis limit (packet loss impact). If your window size is too small for your link's BDP, throughput will be capped. Run this calculator with your bandwidth, RTT, and packet loss to identify the bottleneck and get tuning recommendations.
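The three ceilings can be compared in a sketch (the hard-coded 1448-byte MSS inside the Mathis term is an assumed value):

```python
import math

def tcp_throughput_limits_bps(bandwidth_bps: float, window_bytes: int,
                              rtt_s: float, loss_rate: float) -> dict:
    """The three ceilings from the answer above; the smallest one wins."""
    window_limit = window_bytes * 8 / rtt_s
    mathis_limit = ((1448 * 8 / rtt_s) * 1.22 / math.sqrt(loss_rate)
                    if loss_rate > 0 else float("inf"))
    return {"bandwidth": bandwidth_bps, "window": window_limit, "mathis": mathis_limit}

# 1 Gbps link, unscaled 64 KB window, 100 ms RTT, no loss:
limits = tcp_throughput_limits_bps(1e9, 65_535, 0.100, 0.0)
print(min(limits, key=limits.get))  # 'window' - 64 KB over 100 ms caps at ~5.2 Mbps
```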
What are the default TCP buffer sizes?
Default TCP buffer sizes vary by OS: Linux typically defaults to 128 KB - 6 MB (auto-tuning enabled), Windows 10/11 defaults to 64 KB with auto-tuning, and macOS defaults to 256 KB - 4 MB. These defaults work for most LAN scenarios but may limit performance on high-bandwidth WAN links. The calculator shows current defaults and recommends optimal values for your network.
How do VPNs affect MTU and MSS?
VPNs add encapsulation overhead that reduces effective MTU: IPsec tunnels typically use 1400 bytes, WireGuard 1420 bytes, and OpenVPN 1450 bytes. If MTU is not adjusted, packets may fragment or be dropped, severely impacting performance. The MTU/MSS Optimizer tab helps calculate correct values for various tunnel types and generates PMTUD recommendations.