Internet performance tests can produce different results for many reasons. Three of the main reasons are described below:
1. Differences in the location of testing servers
Every performance test has two parts:
- client: This is the software that runs on the user’s machine and shows the user their speed results.
- server: This is the computer on the Internet to which the client connects to complete the test.
A test sends data between the client and the server and measures performance between those two points. The location of both points matters when interpreting the results of a given test.
If the server is located within your Internet Service Provider’s (ISP’s) own network (also known as the “last mile”), the result is referred to as an “on-net” measurement. This approach tells you how your Internet connection performs within your ISP’s own network, but it does not necessarily reflect the full experience of using the Internet, which almost always involves inter-network connections (connections between networks) to reach content and services hosted outside of your ISP. Results from on-net testing are often higher than those from other methods, since the “distance” traveled is generally shorter and the network is controlled entirely by one provider (your ISP).
“Off-net” measurements occur between your computer and a server located outside of your ISP’s network. This means that traffic crosses inter-network borders and often travels longer distances. Off-net testing frequently produces lower results than on-net testing.
M-Lab’s measurements are always conducted off-net. This allows M-Lab to measure performance from testers’ computers to locations where popular Internet content is often hosted. Because inter-network connections are included in the test path, test users get a realistic sense of the performance they can expect when using the Internet. A rough way to check whether a given test server is on-net or off-net is sketched below.
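One practical (if imperfect) heuristic for the on-net/off-net distinction is to compare the autonomous system (ASN) that announces your public IP address with the ASN that announces the test server’s address. The sketch below assumes the third-party `requests` and `ipwhois` packages, uses the public api.ipify.org service to discover your public IP, and uses a hypothetical server hostname; it is an illustration, not part of any test’s official tooling.

```python
# Rough on-net vs. off-net check: compare the ASN announcing your public
# IP with the ASN announcing the test server's IP. Same ASN usually means
# the server sits inside your ISP's network.
# Requires: pip install requests ipwhois
import socket

import requests
from ipwhois import IPWhois


def asn_for(ip: str) -> str:
    """Return the autonomous system number that announces this IP."""
    return IPWhois(ip).lookup_rdap(depth=1)["asn"]


def main() -> None:
    # api.ipify.org is one of several public "what is my IP" services.
    my_ip = requests.get("https://api.ipify.org", timeout=10).text.strip()

    # Hypothetical test-server hostname; substitute the server your
    # speed test actually connects to.
    server_ip = socket.gethostbyname("speedtest.example.net")

    my_asn, server_asn = asn_for(my_ip), asn_for(server_ip)
    kind = "on-net" if my_asn == server_asn else "off-net"
    print(f"client AS{my_asn} -> server AS{server_asn}: likely {kind}")


if __name__ == "__main__":
    main()
```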
2. Differences in testing methods
Different Internet performance tests measure different things in different ways. M-Lab’s NDT test tries to transfer as much data as it can in ten seconds (both up and down), using a single connection to an M-Lab server. Other popular tests try to transfer as much data as possible at once across multiple connections to their server. Neither method is “right” or “wrong,” but using a single stream is more likely to help diagnose problems in the network than multiple streams would. Learn more about M-Lab’s NDT methodology.
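To make the single-stream idea concrete, here is a minimal sketch of a fixed-duration throughput measurement over one TCP connection. It is not the NDT protocol itself (the current version, ndt7, runs over WebSockets), and the host and port are placeholders for a hypothetical server that simply streams bytes to any client that connects.

```python
# Minimal single-stream throughput sketch: open one TCP connection,
# read as much data as possible for ten seconds, report the average rate.
import socket
import time

HOST, PORT = "throughput-test.example.net", 8080  # hypothetical endpoint
DURATION = 10.0                                    # seconds, as in NDT
CHUNK = 64 * 1024


def measure_download() -> float:
    """Return the measured download rate in Mbit/s over one TCP stream."""
    total = 0
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        sock.settimeout(2.0)
        start = time.monotonic()
        while time.monotonic() - start < DURATION:
            data = sock.recv(CHUNK)
            if not data:          # server closed the connection early
                break
            total += len(data)
        elapsed = time.monotonic() - start
    return (total * 8) / (elapsed * 1_000_000)


if __name__ == "__main__":
    print(f"single-stream download: {measure_download():.1f} Mbit/s")
```

A multi-stream test would open several such connections in parallel and sum their rates; that tends to fill the pipe more fully, while a single stream exposes per-connection behavior such as loss and retransmission more clearly.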
All NDT data collected by M-Lab are publicly available in visualized (graphic), queryable, and raw (unanalyzed) forms.
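For readers who want to query the data directly, here is a small sketch using the BigQuery Python client. The dataset and column names (`measurement-lab.ndt.unified_downloads`, `a.MeanThroughputMbps`) follow M-Lab’s published schema at the time of writing and may change, so check M-Lab’s data documentation before relying on them; running the query also requires a Google Cloud project for billing.

```python
# Querying M-Lab's public NDT downloads table for a daily median throughput.
# Requires: pip install google-cloud-bigquery
from google.cloud import bigquery

QUERY = """
SELECT
  date,
  APPROX_QUANTILES(a.MeanThroughputMbps, 100)[OFFSET(50)] AS median_mbps
FROM `measurement-lab.ndt.unified_downloads`
WHERE date BETWEEN '2023-01-01' AND '2023-01-31'
GROUP BY date
ORDER BY date
"""


def main() -> None:
    client = bigquery.Client()  # uses your default Google Cloud credentials
    for row in client.query(QUERY).result():
        print(row.date, round(row.median_mbps, 1))


if __name__ == "__main__":
    main()
```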
3. Changing network conditions and distinct test paths
The Internet is always changing, and test results reflect that. A test conducted five minutes ago may show very different results from a test conducted twenty minutes ago. This can be caused by the test traffic being routed differently: for example, one test might travel over a path with a broken router, while another may not. A test run today may be directed to a test server located farther away than the one used yesterday. Additionally, IPv4 and IPv6 routes may take different physical paths. Some IPv6 routes may be tunneled through IPv4, either from the client or at any point after it, depending on local network management.
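One simple way to see why IPv4 and IPv6 tests can differ is that the same hostname usually resolves to separate IPv4 and IPv6 endpoints, and the route to each is chosen independently. The hostname below is a placeholder; substitute a test server you actually use.

```python
# Show the separate IPv4 and IPv6 endpoints behind one hostname.
import socket

HOST = "speedtest.example.net"  # hypothetical test server


def addresses(family: socket.AddressFamily) -> set[str]:
    """Return the addresses of HOST for one address family."""
    try:
        infos = socket.getaddrinfo(HOST, 443, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return set()            # no records for this address family
    return {info[4][0] for info in infos}


if __name__ == "__main__":
    print("IPv4 endpoints:", addresses(socket.AF_INET) or "none")
    print("IPv6 endpoints:", addresses(socket.AF_INET6) or "none")
```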
In short, running one test will give you a sense of network conditions at that moment, across the best network path available at that time, to the specific server coordinating the test. But because Internet routing and infrastructure change dynamically, testing regularly and looking at the data over time are much more reliable ways to gauge representative performance.
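If you want to build that longer-term picture yourself, a simple approach is to run a test on a schedule and log each result with a timestamp. The sketch below is one way to do that; `run_speed_test()` is a placeholder to be wired up to whatever test client you actually use (for example, by shelling out to a command-line NDT client and parsing its output).

```python
# Run a measurement every hour and append the result to a CSV file
# so performance can be graphed over time.
import csv
import time
from datetime import datetime, timezone

INTERVAL_SECONDS = 3600
LOGFILE = "speed_history.csv"


def run_speed_test() -> float:
    """Placeholder: return measured download throughput in Mbit/s."""
    raise NotImplementedError("plug in your preferred test client here")


def main() -> None:
    while True:
        mbps = run_speed_test()
        with open(LOGFILE, "a", newline="") as fh:
            csv.writer(fh).writerow(
                [datetime.now(timezone.utc).isoformat(), round(mbps, 2)]
            )
        time.sleep(INTERVAL_SECONDS)


if __name__ == "__main__":
    main()
```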