Short answer: Yes.
Long answer: There are several aspects of data transfer that can be measured on an amount-per-time basis. Amount of data per second is one of them, but it can be misleading if not properly explained.
From the network performance point of view, these are the important factors (quoting Wikipedia here):
- Bandwidth - maximum rate that information can be transferred
- Throughput - the actual rate that information is transferred
- Latency - the delay between the sender transmitting the information and the receiver decoding it
- Jitter - variation in the time of arrival at the receiver of the information
- Error rate - corrupted data expressed as a percentage or fraction of the total sent
So you may have a 10 Mb/s connection, but if 50% of the sent packets arrive corrupted, your effective throughput is only 5 Mb/s. (Even less, once you consider that a substantial part of what arrives is protocol headers and control structures rather than data payload.)
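That calculation can be sketched in a few lines (the function name and the 10% overhead figure are illustrative assumptions, not fixed values):

```python
# Sketch: effective throughput once packet corruption and protocol
# overhead are subtracted from the nominal bandwidth.

def effective_throughput(bandwidth_mbps, error_rate, overhead_fraction):
    """Usable payload rate given a corrupted-packet rate and header overhead."""
    surviving = bandwidth_mbps * (1 - error_rate)   # packets that arrive intact
    return surviving * (1 - overhead_fraction)      # minus headers/control data

# 10 Mb/s link, 50% corrupted packets, assume 10% of each packet is headers:
print(effective_throughput(10, 0.50, 0.10))  # 4.5 Mb/s of actual payload
```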
Latency may be affected by mechanisms such as Nagle's algorithm (which delays small TCP writes so they can be coalesced into fewer packets) and ISP-side buffering.
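If latency matters more to you than packet efficiency, Nagle's algorithm can be turned off per socket. A minimal Python sketch:

```python
# Disable Nagle's algorithm on a TCP socket: small writes are sent
# immediately instead of being buffered and coalesced, trading more
# packets on the wire for lower latency.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
# ... connect and use the socket as usual ...
sock.close()
```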
In the spirit of RFC 1149, an ISP could sell you an IPoAC package rated at 1 Gbit/s and still be true to its word: it sends you 16 pigeons with 32 GB SD cards attached, average air time around one hour. The throughput is over 1 Gbit/s, but the latency is ~3,600,000 ms.