 doc/tor-design.tex | 28 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 12 deletions(-)
diff --git a/doc/tor-design.tex b/doc/tor-design.tex
index af9d96da7e..f931bb95af 100644
--- a/doc/tor-design.tex
+++ b/doc/tor-design.tex
@@ -1541,18 +1541,22 @@ performance.
 % Right now the first $500 \times 500\mbox{B}=250\mbox{KB}$
 %of the stream arrives
 %quickly, and after that throughput depends on the rate that \emph{relay
 %sendme} acknowledgments arrive.
-For example, we did some informal tests using a test network of 4 nodes on
-the same machine. We downloaded a 60 megabyte file from {\tt debian.org}
-every 30 minutes for 2 days (100 sample points). It arrived in about
-300 seconds on average, compared to 210s for a direct download. We ran
-the same test on the main Tor network, pulling down the front page of
-{\tt cnn.com}: while a direct download consistently took about 0.5s,
-the performance through Tor was highly variable. Some downloads were
-as fast as 0.6s, with others as slow as 25s (the average was 2.5s). It
-seems that as the network expands, the chance of getting a slow circuit
-(one that includes a slow or heavily loaded Tor node) is increasing. On
-the other hand, we still have users, so this performance is good enough
-for now.
+To quantify these effects, we did some informal tests using a network of 4
+nodes on the same machine (a heavily loaded 1GHz Athlon). We downloaded a 60
+megabyte file from {\tt debian.org} every 30 minutes for 54 hours (108 sample
+points). It arrived in about 300 seconds on average, compared to 210s for a
+direct download. We ran a similar test on the production Tor network,
+fetching the front page of {\tt cnn.com} (55 kilobytes): while a direct
+download consistently took about 0.5s, the performance through Tor was highly
+variable. Some downloads were as fast as 0.6s, with a median at 2.7s, and
+80\% finishing within 5.7s. It seems that as the network expands, the chance
+of building a slow circuit (one that includes a slow or heavily loaded node
+or link) is increasing. On the other hand, as our users remain satisfied
+with this increased latency, we can address our performance incrementally as we
+proceed with development.\footnote{For example, we have just begun pushing
+  a pipelining patch to the production network that seems to
+  decrease latency for medium-to-large files; we will present revised
+  benchmarks as they become available.}
 %With the current network's topology and load, users can typically get 1-2
 %megabits sustained transfer rate, which is good enough for now.
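
The new text reports a median of 2.7s and an 80th percentile of 5.7s over 108 timed fetches of a small page. As a rough sketch of that style of measurement (not the authors' actual test harness), the Python snippet below times repeated HTTP fetches and computes the same summary statistics. The URL, sample count, and interval are placeholder assumptions, and routing the requests through Tor's local SOCKS proxy is mentioned only in a comment, since the diff does not describe the measurement tooling.

# Illustrative latency benchmark in the spirit of the measurements described
# in the patch: fetch a small page repeatedly, record wall-clock durations,
# and report the median and 80th percentile.  All parameters are placeholders.
import statistics
import time
import urllib.request

URL = "http://example.com/"   # placeholder; the paper fetched the cnn.com front page
SAMPLES = 20                  # the paper took 108 samples, one every 30 minutes
INTERVAL_S = 5                # shortened here so the sketch finishes quickly

# To measure through Tor rather than directly, the fetches would have to be
# sent via Tor's local SOCKS proxy (conventionally 127.0.0.1:9050), e.g. with
# the third-party PySocks package -- an assumption, not part of the diff.

durations = []
for _ in range(SAMPLES):
    start = time.monotonic()
    with urllib.request.urlopen(URL, timeout=60) as response:
        response.read()                        # read the full body, like a complete page load
    durations.append(time.monotonic() - start)
    time.sleep(INTERVAL_S)

median = statistics.median(durations)
p80 = statistics.quantiles(durations, n=5)[3]  # fourth quintile cut point = 80th percentile
print(f"median {median:.2f}s, 80th percentile {p80:.2f}s over {len(durations)} samples")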