-rw-r--r--  bandwidth-file-spec.txt                     36
-rw-r--r--  cert-spec.txt                                8
-rw-r--r--  param-spec.txt                              12
-rw-r--r--  proposals/324-rtt-congestion-control.txt   151
-rw-r--r--  proposals/333-vanguards-lite.md             24
-rw-r--r--  rend-spec-v3.txt                            22
-rw-r--r--  tor-spec.txt                                43
7 files changed, 215 insertions, 81 deletions
diff --git a/bandwidth-file-spec.txt b/bandwidth-file-spec.txt
index 5ad946f..bad13f6 100644
--- a/bandwidth-file-spec.txt
+++ b/bandwidth-file-spec.txt
@@ -99,6 +99,8 @@ Table of Contents
Also adds Tor version.
1.5.0 - Removes "recent_measurement_attempt_count" KeyValue.
1.6.0 - Adds congestion control stream events KeyValues.
+ 1.7.0 - Adds ratios KeyValues to the relay lines and network averages
+ KeyValues to the header.
All Tor versions can consume format version 1.0.0.
@@ -518,6 +520,23 @@ Table of Contents
This Line was added in version 1.4.0 of this specification.
+ "mu" Int NL
+
+ [Zero or one time.]
+
+ The network stream bandwidth average calculated as explained in B4.2.
+
+ This Line was added in version 1.7.0 of this specification.
+
+ "muf" Int NL
+
+ [Zero or one time.]
+
+ The filtered network stream bandwidth average, calculated as explained
+ in B4.2.
+
+ This Line was added in version 1.7.0 of this specification.
+
KeyValue NL
[Zero or more times.]
@@ -1038,6 +1057,23 @@ Table of Contents
This KeyValue was added in version 1.6.0 of this specification.
+ "r_strm" Float
+
+ [Zero or one time.]
+
+ The stream ratio of this relay calculated as explained in B4.3.
+
+ This KeyValue was added in version 1.7.0 of this specification.
+
+ "r_strm_filt" Float
+
+ [Zero or one time.]
+
+ The filtered stream ratio of this relay calculated as explained in B4.3.
+
+ This KeyValue was added in version 1.7.0 of this specification.
+
+
2.4.2.2. Torflow
Torflow RelayLines include node_id and bw, and other KeyValue pairs [2].
diff --git a/cert-spec.txt b/cert-spec.txt
index 1782141..a70e100 100644
--- a/cert-spec.txt
+++ b/cert-spec.txt
@@ -92,8 +92,9 @@ Table of Contents
Before processing any certificate, parties SHOULD know which
identity key it is supposed to be signed by, and then check the
- signature. The signature is formed by signing the first N-64
- bytes of the certificate.
+ signature. The signature is created by signing all the fields in
+ the certificate up until "SIGNATURE" (that is, signing
+ sizeof(ed25519_cert) - 64 bytes).
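+
+ As a non-normative illustration, a verifier might check this roughly as
+ follows (the helper name and variable names are assumptions for this
+ sketch, not part of this specification):
+
+   /* The signed body is everything up to the trailing 64-byte
+    * SIGNATURE field. */
+   size_t body_len = cert_len - 64;
+   const uint8_t *sig = cert + body_len;
+
+   /* Verify against the identity key that is expected to have signed
+    * this certificate. */
+   if (ed25519_verify(expected_signing_pk, cert, body_len, sig) < 0)
+     reject_certificate();
+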
2.2. Basic extensions
@@ -127,6 +128,9 @@ Table of Contents
the non-signature parts of the certificate, prefixed with the
string "Tor TLS RSA/Ed25519 cross-certificate".)
+ Just like with the Ed25519 certificates above, the EXPIRATION_DATE
+ is expressed in HOURS after the epoch.
+
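+ For example, to check expiry an implementation converts the field to
+ seconds (a sketch; the variable names are illustrative):
+
+   /* EXPIRATION_DATE counts hours since the epoch. */
+   time_t expires_at = ((time_t)expiration_date) * 3600;
+   if (now >= expires_at)
+     ;  /* the certificate has expired */
+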
This certificate type is used to mean, "This Ed25519 identity key
acts with the authority of the RSA key that signed this
certificate."
diff --git a/param-spec.txt b/param-spec.txt
index a63ad3b..123cedc 100644
--- a/param-spec.txt
+++ b/param-spec.txt
@@ -105,11 +105,17 @@ Table of Contents
"KISTSchedRunInterval" -- How frequently should the "KIST" scheduler
run in order to decide which data to write to the network? Value in
- units of milliseconds. If 0, then the KIST scheduler should be
- disabled.
- Min: 0. Max: 100. Default: 10.
+ units of milliseconds.
+ Min: 2. Max: 100. Default: 2.
First appeared: 0.3.2
+ "KISTSchedRunIntervalClient" -- How frequently should the "KIST" scheduler
+ run in order to decide which data to write to the network, on clients? Value
+ in units of milliseconds. The client value needs to be much lower than
+ the relay value.
+ Min: 2. Max: 100. Default: 2.
+ First appeared: 0.4.8.2
+
3. Voting-related parameters
"bwweightscale" -- Value that bandwidth-weights are divided by. If not
diff --git a/proposals/324-rtt-congestion-control.txt b/proposals/324-rtt-congestion-control.txt
index 625fab2..582c54d 100644
--- a/proposals/324-rtt-congestion-control.txt
+++ b/proposals/324-rtt-congestion-control.txt
@@ -163,14 +163,13 @@ false (no stall), and the RTT value is used.
2.1.2. N_EWMA Smoothing [N_EWMA_SMOOTHING]
-Both RTT estimation and SENDME BDP estimation require smoothing, to
-reduce the effects of packet jitter.
+RTT estimation requires smoothing, to reduce the effects of packet jitter.
This smoothing is performed using N_EWMA[27], which is an Exponential
Moving Average with alpha = 2/(N+1):
- N_EWMA = BDP*2/(N+1) + N_EWMA_prev*(N-1)/(N+1)
- = (BDP*2 + N_EWMA_prev*(N-1))/(N+1).
+ N_EWMA = RTT*2/(N+1) + N_EWMA_prev*(N-1)/(N+1)
+ = (RTT*2 + N_EWMA_prev*(N-1))/(N+1).
Note that the second rearranged form MUST be used in order to ensure that
rounding errors are handled in the same manner as other implementations.
@@ -179,10 +178,9 @@ Flow control rate limiting uses this function.
During Slow Start, N is set to `cc_ewma_ss`, for RTT estimation.
-After Slow Start, for both RTT and SENDME BDP estimation, N is the number
-of SENDME acks between congestion window updates, divided by the value of consensus
-parameter 'cc_ewma_cwnd_pct', and then capped at a max of 'cc_ewma_max',
-but always at least 2:
+After Slow Start, N is the number of SENDME acks between congestion window
+updates, divided by the value of consensus parameter 'cc_ewma_cwnd_pct', and
+then capped at a max of 'cc_ewma_max', but always at least 2:
N = MAX(MIN(CWND_UPDATE_RATE(cc)*cc_ewma_cwnd_pct/100, cc_ewma_max), 2);
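+
+As a sketch in C-like integer arithmetic (variable names are illustrative,
+not the literal implementation):
+
+  /* N is recomputed from the congestion window update rate and the
+   * consensus parameters, but never drops below 2. */
+  uint64_t n = MAX(MIN(CWND_UPDATE_RATE(cc)*cc_ewma_cwnd_pct/100,
+                       cc_ewma_max), 2);
+
+  /* The rearranged N_EWMA form MUST be used so that rounding errors
+   * match other implementations. */
+  ewma_rtt = (rtt*2 + ewma_rtt*(n - 1)) / (n + 1);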
@@ -247,12 +245,11 @@ have been given different names than in those two mails. The third algorithm,
[TOR_NOLA], simply uses the latest BDP estimate directly as its congestion
window.
-These algorithms will be evaluated by running Shadow simulations, to
-help determine parameter ranges, but experimentation on the live network
-will be required to determine which of these algorithms performs best
-when in competition with our current SENDME behavior, as used by real
-network traffic. This experimentation and tuning is detailed in section
-[EVALUATION].
+These algorithms were evaluated by running Shadow simulations, to help
+determine parameter ranges, and with experimentation on the live network.
+After this testing, we have converged on using [TOR_VEGAS], and RTT-based BDP
+estimation using the congestion window. We leave the descriptions of the
+other algorithms in place for historical reference.
All of these algorithms have rules to update 'cwnd' - the current congestion
window, which starts out at a value controlled by consensus parameter
@@ -271,7 +268,6 @@ The 'deliver_window' field is still used to decide when to send a SENDME. In C
tor, the deliver window is initially set at 1000, but it never gets below 900,
because authenticated sendmes (Proposal 289) require that we must send only
one SENDME at a time, and send it immediately after 100 cells are received.
-This property turns out to be very useful for [BDP_ESTIMATION].
Implementation of different algorithms should be very simple - each
algorithm should have a different update function depending on the selected algorithm,
@@ -308,22 +304,23 @@ is circuit-scoped.
At a high-level, there are three main ways to estimate the Bandwidth-Delay
Product: by using the current congestion window and RTT, by using the inflight
-cells and RTT, and by measuring SENDME arrival rate.
+cells and RTT, and by measuring SENDME arrival rate. After extensive Shadow
+simulation and live testing, we have arrived at using the congestion window
+RTT based estimator, but we will describe all three for background.
All three estimators are updated every SENDME ack arrival.
-The SENDME arrival rate is the most accurate way to estimate BDP, but it
-requires averaging over multiple SENDME acks to do so. The congestion window
-and inflight estimates rely on the congestion algorithm more or less correctly
-tracking an approximation of the BDP, and then use current and minimum RTT to
-compensate for overshoot.
+The SENDME arrival rate is the most direct way to estimate BDP, but it
+requires averaging over multiple SENDME acks to do so. Unfortunately,
+this approach suffers from what is called "ACK compression", where returning
+SENDMEs build up in queues, causing over-estimation of the BDP.
-The SENDME estimator tends to be accurate after ~3-5 SENDME acks. The cwnd and
-inflight estimators tend to be accurate once the congestion window exceeds
-BDP.
-
-We specify all three because they are all useful in various cases. These cases
-are broken up and combined to form the Piecewise BDP estimator.
+The congestion window and inflight estimates rely on the congestion algorithm
+more or less correctly tracking an approximation of the BDP, and then use
+current and minimum RTT to compensate for overshoot. These estimators tend to
+under-estimate BDP, especially when the congestion window is below the BDP.
+This under-estimation is corrected for by the congestion window increase
+rules of the congestion control algorithm.
3.1.1. SENDME arrival BDP estimation
@@ -380,6 +377,8 @@ in Shadow simulation, due to ack compression.
3.1.2. Congestion Window BDP Estimation
+This is the BDP estimator we use.
+
Assuming that the current congestion window is at or above the current BDP,
the bandwidth estimate is the current congestion window size divided by the
RTT estimate:
@@ -420,12 +419,17 @@ and all circuit queues have drained without blocking the local orconn, we stop
updating this BDP estimate, because there are not sufficient inflight cells
to properly estimate BDP.
+While the research literature for Vegas says that inflight estimators
+performed better due to the ability to avoid overshoot, we had better
+performance results using other methods to control overshoot. Hence, we do not
+use the inflight BDP estimator.
+
3.1.4. Piecewise BDP estimation
-The piecewise BDP estimation is used to help respond more quickly in the event
-the local OR connection is blocked, which indicates congestion somewhere along
-the path from the client to the guard (or between Exit and Middle). In this
-case, it takes the minimum of the inflight and SENDME estimators.
+A piecewise BDP estimation could be used to help respond more quickly in the
+event the local OR connection is blocked, which indicates congestion somewhere
+along the path from the client to the guard (or between Exit and Middle). In
+this case, it takes the minimum of the inflight and SENDME estimators.
When the local OR connection is not blocked, this estimator uses the max of
the SENDME and cwnd estimator values.
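+
+In pseudocode, such a piecewise estimator would look roughly like this
+(purely illustrative, since this estimator is no longer used):
+
+  if (orconn_is_blocked(circ))
+    bdp = MIN(inflight_bdp, sendme_bdp);
+  else
+    bdp = MAX(sendme_bdp, cwnd_bdp);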
@@ -527,6 +531,9 @@ each time we get a SENDME (aka sendme_process_circuit_level()):
TCP Vegas control algorithm estimates the queue lengths at relays by
subtracting the current BDP estimate from the current congestion window.
+After extensive Shadow simulation and live testing, we have settled on this
+congestion control algorithm for use in Tor.
+
Assuming the BDP estimate is accurate, any amount by which the congestion
window exceeds the BDP will cause data to queue.
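+
+As a sketch (names are illustrative, and the full algorithm has additional
+rules for slow start and blocked connections):
+
+  /* Estimated number of cells queued along the circuit. */
+  queue_use = cwnd - bdp_estimate;
+
+  /* Classic Vegas reaction: grow cwnd while the estimated queue is
+   * small, back off once it exceeds the tolerated amount. */
+  if (queue_use < cc_vegas_alpha)
+    cwnd = cwnd + cwnd_inc;
+  else if (queue_use > cc_vegas_beta)
+    cwnd = cwnd - cwnd_inc;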
@@ -1180,7 +1187,7 @@ We will calibrate the Shadow simulator so that it has similar CDFs for all of
these metrics as the live network, without using congestion control.
Then, we will want to inspect CDFs of these three metrics for various
-congestion control algorithms and parameters.
+congestion control algorithms and parameters.
The live network testing will also spot-check performance characteristics of
a couple algorithm and parameter sets, to ensure we see similar results as
@@ -1212,7 +1219,7 @@ These are sorted in order of importance to tune, most important first.
- Description:
Specifies which congestion control algorithm clients should
use, as an integer.
- - Range: [0,3] (0=fixed, 1=Westwood, 2=Vegas, 3=NOLA)
+ - Range: 0 or 2 (0=fixed windows, 2=Vegas)
- Default: 2
- Tuning Values: [2,3]
- Tuning Notes:
@@ -1221,6 +1228,8 @@ These are sorted in order of importance to tune, most important first.
values, and even the optimal algorithm itself, will likely depend
upon how much fixed sendme traffic is in competition. See the
algorithm-specific parameters for additional tuning notes.
+ As of Tor 0.4.8, Vegas is the default algorithm, and support
+ for algorithms 1 (Westwood) and 3 (NOLA) has been removed.
- Shadow Tuning Results:
Westwood exhibited responsiveness problems, drift, and overshoot.
NOLA exhibited ack compression resulting in over-estimating the
@@ -1252,7 +1261,7 @@ These are sorted in order of importance to tune, most important first.
cc_sendme_inc:
- Description: Specifies how many cells a SENDME acks
- - Range: [1, 255]
+ - Range: [1, 254]
- Default: 31
- Tuning Values: 25,33,50
- Tuning Notes:
@@ -1266,7 +1275,7 @@ These are sorted in order of importance to tune, most important first.
cells that fit in a TLS frame. Much of the rest of Tor has
processing values at 32 cells, as well.
- Consensus Update Notes:
- This value MUST only be changed by a factor of 2, every 4 hours.
+ This value MUST only be changed by +/- 1, every 4 hours.
If greater changes are needed, they MUST be spread out over
multiple consensus updates.
@@ -1427,14 +1436,6 @@ These are sorted in order of importance to tune, most important first.
allocation, though. Values of 50-100 will be explored after
examining Shadow Guard Relay Utilization.
- cc_bdp_alg:
- - Description: The BDP estimation algorithm to use.
- - Range: [0,7]
- - Default: 7 (Piecewise EWMA)
- - Tuning Notes:
- We don't expect to need to tune this.
- - Shadow Tuning Results:
- We leave this as-is, but disable it in Vegas instead, below.
6.5.2. Westwood parameters
@@ -1502,41 +1503,40 @@ These are sorted in order of importance to tune, most important first.
- Range: [0, 1000] (except delta, which has max of INT32_MAX)
- Defaults:
# OUTBUF_CELLS=62
- cc_vegas_alpha_exit (2*OUTBUF_CELLS)
+ cc_vegas_alpha_exit (3*OUTBUF_CELLS)
cc_vegas_beta_exit (4*OUTBUF_CELLS)
cc_vegas_gamma_exit (3*OUTBUF_CELLS)
- cc_vegas_delta_exit (6*OUTBUF_CELLS)
+ cc_vegas_delta_exit (5*OUTBUF_CELLS)
cc_vegas_alpha_onion (3*OUTBUF_CELLS)
- cc_vegas_beta_onion (7*OUTBUF_CELLS)
- cc_vegas_gamma_onion (5*OUTBUF_CELLS)
- cc_vegas_delta_onion (9*OUTBUF_CELLS)
+ cc_vegas_beta_onion (6*OUTBUF_CELLS)
+ cc_vegas_gamma_onion (4*OUTBUF_CELLS)
+ cc_vegas_delta_onion (7*OUTBUF_CELLS)
- Tuning Notes:
The amount of queued cells that Vegas should tolerate is heavily
dependent upon competing congestion control algorithms. The specified
defaults are necessary to compete against current fixed SENDME traffic,
- but are much larger than neccessary otherwise. As the number of
- clients using fixed windows is reduced (or circwindow is lowered), we
- can reduce these parameters, which will result in less overall queuing
- and congestion on the network.
+ but are much larger than necessary otherwise. These values also
+ need a large-ish range between alpha and beta, to allow some degree of
+ variance in traffic, as per [33]. The tuning of these parameters
+ happened in two tickets [34,35]. The onion service parameters were
+ set so that their queues grow until they incur as much queue delay as
+ Exits, while tolerating up to 6 hops of outbuf delay.
+ Lack of visibility into onion service congestion window on the live
+ network prevented confirming this.
- Shadow Tuning Results:
We found that the best values for 3-hop Exit circuits was to set
- beta and gamma to the size of the outbufs times the number of hops.
- This has the effect that Vegas detects congestion and backs off
- when this much queue delay is detected. Alpha is set to one TLS
- record/sendme_inc below this value. If the queue length is detected
- to be below that, we increase the congestion window. We still
- need to verify that the path length multiplier still holds for
- other types of circuits, specifically onion services.
-
- cc_sscap_sbws_{exit,onion,sbws}:
+ alpha and gamma to the size of the outbufs times the number of
+ hops. Beta is set to one TLS record/sendme_inc above this value.
+
+ cc_sscap_{exit,onion,sbws}:
- Description: These parameters describe the RFC3742 'cap', after which
congestion window increments are reduced. INT32_MAX disables
RFC3742.
- Range: [100, INT32_MAX]
- Defaults:
sbws: 400
- exit: 500
- onion: 600
+ exit: 600
+ onion: 475
- Shadow Tuning Results:
We picked these defaults based on the average congestion window
seen in Shadow sims for exits and onion service circuits.
@@ -1555,19 +1555,18 @@ These are sorted in order of importance to tune, most important first.
'cc_sendme_inc' multiples of gap allowed between inflight and
cwnd, to still declare the cwnd full.
- Range: [0, INT16_MAX]
- - Default: 1-2
+ - Default: 4
- Shadow Tuning Results:
- A value of 0 resulted in a slight loss of performance, and increased
- variance in throughput. The optimal number here likely depends on
- edgeconn inbuf size, edgeconn kernel buffer size, and eventloop
- behavior.
+ Low values resulted in a slight loss of performance, and increased
+ variance in throughput. Setting this at 4 seemed to achieve a good
+ balance between throughput and queue overshoot.
cc_cwnd_full_minpct:
- - Description: This paramter defines a low watermark in percent. If
+ - Description: This parameter defines a low watermark in percent. If
inflight falls below this percent of cwnd, the congestion window
is immediately declared non-full.
- Range: [0, 100]
- - Default: 75
+ - Default: 25
cc_cwnd_full_per_cwnd:
- Description: This parameter governs how often a cwnd must be
@@ -1618,7 +1617,7 @@ These are sorted in order of importance to tune, most important first.
This threshold plus the sender's cwnd must be greater than the
cc_xon_rate value, or a rate cannot be computed. Unfortunately,
unless it is sent, the receiver does not know the cwnd. Therefore,
- this value should always be higher than cc_xon_rate minus
+ this value should always be higher than cc_xon_rate minus
'cc_cwnd_min' (100) minus the xon threshhold value (0).
cc_xon_rate
@@ -1656,7 +1655,7 @@ These are sorted in order of importance to tune, most important first.
- Tuning Notes:
Setting this higher will smooth over changes in the rate field,
and thus avoid XONs, but will reduce our reactivity to rate changes.
-
+
6.5.6. External Performance Parameters to Tune
@@ -1668,7 +1667,7 @@ These are sorted in order of importance to tune, most important first.
- Description: Specifies the percentage cutoff for the circuit build
timeout mechanism.
- Range: [60, 80]
- - Default: 80
+ - Default: 80
- Tuning Values: [70, 75, 80]
- Tuning Notes:
The circuit build timeout code causes Tor to use only the fastest
@@ -2309,3 +2308,9 @@ receive more data. It is sent to tell the sender to resume sending.
32. RFC3742 Limited Slow Start
https://datatracker.ietf.org/doc/html/rfc3742#section-2
+
+33. https://people.csail.mit.edu/venkatar/cc-starvation.pdf
+
+34. https://gitlab.torproject.org/tpo/core/tor/-/issues/40642
+
+35. https://gitlab.torproject.org/tpo/network-health/analysis/-/issues/49
diff --git a/proposals/333-vanguards-lite.md b/proposals/333-vanguards-lite.md
index 5e62b03..8c1ccb9 100644
--- a/proposals/333-vanguards-lite.md
+++ b/proposals/333-vanguards-lite.md
@@ -46,14 +46,14 @@ Implemented-In: 0.4.7.1-alpha
Service intro: C -> G -> L2 -> M -> Intro
Service hsdir: C -> G -> L2 -> M -> HSDir
-# 3. Rotation Period Analysis
+# 2. Rotation Period Analysis
From the table in Section 3.1 of Proposal 292, with NUM_LAYER2_GUARDS=4 it
can be seen that this means that the Sybil attack on Layer2 will complete
with 50% chance in 18*7 days (126 days) for the 1% adversary, 4*7 days (one
month) for the 5% adversary, and 2*7 days (two weeks) for the 10% adversary.
-# 4. Tradeoffs from Proposal 292
+# 3. Tradeoffs from Proposal 292
This proposal has several advantages over Proposal 292:
@@ -69,7 +69,25 @@ Implemented-In: 0.4.7.1-alpha
protected, and this proposal might provide those services with a false sense
of security. Such services should still use the vanguards addon [VANGUARDS_REF].
-# 4. References
+# 4. Implementation nuances
+
+ Tor replaces an L2 vanguard whenever it is no longer listed in the most
+ recent consensus, with the goal that we will always have the right
+ number of vanguards ready to be used.
+
+ For implementation reasons, we also replace a vanguard if it loses
+ the Fast or Stable flag, because the path selection logic wants middle
+ nodes to have those flags when it's building preemptive vanguard-using
+ circuits.
+
+ The design doesn't have to be this way: we might instead have chosen
+ to keep vanguards in our list as long as possible, and continue to use
+ them even if they have lost some flags. This tradeoff is similar to
+ the one in https://bugs.torproject.org/17773 about whether to continue
+ using Entry Guards if they lose the Guard flag -- and Tor's current
+ choice is "no, rotate" for that case too.
+
+# 5. References
[PROP292_REF]: https://gitlab.torproject.org/tpo/core/torspec/-/blob/main/proposals/292-mesh-vanguards.txt
[VANGUARDS_REF]: https://github.com/mikeperry-tor/vanguards
diff --git a/rend-spec-v3.txt b/rend-spec-v3.txt
index 274303c..56083a1 100644
--- a/rend-spec-v3.txt
+++ b/rend-spec-v3.txt
@@ -998,6 +998,28 @@ Table of contents:
Consider that the service is at 01:00 right after SRV#2: it will upload its
second descriptor using TP#2 and SRV#2.
+2.2.4.3. Directory behavior for handling descriptor uploads [DIRUPLOAD]
+
+ Upon receiving a hidden service descriptor publish request, directories MUST
+ check the following:
+
+ * The outer wrapper of the descriptor can be parsed according to
+ [DESC-OUTER]
+ * The version-number of the descriptor is "3"
+ * If the directory has already cached a descriptor for this hidden service,
+ the revision-counter of the uploaded descriptor must be greater than the
+ revision-counter of the cached one
+ * The descriptor signature is valid
+
+ If any of these basic validity checks fails, the directory MUST reject the
+ descriptor upload.
+
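+ As a non-normative sketch of this logic (function and field names here
+ are illustrative, not mandated by this specification):
+
+   if (!parse_outer_descriptor_wrapper(body, &desc) ||
+       desc.version != 3 ||
+       (cached != NULL && desc.revision_counter <= cached->revision_counter) ||
+       !descriptor_signature_is_valid(&desc)) {
+     reject_upload();          /* e.g. respond with 400 */
+   } else {
+     cache_descriptor(&desc);  /* accept, e.g. respond with 200 */
+   }
+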
+ NOTE: Even if the descriptor passes the checks above, its first and second
+ layers could still be invalid: directories cannot validate the encrypted
+ layers of the descriptor, as they do not have access to the public key of the
+ service (required for decrypting the first layer of encryption), or the
+ necessary client credentials (for decrypting the second layer).
+
2.2.5. Expiring hidden service descriptors [EXPIRE-DESC]
Hidden services set their descriptor's "descriptor-lifetime" field to 180
diff --git a/tor-spec.txt b/tor-spec.txt
index b42ee26..669851f 100644
--- a/tor-spec.txt
+++ b/tor-spec.txt
@@ -49,6 +49,7 @@ Table of Contents
5.6. Handling relay_early cells
6. Application connections and stream management
6.1. Relay cells
+ 6.1.1. Calculating the 'Digest' field
6.2. Opening streams and transferring data
6.2.1. Opening a directory stream
6.3. Closing streams
@@ -1791,6 +1792,48 @@ see tor-design.pdf.
understood, the cell must be dropped and ignored. Its contents
still count with respect to the digests and flow control windows, though.
+6.1.1. Calculating the 'Digest' field
+
+ The 'Digest' field is used to check whether a cell has been fully
+ decrypted, that is, whether all onion layers have been removed. Relying
+ on the single 'Recognized' field alone is not sufficient, as outlined
+ above.
+
+ When ENCRYPTING a RELAY cell, an implementation does the following:
+
+ # Encode the cell in binary (recognized and digest set to zero)
+ tmp = cmd + [0, 0] + stream_id + [0, 0, 0, 0] + length + data + padding
+
+ # Update the digest with the encoded data
+ digest_state = hash_update(digest_state, tmp)
+ digest = hash_calculate(digest_state)
+
+ # The encoded data is the same as above with the digest field not being
+ # zero anymore
+ encoded = cmd + [0, 0] + stream_id + digest[0..4] + length + data +
+ padding
+
+ # Now we can encrypt the cell by adding the onion layers ...
+
+ When DECRYPTING a RELAY cell, an implementation does the following:
+
+ decrypted = decrypt(cell)
+
+ # Replace the digest field in decrypted by zeros
+ tmp = decrypted[0..5] + [0, 0, 0, 0] + decrypted[9..]
+
+ # Update the digest field with the decrypted data and its digest field
+ # set to zero
+ digest_state = hash_update(digest_state, tmp)
+ digest = hash_calculate(digest_state)
+
+ if digest[0..4] == decrypted[5..9]
+ # The cell has been fully decrypted ...
+
+ The important caveat is that only the binary data with the digest bytes
+ set to zero is taken into account when calculating the running digest.
+ The final plain-text cells (with the digest field set to its actual
+ value) are never fed into the running digest.
+
6.2. Opening streams and transferring data
To open a new anonymized TCP connection, the OP chooses an open