Filename: 210-faster-headless-consensus-bootstrap.txt
Title: Faster Headless Consensus Bootstrapping
Author: Mike Perry, Tim Wilson-Brown, Peter Palfrader
Created: 01-10-2012
Last Modified: 02-10-2015
Status: Open
Target: 0.2.8.x+
Overview and Motivation
This proposal describes a way for clients to fetch the initial
consensus more quickly in situations where some or all of the directory
authorities are unreachable. This proposal is meant to describe a
solution for bug #4483.
Design: Bootstrap Process Changes
The core idea is to attempt to establish bootstrap connections in
parallel during the bootstrap process, and download the consensus from
the first connection that completes.
Connection attempts will be performed on an exponential backoff basis.
Initially, connections will be made to randomly chosen hard-coded
directory mirrors. If none of these connections completes within
5 seconds, connections will also be made to randomly chosen
canonical directory authorities.
We specify that mirror connections retry after half a second, and then
double the retry time with every connection:
0, 0.5, 1, 2, 4, 8, 16, ...
We specify that directory authority connections start after a 5 second
delay, and retry after 5 seconds, doubling the retry time with every
connection:
5, 10, 20, ...
The first connection to complete will be used to download the consensus
document and the others will be closed, after which bootstrapping will
proceed as normal.
We expect the vast majority of clients to succeed within 4 seconds,
after making up to 5 connection attempts to mirrors. Clients that can't
connect in the first 5 seconds will then try to contact a directory
authority. We expect almost all clients to succeed within 10 seconds,
after up to 6 connection attempts to mirrors and up to 2 connection
attempts to authorities. This is a much better success rate than the
current Tor implementation, which fails k/n of clients if k of the n
directory authorities are down. (Or, if the connection fails in
certain ways, (k/n)^2.)
If, at any time, the number of outstanding bootstrap connection attempts
reaches 10, no new connection attempts are to be launched until an
existing connection attempt experiences a full timeout. The retry time
is not doubled when a connection is skipped.
Design: Fallback Dir Mirror Selection
The set of hard coded directory mirrors from #572 shall be chosen using
the 100 Guard nodes with the longest uptime.
The fallback weights will be set using each mirror's fraction of
consensus bandwidth out of the total of all 100 mirrors.
This list of fallback dir mirrors should be updated with every
major Tor release. In future releases, the number of dir mirrors
should be set at 20% of the current Guard nodes (approximately 200 as
of October 2015), rather than fixed at 100.
Performance: Additional Load with Current Parameter Choices
This design and the connection count parameters were chosen such that
no additional bandwidth load would be placed on the directory
authorities. In fact, the directory authorities should experience less
load, because they will not need to serve the consensus document for a
connection in the event that one of the directory mirrors completes its
connection before the directory authority does.
However, the scheme does place additional TLS connection load on the
fallback dir mirrors. Because bootstrapping is rare and all but one of
the TLS connections will be very short-lived and unused, this should not
be a substantial issue.
The dangerous case is in the event of a prolonged consensus failure
that induces all clients to enter into the bootstrap process. In this
case, the number of TLS connections to the fallback dir mirrors within
the first second would be 3*C/100, or 60,000 for C=2,000,000 users. If
no connections complete before the 10 retries, 7 of which go to
mirrors, this could reach as high as 140,000 connection attempts, but
this is extremely unlikely to happen in full aggregate.
However, in the no-consensus scenario today, the directory authorities
would already experience 2*C/9 or 444,444 connection attempts. (Tor
currently tries 2 authorities, before delaying the next attempt.) The
10-retry scheme, 3 of which go to authorities, increases their total
maximum load to about 666,666 connection attempts, but again this is
unlikely to be reached in aggregate. Additionally, with this scheme,
even if the dirauths are taken down by this load, the dir mirrors
should be able to survive it.
Implementation Notes: Code Modifications
The implementation of the bootstrap process is unfortunately mixed
in with many types of directory activity.
The process starts in update_consensus_networkstatus_downloads(),
which initiates a single directory connection through
directory_get_from_dirserver(). Depending on bootstrap state,
a single directory server is selected and a connection is
eventually made through directory_initiate_command_rend().
There appear to be a few options for altering this code to retry multiple
simultaneous connections. Without refactoring, one approach would be to
set mirror and authority retry helper function timers in
directory_initiate_command_routerstatus() from
directory_get_from_dirserver() if the purpose is
DIR_PURPOSE_FETCH_CONSENSUS and the only directory servers available
are the authorities and the fallback dir mirrors. (That is, there is no
valid consensus.) The retry helper function would check the list of
pending connections and, if it is 10 or greater, skip the connection
attempt, and leave the retry time constant.
The code in directory_initiate_command_rend() would then need to be
altered to maintain a list of the dircons created for this purpose as
well as avoid immediately queuing the directory_send_command() request
for the DIR_PURPOSE_FETCH_CONSENSUS purpose. A flag would need to be set
on the dircon to be checked in connection_dir_finished_connecting().
The function connection_dir_finished_connecting() would need to be
altered to examine the list of pending dircons, determine if this one is
the first to complete, and if so, then call directory_send_command() to
download the consensus and close the other pending dircons.
connection_dir_finished_connecting() would also cancel both timers.
Reliability Analysis
We make the pessimistic assumptions that 50% of connections to directory
mirrors fail, and that 20% of connections to authorities fail. (Actual
figures depend on relay churn, age of the fallback list, and authority
uptime.)
We expect the first 10 connection attempts (M = mirror, A = authority)
and cumulative success rates to be:

  Time:     0s   0.5s  1s   2s   4s   5s    8s    10s    16s    20s
  Type:     M    M     M    M    M    A     M     A      M      A
  Success:  50%  75%   87%  94%  97%  99.4% 99.7% 99.94% 99.97% 99.99%
97% of clients succeed while only using directory mirrors.
2.4% of clients succeed on their first auth connection.
0.24% of clients succeed after one more mirror and auth connection.
0.05% of clients succeed after two more mirror and auth connections.
0.01% of clients remain, but in this scenario, 3 authorities are down,
so the client is most likely blocked from the Tor network.
The current implementation makes 1 or 2 authority connections within the
first second, depending on exactly how the first connection fails. Under
the 20% authority failure assumption, these clients would have a success
rate of either 80% or 96% within a few seconds. The scheme above has a
similar success rate in the first few seconds, while spreading the load
among a larger number of directory mirrors. In addition, if all the
authorities are blocked, current clients will inevitably fail, as they
do not have a list of directory mirrors.