$Id$

                           Tor Path Specification

                              Roger Dingledine
                               Nick Mathewson

Note: This is an attempt to specify Tor as currently implemented.  Future
versions of Tor will implement improved algorithms.

This document tries to cover how Tor chooses to build circuits and assign
streams to circuits.  Other implementations MAY take other approaches, but
implementors should be aware of the anonymity and load-balancing implications
of their choices.

                      THIS SPEC ISN'T DONE OR CORRECT.
I'm just copying in relevant info so far.  The starred points are things we
should cover, but not an exhaustive list.  -NM

1. General operation

   Tor begins building circuits as soon as it has enough directory
   information to do so (see section 5.1 of dir-spec.txt).  Some circuits are
   built preemptively because we expect to need them later (for user
   traffic), and some are built because of immediate need (for user traffic
   that no current circuit can handle, for testing the network or our
   availability, and so on).

   When a client application creates a new stream (by opening a SOCKS
   connection or launching a resolve request), we attach it to an appropriate
   open (or in-progress) circuit if one exists, and launch a new circuit only
   if no current circuit can handle the request.  We rotate circuits over
   time to avoid some profiling attacks.

   To build a circuit, we choose all the nodes we want to use, and then
   construct the circuit.  Sometimes, when we want a circuit that ends at a
   given hop, and we have an appropriate unused circuit, we "cannibalize" the
   existing circuit and extend it to the new terminus.

   These processes are described in more detail below.

   This document describes Tor's automatic path selection logic only; path
   selection can be overridden by a controller (with the EXTENDCIRCUIT and
   ATTACHSTREAM commands).  Paths constructed through these means will
   violate some constraints given below.

1b. Types of circuits.

* Stable / Ordinary
* Internal / Exit

   XXXX

1c. Terminology

   A "path" is an ordered sequence of nodes, not yet built as a circuit.

   A "clean" circuit is one that has not yet been used for any traffic.

   A "stable" node is one that we believe to have the 'Stable' flag set on
   the basis of our current directory information.  A "stable" circuit is one
   that consists entirely of "stable" nodes.

   A "fast" or "stable" node is one that we believe to have the 'Fast' or
   'Stable' flag set on the basis of our current directory information.  A
   "fast" or "stable" circuit is one consisting only of "fast" or "stable"
   nodes.

   A "request" is a client-side connection or DNS resolve that needs to be
   served by a circuit.

   A "pending" circuit is one that we have started to build, but which has
   not yet completed.

   A circuit or path "supports" a request if it is okay to use the
   circuit/path to fulfill the request, according to the rules given below.
   A circuit or path "might support" a request if some aspect of the request
   is unknown (usually its target IP), but we believe the path probably
   supports the request according to the rules given below.

2. Building circuits

2.1. When we build.

2.1.1. When clients build circuits

   When running as a client, Tor tries to maintain at least 3 clean circuits,
   so that new streams can be handled quickly.  To increase the likelihood of
   success, Tor tries to predict what exit nodes will be useful by choosing
   from among nodes that support the ports we have used in the recent past
   (see 2.4).  [XXXX describe in detail how predicted ports work.]

   Additionally, when a client request exists that no circuit (built or
   pending) might support, we cannibalize an existing circuit (2.1.5) or
   create a new circuit to support the request.  We do so by picking a
   request at random, building or cannibalizing a circuit to support it, and
   repeating until every unattached request might be supported by a pending
   or built circuit.
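
   As a non-normative illustration, the loop described above can be
   sketched as follows in Python; the names "might_support" and
   "build_or_cannibalize" are hypothetical stand-ins, not functions from
   the Tor source:

      import random

      def provision_circuits(requests, circuits, might_support,
                             build_or_cannibalize):
          # While some unattached request is not (possibly) supported by any
          # pending or built circuit, pick such a request at random and build
          # (or cannibalize) a circuit that might support it.
          def unsupported():
              return [r for r in requests
                      if not any(might_support(c, r) for c in circuits)]

          pending = unsupported()
          while pending:
              request = random.choice(pending)
              circuits.append(build_or_cannibalize(request))
              pending = unsupported()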

   XXXX when long idle, we build nothing.

2.1.2. When servers build circuits

   XXXX

2.1.3. When authorities build circuits

   XXXX

2.1.4. Hidden-service circuits

   See section 4 below.

2.1.5. Cannibalizing circuits

   When Tor has a request (either an unattached stream or unattached resolve
   request) that no current circuit can support, it looks for an existing
   clean circuit to cannibalize.  If it finds one, it tries to extend it
   another hop to an exit node that might support the stream.  [Must be
   internal???]

   If no circuit exists, or is currently being built, along a path that
   might support a stream, we begin building a new circuit that might support
   the stream.

   [XXXX always? really?]

2.2. Path selection and constraints

   We choose the path for each new circuit before we build it.  We choose the
   exit node first, followed by the other nodes in the circuit.  All paths
   we generate obey the following constraints:
     - We do not choose the same router twice for the same path.
     - We do not choose any router in the same family as another in the same
       circuit.
     - We do not choose any router in the same /16 subnet as another in the
       same circuit.
     - We don't choose any non-running or non-valid router unless we have
       been configured to do so.
     - If we're using Guard nodes, the first node must be a Guard (see 5
       below)
     - XXXX Choosing the length
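
   As a non-normative sketch, the constraints above might be checked like
   this when considering a candidate router for a partial path; the
   attribute names (.fingerprint, .family, .ip, .is_running, .is_valid,
   .is_guard) are an assumed data model, not Tor's internal representation:

      def same_16_subnet(ip_a, ip_b):
          # Compare the first two octets of dotted-quad IPv4 addresses.
          return ip_a.split(".")[:2] == ip_b.split(".")[:2]

      def candidate_allowed(candidate, partial_path, using_guards,
                            allow_invalid):
          for node in partial_path:
              if node.fingerprint == candidate.fingerprint:
                  return False      # same router twice
              if (candidate.fingerprint in node.family
                      or node.fingerprint in candidate.family):
                  return False      # two routers in the same family
              if same_16_subnet(node.ip, candidate.ip):
                  return False      # two routers in the same /16
          if (not (candidate.is_running and candidate.is_valid)
                  and not allow_invalid):
              return False          # non-running or non-valid router
          if using_guards and not partial_path and not candidate.is_guard:
              return False          # first node must be a Guard
          return True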

   When choosing among multiple candidates for a path element, we choose
   a given router with probability proportional to its advertised bandwidth
   [the smaller of the 'rate' and 'observed' arguments to the "bandwidth"
   element in its descriptor].  If a router's advertised bandwidth is greater
   than MAX_BELIEVABLE_BANDWIDTH (1.5 MB/sec), we clip to that value.
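
   A minimal sketch of this weighted selection; the router attribute names
   (bandwidth_rate, bandwidth_observed) are assumptions standing in for the
   values parsed from the descriptor's "bandwidth" line:

      import random

      MAX_BELIEVABLE_BANDWIDTH = 1500000    # 1.5 MB/sec, in bytes per second

      def advertised_bandwidth(router):
          # The smaller of the 'rate' and 'observed' bandwidth values,
          # clipped to MAX_BELIEVABLE_BANDWIDTH.
          return min(router.bandwidth_rate,
                     router.bandwidth_observed,
                     MAX_BELIEVABLE_BANDWIDTH)

      def choose_by_bandwidth(candidates):
          # Pick one router with probability proportional to its clipped
          # advertised bandwidth.
          weights = [advertised_bandwidth(r) for r in candidates]
          return random.choices(candidates, weights=weights, k=1)[0]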

   (XXXX We should do something to shift traffic away from exit nodes.)

   Additionally, we may be building circuits with one or more requests in
   mind.  Each kind of request puts certain constraints on paths:

     - All service-side introduction circuits and all rendezvous paths
       should be Stable.
     - All connection requests for connections that we think will need to
       stay open a long time require Stable circuits.  Currently, Tor decides
       this by examining the request's target port, and comparing it to a
       list of "long-lived" ports. (Default: 21, 22, 706, 1863, 5050, 5190,
       5222, 5223, 6667, 8300, 8888.)
     - DNS resolves require an exit node whose exit policy is not equivalent
       to "reject *:*".
     - Reverse DNS resolves require a version of Tor with advertised eventdns
       support, running 0.1.2.1-alpha-dev or later.
     - All connection requests require an exit node whose exit policy
       supports their target address and port (if known), or which "might
       support it" (if the address isn't known).  See 2.2.1.
     - Rules for Fast? XXXXX
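
   As a non-normative example, the long-lived-port rule above reduces to a
   simple membership test (the constant and function names below are
   illustrative only):

      LONG_LIVED_PORTS = {21, 22, 706, 1863, 5050, 5190, 5222, 5223, 6667,
                          8300, 8888}

      def request_needs_stable_circuit(target_port):
          # A connection to a "long-lived" port requires a Stable circuit.
          return target_port in LONG_LIVED_PORTS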

2.2.1. Choosing an exit

   If we know what IP we want to resolve, we can trivially tell whether a
   given router will support it by simulating its declared exit policy.

   Because we often connect to addresses of the form hostname:port, we do not
   always know the target IP address when we select an exit node.  In these
   cases, we need to pick an exit node that "might support" connections to
   a given port when the target address is unknown.  An exit node "might
   support" such a connection if any clause that accepts any connections to
   that port precedes all clauses (if any) that reject all connections to
   that port.
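
   A non-normative sketch of this "might support" test over a simplified,
   flattened policy representation (an assumption for illustration, not
   Tor's internal exit-policy format):

      def might_support_port(policy, port):
          # `policy` is an ordered list of (action, address_pattern, ports)
          # clauses: action is "accept" or "reject", address_pattern is "*"
          # for "all addresses", and ports is "*" or a (low, high) pair.
          def covers(ports):
              return ports == "*" or ports[0] <= port <= ports[1]

          for action, address_pattern, ports in policy:
              if not covers(ports):
                  continue
              if action == "accept":
                  return True   # an accepting clause for this port comes first
              if action == "reject" and address_pattern == "*":
                  return False  # a clause rejecting all addresses comes first
          return False

   For example, the policy [("reject", "*", (25, 25)), ("accept", "*", "*")]
   might support port 80 but not port 25.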

2.2.2. User configuration

   Users can alter the default behavior for path selection with configuration
   options.

   - If "ExitNodes" is provided, then every request requires an exit node on
     the ExitNodes list.  (If a request is supported by no nodes on that list,
     and StrictExitNodes is false, then Tor treats that request as if
     ExitNodes were not provided.)

   - "EntryNodes" and "StrictEntryNodes" behave analagously.

   - If a user tries to connect to or resolve a hostname of the form
     <target>.<servername>.exit, the request is rewritten to a request for
     <target>, and the request is only supported by the exit whose nickname
     or fingerprint is <servername>.
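
   For illustration only, the rewriting of such hostnames can be sketched
   as follows (the function name is hypothetical):

      def parse_exit_notation(hostname):
          # Split "<target>.<servername>.exit" into (target, servername);
          # return (hostname, None) when the .exit notation is not used.
          if not hostname.endswith(".exit"):
              return hostname, None
          target, _, server = hostname[:-len(".exit")].rpartition(".")
          return target, server

      # e.g. parse_exit_notation("www.example.com.myexit.exit")
      #      == ("www.example.com", "myexit")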

2.3. Handling failure

   If an attempt to extend a circuit fails (either because the first create
   failed or a subsequent extend failed) then the circuit is torn down and is
   no longer pending.  (XXXX really?)  Requests that might have been
   supported by the pending circuit thus become unsupported, and a new
   circuit needs to be constructed.

   If we fail to begin a stream because of an EXITPOLICY error, we decide
   that the exit node's exit policy is not correctly advertised, so we treat
   the exit node as if it were a non-exit until we retrieve a fresh
   descriptor for it.

   XXXX

2.4. Tracking "predicted" ports

   A Tor client tracks how much time has passed since it last received a
   request for a connection on each port.  (For the purposes of this section,
   requests for hostname resolves are considered requests to a separate
   port).  Tor forgets about ports that haven't been used for an hour
   [PREDICTED_CIRCS_RELEVANCE_TIME].

   The ports that have been used in the last hour are considered "predicted",
   and Tor will try to maintain clean circuits to them as described in 2.1.

   For bootstrapping purposes, port 80 is treated as used at startup time.

   Tor clients SHOULD NOT store predicted ports to a persistent medium.
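
   A non-normative, in-memory sketch of this tracking; the class and method
   names are hypothetical:

      import time

      PREDICTED_CIRCS_RELEVANCE_TIME = 60 * 60    # one hour, in seconds

      class PredictedPorts:
          # Kept in memory only; never written to a persistent medium.
          def __init__(self):
              # Port 80 is treated as used at startup, for bootstrapping.
              self.last_used = {80: time.time()}

          def note_request(self, port):
              self.last_used[port] = time.time()

          def predicted(self):
              cutoff = time.time() - PREDICTED_CIRCS_RELEVANCE_TIME
              # Forget ports that have not been used within the last hour.
              self.last_used = {p: t for p, t in self.last_used.items()
                                if t >= cutoff}
              return set(self.last_used)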

3. Attaching streams to circuits

   When a circuit that might support a request is built, Tor tries to attach
   the request's stream to the circuit and sends a BEGIN or RESOLVE relay
   cell as appropriate.  If the request completes unsuccessfully, Tor
   considers the reason given in the CLOSE relay cell. [XXX yes, and?]


   After a request has remained unattached for [XXXX retries? interval?], Tor
   abandons the attempt and signals an error to the client as appropriate
   (e.g., by closing the SOCKS connection).

   XXX Timeouts and when Tor auto-retries.
    * What stream-end-reasons are appropriate for retrying.

   XXX What if no reply to BEGIN/RESOLVE?

4. Hidden-service related circuits

  XXX Tracking expected hidden service use (client-side and hidserv-side)

5. Guard nodes

  XXX writeme

6. Testing circuits




(From some emails by arma)

Hi folks,

I've gotten the codebase to the point that I'm going to start trying
to make helper nodes work well. With luck they will be on by default in
the final 0.1.1.x release.

For background on helper nodes, read
http://wiki.noreply.org/noreply/TheOnionRouter/TorFAQ#RestrictedEntry

First order of business: the phrase "helper node" sucks. We always have
to define it after we say it to somebody. Nick likes the phrase "contact
node", because they are your point-of-contact into the network. That is
better than phrases like "bridge node". The phrase "fixed entry node"
doesn't seem to work with non-math people, because they wonder what was
broken about it. I'm sort of partial to the phrase "entry node" or maybe
"restricted entry node". In any case, if you have ideas on names, please
mail me off-list and I'll collate them.

Right now the code exists to pick helper nodes, store our choices to
disk, and use them for our entry nodes. But there are three topics
to tackle before I'm comfortable turning them on by default. First,
how to handle churn: since Tor nodes are not always up, and sometimes
disappear forever, we need a plan for replacing missing helpers in a
safe way. Second, we need a way to distinguish "the network is down"
from "all my helpers are down", also in a safe way. Lastly, we need to
examine the situation where a client picks three crummy helper nodes
and is forever doomed to a lousy Tor experience. Here's my plan:

How to handle churn.
  - Keep track of whether you have ever actually established a
    connection to each helper. Any helper node in your list that you've
    never used is ok to drop immediately. Also, we don't save that
    one to disk.
  - If all our helpers are down, we need more helper nodes: add a new
    one to the *end* of our list. Only remove dead ones when they have
    been gone for a very long time (months).
  - Pick from the first n (by default 3) helper nodes in your list
    that are up (according to the network-statuses) and reachable
    (according to your local firewall config).
    - This means that order matters when writing/reading them to disk.
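
A rough sketch of that "first n up and reachable" rule, with hypothetical
is_up / is_reachable predicates standing in for the network-status and
local firewall checks:

  def usable_helpers(helper_list, is_up, is_reachable, n=3):
      # Walk the helper list in order (order matters!) and keep the first
      # n helpers that are up and reachable.
      chosen = []
      for helper in helper_list:
          if is_up(helper) and is_reachable(helper):
              chosen.append(helper)
              if len(chosen) == n:
                  break
      return chosen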

How to deal with network down.
  - While all helpers are down/unreachable and there are no established
    or on-the-way testing circuits, launch a testing circuit. (Do this
    periodically in the same way we try to establish normal circuits
    when things are working normally.)
    (Testing circuits are a special type of circuit, that streams won't
    attach to by accident.)
  - When a testing circuit succeeds, mark all helpers up and hold
    the testing circuit open.
  - If a connection to a helper succeeds, close all testing circuits.
    Else mark that helper down and try another.
  - If the last helper is marked down and we already have a testing
    circuit established, then add the first hop of that testing circuit
    to the end of our helper node list, close that testing circuit,
    and go back to square one. (Actually, rather than closing the
    testing circuit, can we get away with converting it to a normal
    circuit and beginning to use it immediately?)

How to pick non-sucky helpers.
  - When we're picking new helper nodes, don't use ones which aren't
    reachable according to our local ReachableAddresses configuration.
  (There's an attack here: if I pick my helper nodes in a very
   restrictive environment, say "ReachableAddresses 18.0.0.0/255.0.0.0:*",
   then somebody watching me use the network from another location will
   guess where I first joined the network. But let's ignore it for now.)
  - Right now we choose new helpers just like we'd choose any entry
    node: they must be "stable" (claim >1 day uptime) and "fast" (advertise
    >10kB capacity). In 0.1.1.11-alpha, clients let dirservers define
    "stable" and "fast" however they like, and they just believe them.
    So the next step is to make them a function of the current network:
    e.g. line up all the 'up' nodes in order and declare the top
    three-quarters to be stable, fast, etc., as long as they meet some
    minimum too.
  - If that's not sufficient (it won't be), dirservers should introduce
    a new status flag: in addition to "stable" and "fast", we should
    also describe certain nodes as "entry", meaning they are suitable
    to be chosen as a helper. The first difference would be that we'd
    demand the top half rather than the top three-quarters. Another
    requirement would be to look at "mean time between returning" to
    ensure that these nodes spend most of their time available. (Up for
    two days straight, once a month, is not good enough.)
  - Lastly, we need a function, given our current set of helpers and a
    directory of the rest of the network, that decides when our helper
    set has become "too crummy" and we need to add more. For example,
    this could be based on currently advertised capacity of each of
    our helpers, and it would also be based on the user's preferences
    of speed vs. security.

***

Lasse wrote:
> I am a bit concerned with performance if we are to have e.g. two out of
> three helper nodes down or unreachable. How often should Tor check if
> they are back up and running?

Right now Tor believes a threshold of directory servers when deciding
whether each server is up. When Tor observes a server to be down
(connection failed or building the first hop of the circuit failed),
it marks it as down and doesn't try it again, until it gets a new
    network-status from somebody, at which point it takes a new consensus
and marks the appropriate servers as up.

According to sec 5.1 of dir-spec.txt, the client will try to fetch a new
network-status at least every 30 minutes, and more often in certain cases.

With the proposed scheme, we'll also mark all our helpers as up shortly
after the last one is marked down.

> When should there be
> added an extra node to the helper node list? This is kind of an
> important threshold?

I agree, this is an important question. I don't have a good answer yet. Is
it terrible, anonymity-wise, to add a new helper every time only one of
your helpers is up? Notice that I say add rather than replace -- so you'd
only use this fourth helper when one of your main three helpers is down,
and if three of your four are down, you'd add a fifth, but only use it
when two of the first four are down, etc.

In fact, this may be smarter than just picking a random node for your
testing circuit, because if your network goes up and down a lot, then
eventually you have a chance of using any entry node in the network for
your testing circuit.

We have a design choice here. Do we only try to use helpers for the
connections that will have streams on them (revealing our communication
partners), or do we also want to restrict the overall set of nodes that
we'll connect to, to discourage people from enumerating all Tor clients?

I'm increasingly of the belief that we want to hide our presence too,
based on the fact that Steven and George and others keep coming up with
attacks that start with "Assuming we know the set of users".

If so, then here's a revised "How to deal with network down" section:

  1) When a helper is marked down or the helper list shrinks, and as
     a result the total number of helpers that are either (up and
     reachable) or (reachable but never connected to) is <= 1, then pick
     a new helper and add it to the end of the list.
     [We count nodes that have never been connected to, since otherwise
      we might keep on adding new nodes before trying any of them. By
      "reachable" I mean "is allowed by ReachableAddresses".]
  2) When you fail to connect to a helper that has never been connected
     to, you remove him from the list right then (and the above rule
     might kick in).
  3) When you succeed at connecting to a helper that you've never
     connected to before, mark all reachable helpers earlier in the list
     as up, and close that circuit.
     [We close the circuit, since if the other helpers are now up, we
      prefer to use them for circuits that will reveal communication
      partners.]

This certainly seems simpler. Are there holes that I'm missing?

> If running from a laptop you will meet different firewall settings, so
> how should Helper Nodes settings keep up with moving from an open
> ReachableAddresses to a FascistFirewall setting after the helper nodes
> have been selected?

I added the word "reachable" to three places in the above list, and I
believe that totally solves this question.

And as a bonus, it leads to an answer to Nick's attack ("If I pick
my helper nodes all on 18.0.0.0:*, then I move, you'll know where I
bootstrapped") -- the answer is to pick your original three helper nodes
without regard for reachability. Then the above algorithm will add some
more that are reachable for you, and if you move somewhere, it's more
likely (though not certain) that some of the originals will become useful.
Is that smart or just complex?

> What happens if(when?) performance of the third node is bad?

My above solution solves this a little bit, in that we always try to
have two nodes available. But what if they are both up but bad? I'm not
sure. As my previous mail said, we need some function, given our list
of helpers and the network directory, that will tell us when we're in a
bad situation. I can imagine some simple versions of this function --
for example, when both our working helpers are in the bottom half of
the nodes, ranked by capacity.

But the hard part: what's the remedy when we decide there's something
to fix? Do we add a third, and now we have two crummy ones and a new
one? Or do we drop one or both of the bad ones?

Perhaps we believe the latest claim from the network-status consensus,
and we count a helper the dirservers believe is crummy as "not worth
trying" (equivalent to "not reachable under our current ReachableAddresses
config") -- and then the above algorithm would end up adding good ones,
but we'd go back to the originals if they resume being acceptable? That's
an appealing design. I wonder if it will cause the typical Tor user to
have a helper node list that comprises most of the network, though. I'm
ok with this.

> Another point you might want to keep in mind, is the possibility to
> reuse the code in order to add a second layer helper node (meaning node
> number two) to "protect" the first layer (node number one) helper nodes.
> These nodes should be tied to each of the first layer nodes. E.g. there
> is one helper node list, as described in your mail, for each of the
> first layer nodes, following their create/destroy.

True. Does that require us to add a fourth hop to our path length,
since the first hop is from a limited set, the second hop is from a
limited set, and the third hop might also be constrained because, say,
we're asking for an unusual exit port?

> Another of the things might worth adding to the to do list is
> localization of server (helper) nodes. Making it possible to pick
> countries/regions where you do (not) want your helper nodes located. (As
> in "HelperNodesLocated us,!eu" etc.) I know this requires the use of
> external data and may not be worth it, but it _could_ be integrated at
> the directory servers only -- adding a list of node IP's and e.g. a
> country/region code to the directory and thus reduce the overhead. (?)
> Maybe extending the Family-term?

I think we are heading towards doing path selection based on geography,
but I don't have a good sense yet of how that will actually turn out --
that is, with what mechanism Tor clients will learn the information they
need. But this seems to be something that is orthogonal to the rest of
this discussion, so I look forward to having somebody else solve it for
us, and fitting it in when it's ready. :)

> And I would like to keep an option to pick the first X helper nodes
> myself and then let Tor extend this list if these nodes are down (like
> EntryNodes in current code). Even if this opens up for some new types of
> "relationship" attacks.

Good idea. Here's how I'd like to name these:

The "EntryNodes" config option is a list of seed helper nodes. When we
read EntryNodes, any node listed in entrynodes but not in the current
helper node list gets *pre*pended to the helper node list.

The "NumEntryNodes" config option (currently called NumHelperNodes)
specifies the number of up, reachable, good-enough helper nodes that
will make up the pool of possible choices for first hop, counted from
the front of the helper node list until we have enough.

The "UseEntryNodes" config option (currently called UseHelperNodes)
tells us to turn on all this helper node behavior. If you set EntryNodes,
then this option is implied.

The "StrictEntryNodes" config option, provided for backward compatibility
and for debugging, means a) we replace the helper node list with the
current EntryNodes list, and b) whenever we would do an operation that
alters the helper node list, we don't. (Yes, this means that if all the
helper nodes are down, we lose until we mark them up again. But this is
how it behaves now.)

> I am sure my next point has been asked before, but what about testing
> the current speed of the connections when looking for new helper nodes,
> not only testing the connectivity? I know this might contribute to a lot
> of overhead in the network, but if this only occur e.g. when using
> helper nodes as a Hidden Service it might not have that large an impact,
> but could help availability for the services?

If we're just going to be testing them when we're first picking them,
then it seems we can do the same thing by letting the directory servers
test them. This has the added benefit that all the (behaving) clients
use the same data, so they don't end up partitioned by a node that
(for example) performs selectively for his victims.

Another idea would be to periodically keep track of what speeds you get
through your helpers, and make decisions from this. The reason we haven't
done this yet is because there are a lot of variables -- perhaps the
web site is slow, perhaps some other node in the path is slow, perhaps
your local network is slow briefly, perhaps you got unlucky, etc.  I
believe that over time (assuming the user has roughly the same browsing
habits) all of these would average out and you'd get a usable answer,
but I don't have a good sense of how long it would take to converge,
so I don't know whether this would be worthwhile.

> BTW. I feel comfortable with all the terms helper/entry/contact nodes,
> but I think you (the developers) should just pick one and stay with it
> to avoid confusion.

I think I'm going to try to co-opt the term 'Entry' node for this
purpose. We're going to have to keep referring to helper nodes for the
research community for a while though, so they realize that Tor does
more than just let users ask for certain entry nodes.



============================================================
Some stuff that worries me about entry guards. 2006 Jun, Nickm.

1. It is unlikely for two users to have the same set of entry guards.

2. Observing a user is sufficient to learn its entry guards.

3. So, as we move around, we leak our