author     Nick Mathewson <nickm@torproject.org>  2023-10-14 14:56:07 -0400
committer  Nick Mathewson <nickm@torproject.org>  2023-10-14 14:56:07 -0400
commit     343216cfa4a9eaa4f8d63e5eb0e109dffc33a618 (patch)
tree       5cedbfd2f1cb747e9b70071724b1ab8107fc90cd
parent     2f357f50a0775cc684169e83d21e8e87c97bfc90 (diff)
download   torspec-343216cfa4a9eaa4f8d63e5eb0e109dffc33a618.tar.gz
           torspec-343216cfa4a9eaa4f8d63e5eb0e109dffc33a618.zip
Fix some conversation formatting in bridgedb spec.
-rw-r--r--  spec/bridgedb-spec.md  108
1 file changed, 61 insertions, 47 deletions
diff --git a/spec/bridgedb-spec.md b/spec/bridgedb-spec.md
index 8bce610..0ce607d 100644
--- a/spec/bridgedb-spec.md
+++ b/spec/bridgedb-spec.md
@@ -195,8 +195,9 @@ should not be blocked.
is how the HTTPS distributor works.
The goal is to avoid handing out all the bridges to users in a similar
IP space and time.
-# Someone else should look at proposals/ideas/old/xxx-bridge-disbursement
-# to see if this section is missing relevant pieces from it. -KL
+
+> Someone else should look at proposals/ideas/old/xxx-bridge-disbursement
+> to see if this section is missing relevant pieces from it. -KL
BridgeDB fixes the set of bridges to be returned for a defined time
period.
@@ -204,19 +205,23 @@ should not be blocked.
as the same IP address and returns the same set of bridges. From here on,
this non-unique address will be referred to as the IP address's 'area'.
BridgeDB divides the IP address space equally into a small number of
-# Note, changed term from "areas" to "disjoint clusters" -MF
+
+> Note, changed term from "areas" to "disjoint clusters" -MF
+
disjoint clusters (typically 4) and returns different results for requests
coming from addresses that are placed into different clusters.
-# I found that BridgeDB is not strict in returning only bridges for a
-# given area. If a ring is empty, it considers the next one. Is this
-# expected behavior? -KL
-#
-# This does not appear to be the case, anymore. If a ring is empty, then
-# BridgeDB simply returns an empty set of bridges. -MF
-#
-# I also found that BridgeDB does not make the assignment to areas
-# persistent in the database. So, if we change the number of rings, it
-# will assign bridges to other rings. I assume this is okay? -KL
+
+> I found that BridgeDB is not strict in returning only bridges for a
+> given area. If a ring is empty, it considers the next one. Is this
+> expected behavior? -KL
+>
+> This does not appear to be the case, anymore. If a ring is empty, then
+> BridgeDB simply returns an empty set of bridges. -MF
+>
+> I also found that BridgeDB does not make the assignment to areas
+> persistent in the database. So, if we change the number of rings, it
+> will assign bridges to other rings. I assume this is okay? -KL
+
BridgeDB maintains a list of proxy IP addresses and returns the same
set of bridges to requests coming from these IP addresses.
The bridges returned to proxy IP addresses do not come from the same
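
The hunk above covers the HTTPS distributor's area/cluster logic. As a minimal sketch of that idea only, assuming a /16 "area", four clusters, and the three-hour period mentioned in the discussion (neither the area width nor the period is pinned down by the text above, and every name below is hypothetical rather than BridgeDB's own code):

```python
import hashlib
import ipaddress
import time

NUM_CLUSTERS = 4            # "small number of disjoint clusters (typically 4)"
PERIOD_SECONDS = 3 * 3600   # assumed rotation period for returned sets


def area_of(addr: str) -> str:
    """Collapse an IPv4 address to its non-unique 'area' (here: a /16)."""
    net = ipaddress.ip_network(addr + "/16", strict=False)
    return str(net.network_address)


def cluster_of(area: str) -> int:
    """Place an area into one of NUM_CLUSTERS disjoint clusters."""
    digest = hashlib.sha256(area.encode()).digest()
    return digest[0] % NUM_CLUSTERS


def bridges_for(addr: str, rings: dict[int, list[str]], n: int = 3) -> list[str]:
    """Return a stable subset of bridges for this address's cluster and the
    current time period, so repeated requests from the same area during the
    same period see the same answer."""
    area = area_of(addr)
    ring = rings.get(cluster_of(area), [])
    if not ring:
        return []                      # empty ring -> empty answer (per -MF above)
    period = int(time.time()) // PERIOD_SECONDS
    seed = hashlib.sha256(f"{area}|{period}".encode()).digest()
    start = int.from_bytes(seed[:4], "big") % len(ring)
    return [ring[(start + i) % len(ring)] for i in range(min(n, len(ring)))]
```

Here `rings` would map each cluster index to its list of bridge lines; an empty ring yields an empty answer, matching the -MF note quoted above.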
@@ -299,14 +304,17 @@ total." To do this, BridgeDB combines to the results:
this is how the email distributor works.
The goal is to bootstrap based on one or more popular email service's
sybil prevention algorithms.
-# Someone else should look at proposals/ideas/old/xxx-bridge-disbursement
-# to see if this section is missing relevant pieces from it. -KL
+
+> Someone else should look at proposals/ideas/old/xxx-bridge-disbursement
+> to see if this section is missing relevant pieces from it. -KL
BridgeDB rejects email addresses containing other characters than the
ones that RFC2822 allows.
BridgeDB may be configured to reject email addresses containing other
characters it might not process correctly.
-# I don't think we do this, is it worthwhile? -MF
+
+> I don't think we do this, is it worthwhile? -MF
+
BridgeDB rejects email addresses coming from other domains than a
configured set of permitted domains.
BridgeDB normalizes email addresses by removing "." characters and by
@@ -318,27 +326,32 @@ total." To do this, BridgeDB combines to the results:
BridgeDB does not return a new set of bridges to the same email address
until a given time period (typically a few hours) has passed.
-# Why don't we fix the bridges we give out for a global 3-hour time period
-# like we do for IP addresses? This way we could avoid storing email
-# addresses. -KL
-# The 3-hour value is probably much too short anyway. If we take longer
-# time values, then people get new bridges when bridges show up, as
-# opposed to then we decide to reset the bridges we give them. (Yes, this
-# problem exists for the IP distributor). -NM
-# I'm afraid I don't fully understand what you mean here. Can you
-# elaborate? -KL
-#
-# Assuming an average churn rate, if we use short time periods, then a
-# requestor will receive new bridges based on rate-limiting and will (likely)
-# eventually work their way around the ring; eventually exhausting all bridges
-# available to them from this distributor. If we use a longer time period,
-# then each time the period expires there will be more bridges in the ring
-# thus reducing the likelihood of all bridges being blocked and increasing
-# the time and effort required to enumerate all bridges. (This is my
-# understanding, not from Nick) -MF
-# Also, we presently need the cache to prevent replays and because if a user
-# sent multiple requests with different criteria in each then we would leak
-# additional bridges otherwise. -MF
+
+> Why don't we fix the bridges we give out for a global 3-hour time period
+> like we do for IP addresses? This way we could avoid storing email
+> addresses. -KL
+>
+> The 3-hour value is probably much too short anyway. If we take longer
+> time values, then people get new bridges when bridges show up, as
+> opposed to when we decide to reset the bridges we give them. (Yes, this
+> problem exists for the IP distributor). -NM
+>
+> I'm afraid I don't fully understand what you mean here. Can you
+> elaborate? -KL
+>
+> Assuming an average churn rate, if we use short time periods, then a
+> requestor will receive new bridges based on rate-limiting and will (likely)
+> eventually work their way around the ring; eventually exhausting all bridges
+> available to them from this distributor. If we use a longer time period,
+> then each time the period expires there will be more bridges in the ring
+> thus reducing the likelihood of all bridges being blocked and increasing
+> the time and effort required to enumerate all bridges. (This is my
+> understanding, not from Nick) -MF
+>
+> Also, we presently need the cache to prevent replays and because if a user
+> sent multiple requests with different criteria in each then we would leak
+> additional bridges otherwise. -MF
+
BridgeDB can be configured to include bridge fingerprints in replies
along with bridge IP addresses and OR ports.
BridgeDB can be configured to sign all replies using a PGP signing key.
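
The two hunks above describe the email distributor: reject addresses with characters outside what RFC2822 allows, accept only a configured set of domains, normalize addresses (the text mentions removing "." characters), and do not hand a new set of bridges to the same address until a time period has passed. A rough sketch of those checks, assuming a hypothetical domain allow-list, a simplified character rule, and lowercasing as an extra normalization step not stated above:

```python
import re
import time

ALLOWED_DOMAINS = {"gmail.com", "riseup.net"}   # hypothetical configured set
EMAIL_WAIT_SECONDS = 3 * 3600                   # "a few hours" between replies
_last_reply: dict[str, float] = {}              # normalized address -> last reply time

# Simplified stand-in for the RFC2822 character check; the real rule is
# broader and lives in BridgeDB's own parser.
_LOCALPART_OK = re.compile(r"^[A-Za-z0-9.!#$%&'*+/=?^_`{|}~-]+$")


def normalize(address: str) -> str:
    """Map trivial aliases of an address to one requester: lowercase (an
    assumption here) and drop '.' from the local part, as described above."""
    local, _, domain = address.lower().partition("@")
    return local.replace(".", "") + "@" + domain


def may_answer(address: str) -> bool:
    """Reject disallowed addresses and rate-limit repeated requests."""
    local, _, domain = address.partition("@")
    if not _LOCALPART_OK.match(local) or domain.lower() not in ALLOWED_DOMAINS:
        return False
    key = normalize(address)
    now = time.time()
    last = _last_reply.get(key)
    if last is not None and now - last < EMAIL_WAIT_SECONDS:
        return False   # a real distributor would replay its cached answer instead
    _last_reply[key] = now
    return True
```

The cache keyed by the normalized address is also what the -MF comment above points at: it prevents replays and stops a requester from leaking extra bridges by varying the request criteria.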
@@ -368,7 +381,7 @@ proceeds as follows:
# Selecting unallocated bridges to be stored in file buckets
-# Kaner should have a look at this section. -NM
+> Kaner should have a look at this section. -NM
```text
BridgeDB can be configured to reserve a subset of bridges and not give
@@ -382,15 +395,16 @@ proceeds as follows:
returned to the reserved set of bridges.
If a bridge stops running, BridgeDB replaces it with another bridge
from the reserved set of bridges.
-# I'm not sure if there's a design bug in file buckets. What happens if
-# we add a bridge X to file bucket A, and X goes offline? We would add
-# another bridge Y to file bucket A. OK, but what if A comes back? We
-# cannot put it back in file bucket A, because it's full. Are we going to
-# add it to a different file bucket? Doesn't that mean that most bridges
-# will be contained in most file buckets over time? -KL
-#
-# This should be handled the same as if the file bucket is reduced in size.
-# If X returns, then it should be added to the appropriate distributor. -MF
+
+> I'm not sure if there's a design bug in file buckets. What happens if
+> we add a bridge X to file bucket A, and X goes offline? We would add
+> another bridge Y to file bucket A. OK, but what if A comes back? We
+> cannot put it back in file bucket A, because it's full. Are we going to
+> add it to a different file bucket? Doesn't that mean that most bridges
+> will be contained in most file buckets over time? -KL
+>
+> This should be handled the same as if the file bucket is reduced in size.
+> If X returns, then it should be added to the appropriate distributor. -MF
```
<a id="bridgedb-spec.txt-7"></a>
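
The final hunk sketches how reserved bridges back-fill file buckets. A toy model of that replacement behaviour, assuming hypothetical names throughout and following the -MF note that a returning bridge goes to a distributor rather than back into its now-full bucket:

```python
from collections import deque


class FileBuckets:
    """Toy model of the reserved-bridge / file-bucket behaviour described
    above: buckets have a fixed size and dead bridges are replaced from the
    reserved set."""

    def __init__(self, bucket_sizes: dict[str, int], reserved: list[str]):
        self.sizes = bucket_sizes
        self.buckets: dict[str, list[str]] = {name: [] for name in bucket_sizes}
        self.reserved = deque(reserved)

    def fill(self) -> None:
        """Top up every bucket from the reserved set, up to its size."""
        for name, size in self.sizes.items():
            while len(self.buckets[name]) < size and self.reserved:
                self.buckets[name].append(self.reserved.popleft())

    def bridge_went_down(self, name: str, bridge: str) -> None:
        """Replace a dead bridge in bucket `name` from the reserved set."""
        if bridge in self.buckets[name]:
            self.buckets[name].remove(bridge)
        if self.reserved:
            self.buckets[name].append(self.reserved.popleft())

    def bridge_came_back(self, bridge: str, distributor: list[str]) -> None:
        """The bucket that held `bridge` is full again, so hand the
        returning bridge to an ordinary distributor instead (per -MF)."""
        distributor.append(bridge)
```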