authorMicah Elizabeth Scott <beth@torproject.org>2023-11-08 21:00:05 -0800
committerMicah Elizabeth Scott <beth@torproject.org>2023-11-09 14:16:27 -0800
commit3781ff1039e8d6b48f11e6c5e2e99222ea25d75d (patch)
tree3baf16dc268a8f28292a4e2c628ab83d0ec31b18 /spec/hspow-spec
parent95d3aa57c11c651237b69bd3d848f59a10d498ef (diff)
downloadtorspec-3781ff1039e8d6b48f11e6c5e2e99222ea25d75d.tar.gz
torspec-3781ff1039e8d6b48f11e6c5e2e99222ea25d75d.zip
Structural and formatting edits for hspow-spec/common-protocol
Formatting and structural edits to try and make the common-protocol section go together. Ended up repurposing parts of 'overview' for an introduction here, and deleting the separate overview section.
Diffstat (limited to 'spec/hspow-spec')
-rw-r--r--  spec/hspow-spec/common-protocol.md  310
-rw-r--r--  spec/hspow-spec/overview.md  68
2 files changed, 142 insertions, 236 deletions
diff --git a/spec/hspow-spec/common-protocol.md b/spec/hspow-spec/common-protocol.md
index 0aa9df5..844d87f 100644
--- a/spec/hspow-spec/common-protocol.md
+++ b/spec/hspow-spec/common-protocol.md
@@ -1,227 +1,201 @@
+# Common protocol
+
+We have made an effort to split the design of the proof-of-work subsystem into an algorithm-specific piece that can be upgraded, and a core protocol that provides queueing and effort adjustment.
+
+Currently there is only one versioned subprotocol defined:
+- [Version 1, Equi-X and Blake2b](./v1-equix.md)
+
+## Overview
+
```text
+ +----------------------------------+
+ | Onion Service |
+ +-------+ INTRO1 +-----------+ INTRO2 +--------+ |
+ |Client |-------->|Intro Point|------->| PoW |-----------+ |
+ +-------+ +-----------+ |Verifier| | |
+ +--------+ | |
+ | | |
+ | | |
+ | +----------v---------+ |
+ | |Intro Priority Queue| |
+ +---------+--------------------+---+
+ | | |
+ Rendezvous | | |
+ circuits | | |
+ v v v
+```
+
+The proof-of-work scheme specified in this document takes place during the [introduction phase of the onion service protocol](../rend-spec/introduction-protocol.md).
+
+The system described in this document is not meant to be on all the time, and it can be entirely disabled for services that do not experience DoS attacks.
+
+When the subsystem is enabled, suggested effort is continuously adjusted and the computational puzzle can be bypassed entirely when the effort reaches zero.
+In these cases, the proof-of-work subsystem can be dormant but still provide the necessary parameters for clients to voluntarily provide effort in order to get better placement in the priority queue.
+
+The protocol involves the following major steps:
+
+1. Service encodes PoW parameters in descriptor: `pow-params` in the [second layer plaintext format](../rend-spec/hsdesc-encrypt.md#second-layer-plaintext).
+2. Client fetches descriptor and begins solving. Currently this must use the [`v1` solver algorithm](../hspow-spec/v1-equix.md#client-solver).
+3. Client finishes solving and sends results using the [proof-of-work extension to INTRODUCE1](../rend-spec/introduction-protocol.md#INTRO1_POW_EXT).
+4. Service verifies the proof and queues an introduction based on proven effort. This currently uses the [`v1` verify algorithm](../hspow-spec/v1-equix.md#service-verify) only.
+5. Requests are continuously drained from the queue, highest effort first, subject to multiple constraints on speed. See below for more on [handling queued requests](#handling-queue).
+
+## Replay protection {#replay-protection}
-3. Protocol specification
+The service MUST NOT accept introduction requests with the same (seed, nonce) tuple.
+For this reason a replay protection mechanism must be employed.
-3.4.1.1. Replay protection [REPLAY_PROTECTION]
+The simplest way is to use a hash table to check whether a (seed, nonce) tuple has been used before for the active duration of a seed.
+Depending on how long a seed stays active, this might be a viable solution with reasonable memory/time overhead.
- The service MUST NOT accept introduction requests with the same (seed, nonce)
- tuple. For this reason a replay protection mechanism must be employed.
+If there is a worry that we might get too many introductions during the lifetime of a seed, we can use a Bloom filter or similar as our replay cache mechanism.
+A probabilistic filter means that we will potentially flag some connections as replays even if they are not, with this false-positive probability increasing as the number of entries increases.
+With the right parameter tuning this probability should be negligible, and dropped requests will be retried by the client.
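+
+As a non-normative illustration of this trade-off, a minimal replay filter might look like the sketch below.
+A plain hash table (for example, a set of `(seed, nonce)` pairs) gives exact answers at the cost of memory that grows with the number of introductions, while a Bloom-style bit array trades a small false-positive rate for bounded memory.
+The sizing constants and hash construction here are arbitrary assumptions chosen for clarity, not what C tor does.
+
+```python
+import hashlib
+
+class ReplayFilter:
+    """Bloom-style replay cache for (seed, nonce) tuples, reset on seed rotation."""
+
+    def __init__(self, num_bits=1 << 20, num_hashes=4):
+        self.bits = bytearray(num_bits // 8)
+        self.num_bits = num_bits
+        self.num_hashes = num_hashes
+
+    def _bit_positions(self, seed: bytes, nonce: bytes):
+        # Derive `num_hashes` independent bit positions from the tuple.
+        for i in range(self.num_hashes):
+            digest = hashlib.blake2b(seed + nonce + bytes([i])).digest()
+            yield int.from_bytes(digest[:8], "big") % self.num_bits
+
+    def check_and_add(self, seed: bytes, nonce: bytes) -> bool:
+        """Return True if (seed, nonce) may have been seen before, recording it either way."""
+        positions = list(self._bit_positions(seed, nonce))
+        seen = all(self.bits[p // 8] >> (p % 8) & 1 for p in positions)
+        for p in positions:
+            self.bits[p // 8] |= 1 << (p % 8)
+        return seen
+```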
- The simplest way is to use a simple hash table to check whether a (seed,
- nonce) tuple has been used before for the active duration of a
- seed. Depending on how long a seed stays active this might be a viable
- solution with reasonable memory/time overhead.
+## The introduction queue {#intro-queue}
- If there is a worry that we might get too many introductions during the
- lifetime of a seed, we can use a Bloom filter as our replay cache
- mechanism. The probabilistic nature of Bloom filters means that sometimes we
- will flag some connections as replays even if they are not; with this false
- positive probability increasing as the number of entries increase. However,
- with the right parameter tuning this probability should be negligible and
- well handled by clients.
+When proof-of-work is enabled for a service, that service diverts all incoming introduction requests to a priority queue system rather than handling them immediately.
- {TODO: Design and specify a suitable bloom filter for this purpose.}
+### Adding introductions to the introduction queue {#add-queue}
-3.4.2. The Introduction Queue [INTRO_QUEUE]
+When PoW is enabled, the service queues each incoming introduction request in a data structure sorted by its verified effort.
+Requests including no proof at all MUST be assigned an effort of zero.
+Requests with a proof that fails to verify MUST be rejected and not enqueued.
-3.4.2.1. Adding introductions to the introduction queue [ADD_QUEUE]
+Services MUST check whether the queue is overfull when adding to it, not just when processing requests.
+Floods of low-effort and zero-effort introductions need to be efficiently discarded when the queue is growing faster than it's draining.
- When PoW is enabled and a verified introduction comes through, the service
- instead of jumping straight into rendezvous, queues it and prioritizes it
- based on how much effort was devoted by the client to PoW. This means that
- introduction requests with high effort should be prioritized over those with
- low effort.
+The C implementation chooses a maximum number of queued items based on its configured dequeue rate limit multiplied by the circuit timeout.
+In effect, items past this threshold are not expected to be reachable before they time out.
+When this limit is exceeded, the queue experiences a mass trim event in which the lowest-effort half of all items is discarded.
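+
+The following sketch illustrates these queueing rules: zero effort for requests with no proof, outright rejection of proofs that fail verification, and a mass trim when the queue exceeds the dequeue-rate times timeout threshold.
+The class and field names are illustrative assumptions, not C tor's internal data structures.
+
+```python
+class IntroPriorityQueue:
+    """Illustrative effort-sorted introduction queue."""
+
+    def __init__(self, dequeue_rate: float, circuit_timeout: float):
+        # Items beyond this length are unlikely to be serviced before they time out.
+        self.max_len = int(dequeue_rate * circuit_timeout)
+        self.items = []          # (effort, arrival_seq, request) tuples
+        self.arrival_seq = 0
+
+    def enqueue(self, request, proof) -> bool:
+        """`proof` is None (no PoW extension) or an (effort, verified_ok) pair."""
+        if proof is None:
+            effort = 0           # no proof at all MUST count as zero effort
+        else:
+            effort, verified_ok = proof
+            if not verified_ok:
+                return False     # failing proofs MUST be rejected, never queued
+        self.arrival_seq += 1
+        self.items.append((effort, self.arrival_seq, request))
+        if len(self.items) > self.max_len:
+            # Mass trim event: keep only the higher-effort half of the queue.
+            self.items.sort(key=lambda item: item[0], reverse=True)
+            del self.items[len(self.items) // 2:]
+        return True
+```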
- To do so, the service maintains an "introduction priority queue" data
- structure. Each element in that priority queue is an introduction request,
- and its priority is the effort put into its PoW:
+### Handling queued introductions {#handling-queue}
- When a verified introduction comes through, the service uses its included
- effort commitment value to place each request into the right position of the
- priority_queue: The bigger the effort, the more priority it gets in the
- queue. If two elements have the same effort, the older one has priority over
- the newer one.
+When deciding which introduction request to consider next, the service chooses the highest available effort. When efforts are equivalent, the oldest queued request is chosen.
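+
+Using the same illustrative `(effort, arrival_seq, request)` tuples as the enqueue sketch above, the selection rule can be expressed as:
+
+```python
+def pop_next(items):
+    """Return the queued request with the highest effort, oldest first on ties."""
+    if not items:
+        return None
+    best = max(items, key=lambda item: (item[0], -item[1]))
+    items.remove(best)
+    return best[2]
+```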
-3.4.2.2. Handling introductions from the introduction queue [HANDLE_QUEUE]
+The service should handle introductions only by pulling from the introduction queue.
+We call this part of introduction handling the "bottom half" because most of the computation happens in this stage.
- The service should handle introductions by pulling from the introduction
- queue. We call this part of introduction handling the "bottom half" because
- most of the computation happens in this stage. For a description of how we
- expect such a system to work in Tor, see [TOR_SCHEDULER] section.
+For more on how we expect such a system to work in Tor, see the [scheduler analysis and discussion](./analysis-discussion.md#tor-scheduler) section.
-3.4.3. PoW effort estimation [EFFORT_ESTIMATION]
+## Effort control {#effort-control}
-3.4.3.1. High-level description of the effort estimation process
+### Overall strategy for effort determination {#effort-strategy}
- The service starts with a default suggested-effort value of 0, which keeps
- the PoW defenses dormant until we notice signs of overload.
+Denial-of-service is a dynamic problem where the attacker's capabilities constantly change, and hence we want our proof-of-work system to be dynamic and not stuck with a static difficulty setting.
+Instead of forcing clients to go below a static target configured by the service operator, we ask clients to "bid" using their PoW effort.
+Effectively, a client gets higher priority the higher effort they put into their proof-of-work.
+Clients automatically increase their bid when retrying, and services regularly offer a suggested starting point based on the recent queue status.
- The overall process of determining effort can be thought of as a set of
- multiple coupled feedback loops. Clients perform their own effort
- adjustments via [CLIENT_TIMEOUT] atop a base effort suggested by the service.
- That suggestion incorporates the service's control adjustments atop a base
- effort calculated using a sum of currently-queued client effort.
+[Motivated users](./motivation.md#user-profiles) can spend a high amount of effort in their PoW computation, which should guarantee access to the service given reasonable adversary models.
- Each feedback loop has an opportunity to cover different time scales. Clients
- can make adjustments at every single circuit creation request, whereas
- services are limited by the extra load that frequent updates would place on
- HSDir nodes.
+An effective effort estimation algorithm will improve reachability and UX by suggesting efforts that keep the overall service load at a manageable level while also leaving users with a tolerable overall delay.
- In the combined client/service system these client-side increases are
- expected to provide the most effective quick response to an emerging DoS
- attack. After early clients increase the effort using [CLIENT_TIMEOUT],
- later clients will benefit from the service detecting this increased queued
- effort and offering a larger suggested_effort.
+The service starts with a default suggested-effort value of 0, which keeps the PoW defenses dormant until we notice signs of queue overload.
- Effort increases and decreases both have an intrinsic cost. Increasing effort
- will make the service more expensive to contact, and decreasing effort makes
- new requests likely to become backlogged behind older requests. The steady
- state condition is preferable to either of these side-effects, but ultimately
- it's expected that the control loop always oscillates to some degree.
+The entire process of determining effort can be thought of as a set of multiple coupled feedback loops.
+Clients perform their own effort adjustments via [timeout retry](#client-timeout) atop a base effort suggested by the service.
+That suggestion incorporates the service's control adjustments atop a base effort calculated using a sum of currently-queued client effort.
-3.4.3.2. Service-side effort estimation
+Each feedback loop has an opportunity to cover different time scales.
+Clients can make adjustments at every single circuit creation request, whereas services are limited by the extra load that frequent updates would place on HSDir nodes.
- Services keep an internal effort estimation which updates on a regular
- periodic timer in response to measurements made on the queueing behavior
- in the previous period. These internal effort changes can optionally trigger
- client-visible suggested_effort changes when the difference is great enough
- to warrant republishing to the HSDir.
+In the combined client/service system these client-side increases are expected to provide the most effective quick response to an emerging DoS attack.
+After early clients increase the effort using timeouts, later clients benefit from the service detecting this increased queued effort and publishing a larger suggested effort.
- This evaluation and update period is referred to as HS_UPDATE_PERIOD.
- The service side effort estimation takes inspiration from TCP congestion
- control's additive increase / multiplicative decrease approach, but unlike
- a typical AIMD this algorithm is fixed-rate and doesn't update immediately
- in response to events.
+Effort increases and decreases both have a cost.
+Increasing effort will make the service more expensive to contact,
+and decreasing effort makes new requests likely to become backlogged behind older requests.
+The steady state condition is preferable to either of these side-effects, but ultimately it's expected that the control loop always oscillates to some degree.
- {TODO: HS_UPDATE_PERIOD is hardcoded to 300 (5 minutes) currently, but it
- should be configurable in some way. Is it more appropriate to use the
- service's torrc here or a consensus parameter?}
+### Service-side effort estimation {#service-effort}
-3.4.3.3. Per-period service state
+Services keep an internal effort estimation which updates on a regular periodic timer in response to measurements made on the queueing behavior in the previous period.
+These internal effort changes can optionally trigger client-visible [descriptor changes](#service-effort-update) when the difference is great enough to warrant republication to the [HSDir](../rend-spec/hsdesc.md).
- During each update period, the service maintains some state:
+This evaluation and update period is referred to as `HS_UPDATE_PERIOD`.
+The service-side effort estimation takes inspiration from TCP congestion control's additive increase / multiplicative decrease approach, but unlike a typical AIMD this algorithm is fixed-rate and doesn't update immediately in response to events.
- 1. TOTAL_EFFORT, a sum of all effort values for rendezvous requests that
- were successfully validated and enqueued.
+TODO: `HS_UPDATE_PERIOD` is hardcoded to 300 (5 minutes) currently, but it should be configurable in some way.
+Is it more appropriate to use the service's torrc here or a consensus parameter?
- 2. REND_HANDLED, a count of rendezvous requests that were actually
- launched. Requests that made it to dequeueing but were too old to launch
- by then are not included.
-
- 3. HAD_QUEUE, a flag which is set if at any time in the update period we
- saw the priority queue filled with more than a minimum amount of work,
- greater than we would expect to process in approximately 1/4 second
- using the configured dequeue rate.
+#### Per-period service state {#service-effort-periodic}
- 4. MAX_TRIMMED_EFFORT, the largest observed single request effort that we
- discarded during the period. Requests are discarded either due to age
- (timeout) or during culling events that discard the bottom half of the
- entire queue when it's too full.
+During each update period, the service maintains some state:
-3.4.3.4. Service AIMD conditions
+1. `TOTAL_EFFORT`, a sum of all effort values for rendezvous requests that were successfully validated and enqueued.
+2. `REND_HANDLED`, a count of rendezvous requests that were actually launched. Requests that made it to dequeueing but were too old to launch by then are not included.
+3. `HAD_QUEUE`, a flag which is set if at any time in the update period we saw the priority queue filled with more than a minimum amount of work, greater than we would expect to process in approximately 1/4 second using the configured dequeue rate.
+4. `MAX_TRIMMED_EFFORT`, the largest observed single request effort that we discarded during the period. Requests are discarded either due to age (timeout) or during culling events that discard the bottom half of the entire queue when it's too full.
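+
+As a sketch, these counters can be pictured as a single record that is reset at the start of every `HS_UPDATE_PERIOD`; the field names follow the list above and are not C tor's actual structures.
+
+```python
+from dataclasses import dataclass
+
+@dataclass
+class PerPeriodState:
+    total_effort: int = 0         # TOTAL_EFFORT: summed effort of validated, enqueued requests
+    rend_handled: int = 0         # REND_HANDLED: rendezvous requests actually launched
+    had_queue: bool = False       # HAD_QUEUE: queue ever held more than ~1/4 second of work
+    max_trimmed_effort: int = 0   # MAX_TRIMMED_EFFORT: largest single effort discarded this period
+```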
- At the end of each period, the service may decide to increase effort,
- decrease effort, or make no changes, based on these accumulated state values:
+#### Service AIMD conditions {#service-effort-aimd}
- 1. If MAX_TRIMMED_EFFORT > our previous internal suggested_effort,
- always INCREASE. Requests that follow our latest advice are being
- dropped.
+At the end of each period, the service may decide to increase effort, decrease effort, or make no changes, based on these accumulated state values:
- 2. If the HAD_QUEUE flag was set and the queue still contains at least
- one item with effort >= our previous internal suggested_effort,
- INCREASE. Even if we haven't yet reached the point of dropping requests,
- this signal indicates that the our latest suggestion isn't high enough
- and requests will build up in the queue.
+1. If `MAX_TRIMMED_EFFORT` > our previous internal suggested_effort, always INCREASE.
+ Requests that follow our latest advice are being dropped.
+2. If the `HAD_QUEUE` flag was set and the queue still contains at least one item with effort >= our previous internal suggested_effort, INCREASE.
+ Even if we haven't yet reached the point of dropping requests, this signal indicates that our latest suggestion isn't high enough and requests will build up in the queue.
+3. If neither condition 1 nor condition 2 is taking place and the queue is below a level we would expect to process in approximately 1/4 second, choose to DECREASE.
+4. If none of these conditions match, the suggested effort is unchanged.
- 3. If neither condition (1) or (2) are taking place and the queue is below
- a level we would expect to process in approximately 1/4 second, choose
- to DECREASE.
+When we INCREASE, the internal suggested_effort is increased to either its previous value + 1, or (`TOTAL_EFFORT` / `REND_HANDLED`), whichever is larger.
- 4. If none of these conditions match, the suggested effort is unchanged.
+When we DECREASE, the internal suggested_effort is scaled by 2/3rds.
- When we INCREASE, the internal suggested_effort is increased to either its
- previous value + 1, or (TOTAL_EFFORT / REND_HANDLED), whichever is larger.
+Over time, this will continue to decrease our effort suggestion any time the service is fully processing its request queue.
+If the queue stays empty, the effort suggestion decreases to zero and clients should no longer submit a proof-of-work solution with their first connection attempt.
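+
+Putting the conditions and update rules together, one update period of the service-side estimator might look like the sketch below.
+The argument names mirror the per-period counters above; `max_queued_effort` and `queue_nearly_empty` are assumed inputs describing the queue at the end of the period.
+
+```python
+def update_suggested_effort(prev_effort: int, total_effort: int, rend_handled: int,
+                            had_queue: bool, max_trimmed_effort: int,
+                            max_queued_effort: int, queue_nearly_empty: bool) -> int:
+    """One HS_UPDATE_PERIOD step of the fixed-rate AIMD described above."""
+    if max_trimmed_effort > prev_effort or \
+       (had_queue and max_queued_effort >= prev_effort):
+        # INCREASE: previous value + 1 or the average enqueued effort, whichever
+        # is larger.  Guard against a period in which nothing was launched.
+        return max(prev_effort + 1, total_effort // max(rend_handled, 1))
+    if queue_nearly_empty:
+        return prev_effort * 2 // 3   # DECREASE: scale by 2/3
+    return prev_effort                # otherwise leave the suggestion unchanged
+```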
- When we DECREASE, the internal suggested_effort is scaled by 2/3rds.
+It's worth noting that the suggested-effort is not a hard limit to the efforts that are accepted by the service, and it's only meant to serve as a guideline for clients to reduce the number of unsuccessful requests that get to the service.
+When [adding requests to the queue](#add-queue), services do accept valid solutions with efforts lower than the published `suggested-effort`.
- Over time, this will continue to decrease our effort suggestion any time the
- service is fully processing its request queue. If the queue stays empty, the
- effort suggestion decreases to zero and clients should no longer submit a
- proof-of-work solution with their first connection attempt.
+#### Updating descriptor with new suggested effort {#service-effort-update}
- It's worth noting that the suggested-effort is not a hard limit to the
- efforts that are accepted by the service, and it's only meant to serve as a
- guideline for clients to reduce the number of unsuccessful requests that get
- to the service. The service still adds requests with lower effort than
- suggested-effort to the priority queue in [ADD_QUEUE].
+The service descriptors may be updated for multiple reasons including introduction point rotation common to all v3 onion services, scheduled seed rotations like the one described for [`v1` parameters](./v1-equix.md#parameter-descriptor), and updates to the effort suggestion.
+Even though the internal effort estimate updates on a regular timer, we avoid propagating those changes into the descriptor and the HSDir hosts unless there is a significant change.
-3.4.3.5. Updating descriptor with new suggested effort
+If the PoW params otherwise match but the suggested effort has changed by less than 15 percent, services SHOULD NOT upload a new descriptor.
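+
+One possible reading of this rule, as a sketch with hypothetical field names (the exact comparison used by an implementation may differ):
+
+```python
+def needs_reupload(old, new) -> bool:
+    """Decide whether changed PoW params justify publishing a new descriptor.
+    `old` and `new` are assumed to carry seed, expiration and suggested_effort fields."""
+    if (old.seed, old.expiration) != (new.seed, new.expiration):
+        return True                 # seed rotation always warrants a republish
+    if old.suggested_effort == 0:
+        return new.suggested_effort != 0
+    change = abs(new.suggested_effort - old.suggested_effort) / old.suggested_effort
+    return change >= 0.15           # skip uploads for small effort-only changes
+```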
- The service descriptors may be updated for multiple reasons including
- introduction point rotation common to all v3 onion services, the scheduled
- seed rotations described in [DESC_POW], and updates to the effort suggestion.
- Even though the internal effort estimate updates on a regular timer, we avoid
- propagating those changes into the descriptor and the HSDir hosts unless
- there is a significant change.
+### Client-side effort estimation {#client-effort}
- If the PoW params otherwise match but the seed has changed by less than 15
- percent, services SHOULD NOT upload a new descriptor.
+Clients are responsible for making their own effort adjustments in response to connection trouble, to allow the system a chance to react before the service has published new effort values.
+This is an important tool to uphold UX expectations without relying on excessively frequent updates through the HSDir.
-4. Client behavior [CLIENT_BEHAVIOR]
+#### Failure ambiguity {#client-failure-ambiguity}
- This proposal introduces a bunch of new ways where a legitimate client can
- fail to reach the onion service.
+The first challenge in reacting to failure, in our case, is to accurately and quickly recognize that a failure has occurred at all.
- Furthermore, there is currently no end-to-end way for the onion service to
- inform the client that the introduction failed. The INTRO_ACK cell is not
- end-to-end (it's from the introduction point to the client) and hence it does
- not allow the service to inform the client that the rendezvous is never gonna
- occur.
+This document introduces several new ways in which a legitimate client can fail to reach the onion service.
+Furthermore, there is currently no end-to-end way for the onion service to inform the client that the introduction failed.
+The INTRO_ACK cell is not end-to-end (it travels from the introduction point to the client) and hence it does not allow the service to inform the client that the rendezvous is never going to occur.
- From the client's perspective there's no way to attribute this failure to
- the service itself rather than the introduction point, so error accounting
- is performed separately for each introduction-point. Existing mechanisms
- will discard an introduction point that's required too many retries.
+From the client's perspective there's no way to attribute this failure to the service itself rather than the introduction point, so error accounting is performed separately for each introduction-point.
+Pre-existing mechanisms will discard an introduction point that has required too many retries.
-4.1. Clients handling timeouts [CLIENT_TIMEOUT]
+#### Clients handling timeouts {#client-timeout}
- Alice can fail to reach the onion service if her introduction request gets
- trimmed off the priority queue in [HANDLE_QUEUE], or if the service does not
- get through its priority queue in time and the connection times out.
+Alice can fail to reach the onion service if her introduction request gets trimmed off the priority queue when [enqueueing new requests](#add-queue), or if the service does not get through its priority queue in time and the connection times out.
- This section presents a heuristic method for the client getting service even
- in such scenarios.
+This section presents a heuristic method that lets the client obtain service even in such scenarios.
- If the rendezvous request times out, the client SHOULD fetch a new descriptor
- for the service to make sure that it's using the right suggested-effort for
- the PoW and the right PoW seed. If the fetched descriptor includes a new
- suggested effort or seed, it should first retry the request with these
- parameters.
+If the rendezvous request times out, the client SHOULD fetch a new descriptor for the service to make sure that it's using the right suggested-effort for the PoW and the right PoW seed.
+If the fetched descriptor includes a new suggested effort or seed, it should first retry the request with these parameters.
- {TODO: This is not actually implemented yet, but we should do it. How often
- should clients at most try to fetch new descriptors? Determined by a
- consensus parameter? This change will also allow clients to retry
- effectively in cases where the service has just been reconfigured to
- enable PoW defenses.}
+TODO: This is not actually implemented yet, but we should do it.
+How often should clients at most try to fetch new descriptors?
+Determined by a consensus parameter?
+This change will also allow clients to retry effectively in cases where the service has just been reconfigured to enable PoW defenses.
- Every time the client retries the connection, it will count these failures
- per-introduction-point. These counts of previous retries are combined with
- the service's suggested_effort when calculating the actual effort to spend
- on any individual request to a service that advertises PoW support, even
- when the currently advertised suggested_effort is zero.
+Every time the client retries the connection, it will count these failures per-introduction-point. These counts of previous retries are combined with the service's `suggested_effort` when calculating the actual effort to spend on any individual request to a service that advertises PoW support, even when the currently advertised `suggested_effort` is zero.
- On each retry, the client modifies its solver effort:
+On each retry, the client modifies its solver effort:
- 1. If the effort is below (CLIENT_POW_EFFORT_DOUBLE_UNTIL = 1000)
- it will be doubled.
+1. If the effort is below `CLIENT_POW_EFFORT_DOUBLE_UNTIL` (= 1000) it will be doubled.
+2. Otherwise, multiply the effort by `CLIENT_POW_RETRY_MULTIPLIER` (= 1.5).
+3. Constrain the effort to no less than `CLIENT_MIN_RETRY_POW_EFFORT` (= 8). Note that this limit is specific to retries only. Clients may use a lower effort for their first connection attempt.
+4. Apply the maximum effort limit [described below](#client-limits).
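+
+A sketch of this retry schedule, using the constants named above (the cap applied in step 4 is `CLIENT_MAX_POW_EFFORT` from the next subsection):
+
+```python
+CLIENT_POW_EFFORT_DOUBLE_UNTIL = 1000
+CLIENT_POW_RETRY_MULTIPLIER = 1.5
+CLIENT_MIN_RETRY_POW_EFFORT = 8
+CLIENT_MAX_POW_EFFORT = 10000
+
+def next_retry_effort(effort: int) -> int:
+    """Effort to use on the next retry, given the effort of the failed attempt."""
+    if effort < CLIENT_POW_EFFORT_DOUBLE_UNTIL:
+        effort *= 2                                          # step 1: double small efforts
+    else:
+        effort = int(effort * CLIENT_POW_RETRY_MULTIPLIER)   # step 2: multiply by 1.5
+    effort = max(effort, CLIENT_MIN_RETRY_POW_EFFORT)        # step 3: retry floor
+    return min(effort, CLIENT_MAX_POW_EFFORT)                # step 4: client-side cap
+```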
- 2. Otherwise, multiply the effort by (CLIENT_POW_RETRY_MULTIPLIER = 1.5).
+#### Client-imposed effort limits {#client-limits}
- 3. Constrain the new effort to be at least
- (CLIENT_MIN_RETRY_POW_EFFORT = 8) and no greater than
- (CLIENT_MAX_POW_EFFORT = 10000)
+There isn't a practical upper limit on effort defined by the protocol itself, but clients may choose a maximum effort limit to enforce.
+It may be desirable to do this in some cases to improve responsiveness, but currently the main reason for this limit is to work around weak cancellation support in our implementation.
- {TODO: These hardcoded limits should be replaced by timed limits and/or
- an unlimited solver with robust cancellation. This is issue tor#40787}
+Effort values used for both initial connections and retries are currently limited to no greater than `CLIENT_MAX_POW_EFFORT` (= 10000).
-``` \ No newline at end of file
+TODO: This hardcoded limit should be replaced by timed limits and/or an unlimited solver with robust cancellation. This is [issue 40787](https://gitlab.torproject.org/tpo/core/tor/-/issues/40787) in C tor.
diff --git a/spec/hspow-spec/overview.md b/spec/hspow-spec/overview.md
deleted file mode 100644
index cb4ea7e..0000000
--- a/spec/hspow-spec/overview.md
+++ /dev/null
@@ -1,68 +0,0 @@
-```text
-
-2. System Overview
-
-2.1. Tor protocol overview
-
- +----------------------------------+
- | Onion Service |
- +-------+ INTRO1 +-----------+ INTRO2 +--------+ |
- |Client |-------->|Intro Point|------->| PoW |-----------+ |
- +-------+ +-----------+ |Verifier| | |
- +--------+ | |
- | | |
- | | |
- | +----------v---------+ |
- | |Intro Priority Queue| |
- +---------+--------------------+---+
- | | |
- Rendezvous | | |
- circuits | | |
- v v v
-
-
-
- The proof-of-work scheme specified in this proposal takes place during the
- introduction phase of the onion service protocol.
-
- The system described in this proposal is not meant to be on all the time, and
- it can be entirely disabled for services that do not experience DoS attacks.
-
- When the subsystem is enabled, suggested effort is continuously adjusted and
- the computational puzzle can be bypassed entirely when the effort reaches
- zero. In these cases, the proof-of-work subsystem can be dormant but still
- provide the necessary parameters for clients to voluntarily provide effort
- in order to get better placement in the priority queue.
-
- The protocol involves the following major steps:
-
- 1) Service encodes PoW parameters in descriptor [DESC_POW]
- 2) Client fetches descriptor and computes PoW [CLIENT_POW]
- 3) Client completes PoW and sends results in INTRO1 cell [INTRO1_POW]
- 4) Service verifies PoW and queues introduction based on PoW effort
- [SERVICE_VERIFY]
- 5) Requests are continuously drained from the queue, highest effort first,
- subject to multiple constraints on speed [HANDLE_QUEUE]
-
-2.2. Proof-of-work overview
-
-2.2.2. Dynamic PoW
-
- DoS is a dynamic problem where the attacker's capabilities constantly change,
- and hence we want our proof-of-work system to be dynamic and not stuck with a
- static difficulty setting. Hence, instead of forcing clients to go below a
- static target like in Bitcoin to be successful, we ask clients to "bid" using
- their PoW effort. Effectively, a client gets higher priority the higher
- effort they put into their proof-of-work. This is similar to how
- proof-of-stake works but instead of staking coins, you stake work.
-
- The benefit here is that legitimate clients who really care about getting
- access can spend a big amount of effort into their PoW computation, which
- should guarantee access to the service given reasonable adversary models. See
- [PARAM_TUNING] for more details about these guarantees and tradeoffs.
-
- As a way to improve reachability and UX, the service tries to estimate the
- effort needed for clients to get access at any given time and places it in
- the descriptor. See [EFFORT_ESTIMATION] for more details.
-
-```