author    ale <ale@incal.net>  2018-09-02 11:19:53 +0100
committer ale <ale@incal.net>  2018-09-02 11:19:53 +0100
commit    bbc09df40da59a4362c4d7cbea1ec4a5e6a8a98c (patch)
tree      8eb252c0c94d8307a3f81f8a6f1eb00208e3b486
parent    59f3725ff8c81dca1f1305da34e877c0316d4152 (diff)
Fix typo
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index 3e4d973..38f7bc3 100644
--- a/README.md
+++ b/README.md
@@ -62,7 +62,7 @@ Like most crawlers, this one has a number of limitations:
 * it completely ignores *robots.txt*. You can make such policy
   decisions yourself by turning the robots.txt into a list of patterns
-  to be used with *--exclude-file*.
+  to be used with *--exclude-from-file*.
 * it does not embed a Javascript engine, so Javascript-rendered
   elements will not be detected.
 * CSS parsing is limited (uses regular expressions), so some *url()*