author     ale <ale@incal.net>  2018-09-02 11:19:53 +0100
committer  ale <ale@incal.net>  2018-09-02 11:19:53 +0100
commit     bbc09df40da59a4362c4d7cbea1ec4a5e6a8a98c (patch)
tree       8eb252c0c94d8307a3f81f8a6f1eb00208e3b486
parent     59f3725ff8c81dca1f1305da34e877c0316d4152 (diff)
download   crawl-bbc09df40da59a4362c4d7cbea1ec4a5e6a8a98c.tar.gz
           crawl-bbc09df40da59a4362c4d7cbea1ec4a5e6a8a98c.zip
Fix typo
-rw-r--r--  README.md  2
1 file changed, 1 insertion(+), 1 deletion(-)
@@ -62,7 +62,7 @@ Like most crawlers, this one has a number of limitations:
 * it completely ignores *robots.txt*. You can make such policy
   decisions yourself by turning the robots.txt into a list of patterns
-  to be used with *--exclude-file*.
+  to be used with *--exclude-from-file*.
 * it does not embed a Javascript engine, so Javascript-rendered
   elements will not be detected.
 * CSS parsing is limited (uses regular expressions), so some *url()*
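The README text above suggests turning a site's robots.txt into a pattern file for the crawler's exclude option. A minimal sketch of that conversion, assuming a plain "Disallow:"-style robots.txt (the sample file contents here are invented for illustration):

# Create a sample robots.txt (in practice, fetch the site's real one).
cat > robots.txt <<'EOF'
User-agent: *
Disallow: /admin/
Disallow: /private/
EOF

# Extract each Disallow path into one pattern per line.
sed -n 's/^Disallow: *//p' robots.txt > exclude-patterns.txt

cat exclude-patterns.txt

The resulting exclude-patterns.txt could then be passed to the crawler via the *--exclude-from-file* flag named in this commit's diff; the exact invocation of the crawl binary is not shown here and would depend on its other options.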