Broad Crawls

    In addition to this “focused crawl”, there is another common type of crawling which covers a large (potentially unlimited) number of domains, and is only limited by time or another arbitrary constraint, rather than stopping when the domain has been crawled to completion or when there are no more requests to perform. These are called “broad crawls”, and they are the typical kind of crawl employed by search engines.

    These are some common properties often found in broad crawls:

    • they crawl many domains (often, unbounded) instead of a specific set of sites
    • they don’t necessarily crawl domains to completion, because it would be impractical (or impossible) to do so, and instead limit the crawl by time or number of pages crawled
    • they crawl many domains concurrently, which allows them to achieve faster crawl speeds by not being limited by any particular site constraint (each site is crawled slowly to respect politeness, but many sites are crawled in parallel)

    As said above, Scrapy default settings are optimized for focused crawls, not broad crawls. However, due to its asynchronous architecture, Scrapy is very well suited for performing fast broad crawls. This page summarizes some things you need to keep in mind when using Scrapy for doing broad crawls, along with concrete suggestions of Scrapy settings to tune in order to achieve an efficient broad crawl.

    Use the right SCHEDULER_PRIORITY_QUEUE

    Scrapy’s default scheduler priority queue is 'scrapy.pqueues.ScrapyPriorityQueue'. It works best during single-domain crawls. It does not work well with crawling many different domains in parallel.

    To apply the recommended priority queue use:
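    SCHEDULER_PRIORITY_QUEUE = "scrapy.pqueues.DownloaderAwarePriorityQueue"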

    Increase concurrency

    Concurrency is the number of requests that are processed in parallel. There is a global limit (CONCURRENT_REQUESTS) and an additional limit that can be set either per domain (CONCURRENT_REQUESTS_PER_DOMAIN) or per IP (CONCURRENT_REQUESTS_PER_IP).

    Note

    The scheduler priority queue recommended for broad crawls does not support CONCURRENT_REQUESTS_PER_IP.

    The default global concurrency limit in Scrapy is not suitable for crawling many different domains in parallel, so you will want to increase it. How much to increase it will depend on how much CPU and memory your crawler will have available.

    A good starting point is 100:

    CONCURRENT_REQUESTS = 100

    Increasing concurrency also increases memory usage. If memory usage is a concern, you might need to lower your global concurrency limit accordingly.
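    For example, a higher global limit can be combined with the per-domain limit mentioned above, so that each individual site is still crawled politely (the values shown are illustrative starting points; 8 is Scrapy’s default per-domain limit):

    CONCURRENT_REQUESTS = 100
    CONCURRENT_REQUESTS_PER_DOMAIN = 8  # keep per-site load polite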

    Increase Twisted IO thread pool maximum size

    Currently Scrapy does DNS resolution in a blocking way with usage of a thread pool. With higher concurrency levels the crawling could be slow or even fail, hitting DNS resolver timeouts. A possible solution is to increase the number of threads handling DNS queries. The DNS queue will then be processed faster, speeding up the establishing of connections and the crawl overall.

    To increase maximum thread pool size use:

    REACTOR_THREADPOOL_MAXSIZE = 20

    Set up your own DNS

    If you have multiple crawling processes and a single central DNS, it can act like a DoS attack on the DNS server, resulting in slowdown of the entire network or even blocking your machines. To avoid this, set up your own DNS server with a local cache and an upstream connection to some large DNS provider such as OpenDNS or Verizon.
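    Relatedly, Scrapy exposes a few DNS settings of its own that can be tuned alongside such a setup. A sketch, using the settings’ default values for illustration:

    DNSCACHE_ENABLED = True   # keep an in-memory DNS cache
    DNSCACHE_SIZE = 10000     # maximum number of cached entries
    DNS_TIMEOUT = 60          # seconds to wait for a DNS query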

    Reduce log level

    When doing broad crawls you are often only interested in the crawl rates you get and any errors found. These stats are reported by Scrapy when using the INFO log level. In order to save CPU (and log storage requirements) you should not use the DEBUG log level when performing large broad crawls in production. Using the DEBUG level when developing your (broad) crawler may be fine though.

    To set the log level use:
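    LOG_LEVEL = "INFO"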

    Disable cookies

    Disable cookies unless you really need them. Cookies are often not needed when doing broad crawls (search engine crawlers ignore them), and disabling them improves performance by saving some CPU cycles and reducing the memory footprint of your Scrapy crawler.

    To disable cookies use:

    COOKIES_ENABLED = False

    Disable retries

    Retrying failed HTTP requests can slow down the crawls substantially, especially when sites are very slow to respond (or fail to respond), thus causing a timeout error which gets retried many times, unnecessarily, preventing crawler capacity from being reused for other domains.

    To disable retries use:

    RETRY_ENABLED = False

    Reduce download timeout

    Reduce the download timeout so that stuck requests are discarded quickly, freeing up capacity to process the next ones. To reduce the download timeout use:
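    DOWNLOAD_TIMEOUT = 15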

    Disable redirects

    Consider disabling redirects, unless you are interested in following them. When doing broad crawls it’s common to save redirects and resolve them when revisiting the site at a later crawl. This also helps to keep the number of requests constant per crawl batch; otherwise redirect loops may cause the crawler to dedicate too many resources to any specific domain.

    To disable redirects use:

    REDIRECT_ENABLED = False

    Enable crawling of “Ajax Crawlable Pages”

    Some pages (up to 1%, based on empirical data from year 2013) declare themselves as ajax crawlable. This means they provide a plain HTML version of content that is usually available only via AJAX. Pages can indicate it in two ways:

    • by using #! in the URL - this is the default way;
    • by using a special meta tag - this way is used on “main”, “index” website pages.

    Scrapy handles the first case automatically; to handle the second case, enable AjaxCrawlMiddleware:

    AJAXCRAWL_ENABLED = True

    When doing broad crawls it’s common to crawl a lot of “index” web pages; AjaxCrawlMiddleware helps to crawl them correctly. It is turned OFF by default because it has some performance overhead, and enabling it for focused crawls doesn’t make much sense.

    Crawl in BFO order

    Scrapy crawls in DFO order by default.

    In broad crawls, however, page crawling tends to be faster than page processing. As a result, unprocessed early requests stay in memory until the final depth is reached, which can significantly increase memory usage.

    Crawl in BFO order instead to save memory.
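    One documented way to switch to BFO order is to raise DEPTH_PRIORITY and use FIFO scheduler queues:

    DEPTH_PRIORITY = 1
    SCHEDULER_DISK_QUEUE = "scrapy.squeues.PickleFifoDiskQueue"
    SCHEDULER_MEMORY_QUEUE = "scrapy.squeues.FifoMemoryQueue"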

    Be mindful of memory leaks

    If your broad crawl shows a high memory usage, in addition to crawling in BFO order and lowering concurrency you should debug your memory leaks.
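    As a sketch of one documented starting point, Scrapy tracks live references to Requests, Responses and Items via scrapy.utils.trackref, and its telnet console (enabled by default, port 6023) exposes a prefs() shortcut for inspecting them:

    # Programmatic equivalent of the telnet console’s prefs() shortcut:
    # prints the classes of tracked live objects, their counts, and the
    # age of the oldest instance, which helps spot objects that leak.
    from scrapy.utils.trackref import print_live_refs

    print_live_refs()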

    Install a specific Twisted reactor

    If the crawl is exceeding the system’s capabilities, you might want to try installing a specific Twisted reactor, via the TWISTED_REACTOR setting.
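    For example (PollReactor is one of the reactors Twisted ships; any reactor import path works here):

    TWISTED_REACTOR = "twisted.internet.pollreactor.PollReactor"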