# Crawling Process
The crawling process downloads HTML documents and saves them into per-domain snapshots. The crawler seeks out HTML documents and ignores other types of documents, such as PDFs. Crawling is done on a domain-by-domain basis, and the crawler does not follow links to other domains within a single job.
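As a rough illustration of that shape, the sketch below walks a single domain's link queue, keeps only HTML responses, and declines to follow links that leave the domain. The `fetch` and `extractLinks` helpers are hypothetical stand-ins, not the project's actual classes.

```java
import java.net.URI;
import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch of the domain-by-domain crawl loop; fetch() and
// extractLinks() are hypothetical stand-ins, not the project's real classes.
class DomainCrawlSketch {
    record Fetched(String contentType, String body) {}

    // Hypothetical helper standing in for an HTTP GET.
    static Fetched fetch(URI url) { return new Fetched("text/html", "<html></html>"); }

    // Hypothetical helper standing in for HTML link extraction.
    static Set<URI> extractLinks(String html, URI base) { return Set.of(); }

    static void crawlDomain(String domain, URI seed) {
        var queue = new ArrayDeque<URI>();
        var seen = new HashSet<URI>();
        queue.add(seed);

        while (!queue.isEmpty()) {
            URI url = queue.poll();
            if (!seen.add(url)) continue;   // skip already-visited addresses

            Fetched doc = fetch(url);

            // Only HTML documents are kept; PDFs and other types are ignored.
            if (!doc.contentType().startsWith("text/html")) continue;

            // ... append the document to the per-domain snapshot here ...

            for (URI link : extractLinks(doc.body(), url)) {
                // Links leaving the domain are not followed within this job.
                if (domain.equalsIgnoreCase(link.getHost())) {
                    queue.add(link);
                }
            }
        }
    }
}
```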
## Robots Rules
A significant part of the crawler deals with `robots.txt` and similar rate-limiting headers, especially when these are not served in a standard way (which is very common). RFC 9309 as well as Google's Robots.txt Specifications are good references.
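The sketch below gives a flavour of the lenient parsing this requires. It is a deliberately simplified, hand-rolled interpretation of `robots.txt` (including the non-standard but widely served `Crawl-delay` directive), not the parser the crawler actually uses.

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

// Simplified, lenient robots.txt interpretation for a single user-agent.
// Illustrative sketch only; the real crawler's handling is more thorough.
class RobotsSketch {
    final List<String> disallowed = new ArrayList<>();
    Duration crawlDelay = Duration.ofSeconds(1); // default politeness delay

    void parse(String robotsTxt, String userAgent) {
        boolean applies = false;
        for (String raw : robotsTxt.split("\n")) {
            String line = raw.split("#", 2)[0].trim();   // strip comments
            int colon = line.indexOf(':');
            if (colon < 0) continue;

            String field = line.substring(0, colon).trim().toLowerCase();
            String value = line.substring(colon + 1).trim();

            switch (field) {
                case "user-agent" ->
                    applies = value.equals("*") || value.equalsIgnoreCase(userAgent);
                case "disallow" -> {
                    if (applies && !value.isEmpty()) disallowed.add(value);
                }
                case "crawl-delay" -> {   // non-standard, but commonly encountered
                    if (applies) {
                        try { crawlDelay = Duration.ofSeconds(Long.parseLong(value)); }
                        catch (NumberFormatException e) { /* ignore malformed values */ }
                    }
                }
            }
        }
    }

    boolean isAllowed(String path) {
        return disallowed.stream().noneMatch(path::startsWith);
    }
}
```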
## Re-crawling
The crawler can use old crawl data to avoid re-downloading documents that have not changed. This is done by comparing the old and new documents via conditional requests with the HTTP `If-Modified-Since` and `If-None-Match` headers. If a large proportion of a domain's documents turn out to be unchanged, the crawler falls into a mode where it only randomly samples a few documents from that domain, to avoid wasting time and resources on content that is unlikely to have changed.
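A minimal sketch of such a conditional re-fetch using the JDK's `java.net.http` client is shown below. It assumes the `ETag` and `Last-Modified` values recorded during the previous crawl are at hand, and treats a 304 response as "reuse the stored document".

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Optional;

// Sketch of a conditional re-fetch: send the validators recorded during the
// previous crawl and treat 304 Not Modified as "reuse the old document".
class ConditionalFetchSketch {
    private final HttpClient client = HttpClient.newHttpClient();

    /** Returns the new body, or empty if the server reports the document unchanged. */
    Optional<String> refetch(URI url, String oldEtag, String oldLastModified)
            throws java.io.IOException, InterruptedException {
        var request = HttpRequest.newBuilder(url)
                .header("If-None-Match", oldEtag)
                .header("If-Modified-Since", oldLastModified)
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() == 304) {
            return Optional.empty();   // unchanged: keep the previously crawled copy
        }
        return Optional.of(response.body());
    }
}
```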
## Sitemaps and RSS Feeds
On top of organic links, the crawler can use sitemaps and RSS feeds to discover new documents.
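As an illustration, the sketch below pulls the `<loc>` entries out of a plain `sitemap.xml` with the JDK's DOM parser; real-world sitemap handling also has to cope with sitemap index files, gzip compression and RSS/Atom feeds, which are omitted here.

```java
import java.io.InputStream;
import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.NodeList;

// Sketch: extract <loc> URLs from a plain (non-gzipped, non-index) sitemap.xml.
class SitemapSketch {
    static List<URI> extractUrls(InputStream sitemapXml) throws Exception {
        var doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(sitemapXml);

        NodeList locs = doc.getElementsByTagName("loc");
        List<URI> urls = new ArrayList<>(locs.getLength());
        for (int i = 0; i < locs.getLength(); i++) {
            urls.add(URI.create(locs.item(i).getTextContent().trim()));
        }
        return urls;
    }
}
```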
## Central Classes
- `CrawlerMain` orchestrates the crawling.
- `CrawlerRetreiver` visits known addresses from a domain and downloads each document.
- `HttpFetcher` fetches URLs.
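Conceptually the pieces fit together roughly as below; the interfaces and method signatures are hypothetical placeholders for illustration only, not the actual APIs of `CrawlerMain`, `CrawlerRetreiver` or `HttpFetcher`.

```java
// Conceptual wiring only: the names and signatures below are hypothetical
// placeholders, not the real APIs of the classes listed above.
interface HttpFetcherLike {
    String fetch(String url);                                   // download one URL
}

interface CrawlerRetreiverLike {
    void crawlDomain(String domain, HttpFetcherLike fetcher);   // visit known addresses, write snapshot
}

class CrawlerMainSketch {
    public static void main(String[] args) {
        HttpFetcherLike fetcher = url -> "";                    // stand-in fetcher
        CrawlerRetreiverLike retriever = (domain, f) -> { /* crawl and persist */ };

        // The orchestrator's role: iterate over the domains in the job
        // and hand each one to the retriever.
        for (String domain : args) {
            retriever.crawlDomain(domain, fetcher);
        }
    }
}
```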