Processes

1. Crawl Process

The crawling-process fetches website contents, temporarily saving them as WARC files, which are then converted into parquet models. Both formats are described in crawling-process/model.
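
For illustration, here is a rough sketch of the fetch step using java.net.http. The class name and flow are made up for this example; the real crawler additionally handles robots.txt, rate limiting, and redirects, and records full request/response pairs as WARC.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.file.Files;
    import java.nio.file.Path;

    // Illustrative fetch step; the real crawler also handles robots.txt,
    // rate limiting, redirects, and WARC recording.
    public class FetchSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://www.example.com/"))
                    .header("Accept", "text/html") // ask for HTML up front
                    .GET()
                    .build();

            HttpResponse<byte[]> response =
                    client.send(request, HttpResponse.BodyHandlers.ofByteArray());

            // Save the raw body to a temporary file; the real process
            // records the full exchange as WARC instead.
            Path out = Files.createTempFile("crawl-", ".html");
            Files.write(out, response.body());

            System.out.println(response.statusCode() + " -> " + out);
        }
    }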

2. Converting Process

The converting-process reads the crawl data from the crawling step, extracts keywords and metadata, and saves the results as parquet files described in converting-process/model.
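
Conceptually, the conversion step boils down to parsing the HTML and pulling out terms and metadata. A toy sketch using jsoup (assuming jsoup is on the classpath); the frequency count is a naive stand-in for the real term-extraction logic:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    import java.util.Arrays;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Naive stand-in for the converting step: parse HTML, pull out the
    // title and links, and count word frequencies as ersatz keywords.
    public class ConvertSketch {
        public static void main(String[] args) {
            String html = """
                    <html><head><title>Example</title></head>
                    <body><p>Search engines index keywords from pages.</p>
                    <a href="/about">About</a></body></html>
                    """;

            Document doc = Jsoup.parse(html, "https://www.example.com/");

            System.out.println("Title: " + doc.title());

            doc.select("a[href]").forEach(a ->
                    System.out.println("Link: " + a.attr("abs:href")));

            // Crude keyword frequency count over the visible text
            Map<String, Long> keywords = Arrays
                    .stream(doc.body().text().toLowerCase().split("\\W+"))
                    .filter(w -> w.length() > 3)
                    .collect(Collectors.groupingBy(w -> w, Collectors.counting()));

            System.out.println("Keywords: " + keywords);
        }
    }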

3. Loading Process

The loading-process reads the processed data.

It creates an index journal and a link database, and loads domains and domain-links into the MariaDB database.
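
A minimal sketch of the domain-loading step over JDBC. The connection string, table, and column names here are illustrative assumptions, not the actual schema:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.util.List;

    // Hypothetical domain-loading step; table name, columns, and
    // credentials are illustrative, not the actual schema.
    // Assumes the MariaDB JDBC driver is on the classpath.
    public class LoadSketch {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mariadb://localhost:3306/marginalia", "user", "password")) {

                String sql = "INSERT IGNORE INTO EC_DOMAIN (DOMAIN_NAME) VALUES (?)";

                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (String domain : List.of("example.com", "example.org")) {
                        ps.setString(1, domain);
                        ps.addBatch();
                    }
                    ps.executeBatch(); // batched inserts keep loading fast
                }
            }
        }
    }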

4. Index Construction Process

The index-construction-process constructs indices from the data generated by the loader.
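
At its core, an index maps each term to the documents that contain it. A toy in-memory version, purely to illustrate the idea; the real process builds compressed on-disk structures from the loader's journal:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.TreeSet;

    // Toy in-memory inverted index: term -> sorted set of document ids.
    // The real process builds compressed on-disk structures instead.
    public class IndexSketch {
        private final Map<String, TreeSet<Integer>> postings = new HashMap<>();

        public void add(int docId, List<String> terms) {
            for (String term : terms) {
                postings.computeIfAbsent(term, t -> new TreeSet<>()).add(docId);
            }
        }

        public TreeSet<Integer> lookup(String term) {
            return postings.getOrDefault(term, new TreeSet<>());
        }

        public static void main(String[] args) {
            IndexSketch index = new IndexSketch();
            index.add(1, List.of("search", "engine"));
            index.add(2, List.of("search", "index"));
            System.out.println(index.lookup("search")); // [1, 2]
        }
    }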

Overview

Schematically, the crawling and loading process looks like this:

    +-----------+  
    |  CRAWLING |  Fetch each URL and 
    |    STEP   |  output to file
    +-----------+
          |
    //========================\\
    ||  Parquet:              || Crawl
    ||  Status, HTML[], ...   || Files
    ||  Status, HTML[], ...   ||
    ||  Status, HTML[], ...   ||
    ||     ...                ||
    \\========================//
          |
    +------------+
    | CONVERTING |  Analyze HTML and 
    |    STEP    |  extract keywords,
    +------------+  features, links, URLs
          |
    //==================\\
    ||  Slop:           ||  Processed
    ||  Documents[]     ||  Files
    ||  Domains[]       ||
    ||  Links[]         ||  
    \\==================//
          |
    +------------+ Insert domains into mariadb
    |  LOADING   | Insert URLs, titles in link DB
    |    STEP    | Insert keywords in Index
    +------------+    
          |
    +------------+
    | CONSTRUCT  | Make the data searchable
    |   INDEX    | 
    +------------+