MarginaliaSearch/code/processes/crawling-process/model/test/nu/marginalia
Latest commit: b510b7feb8 by Viktor Lofgren (2024-12-15 15:49:47 +01:00)
Spike for storing crawl data in slop instead of parquet
This seems to reduce RAM overhead to hundreds of MB (from ~2 GB) and roughly doubles read speeds. On-disk size is virtually identical.
crawling/    (minor) Fix accidental commit errors    (2024-09-23 18:03:09 +02:00)
slop/        Spike for storing crawl data in slop instead of parquet    (2024-12-15 15:49:47 +01:00)