The pickle files used for caching the datagrepper queries add up to about 2.6 GB uncompressed, or 180 MB xz-compressed. It'd be nice to distribute these to people so they don't have to go through the long process of recreating them locally to get started. I'm open to ideas.
Some problems:
Some ideas:
What do you think?
I guess Git LFS (Large File Storage) could be used in this case
Possibly! Although as it's set up right now, the cache stores one file per query type per week, so there are thousands of smallish files (a few megabytes each) rather than one big one.
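If the thousands of per-week pickles ever need to be shipped as a single artifact, something along these lines could bundle them into one xz-compressed tarball. This is just a sketch: the `bundle_cache` helper, the `*.pickle` glob, and the file-naming scheme in the demo are assumptions, not the project's actual cache layout.

```python
import pathlib
import pickle
import tarfile
import tempfile

def bundle_cache(cache_dir: pathlib.Path, archive_path: pathlib.Path) -> int:
    """Pack every .pickle file under cache_dir into one xz-compressed
    tarball, preserving paths relative to the cache root. Returns the
    number of files bundled."""
    count = 0
    with tarfile.open(archive_path, mode="w:xz") as tar:
        for f in sorted(cache_dir.rglob("*.pickle")):
            tar.add(f, arcname=str(f.relative_to(cache_dir)))
            count += 1
    return count

# Demo with throwaway data standing in for the real query caches.
with tempfile.TemporaryDirectory() as tmp:
    cache = pathlib.Path(tmp) / "cache"
    cache.mkdir()
    for week in ("2023-W01", "2023-W02"):
        # Hypothetical file name; the real cache uses its own scheme.
        (cache / f"koji-builds-{week}.pickle").write_bytes(
            pickle.dumps({"week": week, "messages": []})
        )
    archive = pathlib.Path(tmp) / "cache.tar.xz"
    n = bundle_cache(cache, archive)
    print(n, tarfile.is_tarfile(archive))
```

Unpacking on the receiving end would then be a single `tar -xJf cache.tar.xz`, which sidesteps both the per-file overhead of Git LFS and the long local rebuild.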