Hi!
Great question! I don't crawl Reddit myself, but this applies to other large sites as well. Reddit has actually, as of right now, banned the IP range where I host my Yacy instance (Hetzner). I just looked it up in my index: I have 257k pages indexed from Reddit through a Teddit instance I used to run. That's from before the Reddit API enshittification; going to delete those right now.
Crawling works by defining a crawling depth, which limits how much content is crawled from a site:
- crawling depth 0 = only the page you point Yacy at, nothing more
- crawling depth 1 = that page plus every page it links to
- crawling depth 2 = all of the above, plus every page those crawled pages link to
- 3 ...
- n ...
... etc.
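The depth setting above boils down to a breadth-first crawl that stops following links past a cutoff. Here's a minimal sketch of that idea; the link graph is a hypothetical in-memory dict standing in for real pages and their outgoing links, not anything from Yacy's actual code:

```python
from collections import deque

def crawl(links, start, depth):
    """Breadth-first crawl with a depth limit: depth 0 = the start
    page only, depth 1 = the start page plus everything it links to,
    and so on."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        page, d = queue.popleft()
        if d == depth:
            continue  # don't follow links beyond the depth limit
        for target in links.get(page, []):
            if target not in seen:
                seen.add(target)
                queue.append((target, d + 1))
    return seen

# Toy graph: "a" links to "b" and "c"; "b" links to "d".
links = {"a": ["b", "c"], "b": ["d"]}
print(sorted(crawl(links, "a", 0)))  # ['a']
print(sorted(crawl(links, "a", 1)))  # ['a', 'b', 'c']
print(sorted(crawl(links, "a", 2)))  # ['a', 'b', 'c', 'd']
```

So bumping the depth by one can grow the crawl by a whole extra "ring" of links, which is why depth 2 on a big site pulls in so much more than depth 1.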
I had my Tampermonkey scripts set to a crawling depth of only 1 (just bumped them to 2, actually; kinda curious how much more I'll be crawling). I manually crawled some local news sites out of curiosity at the beginning. And my database is still relatively small, only around ~86.38 gigabytes according to Yacy, storing approximately 2.6 million documents in Yacy's Solr.
--
Yacy has tons of options for crawling, so you can customize how much it crawls and even filter out overly large sites by setting a maximum number of documents when you send Yacy there.
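You can also trigger a crawl over HTTP instead of the web UI. A word of caution: the parameter names below are my reading of the crawl-start form (Crawler_p.html) and may differ between Yacy versions, and the host and values are placeholders, so treat this as a sketch and verify against your own instance:

```python
# Sketch: build a crawl-start request for a local Yacy instance.
# Parameter names mirror the Crawler_p.html crawl-start form and may
# differ between Yacy versions -- check your own instance's form.
import urllib.parse

params = {
    "crawlingMode": "url",
    "crawlingURL": "https://example.com/",   # placeholder target
    "crawlingDepth": "1",                    # the depth setting discussed earlier
    "crawlingDomMaxCheck": "on",             # enable the per-domain page limit
    "crawlingDomMaxPages": "1000",           # cap documents from one domain
}
query = urllib.parse.urlencode(params)
url = f"http://localhost:8090/Crawler_p.html?{query}"
print(url)
# Send this with your HTTP client of choice; the crawler pages
# require authenticating as the Yacy admin user.
```

The per-domain page cap is the knob I'd reach for to keep one huge site from swallowing the whole crawl budget.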
[Large screenshot of Yacy's interface for starting a crawl.]
--
The Tampermonkey script I've been talking about in these posts is a very simple one: https://github.com/JeremyRand/YaCyIndexerGreasemonkey
Hit me up if you guys have more questions! I'm by no means an expert on Yacy, but I will do my best to answer.
WeeChat! I've been using WeeChat for like 13 years now and I love it. I used to use irssi back in the day, and mIRC before irssi existed.. Some BitchX experiments may have happened at some point during the shell boom too.
Its remote interface is the biggest selling point for me even today; it's amazing: WeeChat-Android.