Robots.txt is a lot like email in that it was built for a far simpler time.
It would be better if the server could detect bots and send them down a rabbit hole rather than trusting randos to abide by the rules.
It was built for the living, free internet.
For all its dark corners, it was better than what we have now.
> It would be better if the server could detect bots and send them down a rabbit hole
Already possible: Nepenthes.
> ANY SITE THIS SOFTWARE IS APPLIED TO WILL LIKELY DISAPPEAR FROM ALL SEARCH RESULTS.
I’m sold
Because of AI bots ignoring robots.txt (especially when you don't explicitly mention their user-agent and only use a * wildcard), more and more people are implementing exactly that, and I wouldn't be surprised if that is what triggered the need to implement robots.txt support for FediDB.
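For context, here's roughly how a well-behaved bot is supposed to consult robots.txt, and why naming a user-agent explicitly matters more than relying on the * wildcard. This is just my own sketch; the user-agent names and hostname are placeholders.

```python
# Sketch only: how a compliant bot checks robots.txt before fetching.
# The rules, bot names and URLs below are made-up examples.
from urllib.robotparser import RobotFileParser

rules = """
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Explicitly named bot: blocked everywhere (if it bothers to check at all).
print(parser.can_fetch("GPTBot", "https://example.social/users/alice"))       # False
# Unnamed bot falls back to the * group, which only blocks /private/.
print(parser.can_fetch("SomeOtherBot", "https://example.social/users/alice")) # True
print(parser.can_fetch("SomeOtherBot", "https://example.social/private/x"))   # False
```

Of course none of this helps against bots that simply never do the check, which is where the rabbit-hole/tarpit approach comes in.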
Forced to use https://lemmy.fediverse.observer/list to see which instances are the most active
This looks more accurate than FediDB, TBH. The initial surge from Reddit back in 2023, then the slow fall of active members. I personally think the number of users drops so much because certain instances turn off the ability for outside crawlers to get their user info.
lol FediDB isn't a crawler, though. It makes API calls.
They do have a dedicated "Crawler" page.
And they do mention there that they use a website crawler for their Developer Tools and Network features.
Maybe the definition of the term "crawler" has changed, but crawling used to mean downloading a web page, parsing the links, and then downloading all those links, parsing those pages, and so on until the whole site has been downloaded. If links to other sites are found in that corpus, the same process repeats for those. Obviously this could cause heavy load, hence robots.txt. (There's a rough sketch of that loop below.)
Fedidb isn't doing anything like that so I'm a bit bemused by this whole thing.
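To spell out what I mean by the classic crawl loop, here's a rough sketch of my own (not FediDB's actual code; the start URL is a placeholder):

```python
# Sketch of the classic "crawler" behaviour robots.txt was invented for:
# fetch a page, extract its links, then fetch everything those links point at.
from urllib.request import urlopen
from urllib.parse import urljoin
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, limit=100):
    seen, queue = set(), [start_url]
    while queue and len(seen) < limit:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = urlopen(url, timeout=10).read().decode("utf-8", errors="replace")
        except OSError:
            continue
        extractor = LinkExtractor()
        extractor.feed(html)
        # Every discovered link gets queued in turn -- this fan-out is what
        # generates the load robots.txt was meant to keep in check.
        queue.extend(urljoin(url, link) for link in extractor.links)
    return seen

# crawl("https://example.social/")  # placeholder instance
```

A daily nodeinfo lookup doesn't look anything like this.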
Did someone complain? Or why stop?
No idea, honestly. If anyone knows, let us know! I don't think it's necessarily a bad thing. If their crawler was being too aggressive, it could accidentally DDoS smaller servers. I'm hoping that is what they are doing and they're respecting the robots.txt that some sites have.
Gotosocial has a setting in development that is designed to baffle bots that don't respect robots.txt. FediDB didn't know about that feature and thought gotosocial was trying to inflate their stats.
In the arguments that went back and forth between the devs of the apps involved, it turned out that FediDB was ignoring robots.txt, i.e. it was badly behaved.
Interesting! Is this over a Git issue somewhere? That could explain quite a bit.
Yep!
I think it's just one HTTP request to the nodeinfo API endpoint once a day or so. Can't really be an issue regarding load on the instances.
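For reference, a single nodeinfo lookup is something like this (my own sketch; the hostname is a placeholder, and the well-known document just points at the actual stats document):

```python
# Sketch of a one-off nodeinfo lookup: two small GET requests per instance.
import json
from urllib.request import urlopen

def fetch_nodeinfo(host):
    # Discovery document listing the available nodeinfo schema versions.
    well_known = json.load(urlopen(f"https://{host}/.well-known/nodeinfo", timeout=10))
    href = well_known["links"][0]["href"]
    # The document itself carries software name/version and usage counts.
    return json.load(urlopen(href, timeout=10))

# info = fetch_nodeinfo("example.social")  # placeholder hostname
# print(info["software"], info["usage"])
```

Doing that once a day per instance is negligible load.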
It's not about the impact it's about consent.
True. Question here is: if you run a federated service... Is that enough to assume you consent to federation? I'd say yes. And those Mastodon crawlers and statistics pages are part of the broader ecosystem of the Fediverse. But yeah, we can disagree here. It's now going to get solved technically.
I still wonder what these mentioned scrapers and crawlers do. And the reasoning for people wanting to be part of the Fediverse but at the same time not a public part of it in another sense... But I guess they do other things on GoToSocial than I do here on Lemmy.
Why invent implied consent when explicit consent has been the standard in robots.txt for ages now?
Legally speaking there's nothing they can do. But this is about consent, not legality. So why use implied?
I guess because it's in the specification? Or absent from it? But I'm not sure. Reading the ActivityPub specification is complicated, because you also need to read ActivityStreams and lots of other references. And I frequently miss stuff that is somehow in there.
But generally we aren't Reddit, where someone just says: no, we prohibit third-party use and everyone needs to use our app by our standards. The whole point of the Fediverse and ActivityPub is to interconnect. And to connect people across platforms. And it doesn't even make lots of assumptions. The developers aren't forced to implement a Facebook clone, or do something like Mastodon or GoToSocial does or likes. They're relatively free to come up with new ideas and adapt things to their liking and use cases. That's what makes us great and diverse.
I -personally- see a public API endpoint as an invitation to use it. And that's kind of opposed to the consent thing. But I mean, why publish something in the first place, unless it comes with consent?
But with that said... We need some consensus in some areas. There are use cases where things aren't obvious from the start. I'm just sad that everyone is so agitated and seems to just escalate. I'm not sure if they tried talking to each other nicely. I suppose it's not a big deal to just implement the robots.txt handling and everyone can be happy, without it needing some drama to get there.
Robots.txt started in 1994.
It's been a consensus for decades.
Why throw it out and replace it with implied consent to scrape?
That's why I said legally there's nothing they can do. If people want to scrape it they can and will.
This is strictly about consent. Just because you can doesn't mean you should yes?
I guess I haven't read a convincing argument yet why robots.txt should be ignored.
> It's been a consensus for decades
Let's see about that.
Wikipedia lists http://www.robotstxt.org/ as the official homepage of robots.txt and the "Robots Exclusion Protocol". In the FAQ at http://www.robotstxt.org/faq.html the first entry is "What is a WWW robot?" http://www.robotstxt.org/faq/what.html. It says:
> A robot is a program that automatically traverses the Web's hypertext structure by retrieving a document, and recursively retrieving all documents that are referenced.
That's not FediDB. That's not even nodeinfo.
I just think you're making it way more simple than it is... Why not implement 20 other standards that have been around for 30 years? Why not make software perfect and without issues? Why not anticipate what other people will do with your public API endpoints in the future? Why not all have the same opinions?
There could be many reasons. They forgot, they didn't bother, they didn't consider themselves to be the same as a commercial Google or Yandex crawler... That's why I keep pushing for information and refuse to give a simple answer. Could be an honest mistake. Could be honest and correct to do it and the other side is wrong, since it's not a crawler like Google or the AI copyright thieves... Could be done maliciously. In my opinion, it's likely that it just hadn't been an issue before, the situation changed, and now it is. And we're getting a solution after some pushing. It seems at least FediDB took it offline and they're working on robots.txt support. They did not refuse to do it. So it's fine. And I can't comment on why it hadn't been in place; I'm not involved with that project or the history of its development.
And keep in mind, Fediverse discoverability tools aren't the same as a content-stealing bot. They're there to aid the users, and part of the platform in the broader picture. Mastodon, for example, isn't very useful unless it provides a few additional tools so you can actually find people and connect with them. So it'd be wrong to just apply the exact same standards to it as to some AI training crawler or Google. There is a lot of nuance to it. And did people in 1994 anticipate our current world and provide robots.txt with the nuanced distinctions so it's just straightforward and easy to implement? I think we agree that it's wrong to violate the other users' demands/wishes now that they're well known. Other than that, I just think it's not very clear who's at fault here, if anyone.
Plus, I'd argue it isn't even clear whether robots.txt applies to a statistics page. Or a part of a microblogging platform. Those certainly don't crawl any content. Or it's part of what the platform is designed to do. The term "crawler" isn't well defined in RFC 9309. Maybe it's debatable whether that even applies.
You can consent to a federation interface without consenting to having a bot crawl all your endpoints.
Just because something is available on the internet it doesn't mean all uses are legitimate - this is effectively the same problem as AI training with stolen content.
Yes. I wholeheartedly agree. Not every use is legitimate. But I'd really need to know what exactly happened and the whole story to judge here. I'd say if it were a proper crawler, they'd need to read the robots.txt. That's accepted consensus. But is that what happened here?
And I mean the whole thing with consensus and arbitrary use cases is just complicated. I have a website, and a Fediverse instance. Now you visit it. Is this legitimate? We'd need to factor in why I put it there, and what you're doing with that information. If it's my blog, it's obviously there for you to read it... Or is it...!? Would you call me and ask for permission before reading it? ...That is implied consent. I'd argue this is how the internet works. At least generally speaking. And most of the time it's super easy to tell what's right and what is wrong. But sometimes it isn't.
It's too bad, too, with the recent Reddit activity.