Developer of PieFed, a sibling of Lemmy & Mbin.

  • 135 Posts
  • 861 Comments
Joined 1 year ago
Cake day: January 4th, 2024

  • Maybe the definition of the term “crawler” has changed, but crawling used to mean downloading a web page, parsing out its links, then downloading all of those linked pages, parsing those, and so on until the whole site had been downloaded. If links to other sites were found in that corpus, the same process would repeat for those sites. Obviously this could cause heavy load, hence robots.txt. (A rough sketch of that recursive process is below.)

    Fedidb isn’t doing anything like that so I’m a bit bemused by this whole thing.
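
    To make the distinction concrete, here is a minimal sketch of the kind of recursive crawl described above: check robots.txt, fetch a page, collect its links, and repeat for every unseen link on the same site. The start URL, user agent, and page limit are illustrative assumptions, not anything FediDB actually does.

```python
import urllib.robotparser
from urllib.parse import urljoin, urlparse
from urllib.request import Request, urlopen
from html.parser import HTMLParser


class LinkParser(HTMLParser):
    """Collects every href found in <a> tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=100, user_agent="example-crawler"):
    origin = urlparse(start_url).netloc

    # Honour robots.txt before fetching anything -- the whole point of the file.
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(urljoin(start_url, "/robots.txt"))
    robots.read()

    seen, queue = set(), [start_url]
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen or not robots.can_fetch(user_agent, url):
            continue
        seen.add(url)

        req = Request(url, headers={"User-Agent": user_agent})
        with urlopen(req) as resp:
            html = resp.read().decode("utf-8", errors="replace")

        # Parse out every link and queue the ones on the same site,
        # so the process repeats until the whole site is covered.
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            if urlparse(absolute).netloc == origin:
                queue.append(absolute)

    return seen
```

    Fetching a single well-known endpoint per instance, by contrast, involves none of this link-following, which is why the “crawler” label doesn’t really fit.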