As far as I know, the website doesn’t have an API, so I just download the HTML and format the result with a simple Python script. Each run makes around 10 to 20 requests, one for each series I’m following.
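Roughly, the script is just a loop like this. A minimal sketch, assuming a made-up URL pattern, CSS selector, and series names; adjust all of those to whatever site you’re actually checking:

```python
import time
import requests
from bs4 import BeautifulSoup

SERIES = ["series-one", "series-two"]  # slugs of the series you follow

for slug in SERIES:
    # One request per series; the URL pattern is a placeholder.
    resp = requests.get(
        f"https://example.com/series/{slug}",
        headers={"User-Agent": "episode-checker (personal use)"},
        timeout=10,
    )
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    latest = soup.select_one(".episode-title")  # hypothetical selector
    print(slug, latest.get_text(strip=True) if latest else "not found")
    time.sleep(2)  # pause between requests to keep the load low
```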
You can use the cache feature in curl/wget so it doesn’t download the same CSS or HTML twice. You can also skip JavaScript and image files to save on unnecessary requests.
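With plain curl/wget that’s `curl -z <file>` or `wget -N`; the same idea in Python is a conditional request, where you send back the `ETag`/`Last-Modified` values from the previous fetch and the server replies 304 with no body if nothing changed. A rough sketch (the cache file and URL are placeholders):

```python
import json
import pathlib
import requests

CACHE = pathlib.Path("cache.json")
url = "https://example.com/series/series-one"  # placeholder URL

cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
headers = {}
if etag := cache.get("etag"):
    headers["If-None-Match"] = etag
if modified := cache.get("last_modified"):
    headers["If-Modified-Since"] = modified

resp = requests.get(url, headers=headers, timeout=10)
if resp.status_code == 304:
    print("unchanged, nothing downloaded")
else:
    resp.raise_for_status()
    # Remember the validators so the next run can ask conditionally.
    CACHE.write_text(json.dumps({
        "etag": resp.headers.get("ETag"),
        "last_modified": resp.headers.get("Last-Modified"),
    }))
    print("fetched", len(resp.content), "bytes")
```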
I would reduce the frequency to once every two days to further reduce the impact.
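If the script runs from a scheduler more often than that, one simple way to enforce the cadence is a stamp file; a sketch, with the file path made up:

```python
import pathlib
import sys
import time

STAMP = pathlib.Path("last_run.stamp")
TWO_DAYS = 2 * 24 * 60 * 60

# Bail out if the last successful run was less than two days ago.
if STAMP.exists() and time.time() - STAMP.stat().st_mtime < TWO_DAYS:
    sys.exit("ran recently, skipping")
STAMP.touch()
# ... scraping logic goes here
```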
That may or may not be much; it depends on the site, I’d say.
For example, if it’s something like Netflix, I wouldn’t think twice, because they have the means to serve the requests.
But for some PeerTube instances, even a single request seems to be too much for them. So if such a server doesn’t respond to my request, I usually wait an hour or so before refreshing the page.
If the site gets slow at times (regardless of whether that’s when you scrape), you might not want to scrape it at all.
Downloading the whole site is probably not a good idea, but that too depends on the site.