How to be polite and slow down requests

Web scraping in R

Timo Grossenbacher

Instructor

Don't do this at home!

library(httr)
while(TRUE){
  print(Sys.time())
  response <- 
    GET("https://httpbin.org")
  print(status_code(response))
}
[1] "2020-06-20 10:31:17 CEST"
[1] 200
[1] "2020-06-20 10:31:17 CEST"
[1] 200
[1] "2020-06-20 10:31:17 CEST"
[1] 200
[1] "2020-06-20 10:31:17 CEST"
[1] 200
[1] "2020-06-20 10:31:17 CEST"
[1] 200
[1] "2020-06-20 10:31:18 CEST"
[1] 200
...

A better way to request data from websites

while(TRUE){
  # Wait one second
  # ...
  print(Sys.time())
  response <- 
    GET("https://httpbin.org")
  print(status_code(response))
}
[1] "2020-06-20 10:36:06 CEST"
[1] 200
[1] "2020-06-20 10:36:07 CEST"
[1] 200
[1] "2020-06-20 10:36:08 CEST"
[1] 200
[1] "2020-06-20 10:36:09 CEST"
[1] 200
[1] "2020-06-20 10:36:10 CEST"
[1] 200
[1] "2020-06-20 10:36:11 CEST"
[1] 200
...
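The wait elided with `# ...` above can be implemented with base R's `Sys.sleep()`. A minimal sketch, assuming a one-second pause is the intended delay (the timestamps in the output are one second apart):

```r
library(httr)

while (TRUE) {
  # Pause for one second before each request
  Sys.sleep(1)
  print(Sys.time())
  response <- GET("https://httpbin.org")
  print(status_code(response))
}
```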

A tidy approach to throttling

Throttling a function = inserting a delay between calls

library(httr)
library(purrr)
throttled_GET <- slowly(
  ~ GET("https://httpbin.org"),
  rate = rate_delay(3)
)

while(TRUE){
  print(Sys.time())
  response <- throttled_GET()
  print(status_code(response))
}
[1] "2020-06-20 10:53:44 CEST"
[1] 200
[1] "2020-06-20 10:53:47 CEST"
[1] 200
[1] "2020-06-20 10:53:50 CEST"
[1] 200
[1] "2020-06-20 10:53:53 CEST"
[1] 200
[1] "2020-06-20 10:53:56 CEST"
[1] 200
...

Querying custom URLs in a throttled function

library(httr)
library(purrr)
throttled_GET <-
    # instead of GET("https://...")
    slowly(~ GET(.), rate = rate_delay(3))

while(TRUE){
  print(Sys.time())
  response <- throttled_GET("https://wikipedia.org")
  print(status_code(response))
}
[1] "2020-06-20 10:53:44 CEST"
[1] 200
[1] "2020-06-20 10:53:47 CEST"
[1] 200
[1] "2020-06-20 10:53:50 CEST"
[1] 200
[1] "2020-06-20 10:53:53 CEST"
[1] 200
[1] "2020-06-20 10:53:56 CEST"
[1] 200
...

Iterating over a list of URLs

library(httr)
url_list <- c("https://httpbin.org/anything/1",
              "https://httpbin.org/anything/2",
              "https://httpbin.org/anything/3")

for(url in url_list){
  response <- throttled_GET(url)
  print(status_code(response))
}       
[1] 200
[1] 200
[1] 200
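The for loop above can also be written in tidy style with purrr's `map()`, which applies the throttled function to each URL and collects the responses in a list. A minimal sketch, assuming `throttled_GET` as defined earlier:

```r
library(httr)
library(purrr)

# A throttled GET: at most one request every three seconds
throttled_GET <- slowly(~ GET(.), rate = rate_delay(3))

url_list <- c("https://httpbin.org/anything/1",
              "https://httpbin.org/anything/2",
              "https://httpbin.org/anything/3")

# Apply the throttled function to each URL; responses is a list
responses <- map(url_list, throttled_GET)

# Extract the status code of every response as an integer vector
map_int(responses, status_code)
```

Because `slowly()` wraps the delay into the function itself, the same `throttled_GET` stays polite no matter which iteration construct calls it.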
library(httr)
url_list <- c("https://wikipedia.org/wiki/K2",
              "https://wikipedia.org/wiki/Mount_Everest")

for(url in url_list){
  response <- throttled_GET(url)
  print(status_code(response))
}
[1] 200
[1] 200
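In a real scraping task you would usually go on to parse each response body, not just check the status code. A hypothetical sketch using the rvest and xml2 packages (not shown in the slides) to extract each page's title inside the throttled loop:

```r
library(httr)
library(purrr)
library(rvest)

# A throttled GET, as defined earlier
throttled_GET <- slowly(~ GET(.), rate = rate_delay(3))

url_list <- c("https://wikipedia.org/wiki/K2",
              "https://wikipedia.org/wiki/Mount_Everest")

for (url in url_list) {
  response <- throttled_GET(url)
  # Parse the HTML body and pull out the <title> element
  title <- response %>%
    read_html() %>%
    html_element("title") %>%
    html_text()
  print(title)
}
```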

Let's apply this to a real example!
