How to make your requests friendlier and slow them down

Web Scraping in R

Timo Grossenbacher

Instructor

Don't try this at home!

library(httr)
while(TRUE){
  print(Sys.time())
  response <- 
    GET("https://httpbin.org")
  print(status_code(response))
}
[1] "2020-06-20 10:31:17 CEST"
[1] 200
[1] "2020-06-20 10:31:17 CEST"
[1] 200
[1] "2020-06-20 10:31:17 CEST"
[1] 200
[1] "2020-06-20 10:31:17 CEST"
[1] 200
[1] "2020-06-20 10:31:17 CEST"
[1] 200
[1] "2020-06-20 10:31:18 CEST"
[1] 200
...

A friendlier way to request data from websites

while(TRUE){
  # Wait one second between requests
  Sys.sleep(1)
  print(Sys.time())
  response <- 
    GET("https://httpbin.org")
  print(status_code(response))
}
[1] "2020-06-20 10:36:06 CEST"
[1] 200
[1] "2020-06-20 10:36:07 CEST"
[1] 200
[1] "2020-06-20 10:36:08 CEST"
[1] 200
[1] "2020-06-20 10:36:09 CEST"
[1] 200
[1] "2020-06-20 10:36:10 CEST"
[1] 200
[1] "2020-06-20 10:36:11 CEST"
[1] 200
...

A tidy approach to throttling

Throttling a function = adding a time delay between calls

library(httr)
library(purrr)
throttled_GET <- slowly(
  ~ GET("https://httpbin.org"),
  rate = rate_delay(3))

while(TRUE){
  print(Sys.time())
  response <- throttled_GET()
  print(status_code(response))
}
[1] "2020-06-20 10:53:44 CEST"
[1] 200
[1] "2020-06-20 10:53:47 CEST"
[1] 200
[1] "2020-06-20 10:53:50 CEST"
[1] 200
[1] "2020-06-20 10:53:53 CEST"
[1] 200
[1] "2020-06-20 10:53:56 CEST"
[1] 200
...

Requesting custom URLs with a throttled function

library(httr)
library(purrr)
throttled_GET <-
    # instead of GET("https://...")
    slowly(~ GET(.), rate = rate_delay(3))

while(TRUE){
  print(Sys.time())
  response <- throttled_GET("https://wikipedia.org")
  print(status_code(response))
}
[1] "2020-06-20 10:53:44 CEST"
[1] 200
[1] "2020-06-20 10:53:47 CEST"
[1] 200
[1] "2020-06-20 10:53:50 CEST"
[1] 200
[1] "2020-06-20 10:53:53 CEST"
[1] 200
[1] "2020-06-20 10:53:56 CEST"
[1] 200
...

Looping over a list of URLs

library(httr)
url_list <- c("https://httpbin.org/anything/1",
              "https://httpbin.org/anything/2",
              "https://httpbin.org/anything/3")

for(url in url_list){
  response <- throttled_GET(url)
  print(status_code(response))
}       
[1] 200
[1] 200
[1] 200
library(httr)
url_list <- c("https://wikipedia.org/wiki/K2",
              "https://wikipedia.org/wiki/Mount_Everest")

for(url in url_list){
  response <- throttled_GET(url)
  print(status_code(response))
}
[1] 200
[1] 200
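As a side note, slowly() works with any function, not just GET(). An illustrative sketch (not from the slides) that throttles a pure function, so the effect is observable without any network access; the function name throttled_square is made up for the example:

```r
library(purrr)

# Wrap a plain function so successive calls are spaced
# roughly one second apart
throttled_square <- slowly(~ .x^2, rate = rate_delay(1))

t0 <- Sys.time()
results <- map_dbl(1:3, throttled_square)
elapsed <- as.numeric(difftime(Sys.time(), t0, units = "secs"))

print(results)   # 1 4 9
print(elapsed)   # roughly 2 seconds: the calls were spaced out
```

The same map_dbl() (or map()) pattern replaces the for loop above: map(url_list, throttled_GET) requests each URL with the delay applied automatically.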

Let's apply this to a real example!
