---
date: '2018-09-09T00:00:00+08:00'
draft: no
linktitle: Chapter 6 Scraping
menu:
  r-programming:
    parent: Technical Analysis with R
toc: no
type: docs
---

# Scraping

We will cover two common ways to extract data that does not come directly from a database:

  1. PDF files, and
  2. HTML files.

## PDF files

We will cover how to scrape data in three settings:

  1. extracting from PDF files stored offline,
  2. downloading a PDF file and extracting, and
  3. mass downloading and extracting.

### Offline PDF file

We need to install and load the pdftools package to do the extraction.

```r
install.packages("pdftools")
library(pdftools)
```

To read a PDF as text, use pdf_text().

```r
txt <- pdf_text("path/file.pdf")
```

Then we can extract a particular page.

```r
test <- txt[49]  # page 49
```

This page of the PDF contains a table. To split the text into rows, we use the function scan().

```r
rows <- scan(textConnection(test),
             what = "character", sep = "\n")
```
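The scan() call can be tried on any in-memory string; this toy example (the text is made up) shows how sep = "\n" splits multi-line text into one element per row:

```r
# Toy example: split a multi-line string into rows, one per line
txt <- "HEADER\nrow one\nrow two"
rows <- scan(textConnection(txt), what = "character", sep = "\n")
length(rows)  # 3
rows[2]       # "row one"
```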

Then we can split a row into cells, using wide runs of whitespace as the delimiter.

```r
row <- unlist(strsplit(rows[1], " \\s+ "))
```
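To see what the pattern " \\s+ " does, here is a made-up table row: single spaces inside a name survive, while the wide gaps between columns (three or more whitespace characters) are treated as delimiters:

```r
# Made-up row for illustration; wide gaps separate the columns
row_text <- "1   Ang Mo Kio   174,770"
cells <- unlist(strsplit(row_text, " \\s+ "))
cells  # "1"  "Ang Mo Kio"  "174,770"
```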

### Online PDF file

First we download a PDF file from the web using the function download.file(). We then import the PDF, extract page 49, which contains a table, and use scan() to separate the text into rows.

Next we loop over the rows (starting from row 7) with the following operations:

  1. split each row on runs of whitespace (\\s+) using strsplit(),
  2. unlist the result to make it a vector, and
  3. store the second cell (the name) and the third cell (the total) if the third cell is not empty.

```r
link <- paste0(
  "http://www.singstat.gov.sg/docs/",
  "default-source/default-document-library/",
  "publications/publications_and_papers/",
  "cop2010/census_2010_release3/",
  "cop2010sr3.pdf")
download.file(link, "census2010_3.pdf", mode = "wb")

txt <- pdf_text("census2010_3.pdf")
test <- txt[49]  # page 49
rows <- scan(textConnection(test), what = "character",
             sep = "\n")

name  <- c()
total <- c()

for (i in 7:length(rows)) {
  row <- unlist(strsplit(rows[i], " \\s+ "))
  if (!is.na(row[3])) {
    name  <- c(name, row[2])
    total <- c(total,
               as.numeric(gsub(",", "", row[3])))
  }
}
```

## Scraping through massive download

We will use the RCurl package to download a large number of CSV files. Very often we need to download many CSV files from a website, and luckily such files are usually stored under structured URL paths.

For example, suppose we want to download all the historical weather data of the Singapore airport. We go to the website http://www.weather.gov.sg/climate-historical-daily/. At the bottom of the page we can see that the download link for a CSV file looks like http://www.weather.gov.sg/files/dailydata/DAILYDATA_S24_201712.csv.

Hence, we will use getURL() to fetch the file and then textConnection() to read the CSV content directly.

```r
install.packages("RCurl")
library(RCurl)
link <- paste0("http://www.weather.gov.sg/files/",
               "dailydata/DAILYDATA_S24_201712.csv")
x  <- getURL(link)
df <- read.csv(textConnection(x))
```
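The textConnection() step can be tested without any download: it lets read.csv() parse CSV text that is already in memory (the data below is made up):

```r
# Parse an in-memory CSV string; no file or download needed
x <- "station,rainfall\nS24,0.2\nS24,1.4"
df <- read.csv(textConnection(x))
nrow(df)        # 2
df$rainfall[2]  # 1.4
```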

However, very often we want more than one month of data, so we use a loop. By guessing and checking, we know that S24 refers to Changi airport, 2017 is the year, and 12 is December. To download a whole year of data, we download all 12 monthly files, changing the link dynamically in each round and appending the data:

```r
site <- "http://www.weather.gov.sg/files/dailydata/"
months <- c("01", "02", "03", "04", "05", "06",
            "07", "08", "09", "10", "11", "12")
df <- data.frame()
for (month in months) {
  filename <- paste0("DAILYDATA_S24_2017", month, ".csv")
  link <- paste0(site, filename)
  x <- getURL(link)
  temp <- read.csv(textConnection(x))
  df <- rbind(df, temp)
}
```

Alternatively, we can download each month as a separate CSV file into a single folder and then combine all the CSV files at the end. This is particularly useful when the CSV files are huge.

The following code first downloads all the CSV files into a temp folder and then combines every CSV file in that folder. To combine them, we obtain the paths of all files using list.files(), where the option full.names is set to TRUE so that the directory path is included. Then we build a list of tables by applying the import function fread() with lapply(). Finally, we use rbindlist() to combine all the data in the list.

```r
site <- "http://www.weather.gov.sg/files/dailydata/"
months <- c("01", "02", "03", "04", "05", "06",
            "07", "08", "09", "10", "11", "12")
# Download data
dir.create("./temp", showWarnings = FALSE)
for (month in months) {
  filename <- paste0("DAILYDATA_S24_2017", month, ".csv")
  link <- paste0(site, filename)
  x <- getURL(link)
  temp <- read.csv(textConnection(x))
  write.csv(temp, paste0("./temp/", filename),
            row.names = FALSE)
}
# Combine data
library(data.table)
folder <- "./temp/"
csv.list <- list.files(path = folder, pattern = "\\.csv$",
                       full.names = TRUE)
lst <- lapply(csv.list, fread)
df <- rbindlist(lst, fill = TRUE)
```
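The fill = TRUE option matters when the monthly files do not all share exactly the same columns: missing columns are padded with NA instead of raising an error. A toy sketch (assuming the data.table package is installed):

```r
library(data.table)
a <- data.table(day = 1, rain = 0.2)
b <- data.table(day = 2)               # no 'rain' column
combined <- rbindlist(list(a, b), fill = TRUE)
combined$rain  # 0.2 NA
```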

## Scraping from the Web

We will use the rvest package to scrape directly from the web. To know which elements to extract, it helps to use a small helper tool: the SelectorGadget extension for the Chrome browser.

Search for SelectorGadget online to download and install the extension. It is easy to use: the first click selects an area, and each subsequent click includes or excludes elements.

To install and load the rvest package, we use the following code:

```r
install.packages("rvest")
library(rvest)
```

### Wikipedia Table

We will do two scraping exercises:

  1. scrape from a Wikipedia table, and
  2. scrape from an unfriendly website.

The following code extracts the Student's t-distribution table from Wikipedia. Using SelectorGadget, we can see that the table has the CSS class .wikitable. We extract it using html_nodes() and then parse the HTML table into a data frame using html_table().

```r
link <- paste0("https://en.wikipedia.org/wiki/",
               "Student%27s_t-distribution")
webpage <- read_html(link)
data <- html_nodes(webpage, ".wikitable")
table <- html_table(data[[1]], header = FALSE)
```
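The html_nodes()/html_table() pair can be tried on a small inline HTML snippet, since read_html() also accepts a string; the tiny table below is made up for illustration:

```r
library(rvest)
# Parse a made-up HTML table from an in-memory string
html <- read_html(
  "<table class='wikitable'>
     <tr><th>df</th><th>t</th></tr>
     <tr><td>1</td><td>6.31</td></tr>
   </table>")
tab <- html_table(html_nodes(html, ".wikitable")[[1]])
tab
```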

### Other Websites

To scrape unstructured data, we first need to find the right selector using SelectorGadget. Then we can read the matched nodes as text.

```r
link <- paste0("http://www.fas.nus.edu.sg/ecs/",
               "people/staff.html")
webpage <- read_html(link)
data <- html_nodes(webpage, "br+ table td")
content <- html_text(data)
```

Then we can reshape the text vector into a data frame.

```r
df <- data.frame(matrix(content, ncol = 5, byrow = TRUE),
                 stringsAsFactors = FALSE)
colnames(df) <- df[1, ]
df[-1, ]
##                           Title                       Name       Tel
## 2                       Manager    Ms PAK Ming Foon, Ginny 6516 3956
## 3             Assistant Manager          Ms WOON Swee Yoke 6516 6027
## 4                       Manager              Ms Nicky KHEH 6516 4878
## 5             Assistant Manager                 Ms LI Jing 6516 8909
## 6             Assistant Manager            Ms NEO Seok Min 6516 3941
## 7  Management Assistant Officer           Ms CHEE Lee Kuen 6516 3942
## 8  Management Assistant Officer Ms Fatimah AHMAD\r\n\t\t\t\t   6516 3950
## 9  Management Assistant Officer           Ms Salinah ZUBER 6516 3958
## 10 Management Assistant Officer            Ms Diana ISMAIL 6516 6013
## 11 Management Assistant Officer          Mdm TAN Leng Choo 6516 1304
##      Email                        Main Area
## 2  ecspmfg                    Undergraduate
## 3   ecswsy                    Undergraduate
## 4   ecsklc                         Graduate
## 5   ecslij   Graduate (Master of Economics)
## 6   ecssec        Head's Personal Assistant
## 7   ecsclk                      Timetabling
## 8    ecsfa Undergraduate (levels 1000-2000)
## 9    ecssz Undergraduate (levels 3000-4000)
## 10   ecsdi            Graduate (Coursework)
## 11  ecstlc              Graduate (Research)
row.names(df) <- NULL
head(df[2:3], n = 3)
##                      Name       Tel
## 1                    Name       Tel
## 2 Ms PAK Ming Foon, Ginny 6516 3956
## 3       Ms WOON Swee Yoke 6516 6027
```