
## Implementing a model as an R package

In our research group we often have people creating statistical models that end up in publications but, most of the time, the practical implementation of those models is lacking. I mean, we have a bunch of barely functioning code that is very difficult to use reliably in the operations of the breeding programs. I was very keen on continuing to use one of the models in our research, enough so to rewrite and document the model fitting, and then create another package for using the model in operations.

Unfortunately, neither the data nor the model are mine to give away, so I can’t share them (yet). But I hope these notes will help you if you are in the same boat and need to use your models (or ‘you’ are in fact future me, who tends to forget how or why I wrote code in a specific way).

## A basic motivational example

Let’s start with a simple example: linear regression. We want to predict a response using a predictor variable, and then we can predict the response for new values of the predictor contained in new_data with:

my_model <- lm(response ~ predictor, data = my_data)
predictions <- predict(my_model, newdata = new_data)

# Saving the model object
save(my_model, file = 'model_file.Rda')


The model coefficients needed to predict new values are stored in the my_model object. If we want to use the model elsewhere, we can save the object as an .Rda file, in this case model_file.Rda.

We can later read the model file in, say, a different project and get new predictions using:

load('model_file.Rda')
more_predictions <- predict(my_model, newdata = yet_another_new_data)
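
By the way, saveRDS()/readRDS() would do a similar job and let us choose the object name when reading the model back, which some people find less surprising than load(); a minimal sketch:

saveRDS(my_model, 'model_file.rds')

# Later, possibly in another project, under whatever name we like
my_model_again <- readRDS('model_file.rds')
more_predictions <- predict(my_model_again, newdata = yet_another_new_data)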


## Near-infrared Spectroscopy

Near-infrared spectroscopy is the stuff of CSI and other crime shows. We measure the reflection at different wavelengths and run a regression analysis using what we want to predict as Y in the model. The number of predictors (wavelengths) is much larger than in the previous example—1,296 for the NIR machine we are using—so it is not unusual to have more predictors than observations. NIR spectra are often modeled using plsr() (from the pls package) with help from functions from the prospectr package.
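
Just to fix ideas, a minimal sketch of what fitting such a model could look like (wood_density, spectra and nir_data are made-up names for illustration, not the actual variables in our work):

library(pls)

# Partial least squares regression with cross-validation; the response is
# predicted from a matrix of reflectance values (one column per wavelength)
nir_model <- plsr(wood_density ~ spectra, ncomp = 10,
                  data = nir_data, validation = 'CV')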

I could still use the save/load approach from the motivational example to store and reuse the model object created with pls but, instead, I wanted to implement the model, plus some auxiliary functions, in a package to make the functions easier to use in our lab.

I had two issues/struggles/learning opportunities that I needed to sort out to get this package working:

### 1. How to automatically load the model object when attaching the package?

Normally, datasets and other objects go in the data folder, where they are made available to the user. Instead, I wanted to make the object internally available. The solution turned out to be quite straightforward: save the model object to a file called sysdata.rda in the R folder of the package. This file is automatically loaded when we run library(package_name). We just need to create that file with something like:

# From the root of the package source
save(my_model, file = 'R/sysdata.rda')
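
If you use the usethis package, usethis::use_data() can create that file for you (an alternative, not what I originally did):

# Run from within the package project; internal = TRUE writes R/sysdata.rda
usethis::use_data(my_model, internal = TRUE)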


### 2. How to make predict.pls work in the package?

I was struggling to use the predict function, as in my head it was being provided by the pls package. However, pls is only extending the predict generic, which comes with the default R installation but is part of the stats package. In the end, I sorted it out with the following Imports, Depends and LazyData in the DESCRIPTION file:

Imports: prospectr,
    stats
Depends: pls
Encoding: UTF-8
LazyData: true


Now it is possible to use predict; just remember to specify the package it comes from, as in:

stats::predict(my_model, ncomp = n_components,
               newdata = spectra, interval = 'confidence')
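
Inside the package, that call typically ends up wrapped in a small user-facing function. A rough sketch of the idea (predict_composition and n_components are hypothetical names; my_model is the object stored in sysdata.rda):

predict_composition <- function(spectra, n_components = 10) {
  stats::predict(my_model, ncomp = n_components,
                 newdata = spectra, interval = 'confidence')
}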


Nothing groundbreaking, I know, but I spent a bit of time sorting out that couple of annoyances before everything fell into place. Right now we are using the models in a much easier and more reproducible way.


## Reading a folder with many small files

One of the tools we use in our research is NIR (Near-Infrared Spectroscopy), which we apply to thousands of samples to predict their chemical composition. Each NIR spectrum is contained in a CSV text file with two numerical columns: wavelength and reflectance. All files have the same number of rows (1296 in our case), which corresponds to the number of wavelengths assessed by the spectrometer. One last thing: the sample ID is encoded in the file name.

As an example, file A1-4-999-H-L.0000.csv’s contents look like:

8994.82461,0.26393
8990.96748,0.26391
8987.11035,0.26388
8983.25322,0.26402
8979.39609,0.26417
...
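
Because the sample ID lives in the file name, it can be recovered with something as simple as this sketch (the exact pattern depends on your naming convention):

file_name <- 'A1-4-999-H-L.0000.csv'
# Drop any folder part and the trailing '.0000.csv'
sample_id <- sub('\\.0000\\.csv$', '', basename(file_name))
# "A1-4-999-H-L"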

Once the contents of all the files are stored in a single matrix, one can apply a bunch of algorithms to build a model, and then use the model to predict chemical composition for new observations. I am not concerned about that process in this blog post, but only about reading thousands of small files from R, without relying on calls to the operating system to join the small files and read a single large file.

As I see it, I want to:

• give R a folder name,
• get a list of all the file names in that folder,
• iterate over that list, keeping only the second column for each of the files, and
• join the elements of the list.

I can use list.files() to get the names of all files in a folder. Rather than using an explicit loop, it’s easier to use lapply() to iterate over the list of names and apply the read.csv() function to all of them. I want a matrix, but lapply() creates a list, so I joined all the elements of the list using do.call() to bind the rows using rbind().

spectra_folder <- 'avery_raw_spectra'

# Read all files and keep second column only for each of them. Then join all rows
spectra_list <- list.files(path = spectra_folder, full.names = TRUE)
raw_spectra <- lapply(spectra_list,
                      function(x) read.csv(x, header = FALSE)[, 2])
raw_spectra <- do.call(rbind, raw_spectra)


There are many ways to test performance, for example using the microbenchmark package. Instead, I'm using something rather basic, almost cute, the Sys.time() function:

start <- Sys.time()
# ... the code being timed (e.g. the reading code above) goes here ...
end <- Sys.time()
end - start
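
For a more careful comparison one could wrap each approach in a function and use microbenchmark instead (a sketch, using the read_folder_scan() function defined further down):

library(microbenchmark)

# Repeat the folder read a few times and summarise the timings
microbenchmark(read_folder_scan(spectra_folder), times = 10)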


This takes about 12 seconds on my laptop (when reading over 6,000 files). I was curious to see whether it would be dramatically faster with data.table, so I replaced read.csv() with fread() and joined the elements of the list using rbindlist().

library(data.table)

spectra_list <- list.files(path = spectra_folder, full.names = TRUE)
# transpose() turns each single-column file into a one-row table,
# so each spectrum ends up as one row after rbindlist()
raw_spectra <- lapply(spectra_list,
                      function(x) transpose(fread(x, select = 2)))
raw_spectra <- rbindlist(raw_spectra)


Using the same basic timing as before, this takes around 10 seconds on my laptop.

I have the impression that packages like data.table and readr have been optimized for reading larg(ish) files, so they won't necessarily help much in this reading-many-small-files type of problem. Instead, I tested going back to even more basic R functions (scan), and adding more information about the types of data I was reading. Essentially, moving even closer to base R.

read_folder_scan <- function(folder, prefix = 'F') {
  # Read all files and keep second column only for each of them. Then join all rows
  spectra_list <- list.files(path = folder, full.names = TRUE)
  raw_spectra <- lapply(spectra_list,
                        function(x) matrix(scan(x, what = list(NULL, double()),
                                                sep = ',', quiet = TRUE)[[2]], nrow = 1))
  raw_spectra <- do.call(rbind, raw_spectra)
}


Timing this new version takes only 4 seconds, without adding any additional dependencies. Any of these versions is faster than the original code, which was growing a data frame with rbind() one iteration at a time.
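
For reference, a sketch of the slow pattern being referred to, where the result grows inside the loop (illustrative only; the original code is not shown here):

raw_spectra <- NULL
for (spectrum_file in spectra_list) {
  # Each rbind() copies the whole object again, which gets slower as it grows
  raw_spectra <- rbind(raw_spectra,
                       read.csv(spectrum_file, header = FALSE)[, 2])
}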


## From character to numeric pedigrees

In quantitative genetic analyses we often use a pedigree to represent the relatedness between individuals, so that relatedness is accounted for in the analyses, because the observations are not independent of each other. Often this pedigree contains alphanumeric labels, and most software can cope with that.

Sometimes, though, we want to use numeric identities: because we would like to make the data available to third parties (other researchers, a publication) and there is commercial sensitivity about the labels, or because we just want to use a piece of software that can’t deal with character identities.

Last night I put together an El quicko* function to numberify identities, which returns a list with a numeric version of the pedigree and a key to go back to the old identities.

library(dplyr)

numberify <- function(pedigree) {
  ped_key <- with(pedigree, unique(c(as.character(mother),
                                     as.character(father),
                                     as.character(tree_id))))
  numeric_pedigree <- pedigree %>%
    mutate(tree_id = as.integer(factor(tree_id, levels = ped_key)),
           mother = as.integer(factor(mother, levels = ped_key)),
           father = as.integer(factor(father, levels = ped_key)))

  return(list(ped = numeric_pedigree, key = ped_key))
}
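
To see it in action, a made-up three-row pedigree (the column names tree_id, mother and father match the ones the function expects):

old_ped <- data.frame(tree_id = c('a23', 'b12', 'c4'),
                      mother  = c('m1', 'm1', 'a23'),
                      father  = c('f1', 'f2', 'f1'))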

new_ped <- numberify(old_ped)

old_id <- new_ped$key[new_ped$ped$tree_id]


* It could be generalized to extract the names of the three fields, etc.

## Reducing friction in R to avoid Excel

When you have students working on a project there is always an element of quality control. Sometimes the results just make sense, while other times we are suspicious that something has gone wrong. This means going back to check the whole analysis process: can we retrace all the steps in a calculation (going back to data collection) and see if there is anything funny going on?

So we sat with the student and started running code (in RStudio, of course) and I noticed something interesting: there was a lot of redundancy, and pieces of code that didn’t do anything or were weirdly placed. These are typical signs of code copied from several sources, which together with the presence of setwd() showed unfamiliarity with R and RStudio (we have a mix of students with a broad range of R skills).

But the part that really caught my eye was that the script read many Near Infrared spectra files, column bound them together with the sample ID (which was 4 numbers separated by hyphens) and saved the 45 MB result to a CSV file. Then the student opened the file, split the sample ID into 4 columns, deleted the top row, saved the file and read it again into R to continue the process.

The friction point which forced the student to drop to Excel—the first of many not easily reproducible parts—was variable splitting. The loop for reading the files and some condition testing was hard to follow too. If one knows R well, any of these steps is relatively simple, but if one doesn’t know it, the copying and pasting from many different sources begins, often with inconsistent programming approaches.

Here is where I think the tidyverse brings something important to the table: consistency, more meaningful naming of functions and good documentation. For example, doing:

nir %>% separate(sample_id, c('block', 'tree', 'family', 'side'), sep = '-')

is probably the easiest way of dealing with separating the contents of a single variable.

When working with several collaborators (colleagues, students, etc.) the easiest way to reduce friction is to convince/drag/supplicate everyone to adopt a common language. Within the R world, the tidyverse is the closest thing we have to a lingua franca of research collaboration.

‘But isn’t R a lingua franca already?’ you may ask. The problem is that programming in base R is often too weird for normal people, and too many people just give up before feeling they can do anything useful in R (particularly if they are proficient in Excel). Even if you are an old dog (like me) I think it pays to change to a subset of R that is more learnable. And once someone gets hooked, the transition to adding non-tidyverse functions is more bearable.

## Keeping track of research

If you search for data analysis workflows for research there are lots of blog posts on using R + databases + git, etc. While in some cases I may end up working with a combination like that, it’s much more likely that reality is closer to a bunch of emailed Excel or CSV files. Some may argue that one should move the whole group of collaborators to work the right way. In practice, well, not everyone has the interest and/or the time to do so.
In one of our collaborations we are dealing with a trial established in 2009 and I was tracking a field coding mistake (as in happening outdoors, doing field work, assigning codes to trees), so I had to backtrack where the errors were introduced. After checking emails from three collaborators, I think I put together the story and found the correct code values in a couple of files going back two years.

The new analysis lives in an RStudio project with the following characteristics:

1. A folder in Dropbox, so it’s copied in several locations and it’s easy to share.
2. The Excel or CSV files with their original names (warts and all), errors, etc. Resist the temptation to rename the files to sane names; it is easier to trace back the history of the project this way.
3. The R code.
4. The important part: a text file (Markdown if you want) documenting the names of the data files, and who sent them to me and when.

Very low tech but, hey, it works.

## Calculating parliament seats allocation and quotients

I was having a conversation about dropping the minimum threshold (currently 5% of the vote) for political parties to get representation in Parliament. The obvious question is how seat allocation would change, which of course involved a calculation. There is a calculator on the Electoral Commission website, but trying to understand how things work (and therefore coding) is my thing, and the Electoral Commission has a handy explanation of the Sainte-Laguë allocation formula used in New Zealand. So I had to write my own seat allocation function:

allocate_seats <- function(votes) {
  parties <- names(votes)
  denom <- seq(1, 121, 2)
  quotients <- vapply(denom, FUN = function(x) votes / x,
                      FUN.VALUE = rep(1, length(votes)))
  quotients <- t(quotients)
  colnames(quotients) <- parties
  priority <- rank(-quotients)
  seat_ranking <- matrix(priority, nrow = nrow(quotients), ncol = ncol(quotients))
  seat_ranking <- ifelse(seat_ranking <= 120, seat_ranking, NA)
  colnames(seat_ranking) <- parties
  return(list(quotients = quotients, ranking = seat_ranking))
}

Testing it with the preliminary election results (that is, not including special votes) gives:

votes2017 <- c(998813, 776556, 162988, 126995, 10959, 48018, 23456)
names(votes2017) <- c('National', 'Labour', 'NZ First', 'Green', 'ACT',
                      'Opportunities', 'Māori')

seats2017 <- allocate_seats(votes2017)
seats2017$ranking

#      National Labour NZ First Green ACT Opportunities Māori
# [1,]        1      2        6     9  98            22    46
# [2,]        3      4       19    26  NA            67    NA
# [3,]        5      7       33    42  NA           112    NA
# [4,]        8     11       47    59  NA            NA    NA
# [5,]       10     13       60    77  NA            NA    NA
# [6,]       12     15       73    93  NA            NA    NA
# [7,]       14     17       86   110  NA            NA    NA
# [8,]       16     21      100    NA  NA            NA    NA
# [9,]       18     24      113    NA  NA            NA    NA
#[10,]       20     27       NA    NA  NA            NA    NA
# ...
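
The number of seats each party would get without a threshold is just the number of non-missing entries in each column of the ranking matrix, for example:

# Count allocated seats per party from the ranking matrix
colSums(!is.na(seats2017$ranking))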


In our current setup The Opportunities and Māori parties did not reach the minimum threshold (nor did they win an electorate, as ACT did while violating the spirit of the system), so they did not get any seats. Those 4 seats that would have gone to minor parties under no threshold ended up going to National and Labour (2 each). It sucks.


## Collecting results of the New Zealand General Elections

I was reading an article about the results of our latest elections and having a look at the spatial pattern of votes in my city.

I was wondering how I would go about obtaining the data for something like that and went to the Electoral Commission website, which has this neat page with links to CSV files with results at the voting place level. The CSV files have results for each of the candidates in the first few rows (which I didn’t care about) and at the party level later in the file.

As I could see it I needed to:

1. Read the Electoral Commission website and extract the table that contains the links to all the CSV files.
2. Read each of the files and i) extract the electorate name, ii) skip all the candidate votes, and iii) read the party vote.
3. Remove sub-totals and other junk from the files.
4. Use the data for whatever else I wanted (exam question anyone?).

So I first loaded the needed packages and read the list of CSV files:

library(magrittr)
library(tidyverse)
library(rvest)
library(stringr)
library(ggmap)

# Extract list of CSV file names containing voting place data
voting_place <- 'http://www.electionresults.govt.nz/electionresults_2017_preliminary/voting-place-statistics.html'

# Read the page that lists the CSV files
election17 <- read_html(voting_place)

election17 %>%
  html_nodes('table') %>% html_nodes('a') %>%
  html_attr('href') %>% str_subset('csv') %>%
  paste('http://www.electionresults.govt.nz/electionresults_2017_preliminary', '/', ., sep = '') -> voting_place_list


Then I wrote a couple of functions to, first, read the whole file and get the electorate name and, second, detect where the party vote starts and keep from that line onwards. Rather than explicitly looping over the list of CSV file names, I used map_dfr() from the purrr package to extract the data and join all the results by row.

get_electorate <- function(row) {
  row %>% str_split(pattern = ',') %>%
    unlist() %>% .[1] %>% str_split(pattern = '-') %>%
    unlist() %>% .[1] %>% str_trim() -> elect
  return(elect)
}

extract_party_vote <- function(file_name) {
  # Read the whole file as text; the electorate name is in the first row
  all_records <- readLines(file_name)
  electorate <- get_electorate(all_records[1])

  # Keep only the party vote section, from the 'Party Vote' header onwards
  start_party <- grep('Party Vote', all_records)
  party_records <- all_records[start_party:length(all_records)]
  party_records_df <- read.table(text = party_records, sep = ',',
                                 fill = TRUE, header = TRUE, quote = '"',
                                 stringsAsFactors = FALSE)
  party_records_df$electorate <- electorate

  return(party_records_df)
}

# Download files and create dataframe
vote_by_place <- map_dfr(voting_place_list, extract_party_vote)


Cleaning the data and summarising by voting place (as one can vote for several electorates in a single place) is fairly straightforward. I appended the string Mobile to mobile teams that visited places like retirement homes, hospitals, prisons, etc.:

# Remove TOTAL and empty records
vote_by_place %>%
  filter(address != '') %>%
  mutate(neighbourhood = ifelse(neighbourhood == '',
                                paste(electorate, 'Mobile'),
                                neighbourhood)) %>%
  group_by(neighbourhood, address) %>%
  summarise_at(vars(ACT.New.Zealand:Informal.Party.Votes), sum, na.rm = TRUE) -> clean_vote_by_place


Geolocation is the not-working-very-well part right now. First, I had problems with Google (beyond the 1,000-place limit for the query). Then I went for the Data Science Kit as the source but, even excluding the mobile places, it was a bit hit and miss for geolocation, particularly as the format of some addresses (like corner of X and Y) is not the best for a search. In addition, both sources for geolocation work really slowly and may produce a lot of output. Using sink() could be a good idea to avoid ending up with output for roughly 3,000 queries. I did try the mutate_geocode() function, but it didn't work out properly.

# Geolocate voting places
get_geoloc <- function(record) {
  address <- paste(record$address, 'New Zealand', sep = ', ')