# Archive for the ‘lattice’ Category

When working on research projects I tend to fit several, sometimes quite a few, alternative models. This model fitting is informed by theoretical considerations (e.g. quantitative genetics, the experimental design we used, our understanding of the process under study, etc.) but also by visual inspection of the data. Trellis graphics—where subsets of data are plotted in different ‘panels’ defined by one or more factors—are extremely useful for generating research hypotheses.

There are two packages in R that have good support for trellis graphics: lattice and ggplot2. Lattice is the older of the two, while ggplot2 is probably more consistent (implementing a grammar of graphics) and popular with the cool kids and the data visualization crowd. However, lattice is also quite fast, while ggplot2 can be slow as a dog (certainly way slower than my dog).

Tree-breeding progeny trials often have between 1,000 and 12,000 individuals, and analyses commonly include several trials. Thus, it is not unusual to have tens of thousands or even hundreds of thousands of records involved in an analysis. Add to this situation that I am impatient and you will understand that differences in speed can make a big difference to my mental health. But how different is the speed? We can simulate some correlated data (following the explanation in this post) and build a simple scatterplot faceted by site; let’s say 60,000 observations in 6 sites (10,000 per site).

library(lattice)
library(ggplot2)

# number of observations to simulate
nobs = 60000
sites = 6

# Using a correlation matrix (let's assume that all variables
# have unit variance)
M = matrix(c(1, 0.7,
             0.7, 1), nrow = 2, ncol = 2)

# Cholesky decomposition
L = chol(M)
nvars = dim(L)[1]

# Random variables that follow an M correlation matrix
r = t(L) %*% matrix(rnorm(nvars*nobs), nrow=nvars, ncol=nobs)
r = t(r)

rdata = as.data.frame(r)
names(rdata) = c('x', 'y')
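# Quick sanity check (an extra step, not in the original post): the
# realized correlation between x and y should be close to the target 0.7
cor(rdata$x, rdata$y)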
rdata$site = factor(rep(1:sites, each = nobs/sites))

# Plotting with lattice
xyplot(y ~ x | site, data = rdata,
       layout = c(3, 2), type = c('p', 'smooth'))

# Plotting with ggplot2
qplot(x, y, geom = c('point', 'smooth'), data = rdata) +
  facet_wrap(~ site)

The timing was done by surrounding each graph call (either xyplot() or qplot()) with system.time(print()), so the graph is sent to the screen and the operation is timed. In summary, for this simple call ggplot2 takes a bit over double the time of lattice, and the more layers you add to the graph the slower it gets. The two plots are below. We could improve both plots and make them look more similar to each other, but I want to avoid introducing more distractions in the code.

Nevertheless, I do like the flexibility of ggplot2, so I support most of my exploratory data analysis using lattice but, when I have to create the final pretty plots for publication in journals, I go back to ggplot2. I subscribe to Frank Harrell’s Philosophy of Biostatistics, which includes ‘use excellent graphics, liberally’. Switching between packages lets me deal with both an abundance of graphics and my impatience.

This is R pitfall #2: plots inside a function (and system.time() is a function) have to be surrounded by print() or they won’t be sent to the screen. Pitfall #1 is here.

This is simple example code to display side-by-side lattice plots or ggplot2 plots, using the mtcars dataset that comes with any R installation. We will display a scatterplot of miles per US gallon (mpg) on car weight (wt) next to another scatterplot of the same data, but using different colors by number of engine cylinders (cyl, treated as a factor) and adding a smooth line (under the type option).
library(lattice)
data(mtcars)

# Create the first plot and assign it to a variable
p1 = xyplot(mpg ~ wt, data = mtcars,
            xlab = 'Car weight', ylab = 'Mileage')

# Create the second plot and assign it to a variable
p2 = xyplot(mpg ~ wt, group = factor(cyl), data = mtcars,
            xlab = 'Car weight', ylab = 'Miles/gallon',
            type = c('p', 'smooth'))

# print calls print.trellis, which takes a position argument
print(p1, position = c(0, 0, 0.5, 1), more = TRUE)
print(p2, position = c(0.5, 0, 1, 1))

According to the documentation, position is a vector of four numbers, c(xmin, ymin, xmax, ymax), that give the lower-left and upper-right corners of the rectangle in which the Trellis plot is to be positioned. The coordinate system for this rectangle is [0, 1] in both the x and y directions. That is, the first print() sets position to occupy the left half of the graph at full height, and avoids refreshing the device before displaying the next plot (more = TRUE). The second print() uses the right half of the graph at full height.

In the case of ggplot2, the code is not that different:

library(ggplot2)
library(grid) # for pushViewport() and viewport()

# Create the first plot and assign it to a variable
p1 = qplot(wt, mpg, data = mtcars,
           xlab = 'Car weight', ylab = 'Mileage')

# Create the second plot and assign it to a variable
p2 = qplot(wt, mpg, color = factor(cyl), data = mtcars,
           geom = c('point', 'smooth'),
           xlab = 'Car weight', ylab = 'Mileage')

# Define a grid layout to locate the plots and print each graph
pushViewport(viewport(layout = grid.layout(1, 2)))
print(p1, vp = viewport(layout.pos.row = 1, layout.pos.col = 1))
print(p2, vp = viewport(layout.pos.row = 1, layout.pos.col = 2))

More details on ggplot2’s notation can be found here.

In a previous post I explained how to create all possible combinations of the levels of two factors using expand.grid(). Another use for this function is to create a regular grid for two variables to produce a levelplot or a contour plot.
For example, let’s say that we have fitted a multiple linear regression to predict wood stiffness (stiff, the response) using basic density (bd) and a measure of microfibril angle (t) as explanatory variables. The regression equation could be something like stiffness = 3.439 + 0.009 bd - 0.052 t. In our dataset bd had a range of 300 to 700 kg m-3, while t had a range from 50 to 70. We will create a grid for both explanatory variables (every 10 units for bd and every 1 for t), predict stiffness for all combinations of bd and t, and plot the results with the levelplot() function from the lattice package.

library(lattice)

# Create a regular grid of predictor values
wood = expand.grid(bd = seq(300, 700, 10), t = seq(50, 70, 1))

# Predicted stiffness from the fitted regression equation
wood$stiffness = with(wood, 3.439 + 0.009*bd - 0.052*t)
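# A quick check (not in the original post): expand.grid() returns one
# row per combination, i.e. 41 values of bd times 21 values of t
nrow(wood) # 861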

levelplot(stiffness ~ bd * t, data = wood,
          xlab = 'Basic density', ylab = 'T')

This code creates a graph like this. Simple.
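Since the text mentions contour plots as an alternative, the same grid can also be drawn with lattice’s contourplot(), which shares the formula interface of levelplot(). A quick sketch, reusing the wood data frame built above:

```r
library(lattice)

# Contour lines instead of colored levels; same formula as levelplot()
contourplot(stiffness ~ bd * t, data = wood,
            xlab = 'Basic density', ylab = 'T')
```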