
## Covariance structures

In most mixed linear model packages (e.g. asreml, lme4, nlme, etc.) one needs to specify only the model equation (the bit that looks like y ~ factors...) when fitting simple models. We explicitly say nothing about the covariances that complete the model specification. This is because most linear mixed model packages assume that, in the absence of any additional information, the covariance structure is the product of a scalar (a variance component) by a design matrix. For example, the residual covariance matrix in simple models is R = I σ²e, the additive genetic variance matrix is G = A σ²a (where A is the numerator relationship matrix), and the covariance matrix for a random effect f with incidence matrix Z is Z Z' σ²f.

However, there are several situations when analyses require a more complex covariance structure, usually a direct sum or a Kronecker product of two or more matrices. For example, an analysis of data from several sites might consider a different error variance for each site, so that R = ⊕ Ri, where ⊕ represents a direct sum and Ri is the residual matrix for site i.

Another example of a more complex covariance structure is a multivariate analysis in a single site (where the same individual is assessed for two or more traits), in which both the residual and additive genetic covariance matrices are constructed as the Kronecker product of two matrices. For example, R = I ⊗ R0, where I is an identity matrix of size equal to the number of observations, ⊗ is the Kronecker product (not to be confused with plain matrix multiplication) and R0 is the error covariance matrix for the traits involved in the analysis. Similarly, G = A ⊗ G0, where all the matrices are as previously defined and G0 is the additive covariance matrix for the traits.
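As a quick numerical sketch (the trait covariance values in R0 are made up purely for illustration), the Kronecker product can be computed in R with kronecker():

```r
# Hypothetical error covariance matrix R0 for two traits
R0 = matrix(c(1.0, 0.5,
              0.5, 2.0), nrow = 2)

# Identity matrix of size number of observations (only 3 here, for brevity)
I = diag(3)

# R = I (x) R0 is block-diagonal: one 2x2 copy of R0 per observation
R = kronecker(I, R0)
dim(R)  # [1] 6 6
```

Note the block-diagonal result: observations are independent of each other, but the two traits within an observation are correlated through R0.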

Some structures are easier to understand (at least for me) if we express a covariance matrix (M) as the product of a correlation matrix (C) pre- and post-multiplied by a diagonal matrix (D) containing standard deviations for each of the traits (M = D C D). That is:

$$M = left [ egin{array}{cccc} v_{11}& c_{12}& c_{13}& c_{14} \ c_{21}& v_{22}& c_{23}& c_{24} \ c_{31}& c_{32}& v_{33}& c_{34} \ c_{41}& c_{42}& c_{43}& v_{44} end{array} ight ]$$ $$C = left [ egin{array}{cccc} 1& r_{12}& r_{13}& r_{14} \ r_{21}& 1& r_{23}& r_{24} \ r_{31}& r_{32}& 1& r_{34} \ r_{41}& r_{42}& r_{43}& 1 end{array} ight ]$$ $$D = left [ egin{array}{cccc} s_{11}& 0& 0& 0 \ 0& s_{22}& 0& 0 \ 0& 0& s_{33}& 0 \ 0& 0& 0& s_{44} end{array} ight ]$$

where the v are variances, the r correlations and the s standard deviations.
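A quick check of this decomposition in R, using made-up correlations and standard deviations for three traits:

```r
# Hypothetical correlation matrix (C) and standard deviations (s)
C = matrix(c(1.0, 0.8, 0.3,
             0.8, 1.0, 0.5,
             0.3, 0.5, 1.0), nrow = 3)
s = c(2, 1, 4)
D = diag(s)

M = D %*% C %*% D   # covariance matrix
diag(M)             # variances are s^2: 4 1 16
cov2cor(M)          # recovers C
```

Going back and forth with cov2cor() is a handy way to sanity-check reported covariance components.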

If we do not impose any restriction on M, apart from being positive definite (p.d.), we are talking about an unstructured matrix (us() in asreml-R parlance). Thus, M or C can take any values (as long as the result is p.d.), as is usual when analyzing multiple-trait problems.

There are cases when the order of assessment or the spatial location of the experimental units creates patterns of variation, which are reflected by the covariance matrix. For example, the breeding value of an individual i observed at time j (a_ij) is a function of genes involved in expression at time j − k (a_i,j−k), plus the effect of genes acting in the new measurement (α_j), which are considered independent of the past measurement: a_ij = ρ_jk a_i,j−k + α_j, where ρ_jk is the additive genetic correlation between measures j and k.

Rather than using a different correlation for each pair of ages, it is possible to postulate mechanisms which model the correlations. For example, an autoregressive model (ar() in asreml-R lingo), where the correlation between measurements j and k is r^|j−k|. In this model M = D CAR D, where CAR (for equally spaced assessments) is:

$$C_{AR} = left [ egin{array}{cccc} 1 & r^{|t_2-t_1|} & ldots & r^{|t_m-t_1|}\ r^{|t_2-t_1|} & 1 & ldots & r^{|t_m-t_2|}\ vdots & vdots & ddots & vdots \ r^{|t_m-t_1|} & r^{|t_m-t_2|} & ldots & 1 end{array} ight ]$$

Assuming three different autocorrelation coefficients (0.95 solid line, 0.90 dashed line and 0.85 dotted line) we can get very different patterns with a few extra units of lag, as shown in the following graph:
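The original graph is not reproduced here, but a minimal sketch to generate that kind of plot (the lag range of 0 to 30 is an arbitrary choice) would be:

```r
# Correlation decay r^lag for three autocorrelation coefficients
lag = 0:30
plot(lag, 0.95^lag, type = 'l', lty = 1, ylim = c(0, 1),
     xlab = 'Lag', ylab = 'Correlation')
lines(lag, 0.90^lag, lty = 2)  # dashed
lines(lag, 0.85^lag, lty = 3)  # dotted
```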

A model including this structure will certainly be more parsimonious (economical in terms of number of parameters) than one using an unstructured approach. Looking at the previous pattern it is a lot easier to understand why they are called 'structures'. A similar situation is considered in spatial analysis, where the 'independent errors' assumption of typical analyses is relaxed. A common spatial model will consider the presence of autocorrelated residuals in both directions (rows and columns). Here the level of autocorrelation will depend on distance between trees rather than on time. We can get an idea of what separable processes look like using this code:

# Separable row-col autoregressive process
car2 = function(dim, rhor, rhoc) {
    M = diag(dim)
    # Correlation of each cell (i, j) with the corner cell (1, 1)
    rhor^(row(M) - 1) * rhoc^(col(M) - 1)
}

library(lattice)
levelplot(car2(20, 0.95, 0.85))


This correlation matrix can then be multiplied by a spatial residual variance to obtain the covariance matrix, to which we can add a spatially independent residual variance.
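In matrix terms that amounts to something like this sketch (this time using a proper separable AR(1) correlation matrix for a grid, with hypothetical values for the two variances):

```r
# AR(1) correlation matrix for n equally spaced positions
ar1 = function(n, rho) rho^abs(outer(1:n, 1:n, '-'))

# Separable row-by-column correlation via the Kronecker product
C = kronecker(ar1(5, 0.95), ar1(5, 0.85))  # 25 x 25 for a 5 x 5 grid

sigma2s = 100   # hypothetical spatial residual variance
sigma2e = 20    # hypothetical independent ('nugget') residual variance

# Spatial covariance plus a spatially independent residual
Sigma = sigma2s * C + sigma2e * diag(nrow(C))
```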

Much more detail on code notation for covariance structures can be found, for example, in the ASReml-R User Guide (PDF, chapter 4), for nlme in Pinheiro and Bates’s Mixed-effects models in S and S-plus (link to Google Books, chapter 5.3) and in Bates’s draft book for lme4 in chapter 4.


## Longitudinal analysis: autocorrelation makes a difference

Back to posting after a long weekend and more than enough rugby coverage to last a few years. Anyway, back to linear models, where we usually assume normality, independence and homogeneous variances. In most statistics courses we live in a fantasy world where we meet all of the assumptions, but in real life—and trees and forests are no exceptions—there are plenty of occasions when we can badly deviate from one or more assumptions. In this post I present a simple example, where we have a number of clones (genetically identical copies of a tree), which had between 2 and 4 cores extracted, and each core was assessed for acoustic velocity (we care about it because it is inversely related to longitudinal shrinkage and its square is proportional to wood stiffness) every two millimeters. This small dataset is only a pilot for a much larger study currently underway.

At this stage I will ignore any relationship between the clones and focus on the core assessments. Let's think for a moment: we have five replicates (which restrict the randomization) and four clones (A, B, C and D). We have (mostly) 2 to 4 cores (cylindrical pieces of wood running from tree pith to cambium) within each tree, and we have longitudinal assessments for each core. I would expect that, at least, successive assessments for each core are not independent; that is, assessments that are closer together are more similar than those that are farther apart. What does the data look like? The trellis plot shows trees using a Clone:Rep notation:

library(lattice)
xyplot(velocity ~ distance | Tree, group = Core,
       data = cd, type = c('l'))


Incidentally, cores from Clone C in replicate four were damaged, so I dropped them from this example (real life is unbalanced as well!). Just in case, distance is in mm from the tree pith and velocity in m/s. Now we will fit an analysis that totally ignores any relationship between the successive assessments:

library(nlme)
lm1a = lme(velocity ~ Clone*distance,
           random = ~ 1|Rep/Tree/Core, data = cd)
summary(lm1a)

Linear mixed-effects model fit by REML
Data: cd
AIC      BIC   logLik
34456.8 34526.28 -17216.4

Random effects:
Formula: ~1 | Rep
(Intercept)
StdDev:    120.3721

Formula: ~1 | Tree %in% Rep
(Intercept)
StdDev:    77.69231

Formula: ~1 | Core %in% Tree %in% Rep
(Intercept) Residual
StdDev:    264.6254 285.9208

Fixed effects: velocity ~ Clone * distance
                    Value Std.Error   DF  t-value p-value
(Intercept)      3274.654 102.66291 2379 31.89715  0.0000
CloneB            537.829 127.93871   11  4.20380  0.0015
CloneC            209.945 137.10691   11  1.53125  0.1539
CloneD            293.840 124.08420   11  2.36807  0.0373
distance           14.220   0.28607 2379 49.70873  0.0000
CloneB:distance    -0.748   0.44852 2379 -1.66660  0.0957
CloneC:distance    -0.140   0.45274 2379 -0.30977  0.7568
CloneD:distance     3.091   0.47002 2379  6.57573  0.0000

anova(lm1a)
numDF denDF  F-value p-value
(Intercept)        1  2379 3847.011  <.0001
Clone              3    11    4.054  0.0363
distance           1  2379 7689.144  <.0001
Clone:distance     3  2379   22.468  <.0001


Incidentally, our assessment setup looks like this. The nice thing of having good technicians (Nigel made the tool frame), collaborating with other departments (Electrical Engineering, Michael and students designed the electronics and software for signal processing) and other universities (Uni of Auckland, where Paul—who cored the trees and ran the machine—works) is that one gets involved in really cool projects.

What happens if we actually allow for an autoregressive process?

lm1b = lme(velocity ~ Clone*distance,
           random = ~ 1|Rep/Tree/Core, data = cd,
           correlation = corCAR1(value = 0.8,
                                 form = ~ distance | Rep/Tree/Core))
summary(lm1b)

Linear mixed-effects model fit by REML
Data: cd
AIC      BIC    logLik
29843.45 29918.72 -14908.73

Random effects:
Formula: ~1 | Rep
(Intercept)
StdDev:     60.8209

Formula: ~1 | Tree %in% Rep
(Intercept)
StdDev:    125.3225

Formula: ~1 | Core %in% Tree %in% Rep
(Intercept) Residual
StdDev:   0.3674224 405.2818

Correlation Structure: Continuous AR(1)
Formula: ~distance | Rep/Tree/Core
Parameter estimate(s):
Phi
0.9803545
Fixed effects: velocity ~ Clone * distance
Value Std.Error   DF   t-value p-value
(Intercept)     3297.517 127.98953 2379 25.763960  0.0000
CloneB           377.290 183.16795   11  2.059804  0.0639
CloneC           174.986 195.21327   11  0.896383  0.3892
CloneD           317.581 178.01710   11  1.783994  0.1020
distance          15.209   1.26593 2379 12.013979  0.0000
CloneB:distance    0.931   1.94652 2379  0.478342  0.6325
CloneC:distance   -0.678   2.00308 2379 -0.338629  0.7349
CloneD:distance    2.677   1.95269 2379  1.371135  0.1705

anova(lm1b)
numDF denDF  F-value p-value
(Intercept)        1  2379 5676.580  <.0001
Clone              3    11    2.483  0.1152
distance           1  2379  492.957  <.0001
Clone:distance     3  2379    0.963  0.4094


In ASReml-R this would look like (for the same results, but many times faster):

as1a = asreml(velocity ~ Clone*distance,
              random = ~ Rep + Tree/Core,
              data = cd)
summary(as1a)
anova(as1a)

# I need to sort out my code for ar(1) and update to


Oops! What happened to the significance of Clone and its interaction with distance? The perils of ignoring the independence assumption. But wait, isn't an AR(1) process too simplistic to model the autocorrelation (as pointed out by D.J. Keenan when criticizing the IPCC's models and discussing Richard Muller's new BEST project models)? In this case, probably not: we have a mostly increasing response, we have a clue about the processes driving the change, and there is far less noise than in climate data.
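For the record, corCAR1 implies a correlation of Phi^d between residuals d units apart; with the estimated Phi of roughly 0.98 and distance in mm, that decays like this:

```r
phi = 0.98
d = c(2, 10, 50, 100)   # distance between assessments, in mm
round(phi^d, 2)
# [1] 0.96 0.82 0.36 0.13
```

So assessments a few millimeters apart are almost perfectly correlated, while those at opposite ends of a core are close to independent.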

Could we improve upon this model? Sure! We could add heterogeneous variances, explore non-linearities, take into account the genetic relationship between the trees, run the whole thing in asreml (so it is faster), etc. Nevertheless, at this point you can get an idea of some of the issues (or should I call them niceties?) involved in the analysis of experiments.


## Spatial correlation in designed experiments

Last Wednesday I had a meeting with the folks of the New Zealand Drylands Forest Initiative in Blenheim. In addition to sitting in a conference room and having nice sandwiches we went to visit one of our progeny trials at Cravens. Plantation forestry trials are usually laid out following a rectangular lattice defined by rows and columns. The trial follows an incomplete block design with 148 blocks and is testing 60 Eucalyptus bosistoana families. A quick look at survival shows an interesting trend: the bottom of the trial was much more affected by frost than the top.

setwd('~/Dropbox/quantumforest')
library(ggplot2) # for qplot
library(asreml)

qplot(col, row, fill = Surv, geom='tile', data = cravens)


We have the trial assessed for tree height (ht) at 2 years of age, where a plot also shows some spatial trends, with better growth on the top-left corner; this is the trait that I will analyze here.

Tree height can be expressed as a function of an overall constant, random block effects, random family effects and a normally distributed residual (this is our base model, m1). I will then take into account the position of the trees (expressed as rows and columns within the trial) to fit spatially correlated residuals (m2)—using separable rows and columns processes—to finally add a spatially independent residual to the mix (m3).

# Base model (non-spatial)
m1 = asreml(ht ~ 1, random = ~ Block + Family, data = cravens)
summary(m1)$loglik
[1] -8834.071

summary(m1)$varcomp

gamma component std.error   z.ratio constraint
Block!Block.var   0.52606739 1227.4058 168.84681  7.269345   Positive
Family!Family.var 0.06257139  145.9898  42.20675  3.458921   Positive
R!variance        1.00000000 2333.1722  78.32733 29.787459   Positive


m1 represents a traditional family model with only the overall constant as the only fixed effect and a diagonal residual matrix (identity times the residual variance). In m2 I am modeling the R matrix as the Kronecker product of two separable autoregressive processes (one in rows and one in columns) times a spatial residual.

# Spatial model (without non-spatial residual)
m2 = asreml(ht ~ 1, random = ~ Block + Family,
            rcov = ~ ar1(row):ar1(col),
            data = cravens[order(cravens$row, cravens$col),])

summary(m2)$loglik
[1] -8782.112

summary(m2)$varcomp

                       gamma    component    std.error   z.ratio    constraint
Block!Block.var   0.42232303 1025.9588216 157.00799442  6.534437      Positive
Family!Family.var 0.05848639  142.0822874  40.20346956  3.534080      Positive
R!variance        1.00000000 2429.3224308  88.83064209 27.347798      Positive
R!row.cor         0.09915320    0.0991532   0.02981808  3.325271 Unconstrained
R!col.cor         0.28044024    0.2804402   0.02605972 10.761445 Unconstrained


Adding two parameters to the model improves the log-likelihood from -8834.071 to -8782.112, a difference of 51.959, but with what appears to be a low correlation in both directions. In m3 I am adding a spatially independent residual (using the keyword units), improving the log-likelihood from -8782.112 to -8660.411: not too shabby.
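Since these models share the same fixed effects and differ only in their (co)variance parameters, the REML log-likelihoods can be compared with a likelihood-ratio test; for example, for m1 versus m2 (two extra parameters). This sketch ignores boundary issues for variance parameters, but the difference here is so large that it hardly matters:

```r
ll.m1 = -8834.071
ll.m2 = -8782.112

lrt = 2 * (ll.m2 - ll.m1)                # 103.918
pchisq(lrt, df = 2, lower.tail = FALSE)  # vanishingly small p-value
```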

m3 = asreml(ht ~ 1, random = ~ Block + Family + units,
            rcov = ~ ar1(row):ar1(col),
            data = cravens[order(cravens$row, cravens$col),])

summary(m3)$loglik
[1] -8660.411

summary(m3)$varcomp

                         gamma    component    std.error    z.ratio    constraint
Block!Block.var   3.069864e-07 4.155431e-04 6.676718e-05   6.223763      Boundary
Family!Family.var 1.205382e-01 1.631630e+02 4.327885e+01   3.770040      Positive
units!units.var   1.354166e+00 1.833027e+03 7.085134e+01  25.871456      Positive
R!variance        1.000000e+00 1.353621e+03 2.174923e+02   6.223763      Positive
R!row.cor         7.814065e-01 7.814065e-01 4.355976e-02  17.938724 Unconstrained
R!col.cor         9.529984e-01 9.529984e-01 9.100529e-03 104.719017 Unconstrained


Allowing for the independent residual (in addition to the spatial one) has permitted higher autocorrelations (particularly in columns), while the Block variance has gone very close to zero. Most of the Block variation is being captured now through the residual matrix (in the rows and columns autoregressive correlations), but we are going to keep it in the model, as it represents a restriction to the randomization process. In addition to log-likelihood (or AIC) we can get graphical descriptions of the fit by plotting the empirical semi-variogram as in plot(variogram(m3)):


## Large applications of linear mixed models

In a previous post I summarily described our options for (generalized to varying degrees) linear mixed models from a frequentist point of view: nlme, lme4 and ASReml-R, followed by a quick example for a split-plot experiment.

But who is really happy with a toy example? We can show a slightly more complicated example assuming that we have a simple situation in breeding: a number of half-sib trials (so we have progeny that share one parent in common), each of them established following a randomized complete block design, analyzed using a ‘family model’. That is, the response variable (dbh: tree stem diameter assessed at breast height—1.3m from ground level) can be expressed as a function of an overall mean, fixed site effects, random block effects (within each site), random family effects and a random site-family interaction. The latter provides an indication of genotype by environment interaction.

setwd('~/Dropbox/quantumforest')

# Removing all full-siblings and dropping Trial levels
# that are no longer used
trees = subset(trees, Father == 0)
trees$Trial = droplevels(trees$Trial)

# Which leaves me with 189,976 trees in 31 trials
xtabs(~ Trial, data = trees)

# First run the analysis with lme4
library(lme4)
m1 = lmer(dbh ~ Trial + (1|Trial:Block) + (1|Mother) +
          (1|Trial:Mother), data = trees)

# Now run the analysis with ASReml-R
library(asreml)
m2 = asreml(dbh ~ Trial, random = ~ Trial:Block + Mother +
            Trial:Mother, data = trees)


First I should make clear that this model has several drawbacks:

• It assumes that we have homogeneous variances for all trials, as well as identical correlation between trials. That is, that we are assuming compound symmetry, which is very rarely the case in multi-environment progeny trials.
• It also assumes that all families are unrelated to each other, which in this case I know is not true, as a quick look at the pedigree shows that several mothers are related.
• We assume only one type of family (half-sibs), which is true only because I got rid of all the full-sib families and clonal material to keep this example simple.
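On the first point, compound symmetry implies a single genetic correlation between any pair of trials (a type-B correlation); plugging in the Mother and Trial:Mother variances reported in the lme4 output later in the post:

```r
# Implied (constant) type-B genetic correlation between trials,
# using variance components from the lme4 fit shown in this post
var.mother = 32.817
var.gxe    = 26.457

rB = var.mother / (var.mother + var.gxe)
round(rB, 2)
# [1] 0.55
```

A single moderate correlation for all 31 pairs of environments is exactly the kind of oversimplification the first bullet point warns about.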

Just to make the point, a quick look at the data using ggplot2—with some jittering and alpha transparency to better display a large number of points—shows differences in both mean and variance among sites.

library(ggplot2)
ggplot(aes(x = jitter(as.numeric(Trial)), y = dbh), data = trees) +
    geom_point(colour = alpha('black', 0.05)) +
    scale_x_continuous('Trial')


Anyway, let’s keep going with this simplistic model. On my computer, a two-year-old iMac, ASReml-R takes roughly 9 seconds, while lme4 takes around 65 seconds, obtaining similar results.

# First lme4 (and rounding off the variances)
summary(m1)

Random effects:
Groups       Name        Variance Std.Dev.
Trial:Mother (Intercept)  26.457   5.1436
Mother       (Intercept)  32.817   5.7286
Trial:Block  (Intercept)  77.390   8.7972
Residual                 892.391  29.8729

# And now ASReml-R
# Round off part of the variance components table
round(summary(m2)$varcomp[,1:4], 3)

                       gamma component std.error z.ratio
Trial:Block!Trial.var  0.087    77.399     4.598  16.832
Mother!Mother.var      0.037    32.819     1.641  20.002
Trial:Mother!Trial.var 0.030    26.459     1.241  21.328
R!variance             1.000   892.389     2.958 301.672


The residual matrix of this model is a fairly large diagonal matrix (residual variance times an identity matrix). At this point we can relax this assumption, adding a bit more complexity to the model so we can highlight some syntax. Residuals in one trial should have nothing to do with residuals in another trial, which could be hundreds of kilometers away. I will then allow for a new matrix of residuals, which is the direct sum of trial-specific diagonal matrices. In ASReml-R we can do so by specifying a diagonal matrix at each trial with rcov = ~ at(Trial):units:

m2b = asreml(dbh ~ Trial, random = ~ Trial:Block + Mother +
             Trial:Mother, data = trees,
             rcov = ~ at(Trial):units)

# Again, extracting and rounding variance components
round(summary(m2b)$varcomp[,1:4], 3)
gamma component std.error z.ratio
Trial:Block!Trial.var    77.650    77.650     4.602  16.874
Mother!Mother.var        30.241    30.241     1.512  20.006
Trial:Mother!Trial.var   22.435    22.435     1.118  20.065
Trial_1!variance       1176.893  1176.893    18.798  62.606
Trial_2!variance       1093.409  1093.409    13.946  78.403
Trial_3!variance        983.924   983.924    12.061  81.581
...
Trial_29!variance      2104.867  2104.867    55.821  37.708
Trial_30!variance       520.932   520.932    16.372  31.819
Trial_31!variance       936.785   936.785    31.211  30.015


There is a big improvement of log-likelihood from m2 (-744452.1) to m2b (-738011.6) for 30 additional variances. At this stage, we can also start thinking of heterogeneous variances for blocks, with a small change to the code:

m2c = asreml(dbh ~ Trial, random = ~ at(Trial):Block + Mother +
             Trial:Mother, data = trees,
             rcov = ~ at(Trial):units)
round(summary(m2c)$varcomp[,1:4], 3)

# Which adds another 30 variances (one for each Trial)
                              gamma component std.error z.ratio
at(Trial, 1):Block!Block.var  2.473     2.473     2.268   1.091
at(Trial, 2):Block!Block.var 95.911    95.911    68.124   1.408
at(Trial, 3):Block!Block.var  1.147     1.147     1.064   1.079
...
Mother!Mother.var            30.243    30.243     1.512  20.008
Trial:Mother!Trial.var       22.428    22.428     1.118  20.062
Trial_1!variance           1176.891  1176.891    18.798  62.607
Trial_2!variance           1093.415  1093.415    13.946  78.403
Trial_3!variance            983.926   983.926    12.061  81.581
...


At this point we could do extra modeling at the site level by including any other experimental design features, allowing for spatially correlated residuals, etc. I will cover some of these issues in future posts, as this one is getting a bit too long. However, we could gain even more by expressing the problem in a multivariate fashion, where the performance of Mothers in each trial would be considered a different trait. This would push us towards some large problems, which require modeling covariance structures so we have a chance of achieving convergence during our lifetime.

Disclaimer: I am emphasizing the use of ASReml-R because 1.- I am more familiar with it than with nlme or lme4 (although I tend to use nlme for small projects), and 2.- there are plenty of examples for the other packages but not many for ASReml in the R world. I know some of the people involved in the development of ASReml-R but otherwise I have no commercial links with VSN (the guys that market ASReml and Genstat). There are other packages that I have not had the chance to investigate yet, like hglm.

## Linear mixed models in R

A substantial part of my job has little to do with statistics; nevertheless, a large proportion of the statistical side of things relates to applications of linear mixed models. The bulk of my use of mixed models relates to the analysis of experiments that have a genetic structure.
## A brief history of time

At the beginning (1992-1995) I would use SAS (first proc glm, later proc mixed), but things started getting painfully slow and limiting if one wanted to move into animal model BLUP. At that time (1995-1996), I moved to DFREML (by Karen Meyer, now replaced by WOMBAT) and AIREML (by Dave Johnson, now defunct—the program, I mean), which were designed for the analysis of animal breeding progeny trials, so it was a hassle to deal with experimental design features. At the end of 1996 (or was it the beginning of 1997?) I started playing with ASReml (programmed by Arthur Gilmour, mostly based on theoretical work by Robin Thompson and Brian Cullis). I was still using SAS for data preparation, but all my analyses went through ASReml (for which I wrote the cookbook), which was orders of magnitude faster than SAS (and could deal with much bigger problems). Around 1999, I started playing with R (prompted by a suggestion from Rod Ball), but I didn’t really use R/S+ often enough until 2003. At the end of 2005 I started using OS X and quickly realized that using a virtual machine or dual booting was not really worth it, so I dropped SAS and totally relied on R in 2009.

## Options

As for many other problems, there are several packages in R that let you deal with linear mixed models from a frequentist (REML) point of view. I will only mention nlme (Non-Linear Mixed Effects), lme4 (Linear Mixed Effects) and asreml (average spatial reml). There are also several options for Bayesian approaches, but that will be another post.

nlme is the most mature one and comes by default with any R installation. In addition to fitting hierarchical generalized linear mixed models it also allows fitting non-linear mixed models following a Gaussian distribution (my explanation wasn’t very clear; thanks to ucfagls below for pointing this out).
Its main advantages are, in my humble opinion, the ability to fit fairly complex hierarchical models using linear or non-linear approaches, a good variety of variance and correlation structures, and access to several distributions and link functions for generalized models. In my opinion, its main drawbacks are i- fitting cross-classified random factors is a pain, ii- it can be slow and may struggle with lots of data, iii- it does not deal with pedigrees by default and iv- it does not deal with multivariate data.

lme4 is a project led by Douglas Bates (one of the co-authors of nlme), looking at modernizing the code and making room for trying new ideas. On the positive side, it seems to be a bit faster than nlme and it deals a lot better with cross-classified random factors. Drawbacks: similar to nlme’s, but dropping point i- and adding that it doesn’t deal with covariance and correlation structures yet. It is possible to fit pedigrees using the pedigreemm package, but I find the combination a bit flimsy.

ASReml-R is, unsurprisingly, an R package interface to ASReml. On the plus side it i- deals well with cross-classified random effects, ii- copes very well with pedigrees, iii- can work with fairly large datasets, iv- can run multivariate analyses and v- covers a large number of covariance and correlation structures. Main drawbacks are i- limited functionality for non-Gaussian responses, ii- it does not cover non-linear models and iii- it is non-free (as in beer and speech). The last drawback is relative; it is possible to freely use asreml for academic purposes (and there is also a version for developing countries). Besides researchers, the main users of ASReml/ASReml-R are breeding companies.

All three packages are available for Windows, Linux and OS X.

## A (very) simple example

I will use a traditional dataset to show examples of the notation for the three packages: Yates’ variety and nitrogen split-plot experiment.
We can get the dataset from the MASS package, after which it is a good idea to rename the variables using meaningful names. In addition, I will follow Bill Venables’s excellent advice and create additional variables for main plot and subplots, as it is confusing to use the same factor for two purposes (e.g. variety as treatment and main plot). Incidentally, if you haven’t read Bill’s post go and read it; it is one of the best explanations I have ever seen for a split-plot analysis.

library(MASS)
data(oats)
names(oats) = c('block', 'variety', 'nitrogen', 'yield')
oats$mainplot = oats$variety
oats$subplot = oats$nitrogen

summary(oats)

 block        variety     nitrogen      yield             mainplot
 I  :12   Golden.rain:24   0.0cwt:18   Min.   : 53.0   Golden.rain:24
 II :12   Marvellous :24   0.2cwt:18   1st Qu.: 86.0   Marvellous :24
 III:12   Victory    :24   0.4cwt:18   Median :102.5   Victory    :24
 IV :12                    0.6cwt:18   Mean   :104.0
 V  :12                                3rd Qu.:121.2
 VI :12                                Max.   :174.0
   subplot
 0.0cwt:18
 0.2cwt:18
 0.4cwt:18
 0.6cwt:18


The nlme code for this analysis is fairly simple: the response on the left-hand side of the tilde, followed by the fixed effects (variety, nitrogen and their interaction). Then there is the specification of the random effects (which also uses a tilde) and the data set containing all the data. Notice that 1|block/mainplot is fitting block and mainplot within block. There is no reference to subplot as there is a single assessment for each subplot, which ends up being used at the residual level.
library(nlme)
m1.nlme = lme(yield ~ variety*nitrogen,
              random = ~ 1|block/mainplot,
              data = oats)
summary(m1.nlme)

Linear mixed-effects model fit by REML
Data: oats
     AIC      BIC    logLik
559.0285 590.4437 -264.5143

Random effects:
Formula: ~1 | block
(Intercept)
StdDev:    14.64496

Formula: ~1 | mainplot %in% block
(Intercept) Residual
StdDev:    10.29863 13.30727

Fixed effects: yield ~ variety * nitrogen
                                    Value Std.Error DF   t-value p-value
(Intercept)                      80.00000  9.106958 45  8.784492  0.0000
varietyMarvellous                 6.66667  9.715028 10  0.686222  0.5082
varietyVictory                   -8.50000  9.715028 10 -0.874933  0.4021
nitrogen0.2cwt                   18.50000  7.682957 45  2.407927  0.0202
nitrogen0.4cwt                   34.66667  7.682957 45  4.512152  0.0000
nitrogen0.6cwt                   44.83333  7.682957 45  5.835427  0.0000
varietyMarvellous:nitrogen0.2cwt  3.33333 10.865342 45  0.306786  0.7604
varietyVictory:nitrogen0.2cwt    -0.33333 10.865342 45 -0.030679  0.9757
varietyMarvellous:nitrogen0.4cwt -4.16667 10.865342 45 -0.383482  0.7032
varietyVictory:nitrogen0.4cwt     4.66667 10.865342 45  0.429500  0.6696
varietyMarvellous:nitrogen0.6cwt -4.66667 10.865342 45 -0.429500  0.6696
varietyVictory:nitrogen0.6cwt     2.16667 10.865342 45  0.199411  0.8428

anova(m1.nlme)
                 numDF denDF   F-value p-value
(Intercept)          1    45 245.14299  <.0001
variety              2    10   1.48534  0.2724
nitrogen             3    45  37.68562  <.0001
variety:nitrogen     6    45   0.30282  0.9322


The syntax for lme4 is not that dissimilar, with random effects specified using a (1|something here) syntax. One difference between the two packages is that nlme reports standard deviations instead of variances for the random effects.

library(lme4)
m1.lme4 = lmer(yield ~ variety*nitrogen + (1|block/mainplot),
               data = oats)
summary(m1.lme4)

Linear mixed model fit by REML
Formula: yield ~ variety * nitrogen + (1 | block/mainplot)
Data: oats
AIC   BIC  logLik deviance REMLdev
559 593.2  -264.5    595.9     529

Random effects:
Groups         Name        Variance Std.Dev.
mainplot:block (Intercept) 106.06   10.299
block          (Intercept) 214.48   14.645
Residual                   177.08   13.307
Number of obs: 72, groups: mainplot:block, 18; block, 6

Fixed effects:
                                 Estimate Std. Error t value
(Intercept)                       80.0000     9.1064   8.785
varietyMarvellous                  6.6667     9.7150   0.686
varietyVictory                    -8.5000     9.7150  -0.875
nitrogen0.2cwt                    18.5000     7.6830   2.408
nitrogen0.4cwt                    34.6667     7.6830   4.512
nitrogen0.6cwt                    44.8333     7.6830   5.835
varietyMarvellous:nitrogen0.2cwt   3.3333    10.8653   0.307
varietyVictory:nitrogen0.2cwt     -0.3333    10.8653  -0.031
varietyMarvellous:nitrogen0.4cwt  -4.1667    10.8653  -0.383
varietyVictory:nitrogen0.4cwt      4.6667    10.8653   0.430
varietyMarvellous:nitrogen0.6cwt  -4.6667    10.8653  -0.430
varietyVictory:nitrogen0.6cwt      2.1667    10.8653   0.199

anova(m1.lme4)
Analysis of Variance Table
                 Df  Sum Sq Mean Sq F value
variety           2   526.1   263.0  1.4853
nitrogen          3 20020.5  6673.5 37.6856
variety:nitrogen  6   321.7    53.6  0.3028


For this type of problem the notation for asreml is also very similar, particularly when compared to nlme.

library(asreml)
m1.asreml = asreml(yield ~ variety*nitrogen,
                   random = ~ block/mainplot,
                   data = oats)
summary(m1.asreml)$varcomp

gamma component std.error  z.ratio constraint
block!block.var          1.2111647  214.4771 168.83404 1.270343   Positive
block:mainplot!block.var 0.5989373  106.0618  67.87553 1.562593   Positive
R!variance               1.0000000  177.0833  37.33244 4.743416   Positive

wald(m1.asreml, denDF = 'algebraic')

$Wald
                 Df denDF    F.inc           Pr
(Intercept)       1     5 245.1000 1.931825e-05
variety           2    10   1.4850 2.723869e-01
nitrogen          3    45  37.6900 2.457710e-12
variety:nitrogen  6    45   0.3028 9.321988e-01

$stratumVariances
               df  Variance block block:mainplot R!variance
block           5 3175.0556    12              4          1
block:mainplot 10  601.3306     0              4          1
R!variance     45  177.0833     0              0          1


In this simple example one pretty much gets the same results, independently of the package used (which is certainly comforting). I will soon cover another simple model, but with a much larger dataset, to highlight some performance differences between the packages.