## Split-plot 1: What does a linear mixed model look like?

I like statistics and I struggle with statistics. Often I get frustrated when I don’t understand something, and I really struggled to make sense of Kruschke’s Bayesian analysis of a split-plot, particularly because it ‘didn’t look like’ a split-plot to me.

Additionally, I have written a few posts discussing linear mixed models, using several different packages to fit them. At no point have I shown the calculations going on behind the scenes. So I decided to combine my frustration and an explanation to myself in a couple of posts: this is number one, and the follow-up is Split-plot 2: let’s throw in some spatial effects.

## What do linear mixed models look like?

Linear mixed models, models that combine so-called fixed and random effects, are often represented using matrix notation as:

$$\boldsymbol{y} = \boldsymbol{X b} + \boldsymbol{Z a} + \boldsymbol{e}$$

where $$\boldsymbol{y}$$ represents the response vector; $$\boldsymbol{X}$$ and $$\boldsymbol{Z}$$ are incidence matrices relating the response to the fixed effects ($$\boldsymbol{b}$$, e.g. overall mean, treatment effects, etc.) and to the random effects ($$\boldsymbol{a}$$, e.g. family effects, additive genetic effects and a host of experimental design effects like blocks, rows, columns, plots, etc.); and $$\boldsymbol{e}$$ is the vector of random residuals.

The previous model equation still requires some assumptions about the distribution of the random effects. A very common set of assumptions is to say that the residuals are iid (independently and identically distributed) normal (so $$\boldsymbol{R} = \sigma^2_e \boldsymbol{I}$$) and that the random effects are independent of each other, so $$\boldsymbol{G} = \boldsymbol{B} \bigoplus \boldsymbol{M}$$ is a direct sum of the variance matrices of blocks ($$\boldsymbol{B} = \sigma^2_B \boldsymbol{I}$$) and main plots ($$\boldsymbol{M} = \sigma^2_M \boldsymbol{I}$$).
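As a tiny illustration (the dimensions and variance values below are made up, purely to show the structure), building $$\boldsymbol{R}$$ and the direct sum $$\boldsymbol{G}$$ in R could look like this:

```r
# Toy dimensions and made-up variance components, only to show the structure
n.obs    <- 12   # hypothetical number of observations
n.blocks <- 2    # hypothetical number of blocks
n.plots  <- 6    # hypothetical number of main plots
var.e <- 1.0     # residual variance (made up)
var.B <- 0.5     # block variance (made up)
var.M <- 0.3     # main-plot variance (made up)

# iid residuals: R = sigma2_e * I
R <- var.e * diag(n.obs)

# Direct sum of B and M: a block-diagonal matrix with B and M on the diagonal
B <- var.B * diag(n.blocks)
M <- var.M * diag(n.plots)
G <- rbind(cbind(B, matrix(0, n.blocks, n.plots)),
           cbind(matrix(0, n.plots, n.blocks), M))
G
```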

An interesting feature of this matrix formulation is that we can express all sorts of models by choosing different assumptions for our covariance matrices (using different covariance structures). Do you have longitudinal data (units assessed repeatedly over time)? Is there spatial correlation? Account for this in the $$\boldsymbol{R}$$ matrix. Are the random effects correlated (e.g. maternal and additive genetic effects in genetics)? Account for this in the $$\boldsymbol{G}$$ matrix. Multivariate response? Deal with unstructured $$\boldsymbol{R}$$ and $$\boldsymbol{G}$$, or model the correlation structure using different constraints (and then you’ll need asreml).
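To make the longitudinal case a bit more concrete, here is a hedged sketch using the Ovary data set that ships with nlme (repeated follicle counts per mare). The fixed part is just a simple periodic trend; the point is only where the residual correlation structure, which modifies the assumptions on $$\boldsymbol{R}$$, gets declared:

```r
library(nlme)

# Ovary comes with nlme: repeated follicle counts per mare over time.
# corAR1() adds serial (AR(1)) correlation between residuals within each mare,
# i.e. it changes the assumptions encoded in the R matrix.
m.ar1 <- lme(follicles ~ sin(2 * pi * Time) + cos(2 * pi * Time),
             random = ~ 1 | Mare,
             correlation = corAR1(),
             data = Ovary)
summary(m.ar1)
```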

By the way, the history of linear mixed models is strongly related to applications of quantitative genetics for the prediction of breeding values, particularly in dairy cattle. Charles Henderson† developed what is now called Henderson’s Mixed Model Equations to simultaneously estimate fixed effects and predict random genetic effects:

$$\left [ \begin{array}{cc} \boldsymbol{X}' \boldsymbol{R}^{-1} \boldsymbol{X} & \boldsymbol{X}' \boldsymbol{R}^{-1} \boldsymbol{Z} \\ \boldsymbol{Z}' \boldsymbol{R}^{-1} \boldsymbol{X} & \boldsymbol{Z}' \boldsymbol{R}^{-1} \boldsymbol{Z} + \boldsymbol{G}^{-1} \end{array} \right ] \left [ \begin{array}{c} \hat{\boldsymbol{b}} \\ \hat{\boldsymbol{a}} \end{array} \right ] = \left [ \begin{array}{c} \boldsymbol{X}' \boldsymbol{R}^{-1} \boldsymbol{y} \\ \boldsymbol{Z}' \boldsymbol{R}^{-1} \boldsymbol{y} \end{array} \right ]$$

The big matrix on the left-hand side of this equation is often called the $$\boldsymbol{C}$$ matrix. You could be thinking ‘What does this all mean?’ It is easier to see what is going on with a small example, but rather than starting with, say, a complete block design, we’ll go for a split-plot to start tackling my annoyance with the aforementioned blog post.

## Old school: physical split-plots

Given that I’m an unsophisticated forester and that I’d like to use data available to anyone, I’ll rely on an agricultural example (so plots are actually physical plots in the field) that goes back to Frank Yates. There are two factors: oats variety, with three levels, and nitrogen fertilization, with four levels. Yates, F. (1935) Complex experiments, Journal of the Royal Statistical Society Suppl. 2, 181–247 (behind a paywall).

Now it is time to roll up our sleeves and use some code, getting the data and fitting the same model using nlme (m1) and asreml (m2), just for the fun of it. Reassuringly, nlme and asreml produce exactly the same results.

We will use the oats data set that comes with MASS, although there is also an Oats data set in nlme and another version in the asreml package. (By the way, Bill Venables wrote a very good explanation of a ‘traditional’ ANOVA analysis for a split-plot.)
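A minimal sketch of getting the data and fitting m1 (and, commented out, m2) could look like the following; renaming the columns and creating mainplot/subplot copies of the treatment factors is just for readability:

```r
library(MASS)   # provides the oats data set (columns B, V, N, Y)
library(nlme)

data(oats)
names(oats) <- c('block', 'variety', 'nitrogen', 'yield')
oats$mainplot <- oats$variety    # main plots carry the variety treatment
oats$subplot  <- oats$nitrogen   # subplots carry the fertilization treatment

# m1: split-plot as a mixed model, with main plots nested within blocks
m1 <- lme(yield ~ variety * nitrogen,
          random = ~ 1 | block/mainplot,
          data = oats)
summary(m1)

# m2: the same model in asreml (a commercial package), shown only as a sketch
# library(asreml)
# m2 <- asreml(yield ~ variety * nitrogen,
#              random = ~ block/mainplot,
#              data = oats)
# summary(m2)$varcomp
```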

Now we can move on to implementing the Mixed Model Equations, where probably the only gotcha is the definition of the $$\boldsymbol{Z}$$ matrix (the incidence matrix for the random effects): both nlme and asreml use ‘number of levels of the factor’ columns for both the main effects and the interactions, which requires using the contrasts.arg argument of model.matrix() to obtain full indicator coding instead of the default contrasts.
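A sketch of those calculations could go like this; the variance components are plugged in from the REML fit above, rounded to two decimals, rather than re-estimated (check them against summary(m1)):

```r
# Variance components, rounded from the REML fit (compare with summary(m1))
varB <- 214.48   # block variance
varM <- 106.06   # block:mainplot (whole-plot) variance
varR <- 177.08   # residual variance

n <- nrow(oats)
y <- matrix(oats$yield, ncol = 1)

# Fixed effects: default treatment contrasts are fine here
X <- model.matrix(~ variety * nitrogen, data = oats)

# Random effects: one column per block and per block:mainplot combination,
# hence the full-indicator ('contrasts = FALSE') coding for both factors
Z <- model.matrix(~ block/mainplot - 1, data = oats,
                  contrasts.arg = list(
                    block    = contrasts(oats$block, contrasts = FALSE),
                    mainplot = contrasts(oats$mainplot, contrasts = FALSE)))

# Covariance matrices: R for the residuals, G as the direct sum of the
# block and main-plot variance matrices (its order matches ncol(Z))
R <- varR * diag(n)
G <- diag(c(rep(varB, nlevels(oats$block)),
            rep(varM, nlevels(oats$block) * nlevels(oats$mainplot))))

Rinv <- solve(R)
Ginv <- solve(G)

# Henderson's Mixed Model Equations: C %*% solution = RHS
C <- rbind(cbind(t(X) %*% Rinv %*% X, t(X) %*% Rinv %*% Z),
           cbind(t(Z) %*% Rinv %*% X, t(Z) %*% Rinv %*% Z + Ginv))
RHS <- rbind(t(X) %*% Rinv %*% y, t(Z) %*% Rinv %*% y)

solution <- solve(C, RHS)
solution   # fixed-effect estimates followed by predicted block and main-plot effects
```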

Not surprisingly, we get the same results, with the caveat that we take the variance components from the previous analyses as given, so we can avoid implementing restricted maximum likelihood estimation as well. By the way, given that $$\boldsymbol{R} = \sigma^2_e \boldsymbol{I}$$, $$\boldsymbol{R}^{-1}$$ enters every term as the same scalar, so we can multiply the equations through by $$\sigma^2_e$$, leaving terms like $$\boldsymbol{X}' \boldsymbol{X}$$, i.e. without $$\boldsymbol{R}^{-1}$$ (and with $$\boldsymbol{G}^{-1}$$ scaled by $$\sigma^2_e$$), making for simpler calculations. In fact, if you drop the $$\boldsymbol{R}^{-1}$$ it is easier to see what is going on in the different components of the $$\boldsymbol{C}$$ matrix. For example, print $$\boldsymbol{X}' \boldsymbol{X}$$ and you’ll get the number of observations for the overall mean and for each of the levels of the fixed effect factors. Give it a try with the other submatrices too!
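Concretely, with the $$\boldsymbol{X}$$ built above:

```r
# With R^{-1} factored out, the top-left block of C is just X'X:
# its diagonal shows the number of observations behind the intercept
# (overall mean) and behind each fixed-effect level
t(X) %*% X
```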

I will leave it here and come back to this problem as soon as I can.

† Incidentally, a lot of the theoretical development was supported by Shayle Searle (a Kiwi statistician and Henderson’s colleague at Cornell University).