Quantum Forest

a shoebox for data analysis

Multivariate linear mixed models: livin’ la vida loca

I swear there was a point in writing an introduction to covariance structures: now we can start joining all sorts of analyses using very similar notation. In a previous post I described simple (even simplistic) models for a single response variable (or ‘trait’ in quantitative geneticist speak). The R code in three packages (asreml-R, lme4 and nlme) was quite similar and we were happy-clappy with the consistency of results across packages. The curse of the analyst/statistician/guy who dabbles in analyses is the idea that we can always fit a nicer model and—as both Bob the Builder and Obama like to say—yes, we can.

Let’s assume that we have a single trial where we have assessed our experimental units (trees in my case) for more than one response variable; for example, we measured the trees for acoustic velocity (related to stiffness) and basic density. If you don’t like trees, think of having height and body weight for people, or whatever takes your fancy. Anyway, we have our traditional randomized complete block design with random blocks, random families and that’s it. Rhetorical question: can we simultaneously run an analysis for all responses?
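As a reminder of where we left off, a univariate fit of this model in asreml-R looks something like the sketch below. The data frame name (wood) and variable names (density, Block, Family) are assumptions for illustration; substitute your own:

```r
library(asreml)

# RCBD with random blocks and random families, single trait
# (wood, density, Block and Family are hypothetical names)
m1 <- asreml(density ~ 1,
             random = ~ Block + Family,
             data = wood)
summary(m1)$varcomp
```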

Up to this point we are using the same old code, and remember that we could fit the same model using lme4, so what’s the point of this post? Well, we can now move on to fit a multivariate model, where we have two responses at the same time (incidentally, the two response variables show a correlation of ~0.2).

We can first refit the model as a multivariate analysis, assuming block-diagonal covariance matrices. The notation now includes:

  • The use of cbind() to specify the response matrix,
  • the reserved keyword trait, which creates a vector to hold the overall mean for each response,
  • at(trait), which asks ASReml-R to fit an effect (e.g. Block) at each trait, by default using a diagonal covariance matrix (σ²I). We could also use diag(trait) for the same effect,
  • rcov = ~ units:diag(trait) specifies a different diagonal matrix for the residuals (units) of each trait.
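Putting the points above together, a bivariate fit with block-diagonal covariance matrices could be sketched as follows (again, wood and the variable names are hypothetical):

```r
# Bivariate analysis assuming diagonal covariance matrices
# for Block, Family and residuals (i.e. uncorrelated traits)
m3 <- asreml(cbind(density, velocity) ~ trait,
             random = ~ at(trait):Block + at(trait):Family,
             rcov = ~ units:diag(trait),
             data = wood)
summary(m3)$varcomp
```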

Initially, you may not notice that the results are identical, as there is a distracting change to scientific notation for the variance components. A closer inspection shows that we have obtained the same results for both traits, but did we gain anything? Not really, as we took the defaults for covariance components (direct sum of diagonal matrices, which assumes uncorrelated traits); however, we can do better and actually tell ASReml-R to fit the correlation between traits for block and family effects as well as for residuals.
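A sketch of that fuller model replaces the diagonal structures with unstructured (us) 2×2 covariance matrices for the block, family and residual terms (names hypothetical, as before):

```r
# Unstructured covariance matrices: now we also estimate
# the between-trait covariance for each random term
m4 <- asreml(cbind(density, velocity) ~ trait,
             random = ~ us(trait):Block + us(trait):Family,
             rcov = ~ units:us(trait),
             data = wood)
summary(m4)$varcomp
```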

Moving from model 3 to model 4 we are adding three covariance components (one each for family, block and residuals) and improving the log-likelihood by 8.5. A quick look at the output from m4 indicates that most of that gain comes from allowing for the covariance of residuals between the two traits, as the covariances for family and, particularly, block are more modest:

An alternative parameterization for model 4 is to use a correlation matrix with heterogeneous variances (corgh(trait) instead of us(trait)), which models the correlation and the variances instead of the covariance and the variances. This parameterization sometimes seems to be more numerically stable.
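The corgh version would look like this sketch (same hypothetical names as above):

```r
# Correlation + heterogeneous variances instead of covariances
m4b <- asreml(cbind(density, velocity) ~ trait,
              random = ~ corgh(trait):Block + corgh(trait):Family,
              rcov = ~ units:us(trait),
              data = wood)
summary(m4b)$varcomp
```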

As an aside, we can estimate the between-trait correlation for Blocks (probably not that interesting) and the one for Family (much more interesting, as it is an estimate of the genetic correlation between traits: 1.594152e-01 / sqrt(8.248391e+01 * 2.264225e-03) = 0.37).
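In code, pulling the Family components out of the variance component table, that calculation is just:

```r
# Family covariance and variances, taken from summary(m4)$varcomp
cov.fam  <- 1.594152e-01
var.fam1 <- 8.248391e+01
var.fam2 <- 2.264225e-03
cov.fam / sqrt(var.fam1 * var.fam2)  # ~0.37
```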


  1. Dear Luis,

    This post is very helpful. I tried to analyze my data using your method but I have a problem. My experiment was also an RCBD with 2 blocks, and in each block 159 wheat RILs were randomly planted. Heading date (HD), plant height (HT), grain yield (GY), etc. were collected. I want to calculate the genetic correlations among these traits. My model is the same as yours:


    But I got an error:
    “Warning message:
    Abnormal termination
    Singularity in Average Information Matrix
    Results may be erroneous”

    Could you help me solve this problem? Thank you very much!


    • Luis

      2012/02/06 at 5:08 pm

      Hi Junli,

      Sorry for the delay, but I haven’t checked this post in a while. I think that either you have poor starting values for the variance components or your data structure leads to some variance components being negative (see this example for the latter).

      If it is a problem of starting values, try first running all three bivariate analyses (HD and HT, HD and GY, HT and GY) and use those results as starting values. Have a look at the asreml-R manual to provide starting values using the init option, as in us(trait, init=c(stv1, stv2, ..., stv6)).
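      For example, with three traits us(trait) takes six starting values (the lower triangle of the 3×3 matrix); the numbers below are purely illustrative placeholders, and the data frame name wheat is hypothetical:

      ```r
      # Hypothetical starting values for a 3-trait us() structure
      m <- asreml(cbind(HD, HT, GY) ~ trait,
                  random = ~ us(trait, init = c(1,
                                                0.1, 1,
                                                0.1, 0.1, 1)):Block +
                             us(trait, init = c(1,
                                                0.1, 1,
                                                0.1, 0.1, 1)):Family,
                  rcov = ~ units:us(trait),
                  data = wheat)
      ```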

      I hope this helps,

