I can’t exactly remember how I arrived at Making sense of random effects, a good post in the Distributed Ecology blog (go over there and read it). My working theory is that I follow Scott Chamberlain (@recology_), who follows Karthik Ram (@_inundata), who mentioned Edmund Hart’s (@DistribEcology) post. I liked the discussion, but I thought one could add to the explanation to make it a bit clearer.

The idea is that there are 9 individuals, assessed five times each—once under each of five different levels for a treatment—so we need to include individual as a random effect; after all, it is our experimental unit. The code to generate the data, plot it and fit the model is available in the post, but I redid data generation to make it a bit more R-ish and, dare I say, a tad more elegant:
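My reworked simulation isn’t reproduced here, but a minimal base-R sketch of the setup might look like this (the names `idf`, `ind`, `levs` and `size` match the model call below; the level means and standard deviations are illustrative guesses, not the post’s actual values):

```r
set.seed(42)

n.ind  <- 9  # individuals, our experimental units
n.levs <- 5  # treatment levels, one observation per individual per level

# Balanced design: every individual appears once at every level
ind  <- factor(rep(1:n.ind, times = n.levs))
levs <- factor(rep(paste0("i", 1:n.levs), each = n.ind))

# Simulate sizes: a mean per treatment level, a random intercept per
# individual and a small residual (values chosen for illustration only)
lev.means <- c(6, 16, 2, 10, 13)
ind.eff   <- rnorm(n.ind, mean = 0, sd = 2.7)

size <- lev.means[as.integer(levs)] + ind.eff[as.integer(ind)] +
        rnorm(n.ind * n.levs, mean = 0, sd = 0.3)

idf <- data.frame(ind = ind, levs = levs, size = size)
str(idf)  # 'data.frame': 45 obs. of 3 variables
```

Using `rep()` with `times` and `each` builds the balanced crossing of individuals and levels in two lines, which is most of what makes the version above feel more R-ish than nested loops.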

library(lme4)

# Fit linear mixed model (avoid an overall mean with -1)
m3 <- lmer(size ~ levs - 1 + (1|ind), data = idf)
summary(m3)

# Skipping a few things
#   AIC   BIC logLik deviance REMLdev
# 93.84 106.5 -39.92    72.16   79.84

# Random effects:
#  Groups   Name        Variance Std.Dev.
#  ind      (Intercept) 7.14676  2.67334
#  Residual             0.10123  0.31816
# Number of obs: 45, groups: ind, 9

# Show fixed effects
fixef(m3)
#   levsi1    levsi2    levsi3    levsi4    levsi5
# 5.824753 15.896714  2.029902  9.969462 12.870952

What we can do to better understand what’s going on is to ‘adjust’ the size observations by subtracting the estimated fixed effects, and then plot those adjusted values to see what we are modeling with the random effects.
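As a self-contained sketch of that adjustment (simulating a toy version of the data, and using the cell means from `lm()` as a stand-in for `fixef(m3)`, which they equal here only because the design is balanced):

```r
set.seed(1)

# Toy version of the design: 9 individuals by 5 treatment levels
ind  <- factor(rep(1:9, times = 5))
levs <- factor(rep(1:5, each = 9))
size <- c(6, 16, 2, 10, 13)[levs] + rnorm(9, sd = 2.7)[ind] +
        rnorm(45, sd = 0.3)

# With a balanced design, the per-level fixed effects are just the level
# means, so an lm() without intercept stands in for fixef(m3)
fe  <- coef(lm(size ~ levs - 1))
adj <- size - fe[as.integer(levs)]

# The adjusted values for each individual cluster around that
# individual's intercept; the variance of the per-individual means
# approximates the 'ind' variance component
ind.means <- tapply(adj, ind, mean)
plot(as.integer(ind), adj, xlab = "Individual", ylab = "Adjusted size")
points(1:9, ind.means, pch = 19)
var(ind.means)
```

By construction the adjusted values average zero within every treatment level, so all that is left to see in the plot is the spread between individuals plus residual noise.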

The random effects for individual, or better, the individual-level intercepts, are pretty much the lines going through the middle of the points for each individual. Furthermore, the variance for ind is the variance of the random intercepts around the ‘adjusted’ values, which can be seen by comparing the variance of the random effects above (~7.15) with the result below (~7.13).

var(unlist(ranef(m3)))
# [1] 7.12707

Distributed Ecology then goes on to randomize the individuals within treatment, which breaks the link between repeated measurements of the same individual: the individual deviations around the adjusted means average out to pretty much zero, driving that variance component towards zero. I hope this explanation complements Edmund Hart’s nice post.
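Sticking with the same base-R toy data, that within-treatment shuffle can be sketched with `ave()` (a hypothetical reconstruction, not Edmund’s actual code):

```r
set.seed(1)

# Same toy design as before: 9 individuals by 5 treatment levels
ind  <- factor(rep(1:9, times = 5))
levs <- factor(rep(1:5, each = 9))
size <- c(6, 16, 2, 10, 13)[levs] + rnorm(9, sd = 2.7)[ind] +
        rnorm(45, sd = 0.3)

# Permute the observations within each treatment level, breaking the
# link between repeated measurements of the same individual
shuffled <- ave(size, levs, FUN = sample)

# The level means are untouched by the shuffle...
fe  <- tapply(shuffled, levs, mean)
adj <- shuffled - fe[as.integer(levs)]

# ...but the per-individual means of the adjusted values now differ by
# no more than residual noise predicts, so refitting the mixed model
# pushes the 'ind' variance component towards zero
var(tapply(adj, ind, mean))
```

Because `sample()` only permutes within each level, the fixed-effect estimates survive the shuffle intact; only the between-individual structure is destroyed.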

P.S. If you happen to be in the Southern part of South America next week, I’ll be here and we can have a chat (and a beer, of course).