Since I only have self-taught knowledge of the subject (I work in another field), I was not sure whether my intuitions on reading his paper were right, but your post seems to confirm them. I had the feeling that the same criticisms Taleb raises can be levelled against modern “classical” breeding as well (why mutation breeding would be considered safer than GMOs escapes me, for example). He sets a bar so high that it can’t be reached in practice, but he applies it only to GMOs, not to conventional breeding. I also noted that he does not really back up his assertions about the nature and risks of GMOs with sources from the field. Perhaps that makes sense if you hold the whole field in contempt, which Taleb seems to do.

Question: Are the calculated breeding values estimates of the u.g effects?

https://coffeehouse.dataone.org/2014/04/09/abandon-all-hope-ye-who-enter-dates-in-excel/

Thank you very much for the answer.

Excuse my ignorance of the subject, but I am just starting to study Bayesian inference. I’d like to ask a question:

I want to define prior information, with a normal distribution for the data, in the MCMCglmm package. So, what values should be assigned to “V” and “n” in the argument list(V = ..., n = ...)?

Thank you very much in advance!!

Luis (with s) here. While in this post I discuss the issue of overlapping matrices in the context of a diallel cross, I also point out that “With the advent of animal model BLUP, it was possible to fit something like y = mu + blocks + individual (using a pedigree) + cross + error”. Fitting separate mums and dads predates the use of animal models; it is old-fashioned (we’re talking 30-40 years old) and much more complicated. I have code for the analysis of diallels in asreml here. In the case of MCMCglmm there are a few differences, starting with the pedigree term, which has to be called animal. You could set up something like:

prior = list(R = list(V = 0.007, n = 0),
             G = list(G1 = list(V = 0.002, n = 0),
                      G2 = list(V = 0.001, n = 0),
                      G3 = list(V = 0.001, n = 0)))

# I use high thinning to avoid autocorrelation with the animal model
nvel.bayes <- MCMCglmm(nvel ~ 1,
                       random = ~ animal + Block + Family,
                       family = 'gaussian',
                       data = ncs,
                       pedigree = ped,
                       prior = prior,
                       verbose = FALSE,
                       pr = TRUE,
                       burnin = 10000,
                       nitt = 200000,
                       thin = 200)

ped is a data frame with the pedigree, with columns animal, Mum, Dad. This should fit the additive and family effects in a randomized complete block design. Reciprocal effects are homework.
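For reference, a minimal toy pedigree with that structure might look like the sketch below (the individual IDs are made up for illustration; only the column layout matters). Founders appear first with unknown parents, and every parent listed in Mum or Dad must also appear as a row in the animal column:

```r
# Hypothetical toy pedigree; column names follow what the reply above
# describes: animal, Mum, Dad. Founders have NA parents and must be
# listed before their offspring.
founders <- data.frame(animal = c("m1", "m2", "d1", "d2"),
                       Mum = NA, Dad = NA,
                       stringsAsFactors = FALSE)
offspring <- data.frame(animal = c("o1", "o2", "o3"),
                        Mum = c("m1", "m1", "m2"),
                        Dad = c("d1", "d2", "d2"),
                        stringsAsFactors = FALSE)
ped <- rbind(founders, offspring)
ped
```

A data frame like this is what gets passed to the pedigree argument of MCMCglmm, which uses it to build the additive relationship matrix for the animal term.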
