While disposing of some garden waste in Christchurch’s EcoDrop—which processes recycling, garden waste and rubbish—I captured the sound of a bulldozer pushing rubbish with a background of people and seagulls.
I am a fan of Tim Prebble’s The Music of Sound, where he deals with field recording and the role of sound in general. Tim is running a field recording competition which requires ‘a cardboard box, a microphone, a recorder and you. That’s it! No processing allowed, submit a single take’.
I decided to go for something a bit creepy given the time of the year: getting close to Día de los Muertos. My setup was fairly simple:
- One (empty) box of Weet-Bix apricot bites.
- One JrF c-series contact microphone.
- One impedance transforming thingy.
- Two pieces of masking tape.
- One Sound Devices Mixpre-D.
- One USB cable to record on a MacBook Air using Audacity.
- Two hundred (200) mealworm beetles (Tenebrio sp.).
The setup looked like this, except that in the recording the box was leaning while the insects went in:
I was pleased with the final result, although I would have liked a fuller, boxier sound. I’ll have to train the beetles to behave better in the box.
Man flu kept me at home today, so I decided to do something ‘useful’ and go for a linkathon:
- Ed Yong discusses the effect of subject expectations in psychology experiments in Nice Results, But What Did You Expect? At the beginning of the piece there are links to another article on The placebo phenomenon, and another one on The placebo defect.
- A googleVis tutorial to create Hans Rosling-type graphs from R.
- Google’s Python Class is material for an intensive 2-day course on Python.
- An opinion piece on Calculus and statistics by Daniel Kaplan, on teaching a different version of your typical introductory calculus course, so it is useful for statistics. He goes as far as teaching calculus using R. There is more information in Project MOSAIC.
- Nice graphs on what happened to Asiana Airlines flight 214. I didn’t know there was so much available data for a specific flight.
- Biased and Inefficient, Thomas Lumley’s personal statistics blog (he insists that posting 75% of Statschat is not enough to qualify as personal). You may know Thomas from the survey package (or a few others).
- If you are a postgrad student in New Zealand you can apply for a NeSI (New Zealand eScience Infrastructure) postgraduate allocation to access high performance computing facilities.
- My previous post (the USA versus Western Europe comparison of GM corn) was the first time that I received more traffic from Facebook than from R-bloggers. Five hundred readers in total.
Sometimes people are truthful and cruel. Here Gappy on a mission goes for the jugular:
— gappy on a mission (@gappy3000) July 8, 2013
Over and out.
A few days ago I came across Jack Heinemann and collaborators’ article (Sustainability and innovation in staple crop production in the US Midwest, Open Access) comparing the agricultural sectors of the USA and Western Europe‡. While the article is titled around the word sustainability, the main comparison stems from the use of genetically modified crops in the USA versus their absence in Western Europe.
I was curious about part of the results and discussion which, in a nutshell, suggest that “GM cropping systems have not contributed to yield gains, are not necessary for yield gains, and appear to be eroding yields compared to the equally modern agroecosystem of Western Europe”. The authors relied on several crops for the comparison: maize/corn, rapeseed/canola (see P.S.6), soybean and cotton. However, I am going to focus on a single one (corn) for two reasons: 1. I can’t afford a lot of time for blog posts when I should be preparing lectures and 2. I like eating corn.
When the authors of the paper tackled corn the comparison was between the USA and Western Europe, using the United Nations definition of Western Europe (i.e. Austria, Belgium, France, Germany, Liechtenstein, Luxembourg, Monaco, Netherlands, Switzerland). Some large European corn producers like Italy are not there because of the narrow definition of Western.
I struggled with the comparison used by the authors because, in my opinion, there are potentially so many confounded effects (different industry structures, weather, varieties, etc.) that it can’t provide the proper counterfactual for GM versus non-GM crops. Anyway, I decided to have a look at the same data to see if I would reach the same conclusions. The article provides a good description of where the data came from, as well as how the analyses were performed. Small details to match exactly the results were fairly easy to figure out. I downloaded the FAO corn data (3.7 MB csv file) for all countries (so I can reuse the code and data later for lectures and assignments). I then repeated the plots using the following code:
# Default directory
setwd('~/Dropbox/quantumforest')

# Required packages
require(ggplot2)
require(scales) # for the comma label formatter

# Reading FAO corn data
FAOcorn = read.csv('FAOcorn.csv')

# Extracting Area
FAOarea = subset(FAOcorn, Element == 'Area Harvested',
                 select = c('Country', 'Year', 'Value'))
names(FAOarea)[3] = 'Area'

# and production
FAOprod = subset(FAOcorn, Element == 'Production',
                 select = c('Country', 'Year', 'Value'))
names(FAOprod)[3] = 'Production'

# to calculate yield in hectograms/ha
FAOarea = merge(FAOarea, FAOprod, by = c('Country', 'Year'))
FAOarea$Yield = with(FAOarea, Production/Area*10000)

# Subsetting only the countries of interest (and years to match paper)
FAOarticle = subset(FAOarea, Country == 'United States of America' |
                    Country == 'Western Europe')

# Plot with regression lines
ggplot(FAOarticle, aes(x = Year, y = Yield, color = Country)) +
  geom_point() + stat_smooth(method = lm, fullrange = TRUE, alpha = 0.1) +
  scale_y_continuous('Yield [hectograms/ha]', limits = c(0, 100000),
                     labels = comma) +
  theme(legend.position = 'top')
I could obtain pretty much the same regression model equations as in the article by expressing the years as deviation from 1960 as in:
# Expressing year as a deviation from 1960, so results
# match paper
FAOarticle$NewYear = with(FAOarticle, Year - 1960)

usa.lm = lm(Yield ~ NewYear, data = FAOarticle,
            subset = Country == 'United States of America')
summary(usa.lm)

#Call:
#lm(formula = Yield ~ NewYear, data = FAOarticle, subset = Country ==
#    "United States of America")
#
#Residuals:
#     Min       1Q   Median       3Q      Max
#-18435.4  -1958.3    338.3   3663.8  10311.0
#
#Coefficients:
#            Estimate Std. Error t value Pr(>|t|)
#(Intercept) 38677.34    1736.92   22.27   <2e-16 ***
#NewYear      1173.83      59.28   19.80   <2e-16 ***
#---
#Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 6049 on 48 degrees of freedom
#Multiple R-squared: 0.8909,  Adjusted R-squared: 0.8887

weu.lm = lm(Yield ~ NewYear, data = FAOarticle,
            subset = Country == 'Western Europe')
summary(weu.lm)

#Call:
#lm(formula = Yield ~ NewYear, data = FAOarticle, subset = Country ==
#    "Western Europe")
#
#Residuals:
#     Min       1Q   Median       3Q      Max
#-14726.6  -3205.8    346.4   4000.6  10289.5
#
#Coefficients:
#            Estimate Std. Error t value Pr(>|t|)
#(Intercept) 31510.14    1665.90   18.91   <2e-16 ***
#NewYear      1344.42      56.86   23.65   <2e-16 ***
#---
#Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 5802 on 48 degrees of freedom
#Multiple R-squared: 0.9209,  Adjusted R-squared: 0.9193
#F-statistic: 559.1 on 1 and 48 DF,  p-value: < 2.2e-16
Heinemann and collaborators then point out the following:
…the slope in yield increase by year is steeper in W. Europe (y = 1344.2x + 31512, R² = 0.92084) than the United States (y = 1173.8x + 38677, R² = 0.89093) from 1961 to 2010 (Figure 1). This shows that in recent years W. Europe has had similar and even slightly higher yields than the United States despite the latter’s use of GM varieties.
However, that interpretation of the full data set assumes that both ‘countries’ were using GMO all the time. An interesting point is that the USA and Western Europe were already on different trends before the introduction of GM corn. We can say that because we have some idea of when GM crops were introduced in the USA: the information is collected by the US Department of Agriculture in its June survey of growers and made publicly available at the State level (GMcornPenetration.csv):
cornPenetration = read.csv('GMcornPenetration.csv')

ggplot(cornPenetration, aes(x = Year, y = PerAllGM)) +
  geom_line() + facet_wrap(~ State) +
  scale_y_continuous('Percentage of GM corn') +
  theme(axis.text.x = element_text(angle = 90)) # element_text replaces the old theme_text
This graph tells us that by the year 2000 the percentage of GM corn planted was well below 50% in most corn-producing states (in fact, it was 25% at the country level). From then on there is a steady increase, reaching over 80% for most states by 2008. Given this, it probably makes sense to assume that, at the USA level, yield reflects non-GM corn until 1999 and progressively reflects the effect of GM genotypes from 2000 onwards. This division is somewhat arbitrary, but easy to implement.
We can repeat the previous analyses limiting the data to the period from 1961 until, say, 1999:
usa.lm2 = lm(Yield ~ NewYear, data = FAOarticle,
             subset = Country == 'United States of America' & Year < 2000)
summary(usa.lm2)

#Call:
#lm(formula = Yield ~ NewYear, data = FAOarticle, subset = Country ==
#    "United States of America" & Year < 2000)
#
#Residuals:
#   Min     1Q Median     3Q    Max
#-17441  -2156   1123   3989   9878
#
#Coefficients:
#            Estimate Std. Error t value Pr(>|t|)
#(Intercept) 39895.57    2084.81   19.14  < 2e-16 ***
#NewYear      1094.82      90.84   12.05 2.25e-14 ***
#---
#Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 6385 on 37 degrees of freedom
#Multiple R-squared: 0.797,  Adjusted R-squared: 0.7915
#F-statistic: 145.2 on 1 and 37 DF,  p-value: 2.245e-14

weu.lm2 = lm(Yield ~ NewYear, data = FAOarticle,
             subset = Country == 'Western Europe' & Year < 2000)
summary(weu.lm2)

#Call:
#lm(formula = Yield ~ NewYear, data = FAOarticle, subset = Country ==
#    "Western Europe" & Year < 2000)
#
#Residuals:
#   Min     1Q Median     3Q    Max
#-10785  -3348    -34   3504  11117
#
#Coefficients:
#            Estimate Std. Error t value Pr(>|t|)
#(Intercept) 29802.17    1813.79   16.43   <2e-16 ***
#NewYear      1454.48      79.03   18.40   <2e-16 ***
#---
#Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 5555 on 37 degrees of freedom
#Multiple R-squared: 0.9015,  Adjusted R-squared: 0.8988
#F-statistic: 338.7 on 1 and 37 DF,  p-value: < 2.2e-16
These analyses indicate that Western Europe started with a lower yield than the USA (29,802.17 vs 39,895.57 hectograms/ha) and managed to increase yield much more quickly (1,454.48 vs 1,094.82 hectograms/ha per year) before any use of GM corn by the USA. Figure 1 shows a messy picture because there are numerous factors affecting yield each year (e.g. weather has a large influence). We can take averages for each decade and see how the two ‘countries’ are performing:
# Aggregating every decade.
# 2013-07-05 20:10 NZST I fixed the aggregation because it was averaging yields
# rather than calculating total production and area for the decade and then
# calculating average yield. Discussion points are totally valid
FAOarticle$Decade = cut(FAOarticle$Year,
                        breaks = seq(1959, 2019, 10),
                        labels = paste(seq(1960, 2010, 10), 's', sep = ''))

decadeProd = aggregate(Production ~ Country + Decade,
                       data = FAOarticle, FUN = sum)
decadeArea = aggregate(Area ~ Country + Decade,
                       data = FAOarticle, FUN = sum)

decadeYield = merge(decadeProd, decadeArea, by = c('Country', 'Decade'))
decadeYield$Yield = with(decadeYield, Production/Area*10000)

ggplot(decadeYield, aes(x = Decade, y = Yield, fill = Country)) +
  geom_bar(stat = 'identity', position = 'dodge') +
  scale_y_continuous('Yield [hectograms/ha]', expand = c(0, 0)) +
  theme(legend.position = 'top')
This last figure requires more attention. We can again see that Western Europe starts with lower yields than the USA; however, it keeps increasing those yields faster than the USA, overtaking it during the 1990s. Again, all this change happened while neither the USA nor Western Europe was using GM corn. The situation reverses in the 2000s, when the USA overtakes Western Europe while continuously increasing its percentage of GM corn. The last bar in Figure 3 is misleading because it includes a single year (2010), and we know that yields in the USA went down in 2011 and 2012, affected by a very large drought (see Figure 4).
At least when looking at corn, I can’t say (with the same data available to Heinemann) that there is no place or need for GM genotypes. I do share some of his concerns with respect to the low level of diversity present in staple crops but, in contrast to his opinion, I envision a future for agriculture that includes large-scale operations (either GM or non-GM), as well as smaller operations (including organic ones). I’d like to finish with some optimism, looking further back in yield history, because the USDA National Agricultural Statistics Service has kept yield statistics for corn since 1866(!) (csv file), although it uses bizarre non-metric units (bushels/acre). As a metric boy, I converted to kilograms per hectare (multiplying by 62.77, from this page) and then to hectograms (100 g) per hectare by multiplying by 10.
# Reading NASS corn data
NASS = read.csv('NASScorn.csv')

# Conversion to sensical units (see Iowa State Extension article)
# http://www.extension.iastate.edu/agdm/wholefarm/html/c6-80.html
NASS$Yield = with(NASS, Value*62.77*10)

# Average by decade
NASS$Decade = cut(NASS$Year,
                  breaks = seq(1859, 2019, 10),
                  labels = paste(seq(1860, 2010, 10), 's', sep = ''))
oldYield = aggregate(Yield ~ Decade, data = NASS, FUN = mean)

# Plotting
ggplot(oldYield, aes(x = Decade, y = Yield)) +
  geom_bar(stat = 'identity') +
  scale_y_continuous('Yield [hectograms/ha]', expand = c(0, 0))
It is interesting to see that there was little change until the 1940s, with the advent of the Green Revolution (modern breeding techniques, fertilization, pesticides, etc.). The 2010s decade in Figure 4 includes 2010, 2011 and 2012, with the last two years reflecting extensive droughts. Drought tolerance is one of the most important traits in modern breeding programs.
‡ While Prof. Heinemann and I work for the same university, I don’t know him in person.
P.S. Did you know that Norman Borlaug (hero of mine) studied forestry?
P.S.2 Time permitting I’ll have a look at other crops later. I would have liked to test a regression with dummy variables for corn to account for pre-2000 and post-1999, but there are not yet many years to fit a decent model (considering natural variability between years). We’ll have to wait for that one.
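For the record, the dummy-variable model I have in mind would look something like the sketch below. It runs on simulated yields so it is self-contained; with the real data the toy data frame would be replaced by the USA subset of FAOarticle, and (as I said above) there are not yet enough post-1999 years to trust the fit.

```r
# Sketch (not in the post) of a pre-2000/post-1999 regression with a dummy
# variable. Toy data only: intercept, slope and noise are made-up numbers.
set.seed(2013)
toy = data.frame(NewYear = 0:49)          # years since 1960
toy$GM = ifelse(toy$NewYear > 39, 1, 0)   # GM era from 2000 onwards
toy$Yield = 40000 + 1100*toy$NewYear +
            300*toy$GM*(toy$NewYear - 39) + rnorm(50, sd = 5000)

# Full interaction: the GM dummy shifts both intercept and slope
period.lm = lm(Yield ~ NewYear*GM, data = toy)
summary(period.lm)
```

The coefficient for the NewYear:GM interaction is the quantity of interest here: a change of slope once GM varieties dominate plantings.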
P.S.3 I share some of Heinemann’s concerns relating to subsidies and other agricultural practices.
P.S.4 In case anyone is interested, I did write about a GM-fed pigs study not long ago.
P.S.5 2013-07-05 20:10 NZST. I updated Figures 1 and 3 to clearly express that yield was in hectograms/ha and recalculated average decade yield, because the code was originally averaging yields rather than calculating total production and area for the decade and then calculating the average yield. The discussion points I raised are still completely valid.
P.S.6 2013-07-07 11:30 NZST. The inclusion of Canadian canola does not make any sense because, as far as I know, Canada is not part of Western Europe or the US Midwest. This opens the door to including crops from any origin, as long as the results are convenient for one’s argument.
Following my post on GM-fed pigs I received several comments, mostly through Twitter. Some people liked having access to an alternative analysis, while others replied with typical anti-GM slogans, completely ignoring that I was posting about the technical side of the paper. This post is not for the slogan crowd (who clearly are not interested in understanding), but for people that would like to know more about how one would evaluate claims from a scientific article. While I refer to the pig paper, most issues apply to any paper that uses statistics.
In general, researchers want to isolate the effect of the treatments under study (diets in this case) from any other extraneous influence. We want control over the experimental conditions, so we can separate the effects of interest from all other issues that could create differences between our experimental units (pigs in this case). What could create ‘noise’ in our results? Animals could have different genetic backgrounds (for example with different parents), they could be exposed to different environmental conditions, they could be treated differently (more kindly or harshly), etc.
Once we control those conditions as much as possible we would randomly assign animals to each of the treatments (diets). The rationale behind random allocation is easy to follow. Let’s say that we can see that some animals are healthier than others before starting the trial. If I had a pro-GM agenda, I could assign the healthiest animals to the GM-diet and we would not be able to separate the effect of the treatment from the initial difference. To avoid this, we could have many labels in a hat, shake the hat, and for each pig take a label that randomly assigns the pig to a diet so the comparison is fair.
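The ‘labels in a hat’ procedure is a one-liner in most statistical packages. A toy sketch in R (the number of pigs is made up for illustration, not taken from the paper):

```r
# Toy random allocation: 40 hypothetical pigs split evenly between two diets
set.seed(314)
pigs = data.frame(id = 1:40)
pigs$diet = sample(rep(c('GM', 'non-GM'), each = 20)) # shake the hat
table(pigs$diet)
```

Any pre-trial differences in health now end up spread at random across the two diets instead of being confounded with them.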
Researchers also like to have a baseline of the conditions before the treatments are applied. This way we can ‘adjust’ the results by any pre-existing differences. For example, there could be measurable differences on health, size, etc. I normally work with trees in experiments and we routinely assess the height of the trees just planted, so we can establish a baseline.
Finally, we often have a ‘default’ treatment which represents the status quo and acts as a comparison point for the new treatments. In the GM pig case, the default is a non-GM diet and the new treatment is the GM-diet.
The paper on GM-fed pigs states that the researchers tried to have as much control as possible over the growing conditions and that they used random allocation to the two feeding treatments. I have no problems with the paper up to this point.
When doing research it is good manners to state one’s expectations before the start of the trial. Doing this provides the experimenter with guidance about how to proceed with the evaluation of the experiment:
- What are the characteristics under study? Both the response variables (in the past called ‘dependent variables’) and the predictors (old name ‘independent variables’).
- What is the magnitude of the differences between groups that we would consider practically significant? Put another way, what would be the size of the difference that one would care about? For example, if we have two groups of pigs and the difference in weight between them is 1 g (0.035 ounces for people with funny units), who cares? If the difference was 5 kg (11 pounds, funny units again) then, perhaps, we are on to something.
- What level of statistical significance do we consider appropriate? Even if we assume that there is truly no difference between the two diets, we would expect to see small differences between the two groups just by chance. Big differences are more unlikely. It is common in research to state that a difference is statistically significant if the probability of observing it is smaller than, say, 0.05 (or 1 in 20). There is nothing sacred about that number; it is just a convention.
By this stage one would expect a researcher to state one or more hypotheses to be tested. For example, ‘I expect that the GM diet will increase [condition here] by [number here] percent’. One can run an experiment ignoring ‘good manners’, but (and this is a big but) an informed reader will become suspicious if suddenly one starts testing hypotheses like there is no tomorrow. Why? Because if one conducts too many tests one is bound to find statistically significant results even when there are none.
The comic below presents a brief example with jelly beans assuming that we claim significance for an event occurring with probability 0.05 (1 in 20) or less. Notice that the scientists use 20 colors of jelly beans and find that green ones ’cause’ acne. Running so many tests—without adjusting down the probability of the event that we would call significant, so p needs to be much smaller than 0.05 to be significant—results in wrong conclusions.
In the pig paper there are 8 tests in Table 2, 18 (or 15 with testable values) in Table 3, 8 in Table 4 and 17 in Table 5, for a total of 49 (or 46 with testable values). In fact, one would expect to find a couple of significant results (at 0.05 or 1 in 20) by chance even if there are absolutely no differences in reality.
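The arithmetic behind that expectation is straightforward; with 49 tests at the 0.05 level and no true differences (treating the tests as independent for simplicity):

```r
# Expected number of spurious 'significant' results among 49 tests at 0.05,
# and the chance of seeing at least one, assuming independent tests
n.tests = 49
expected.sig = n.tests * 0.05      # 2.45 tests
p.at.least.one = 1 - 0.95^n.tests  # roughly 0.92
round(c(expected.sig, p.at.least.one), 2)
```

That is, we would expect between two and three ‘significant’ results from chance alone, and better than nine-in-ten odds of at least one.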
Add to this that many of the tests are unnecessary, because they are performing the wrong type of analysis. For example, there are four separate analyses for stomach inflammation; however, the analysis ignores the type of variable one is testing as I point out in a previous post.
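As a simple illustration of testing the whole table in one go (a plain association test here, not the ordinal model I prefer in the earlier post), the full 2 × 4 stomach inflammation table from Table 3 of the paper needs only a single chi-squared test:

```r
# Stomach inflammation counts from Table 3 of the paper; one overall test
# of association between diet and inflammation class
inflam.table = matrix(c(4, 31, 29, 9,
                        8, 23, 18, 23),
                      nrow = 2, byrow = TRUE,
                      dimnames = list(diet = c('Non-GMO', 'GMO'),
                                      inflam = c('Nil', 'Mild',
                                                 'Moderate', 'Severe')))
chisq.test(inflam.table)
```

One test instead of four, although this still ignores the ordering of the classes, which is why an ordinal model is a better fit for these data.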
This is why, if I were Monsanto, I would use the paper as evidence supporting the idea that there is no difference between the two diets:
- the paper was ‘independent’ (although the authors have a clear anti-GM bias) and
- when running proper analyses (accounting for the type of variable and the number of tests that one is running) no difference is statistically significant.
P.S. 2013-06-21. 15:38 NZST A footnote about data availability: it is becoming increasingly common to make both the data and the code used to analyze them available when publishing (part of Open Science). I would expect that a paper making bold claims with potential policy implications would provide those, which does not happen in this case. However, the publication does include part of the data in results Tables 3 & 4, in the counts of pigs under different classes of inflammation.
This week another ‘scary GMOs cause disease’ story was doing the rounds on the internet: A long-term toxicology study on pigs fed a combined genetically modified (GM) soy and GM maize diet. Andrew Kniss, a non-smokable weeds expert, mentioned on Twitter that the statistical analyses in the study appeared to be kind of dodgy.
Curious, I decided to have a quick look and I was surprised, first, by the points the authors decided to highlight in their results; second, by the pictures and captioning used in the article; and, last, by the way the analysis was run. As I’m in the middle of marking assignments and exams I’ll only have a quick go at part of the analysis. As I see it, the problem can be described as ‘there is a bunch of pigs who were fed either non-GM feed or GM feed. After some time (approximately 23 weeks) they were killed and went through a CSI-like autopsy’, where part of the exam involved the following process:
- Write down the type of feed the pig had during his/her life;
- Assess the condition of the stomach and put it in one of four boxes labeled ‘Nil’, ‘Mild’, ‘Moderate’ and ‘Severe’.
All this data is summarized in Table 3 of the paper (PDF). How would I go about the analysis? As I see it, we have a categorical response variable—which can take one of four mutually exclusive values—and a categorical predictor (diet). In addition, there is a natural order to the inflammation response variable in that Severe > Moderate > Mild > Nil.
Andrew Kniss wrote a post trying to reproduce the published results. Instead, I present the first approach I would try with the data: ordinal logistic regression. Not only that, but instead of using a hippie statistical software like R†, I will use industrial-grade-business-like SAS:
/* Testing SAS web editor fitting pigs data
   Luis A. Apiolaza - School of Forestry */

*Reading data set;
data pigs;
  input inflam $ diet $ count;
  datalines;
Nil Non-GMO 4
Mild Non-GMO 31
Moderate Non-GMO 29
Severe Non-GMO 9
Nil GMO 8
Mild GMO 23
Moderate GMO 18
Severe GMO 23
;
run;

*Showing data set;
proc print data = pigs;
run;

*El quicko analysis;
ods graphics on;
proc logistic data = pigs order = data;
  freq count;
  class inflam (ref = "Nil") diet (ref = "Non-GMO") / param = ref;
  model inflam = diet / link = glogit;
  oddsratio diet;
run;
This produces a simple table with the same data as the paper and some very non-exciting results, which are better summarized in a single graph:
/*
Obs    inflam      diet       count
  1    Nil         Non-GMO        4
  2    Mild        Non-GMO       31
  3    Moderate    Non-GMO       29
  4    Severe      Non-GMO        9
  5    Nil         GMO            8
  6    Mild        GMO           23
  7    Moderate    GMO           18
  8    Severe      GMO           23
*/
The odds ratios would be 1 if there were no difference between the treatments. The graph shows that the confidence limits for all levels of inflammation include 1, so move on, nothing to see. In fact, GMO-fed pigs tend to have less inflammation for most disease categories.
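And since I mentioned R above: for completeness, a proportional-odds (ordinal) version of the same analysis can be fitted with polr() from the MASS package. This is a sketch that respects the ordering of the inflammation classes, not an exact replication of the SAS model (which used generalized logits):

```r
# Same counts as the SAS data step, fitted as a proportional-odds model
# with MASS::polr (MASS ships with R)
library(MASS)
pigs = data.frame(
  inflam = factor(rep(c('Nil', 'Mild', 'Moderate', 'Severe'), 2),
                  levels = c('Nil', 'Mild', 'Moderate', 'Severe'),
                  ordered = TRUE),
  diet   = factor(rep(c('Non-GMO', 'GMO'), each = 4),
                  levels = c('Non-GMO', 'GMO')),
  count  = c(4, 31, 29, 9, 8, 23, 18, 23))

pigs.polr = polr(inflam ~ diet, weights = count, data = pigs, Hess = TRUE)
summary(pigs.polr)
exp(coef(pigs.polr)) # single odds ratio for the GMO diet across all cut-points
```

The proportional-odds model produces one odds ratio for diet rather than one per inflammation level, which is exactly the ‘whole problem in one go’ flavor I prefer.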
P.S. There are many ways of running an analysis for this data set, but I’m in favor of approaches that take the whole problem in one go rather than looking at one class at a time. In an ideal situation we would have a continuous assessment of inflammation, and the analysis would be a one-way ANOVA. I understand that for practical reasons one may prefer to split the response into four classes.
P.S.2 2013-06-15 I often act as a reviewer for scientific journals. In the case of this article some of my comments would have included: the analysis does not use the structure of the data properly; the photographs of the damaged organs should include both types of diet for each inflammation class (or at least the most representative diet for the class); and the authors should highlight that there are no significant differences between the two diets for animal health; that is, the trial provides evidence of no difference between feeds. I still feel that the authors should be more forthcoming in terms of disclosing potential conflicts of interest too, but that’s their decision.
P.S.3 2013-07-04 I expand on aspects of the general research process in this post.
†Tongue-in-cheek, of course, and with reference to weeds. This blog mostly uses R, but I’m pushing myself to use lots of different software to ‘keep the language’. Now if I could only do this with Spanish.