R as a second language

Imagine that you are studying English as a second language; you learn the basic rules, some vocabulary and start writing sentences. After a little while, it is very likely that you’ll write grammatically correct sentences that no native speaker would use. You’d be following the formalisms but ignoring culture, idioms, slang and patterns of effective use.

R is a language, and newcomers, particularly if they already know another programming language, will struggle at the beginning to get at what lies beyond the formal grammar and vocabulary. I use R for inquisition: testing ideas, data exploration, visualization; in this setting, the easier it is to perform a task, the more likely one is to do it. It is possible to use several other languages for this but—and I think this is an important but—R’s brevity reduces the time between thinking and implementation, so we can move on and keep trying new ideas.

A typical example is when we want to repeat something or iterate over a collection of elements. In most languages, if one wants to do something many times the obvious way is to use a loop (coded with for() or while()). It is possible to use a for() loop in R but many times it is the wrong tool for the job, as it increases the lag between thought and code, moving us away from ‘the flow’.

# Generate some random data with 10 rows and 5 columns
M = matrix(round(runif(50, 1, 5), 0), nrow = 10, ncol = 5)
M
 
#      [,1] [,2] [,3] [,4] [,5]
# [1,]    2    3    4    2    1
# [2,]    3    1    3    3    4
# [3,]    4    2    5    1    3
# [4,]    2    4    4    5    3
# [5,]    2    3    1    4    4
# [6,]    3    2    2    5    1
# [7,]    1    3    5    5    2
# [8,]    5    4    2    5    4
# [9,]    3    2    3    4    3
#[10,]    4    4    1    2    3
 
# Create dumb function that returns mean and median
# for data
sillyFunction = function(aRow) {
  c(mean(aRow), median(aRow))
}
 
# One-liner to apply our function to each row
apply(M, 1, sillyFunction)
 
#     [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
#[1,]  2.4  2.8    3  3.6  2.8  2.6  3.2    4    3   2.8
#[2,]  2.0  3.0    3  4.0  3.0  2.0  3.0    4    3   3.0
 
# or one could do it for each column
apply(M, 2, sillyFunction)
 
# Of course one could use a loop. Pre-allocating
# the result matrix keeps the time penalty of the
# loop small (versus growing the matrix)
nCases = dim(M)[1]
resMatrix = matrix(0, nrow = nCases, ncol = 2)
# and here is the loop
for(i in 1:nCases){
  resMatrix[i, 1:2] = sillyFunction(M[i,])
}
 
resMatrix
# Same results as before
#      [,1] [,2]
# [1,]  2.4    2
# [2,]  2.8    3
# [3,]  3.0    3
# [4,]  3.6    4
# [5,]  2.8    3
# [6,]  2.6    2
# [7,]  3.2    3
# [8,]  4.0    4
# [9,]  3.0    3
#[10,]  2.8    3

How apply loops around a matrix or data frame, doing its business for all rows [1] or columns [2] (Shaky handwriting and all).

One of the distinctive features of R is that there is already a lot of functionality available for jobs that occur frequently in data analysis. The easier it is to perform a task, the more likely one is to do it, which is perfect if one is exploring/thinking about data.

Thomas Lumley reminded me of the ACM citation for John Chambers—father of S of which R is an implementation—which stated that Chambers’s work:

…will forever alter the way people analyze, visualize, and manipulate data . . . S is an elegant, widely accepted, and enduring software system, with conceptual integrity, thanks to the insight, taste, and effort of John Chambers.

If I could summarize the relevance of R in a Tweetable phrase (with hash tags and everything) it would be:

Most data analysis languages underestimate the importance of interactivity/low barrier to exploration. That’s where #Rstats shines.

One could run statistical analyses with many languages (including generic ones), but to provide the right level of interactivity for analysis, visualization and data manipulation one ends up creating functions that, almost invariably, look a bit like R; pandas in Python, for example.

There are some complications with some of the design decisions in R, especially when we get down to consistency, which begets memorability. A glaring example is the apply family of functions, and here is where master opportunist (in the positive sense of expert at finding good opportunities) Hadley Wickham made sense out of confusion in his package plyr.
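
To make the inconsistency, and plyr’s answer to it, a bit more concrete, here is a minimal sketch (assuming the plyr package is installed; M is the matrix from the example above):

# Base R: different functions and output shapes depending on the input
apply(M, 1, mean)               # matrix in, vector out
sapply(1:3, function(i) i^2)    # vector in, vector out
 
# plyr: input and output types are encoded in the function name
# (a = array, d = data frame, l = list)
library(plyr)
aaply(M, 1, mean)               # array in, array out
adply(M, 1, function(aRow)      # array in, data frame out
  data.frame(mean = mean(aRow), median = median(aRow)))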

There is also a tension in languages under considerable use because speakers/writers/analysts/coders start adapting them to new situations, adding words and turns of phrase. Look at English for an example! This is also happening to R and some people wish the language looked different in some non-trivial ways. A couple of examples: Coffeescript for R and Rasmus Bååth’s suggestions. Not all of them can be implemented, but suggestions like this speak of the success of R.

If you are struggling to start working with R, as with other languages, first let go. The key to learning and working with a new language is immersing yourself in it; even better if you do it with people who already speak it.

Just to be clear, there are several good statistical languages. However, none is as supportive of rapid inquisition as R (IMO). It is not unusual to develop models in one language (e.g. R) and implement them in another for operational purposes (e.g. SAS, Python, whatever).

The first thing I admire about Hadley is his ‘good eye’ for finding points of friction. The second one is doing something about the frictions, often with very good taste.

P.S. It should be clear from this post that English is indeed my second language.

Teaching linear models

I teach several courses every year and the most difficult to pull off is FORE224/STAT202: regression modeling.

The academic promotion application form in my university includes a section on one’s ‘teaching philosophy’. I struggle with that part because I suspect I lack anything as grandiose as a philosophy when teaching: like most university lecturers I never studied teaching, although I try to do my best. If anything, I can say that I enjoy teaching and helping students to ‘get it’ and that I want to instill a sense of ‘statistics is fun’ in them. I spend quite a bit of time looking for memorable examples, linking to stats in the news (statschat and listening to the news while walking my dog are very helpful here) and collecting data. But a philosophy? Don’t think so.

One of the hardest parts of the course is the diversity of student backgrounds. Hitting the right level, the right tone is very hard. Make it too easy and the 1/5 to 1/4 of students with a good mathematical background will hate it; they may even decide to abandon any intention of continuing doing stats if ‘that’s all there is about the topic’. Make it too complicated and half the class will fail and/or hate the content.

Part of the problem is based around what we mean by teaching ‘statistics’. In some cases it seems limited to what specific software does; for example, teaching with Excel means restriction to whatever models are covered in Excel’s Data Analysis Toolpak (DAT). The next choice when teaching is using menu-driven software (e.g. SPSS), which provides much more statistical functionality than Excel + DAT, at the expense of being further removed from common usability conventions. At the other end of the simplicity spectrum is software that requires coding to control the analyses (e.g. R or SAS). In general, the more control we want, the more we have to learn to achieve it.

A while ago I made a distinction between the different levels of learning (user cases) when teaching statistics. In summary, we had i- very few students getting into statistics and heavy duty coding, ii- a slightly larger group that will use stats once in a while and iii- the majority that will mostly consume statistics. I feel a duty towards the three groups, while admitting that I have a predilection for the first one. Nevertheless, the third group provides most of the challenges and the need for thinking about how to teach the subject.

When teaching linear models (general form \(y = X \beta + \epsilon\)) we tend to compartmentalize content: we have an ANOVA course if the design matrix \(X\) represents categorical predictors (contains only 1s and 0s), a regression course if \(X\) is full of continuous predictors and we talk about ANCOVA or regression on dummy variables if \(X\) is a combination of both. The use of different functions for different contents of \(X\) (for example aov() versus lm() in R or proc reg versus proc glm in SAS) further consolidates the distinction. Even when using menus, software tends to guide students through different submenus depending on the type of \(X\).
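
As a quick sketch of the point, using made-up data: aov() and lm() fit exactly the same model when \(X\) codes a categorical predictor; only the default presentation of the results changes.

# Hypothetical example: one categorical predictor with three levels
trt  = factor(rep(c('A', 'B', 'C'), each = 10))
resp = 10 + 2*(trt == 'B') + 5*(trt == 'C') + rnorm(30, 0, 2)
 
m.aov = aov(resp ~ trt)   # the 'ANOVA course' version
m.lm  = lm(resp ~ trt)    # the 'regression course' version
 
summary(m.aov)   # ANOVA table
anova(m.lm)      # the same table, obtained from lm()
summary(m.lm)    # the same fit, shown as regression coefficients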


Gliding in a hierarchy of models (Photo: Luis, click to enlarge).

At the beginning of the course we restrict ourselves to \(X\) full of continuous predictors, but we introduce the notion of matrices with small examples. This permits showing the connection between all the linear model courses (because a rose by any other name…) and it also allows deriving a general expression of the formulas for the regression coefficients (essential for the most advanced students). Slower students may struggle with some of this material; however, working with small examples they can replicate the results from R (or Excel or SAS or whatever one uses to teach). Sometimes they even think it is cool.

Here is where the model.matrix() R function becomes handy; rather than building incidence matrices by hand—which is easy for tiny examples—we can get the matrices used by the lm() function to then calculate regression parameters (and any other output) for more complex models.
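
Here is a minimal sketch with a tiny made-up data set, showing how model.matrix() exposes \(X\) and how the general expression \(\hat{\beta} = (X'X)^{-1} X'y\) reproduces what lm() reports:

# Tiny hypothetical data set: one continuous and one categorical predictor
dose  = c(1, 2, 3, 4, 5, 6)
group = factor(c('a', 'a', 'a', 'b', 'b', 'b'))
y     = c(2.1, 3.9, 6.2, 9.0, 11.1, 12.8)
 
m = lm(y ~ dose + group)
X = model.matrix(m)   # the same X used internally by lm()
X
 
# Regression coefficients from the general matrix expression
beta.hat = solve(t(X) %*% X) %*% t(X) %*% y
cbind(beta.hat, coef(m))   # identical, up to numerical error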

Once students get the idea that in matrix terms our teaching compartments are pretty much the same, we can reinforce the idea by using a single function (or proc) to show that we can obtain all the bits and pieces that make up what we call ‘fitting the model’. This highlights the idea that ANOVA, ANCOVA & regression are subsets of linear models, which are subsets of linear mixed models, which are subsets of generalized linear mixed models. A statistical Russian doll.

We want students to understand, sometimes so badly that we lower the bar to a point where there is not much to understand. Here is the tricky part: finding the right level of detail so all types of students learn to enjoy the topic, although at different levels of understanding.

There is software that generates code from menus too, like Stata or Genstat.

P.S. This is part of my thinking aloud with hesitation about teaching, as in Statistics unplugged, Excel, fanaticism and R, Split-plot 1: How does a linear mixed model look like?, R, academia and the democratization of statistics, Mid-January flotsam: teaching edition & Teaching with R: the switch. I am always looking for better ways of transferring knowledge.

Statistics unplugged

How much does statistical software help, and how much does it interfere, when teaching statistical concepts? Software used in the practice of statistics (say R, SAS, Stata, etc) brings to the party a mental model that is often alien to students, while being highly optimized for practitioners. It is possible to introduce a minimum of distraction while focusing on teaching concepts, although it requires careful choice of a subset of functionality. Almost invariably some students get stuck with the software and everything goes downhill from there; the student moves from struggling with a concept to struggling with syntax (Do I use a parenthesis here?).

I am a big fan of Tim Bell’s Computer Science Unplugged, a program for teaching Computer Science’s ideas at primary and secondary school without using computers (see example videos).

Here is an example video for public key encryption:

This type of instruction makes me question both how we teach statistics and at what level we can start teaching statistics. The good news is that the New Zealand school curriculum includes statistics in secondary school, for which there is an increasing number of resources. However, I think we could be targeting students even earlier.

This year my wife was helping primary school students participating in a science fair and I ended up volunteering to introduce them to some basic concepts so they could design their own experiments. Students got the idea of the need for replication, randomization, etc based on a simple question: Did one of them have special powers to guess the result of flipping a coin? (Of course this is Fisher’s tea-drinking-lady-experiment, but no 10 year old cares about tea, while at least some of them care about super powers). After the discussion one of them ran a very cool experiment on the effect of liquefaction on the growth of native grasses (very pertinent in post-earthquake Christchurch), with 20 replicates (pots) for each treatment. He got the concepts behind the experiment; software just entered the scene when we needed to confirm our understanding of the results in a visual way:


Seven-week growth of native grasses with three proportions of liquefied soil. T1: pure liquefaction, T2: 50% liquefaction, 50% normal soil, T3: pure normal soil.

People tell me that teaching stats without a computer is like teaching chemistry without a lab or doing astronomy without a telescope, or… you get the idea. At the same time, there are some books that describe some class activities that do not need a computer; e.g. Gelman’s Teaching Statistics: A Bag of Tricks. (Incidentally, why is that book so friggin’ expensive?)

Back to uni

Back from primary school kiddies to a regression course at university. Let’s say that we have two variables, x & y, and that we want to regress y (response) on x (predictor) and get diagnostic plots. In R we could simulate some data and plot the relationship using something like this:

# Basic regression data
n = 100
x = 1:n
y = 70 + x*5 + rnorm(n, 0, 40)
 
# Changing a couple of points and plotting
y[50] = 550
y[100] = 350
plot(y ~ x)

Typical simple linear regression scatterplot.

We can then fit the linear regression and get some diagnostic plots using:

# Regression and diagnostics
m1 = lm(y ~ x)
 
par(mfrow = c(2,2))
plot(m1)
par(mfrow = c(1,1))

Typical diagnostic plot for simple linear regression model. What’s the meaning of the fourth plot (lower right)?

If we ask students to explain the 4th plot—which displays discrepancy (how far a point is from the general trend) against leverage (how far a point is from the center of mass, the pivot of the regression)—many of them will struggle to say what is going on in that plot. At that moment one could go and calculate the Hat matrix of the regression (\(X(X'X)^{-1}X'\)) and get leverage from the diagonal, etc, and students will get a foggy idea. Another, probably better, option is to present the issue as a physical system with which students already have experience. A good candidate for a physical system is a seesaw, because many (perhaps most) students experienced playing on one as children.
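
For students who want to check the numbers, the diagonal of the hat matrix can be computed directly and compared with R’s built-in hatvalues(); a quick sketch using the m1 model fitted above:

# Leverage: diagonal of the hat matrix H = X (X'X)^-1 X'
X = model.matrix(m1)
H = X %*% solve(t(X) %*% X) %*% t(X)
lev = diag(H)
 
# The same values, straight from R
all.equal(lev, hatvalues(m1), check.attributes = FALSE)
 
# Our two doctored points: y[50] sits near the middle of x (low leverage),
# y[100] sits at the end of the range (high leverage)
lev[c(50, 100)]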

Take your students to a playground (luckily there is one next to uni), get them playing with a seesaw. The influence of a point is related to the product of leverage (how far from the pivot we are applying force) and discrepancy (how big is the force applied). The influence of a point on the estimated regression coefficients will be very large when we apply a strong force far from the pivot (as in our point y[100]), just as it happens in a seesaw. We can apply lots of force (discrepancy) near the pivot (as in our point y[50]) and little will happen. Students like mucking around with the seesaw and, more importantly, they remember.
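
The seesaw intuition can also be checked numerically: Cook’s distance combines leverage and discrepancy into a single measure of influence, and R computes it for us. A quick sketch, again using m1 (exact values will vary with the simulated data):

# Influence combines leverage (distance from the pivot) and
# discrepancy (size of the force applied)
cd = cooks.distance(m1)
 
# y[50]: large discrepancy but low leverage; y[100]: both large
cd[c(50, 100)]
which.max(cd)   # y[100] should dominate the influence ranking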


Compulsory seesaw picture (source Wikipedia).

Analogy can go only so far. Sometimes a physical analogy like a quincunx (to demonstrate the central limit theorem) ends up being more confusing than using an example with variables that are more meaningful for students.

I don’t know what the maximum proportion of course content is that could be replaced by using props, experiments, animations, software specifically designed to make a point (rather than to run analyses), etc. I do know that we still need to introduce ‘proper’ statistical software—at some point students have to face praxis. Nevertheless, developing an intuitive understanding is vital to move students beyond being performing monkeys; that is, people clicking on menus or going through the motions of copy/pasting code without understanding what’s going on in the analyses.

I’d like to hear if you have any favorite demos/props/etc when explaining statistical concepts.

P.S. In this post I don’t care if you love stats software, but I specifically care about helping learners who struggle to understand concepts.

Using Processing and R together (in OS X)

I wanted to develop a small experiment with a front end using the Processing language and the backend calculations in R; the reason why will be another post. This post explains the steps, assuming that one already has R and Processing installed:

  1. Install the Rserve package. This has to be done from source (e.g. using R CMD INSTALL packagename).
  2. Download Rserve jar files and include them in the Processing sketch.
  3. Run your code (a minimal sketch for starting Rserve on the R side follows this list)
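
This is how I understand the Rserve workflow on the R side; the Processing sketch below assumes Rserve is already installed and running on the default port:

# Install Rserve (from source in my case) and start the server
# before running the Processing sketch
install.packages('Rserve', type = 'source')
library(Rserve)
Rserve(args = '--no-save')   # listens on the default port (6311)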

For example, this generates 100 normally distributed random numbers in R and then sorts them (code copied and pasted from the second link):

import org.rosuda.REngine.Rserve.*;
import org.rosuda.REngine.*;
 
double[] data;
 
void setup() {
  size(300,300);
 
  try {
    RConnection c = new RConnection();  
    // generate 100 normal distributed random numbers and then sort them 
    data= c.eval("sort(rnorm(100))").asDoubles();
 
  } catch ( REXPMismatchException rme ) {
    rme.printStackTrace();
 
  } catch ( REngineException ree ) {
    ree.printStackTrace();
  }
}
 
void draw() {
  background(255);
  for( int i = 0; i < data.length; i++) {
    line( i * 3.0, height/2, i* 3.0, height/2 - (float)data[i] * 50 );
  }
}

The problem is that this didn’t work, because my OS X (I use macs) R installation didn’t have shared libraries. My not-so-quick solution was to compile R from source, which involved:

  1. Downloading R source. I went for the latest stable version, but I could have gone for the development one.
  2. Setting up the latest version of the C and Fortran compilers. I did have an outdated version of Xcode in my macbook air, but decided to delete it because i- it uses many GB of room in a small drive and ii- it’s a monster download. Instead I went for Apple’s Command Line Tools, which are a small fraction of the size and do the job.
  3. In the case of gfortran, there are many sites pointing to this page that hosts a fairly outdated version, which was giving me all sorts of problems (e.g. “checking for Fortran 77 name-mangling scheme”) because the versions between the C and Fortran compilers were out of whack. Instead, I downloaded the latest version from the GNU site.
  4. Changing the config.site file in a few places, ensuring that I had:

     CC="gcc -arch x86_64 -std=gnu99"
     CXX="g++ -arch x86_64"
     F77="gfortran -arch x86_64"
     FC="gfortran -arch x86_64"

Then compiled using (I didn’t want X11 and wanted to enable the shared library):

./configure --without-x --enable-R-shlib
make
make check
make pdf # This produces a lot of rubbish on screen and it isn't really needed
make info

And finally installed using:

sudo make prefix=/luis/compiled install

This used a prefix because I didn’t want to replace my fully functioning R installation, but just to have another one with shared libraries. If one types R in the terminal it still calls the old version; the new one is called via /luis/compiled/R.framework/Versions/Current/Resources/bin/R. I then installed Rserve in the new version and was able to call R from Processing, so I could obtain the plot below.


A ‘hello world’ of the calling R from Processing world.

Now I can move to what I really wanted to do. File under stuff-that-I-may-need-to-remember-one-day.

Excel, fanaticism and R

This week I’ve been feeling tired of the excessive fanaticism (or zealotry) around open source software (OSS) and R in general. I do use a fair amount of OSS and pushed for the adoption of R in our courses; in fact, I do think OSS is a Good Thing™. I do not like, however, the constant yabbering on why using exclusively OSS in science is a good idea and the reduction of science to repeatability and computability (both of which I covered in my previous post). I also dislike the snobbery of ‘you shall use R and not Excel at all, because the latter is evil’ (going back ages).

We often have several experiments running during the year and most of the time we do not bother setting up a database to keep the data. Doing that would essentially mean that I would have to do it, and I have a few more important things to do. Therefore, many data sets end up in… (drum roll here) Microsoft Excel.

How should a researcher set up data in Excel? Rather than reinventing the wheel, I’ll use a(n) (im)perfect diagram that I found years ago in a Genstat manual.


Suggested sane data setup in a spreadsheet.

I like it because:

  • It makes clear how to set up the experimental and/or sampling structure; one can handle any design with enough columns.
  • It also manages any number of traits assessed in the experimental units.
  • It contains metadata in the first few rows, which can be easily skipped when reading the file. I normally convert Excel files to text and then I skip the first few lines (using skip in R or firstobs in SAS), as in the sketch after this list.
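
A minimal sketch of that reading step, assuming a hypothetical trial.csv exported from the spreadsheet (the file name and the number of skipped rows are made up):

# Skip the metadata rows sitting at the top of the exported file
trial = read.csv('trial.csv', skip = 3, header = TRUE)
head(trial)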

People doing data analysis often start convulsing at the mention of Excel; personally, I deeply dislike it for analyses but it makes data entry very easy, and even a monkey can understand how to use it (I’ve seen them typing, I swear). The secret for sane use is to use Excel only for data entry; any data manipulation (subsetting, merging, derived variables, etc.) or analysis is done in statistical software (I use either R or SAS for general statistics, ASReml for quantitative genetics).
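
As an illustration of keeping manipulation out of the spreadsheet, a small sketch with made-up object and column names (trial from the sketch above and a hypothetical sites table):

# Subsetting, merging and derived variables happen in R, not in Excel
# (object and column names are made up for illustration)
tall = subset(trial, height > 1.5)               # subsetting
trial = merge(trial, sites, by = 'site')         # merging with another table
trial$volume = trial$height * trial$diameter^2   # derived variable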

It is far from a perfect solution but it fits in the realm of the possible and, considering all my work responsibilities, it’s a reasonable use of my time. Would it be possible that someone makes a weird change in the spreadsheet? Yes. Could you fart while moving the mouse and create a non-obvious side effect? Yes, I guess so. Will it make your life easier, and make it possible to complete your research projects? Yes sir!

P.S. One could even save data using a text-based format (e.g. csv, tab-delimited) and use Excel only as a front-end for data entry. Other spreadsheets are of course equally useful.

P.S.2. Some of my data are machine-generated (e.g. by acoustic scanners and NIR spectroscopy) and get dumped by the machine in a separate text file for each sample (usually very wide; for example, 2000 columns). I never put them in Excel, but read them directly (a directory-full of them) into R for manipulation and analysis.
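
This is roughly how that direct reading looks; a sketch with made-up directory, file pattern and file layout (one row per sample):

# Read a directory full of wide, machine-generated text files
nir.files = list.files('spectra', pattern = '\\.txt$', full.names = TRUE)
spectra = lapply(nir.files, read.table, header = FALSE)
 
# Stack them into one very wide data frame, one row per sample
spectra = do.call(rbind, spectra)
dim(spectra)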

As an interesting aside, the post A summary of the evidence that most published research is false provides a good summary for the need to freak out about repeatability.

Should I reject a manuscript because the analyses weren’t done using open source software?

“Should I reject a manuscript because the analyses weren’t done using open software?” I overheard a couple of young researchers discussing. Initially I thought it was a joke but, to my surprise, it was not funny at all.

There is an unsettling, underlying idea in that question: the value of a scientific work can be reduced to its computability. If I, the reader, cannot replicate the computation the work is of little, if any, value. Even further, my verification has to have no software cost involved, because if that is not the case we are limiting the possibility of computation to only those who can afford it. Therefore, the almost unavoidable conclusion is that we should force the use of open software in science.

What happens if the analyses were run using a point-and-click interface? For example, SPSS, JMP, Genstat, Statistica and a few other programs allow access to fairly complex analytical algorithms via a system of menus and icons. Most of them are not open source, nor do they generate code for the analyses. Should we ban their use in science? One could argue that if users only spend the time and learn a programming language (e.g. R or Python) they will be free of the limitations of point-and-click. Nevertheless, we would be shifting accessibility from people that can pay for an academic software license to people that can learn and moderately enjoy programming. Are we better off as a research community because of that shift?

There is another assumption: open software will always provide good (or even appropriate) analytical tools for any problem. I assume that in many cases OSS is good enough and that there is a subset of problems where it is the best option. However, there is another subset where it is suboptimal. For example, I deal a lot with linear mixed models used in quantitative genetics, an area where R is seriously deficient. In fact, I would have to ignore the last 15 years of statistical development to run large problems. Given that some of the data sets are worth millions of dollars and decades of work, should I sacrifice the use of the best models so that a hypothetical someone, somewhere can actually run my code without paying for an academic software license? This was a rhetorical question, by the way, as I would not do it.

There are trade-offs and unintended consequences in all research policies. This is one case where I think the negative effects would outweigh the benefits.


Gratuitous picture: I smiled when I saw the sign with the rightful place for forestry (Photo: Luis).

P.S. 2013-12-20 16:13 NZST Timothée Poisot provides some counterarguments for a subset of articles: papers about software.

If one were to invent scientific journals today

While taking a shower I was daydreaming about what would happen if one were to invent journals today, with a very low cost of publication and no physical limits to the size of a publication. My shower answer was that there would be little chance for a model like traditional printed journals.

One could create a central repository (a bit like the arXiv) taking submissions of text format of the article + figures, which are automatically translated to a decent-looking web format and a printable version. This would be the canonical version of the article and would get assigned a unique identifier. The submitters would get to update their article any number of times, creating versions (pretty much like software). This way they could fix any issues without breaking references from other articles.

There would be a payment for submitting articles to the repository (say $100 for the sake of argument), covering the costs of hosting and infrastructure, serving at the same time as a deterrent for spam.

Journals in their current form would tend to disappear, but there would be topical aggregators (or feeds). Thus, the ‘Journal of whatever’ would now be a curator of content from the ‘big bucket’ central repository, pulling aside articles worthy (in their opinion) of more scrutiny, or commentary, etc. This could be either a commercial venture or an amateur labor of love, done by people very interested in a given topic, and could even apply a different format to the canonical article, always pointing back to the unique identifier in the central repository.

Some aggregators could be highly picky and recognized by readers, becoming the new Nature or Science. Authors could still ‘submit’ or recommend their papers to these aggregators. However, papers could also be in multiple feeds and copyright would probably stay with the authors for a limited amount of time. The most important currency for academics is recognition, and this system would provide it, as well as the potential for broad exposure and no cost for readers or libraries.

There would be no pre-publication peer review because, let’s face it, currently it’s more of a lottery than anything else. Post-publication peer review, done broadly by the research community, would be the new standard.

Any big drawbacks for my shower daydream?

P.S.1 2013-12-15 13:40 NZST Thomas Lumley pointed me to a couple of papers on the ‘Selected Papers Network’, which would be one way of dealing with prestige/quality/recognition signals needed by academics.

P.S.2 2013-12-15 14:43 NZST This ‘journals are feeds’ approach fits well with how I read papers: I do not read journals, but odd papers that I find either via web searches or recommended by other researchers. There are, however, researchers that aim to read whole issues, although I can’t make sense of it.