Quantum Forest

a shoebox for data analysis

Month: September 2012

A word of caution: the sample may have an effect

This week I’ve tried to i- stay mostly in the descriptive statistics realm and ii- surround any simple(istic) models with caveats, pointing out that they are very preliminary. We are working with a sample of ~1,000 schools that replied to Fairfax’s request, while a number of schools either ignored the request or told Fairfax to go and F themselves. Why am I saying this? If one gets a simple table of the number of schools by type and decile there is something quite interesting: we have different percentages for different types of schools represented in the sample, and the possibility of bias in the reporting to Fairfax due to potentially low performance (references to datasets correspond to the ones I used in this post):
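A minimal sketch of that table, assuming the standards.csv file from my previous post and school.type and decile as column names:

[sourcecode lang="R"]
# Read the merged standards file created in a previous post
standards <- read.csv('standards.csv')

# Number of schools by type and decile in the Fairfax sample
table(standards$school.type, standards$decile)
[/sourcecode]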

Now let’s compare this number with the school directory:
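Something like this, assuming the directory file (used elsewhere in these posts) has similar column names:

[sourcecode lang="R"]
# Full school directory for comparison
directory <- read.csv('Directory-Schools-Current.csv')
table(directory$school.type, directory$decile)
[/sourcecode]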

As a proportion we are missing more secondary schools. We can use the following code to get an idea of how similar the school types are, because the small number of composite schools is a pain:
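Something along these lines, lumping the composite categories together (a sketch; column names are assumptions):

[sourcecode lang="R"]
# Proportion of each school type in the sample versus the directory,
# after collapsing the various composite categories into one
standards$type2 <- ifelse(grepl('Composite', standards$school.type),
                          'Composite', as.character(standards$school.type))
directory$type2 <- ifelse(grepl('Composite', directory$school.type),
                          'Composite', as.character(directory$school.type))

round(prop.table(table(standards$type2)), 2)
round(prop.table(table(directory$type2)), 2)
[/sourcecode]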

Representation of different school types and deciles is uneven.

Different participation in the sample for school types. This time it is performance in mathematics.


I’m using jittering rather than box and whisker plots to i- depict all the schools and ii- get an idea of the different participation of school types in the dataset. Sigh. Another caveat to add to the discussion.
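The figures come from something like this (a sketch using ggplot2; math.OK is the proportion meeting the mathematics standard, as in my other posts):

[sourcecode lang="R"]
library(ggplot2)

# Jittered points show every school and the uneven
# participation of school types in the sample
qplot(school.type, math.OK, data = standards,
      geom = 'jitter', alpha = I(0.5))
[/sourcecode]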

P.S. 2012-09-27 16:15. Originally I mentioned in this post the lack of secondary schools (Year 9-15) but, well, they are not supposed to be here, because National Standards apply to years 1 to 8 (Thanks to Michael MacAskill for pointing out my error.)

Some regressions on school data

Eric and I have been exchanging emails about potential analyses for the school data and he published a first draft model in Offsetting Behaviour. I have kept on doing mostly data exploration while we get a definitive full dataset, and looking at some of the pictures I thought we could present a model with fewer predictors.

The starting point is the standards dataset I created in the previous post:
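Read back in with something like:

[sourcecode lang="R"]
# Standards dataset put together in the previous post
standards <- read.csv('standards.csv')
[/sourcecode]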

There seems to be a different trend for secondary versus non-secondary schools in the relationship between the number of full-time teacher equivalents (FTTE) and total roll. The presence of a small number of large schools suggests that log transforming the variables could be a good idea.
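A sketch of the plot behind the next figure, creating an indicator for secondary schools (the FTTE and total.roll column names are assumptions):

[sourcecode lang="R"]
library(ggplot2)

# Indicator for secondary versus non-secondary schools
standards$secondary <- grepl('Secondary', standards$school.type)

# Teacher numbers versus roll on a log-log scale
qplot(log(total.roll), log(FTTE),
      data = standards, color = secondary)
[/sourcecode]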

Difference in the number of students per FTTE between secondary and non-secondary schools.

Now we fit a model where we are trying to predict reading standards achievement per school, accounting for decile, authority, proportion of non-European students, secondary schools versus the rest, and a different effect of the number of students per FTTE for secondary and non-secondary schools.
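A sketch of that model, reusing the secondary indicator and assuming authority and prop.non.european as variable names:

[sourcecode lang="R"]
# Reading achievement as a function of decile, authority and
# ethnicity, with a separate students-per-FTTE slope for
# secondary and non-secondary schools
m1 <- lm(reading.OK ~ decile + authority + prop.non.european +
           secondary * I(total.roll / FTTE),
         data = standards)
summary(m1)
[/sourcecode]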

The residuals are still a bit of a mess:
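Checked with the default diagnostics (a sketch):

[sourcecode lang="R"]
# Default diagnostic plots for the linear model
par(mfrow = c(2, 2))
plot(m1)
[/sourcecode]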

Residuals for this linear model: still a bit of a mess.

If we remember my previous post, decile accounted for 45% of the variation, and we explain an extra 4% through the additional predictors. Non-integrated schools have lower performance, a higher proportion of non-European students reduces performance, secondary schools have lower performance, and larger classes tend to perform better (Eric suggests reverse causality; I’m agnostic at this stage), although the rate of improvement changes between secondary and non-secondary schools. In contrast with Eric, I didn’t fit separate ethnicities, as those predictors are related to each other and constrained to add up to one.

Of course this model is very preliminary, and a quick look at the coefficients will show that changes in any predictor besides decile will move the response by a very small amount (despite the tiny p-values and the numerous stars next to them). The distribution of residuals is still heavy-tailed and there are plenty of questions about data quality; I’ll quote Eric here:

But differences in performance among schools of the same decile by definition have to be about something other than decile. I can’t tell from this data whether it’s differences in stat-juking, differences in unobserved characteristics of entering students, differences in school pedagogy, or something else. But there’s something here that bears explaining.

Updating and expanding New Zealand school data

In two previous posts I put together a data set and presented some exploratory data analysis on school achievement for national standards. After those posts I exchanged emails with a few people about the sources of data and Jeremy Greenbrook-Held pointed out Education Counts as a good source of additional variables, including number of teachers per school and proportions for different ethnic groups.

The code below calls three files: Directory-Schools-Current.csv, teacher-numbers.csv and SchoolReport_data_distributable.csv, which you can download from the links.
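Roughly like this; the key used for merging is an assumption:

[sourcecode lang="R"]
# Read the three files
directory <- read.csv('Directory-Schools-Current.csv')
teachers  <- read.csv('teacher-numbers.csv')
standards <- read.csv('SchoolReport_data_distributable.csv')

# Merge on a common school id (column name is an assumption)
standards <- merge(standards, directory, by = 'school.id')
standards <- merge(standards, teachers, by = 'school.id')
[/sourcecode]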

This updated data set is more comprehensive but it doesn’t change the general picture presented in my previous post beyond the headlines. Now we can get some cool graphs to point out the obvious, for example the large proportion of Maori and Pacific Island students in low decile schools:
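Something along these lines, assuming prop.maori and prop.pacific as the names for the ethnic-group proportions:

[sourcecode lang="R"]
library(ggplot2)

# Proportion of Pacific Island versus Maori students,
# colored by school decile
qplot(prop.maori, prop.pacific,
      data = standards, color = decile)
[/sourcecode]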

Proportion of Pacific Island (vertical axis) and Maori students (horizontal axis) in schools with points colored by decile. Higher proportions for both are observed in low decile schools.

I have avoided ‘proper’ statistical modeling because i- there is substantial uncertainty in the data and ii- the national standards for all schools (as opposed to only 1,000 schools) will be released soon; we don’t know if the published data are a random sample. In any case, a quick linear model fitting the proportion of students that meet reading standards (reading.OK) as a function of decile, weighted by total school roll to account for the varying school sizes, will explain roughly 45% of the observed variability in reading achievement.
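A sketch of that model:

[sourcecode lang="R"]
# Proportion meeting reading standards as a function of decile,
# weighted by total roll to account for school size
m <- lm(reading.OK ~ decile, data = standards,
        weights = total.roll)
summary(m)  # R-squared is roughly 0.45
[/sourcecode]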

The model fit has a few issues with the distribution of residuals; we should probably use a power transformation for the response variable, but I wouldn’t spend much more time on this before getting the full data for national standards.

Residuals of a quick weighted linear model. The residuals show some heterogeneity of variance (top-left) and deviation from normality (top-right) with heavy tails.

Bonus plot: map of New Zealand based on school locations, colors depicting proportion of students meeting reading national standards.

[sourcecode lang="R"]
library(ggplot2)

# School locations colored by proportion meeting reading standards
qplot(longitude, latitude,
      data = standards, color = reading.OK)
[/sourcecode]

New Zealand drawn using school locations; color coding is for proportion of students meeting reading national standards.

P.S. 2012-09-26 16:01. The simple model above could be fitted taking into account the order of the decile factor (using ordered()) or just fitting linear and quadratic terms for a numeric expression of decile. Anyway, that would account for 45% of the observed variability.

P.S. 2012-09-26 18:17. Eric Crampton has posted preliminary analyses based on this dataset in Offsetting Behaviour.

A little break from data & policy: experibass

Music and creativity relax me: Diego Stocco’s Experibass.

New Zealand school performance: beyond the headlines

I like the idea of having data on school performance, not to directly rank schools—hard, to say the least, at this stage—but because we can start having a look at the factors influencing test results. I imagine the opportunity in the not so distant future to run hierarchical models combining Ministry of Education data with Census/Statistics New Zealand data.

At the same time, there is the temptation to come up with very simple analyses that would make appealing newspaper headlines. I’ll read the data, create a headline, and then move on to something that, personally, seems more important. In my previous post I combined the national standards for around 1,000 schools with decile information to create the standards.csv file.
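The data reading and cleaning go something like this (the counts used to build the proportions are assumed column names):

[sourcecode lang="R"]
# Read the merged file from the previous post
standards <- read.csv('standards.csv')

# Remove special schools
standards <- subset(standards, school.type != 'Special School')

# Proportion of students at least meeting the reading standard;
# reading.at and reading.above are assumed column names
standards$reading.OK <- with(standards,
                             (reading.at + reading.above) / total.roll)
[/sourcecode]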

Up to this point we have read the data, removed special schools and created variables that represent the proportion of students that at least meet the standards. Now I’ll do something very misleading: calculate the average reading.OK for each school decile and plot it, showing the direct relationship between socioeconomic decile and school performance.
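A sketch of the misleading calculation:

[sourcecode lang="R"]
# Average reading.OK per decile: simple, appealing and misleading
decile.means <- aggregate(reading.OK ~ decile, data = standards,
                          FUN = mean)
plot(decile.means$decile, decile.means$reading.OK,
     xlab = 'Decile', ylab = 'Mean proportion meeting reading standard')
[/sourcecode]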

Scatterplot of average proportion of students at least meeting the reading national standards for each socioeconomic decile.

Using only the average strengthens the relationship, but it hides something extremely important: the variability within each decile. The following code will display box and whisker plots for each decile: the horizontal line is the median, the box contains the middle 50% of the schools, and the vertical lines extend to 1.5 times the interquartile range (pretty much the whole range in most cases):
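Something like:

[sourcecode lang="R"]
# Box and whisker plots expose the within-decile variability;
# swap reading.OK for writing.OK or math.OK for the later figures
boxplot(reading.OK ~ decile, data = standards,
        xlab = 'Decile',
        ylab = 'Proportion meeting reading standard')
[/sourcecode]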

Box and whisker plot for proportion of students at least meeting the reading national standard for each decile. Notice the variability for each decile.

A quick look at the previous graph shows a few interesting things:

  • the lower the decile, the more variable school performance is,
  • there is pretty much no difference between deciles 6, 7, 8 and 9, with a minor increase for decile 10, and
  • there is a trend of decreasing performance for lower deciles; however, there is also huge variability within those deciles.

We can repeat the same process for writing.OK and math.OK with similar trends, although the level of standard achievement is lower than for reading:

Box and whisker plot for proportion of students at least meeting the writing national standard for each decile.

Box and whisker plot for proportion of students at least meeting the mathematics national standard for each decile.

Achievement in different areas (reading, writing, mathematics) is highly correlated:
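For example (a sketch; deciles are assumed to be numeric):

[sourcecode lang="R"]
library(ggplot2)

# Correlation between the three areas
cor(standards[, c('reading.OK', 'writing.OK', 'math.OK')],
    use = 'complete.obs')

# Mathematics versus reading, splitting deciles 1-5 and 6-10
qplot(math.OK, reading.OK, data = standards,
      color = decile <= 5)
[/sourcecode]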

Scatterplot for proportion meeting mathematics and reading national standards. Dark points are deciles 1 to 5, while light points are deciles 6 to 10. Notice the large overlap for performance between the two groups.

All these graphs are only descriptive/exploratory; however, once we have more data we could move to hierarchical models to start adjusting performance for socioeconomic aspects, geographic location, type of school, urban or rural setting, school size, ethnicity, etc. Potentially, this would let us target resources at schools that could be struggling to perform; nevertheless, any attempt at creating a ‘quick & dirty’ ranking ignoring the previously mentioned covariates would be, at the very least, misleading.

Note 2012-09-26: I have updated data and added a few plots in this post.

New Zealand School data

Some people have the naive idea that the national standards data and the decile data will not be put together. Given that at some point all the data will be available, there is no technical reason not to merge them, which I have done in this post for this early release.

Stuff made data for the National Standards for a bit over 1,000 schools available here as an Excel file. I cleaned it up a bit to read it in R and saved it to csv format, of which you can get a copy here. Decile data are available from the Ministry of Education, which I also transformed to csv and uploaded here. We can now merge the datasets using R:
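Along these lines; the decile file name and the merging key are placeholders:

[sourcecode lang="R"]
# National standards data (from Stuff) and decile data (from MoE)
standards <- read.csv('SchoolReport_data_distributable.csv')
deciles <- read.csv('decile-data.csv')  # placeholder file name

# Merge on school id (key column name is an assumption)
standards <- merge(standards, deciles, by = 'school.id')
[/sourcecode]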

But there are some name discrepancies in the merge, which we can check using:
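Something like this, relying on the .x/.y suffixes that merge() adds to the duplicated school.name column:

[sourcecode lang="R"]
# Rows where the school name differs between the two sources
standards[as.character(standards$school.name.x) !=
            as.character(standards$school.name.y),
          c('school.id', 'school.name.x', 'school.name.y')]
[/sourcecode]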

There are four schools that have very different names in the two datasets: 82, 266, 429 and 1650. They may have just changed names or something could be wrong. At this point one may choose to drop them from any analysis; up to you. We then just rename the second variable to school.name and save the file to csv (available here as standards.csv for your enjoyment).
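A sketch of the renaming and saving step:

[sourcecode lang="R"]
# Keep a single school.name variable and drop the duplicates
standards$school.name <- standards$school.name.y
standards$school.name.x <- standards$school.name.y <- NULL

write.csv(standards, 'standards.csv', row.names = FALSE)
[/sourcecode]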

Now we should try to merge any other economic and social information available from Statistics New Zealand.

Note 2012-09-26: I have updated data and added a few plots in this post.

(Unsurprisingly) users default to the defaults

Oddities tend to jump out when one uses software on a daily basis. The situation is even clearer when using software for teaching: many more people are looking at it with fresh eyes.

Let’s say we are fitting a simple linear model and we use the summary() function; then POW! i- one gets all sorts of stars next to each of the coefficients and ii- some tiny p-values with lots of digits. Since time immemorial (childcare, at least) we have got star stickers for doing a good job, and here we have R doing the same. It is possible to remove the stars, I know, but the default is the subject of this post.
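For example, with some made-up data (a sketch; x and y are invented):

[sourcecode lang="R"]
# Toy data and model just to show the default summary() output
set.seed(42)
x <- rnorm(100)
y <- 2 + 0.25 * x + rnorm(100)
m <- lm(y ~ x)

summary(m)  # stars and many-digit p-values by default
[/sourcecode]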

In contrast, I like the simplicity of the display() function in the arm package. If we use it with the same model m we get:
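Along these lines:

[sourcecode lang="R"]
library(arm)

# Coefficients rounded to two decimals, no stars, no p-values
display(m)
[/sourcecode]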

There is an emphasis on understanding and awareness of limitations. Rather than a false sense of accuracy and precision, we get coefficients rounded to two decimal places, no stars and no p-values. Did you really believe that the p-value was 0.0249 instead of 0.0243?

Defaults also require consistency, because consistency begets ‘intuitiveness’. Every function that deals with data should take a data argument and avoid being schizophrenic:
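A contrived example of the inconsistency (made-up data):

[sourcecode lang="R"]
df <- data.frame(x = rnorm(50))
df$y <- 1 + 2 * df$x + rnorm(50)

# lm() takes a data argument...
m <- lm(y ~ x, data = df)

# ...but cor() does not, so we repeat the data frame name
cor(df$x, df$y)
[/sourcecode]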

I’ve mentioned the apply() family of functions in previous posts. Yes, I hate them with a passion and that’s why I think we should unite and rely on Talk like a Pirate Day (Rrrrstats, matey!) to lead a mutiny to get plyr as the new default.
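Something like this comparison, using the built-in iris data:

[sourcecode lang="R"]
library(plyr)

# Group means, apply-family style...
sapply(split(iris$Sepal.Length, iris$Species), mean)

# ...versus the plyr version I'd rather see as the default
ddply(iris, 'Species', summarise,
      mean.length = mean(Sepal.Length))
[/sourcecode]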

However, even some of my favorite packages have strange defaults. In ggplot2, for example, the size (too small) and the color (too light) of the text in the axes make it extremely hard to read. Maybe not for you, young reader, but for a long-sighted, receding-hairline guy with a salt & pepper beard it is a struggle. This is magnified when pasting said graph into a Keynote or PowerPoint presentation and then using a crappy projector. Again, thanks Hadley, I can define my own theme, but there are thousands of difficult-to-read graphs doing the rounds because, hey, those are the defaults.
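One workaround (a sketch) is to bump the default text size once per session:

[sourcecode lang="R"]
library(ggplot2)

# Larger default text for every plot in the session
theme_set(theme_grey(base_size = 16))
[/sourcecode]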

Every time we define a default we have to think that, even when we allow change, most users will probably stick to it. Of course I’m the first one to acknowledge that my favorite defaults may not coincide with yours, as shown below.

Sticker in a University of Canterbury toilet cubicle. There are cultural differences in the default correct position.

Mid-September flotsam

This is one of those times of the year: struggling to keep my head above water, roughly one month before the last lecture of the semester. On top of that, trying to squeeze in trips, meetings and presentations, while dealing with man flu.

Gratuitous picture: looking for peace in Japan (Photo: Luis).

Suicide statistics and the Christchurch earthquake

Suicide is a tragic and complex problem. This week New Zealand’s Chief Coroner released the annual statistics on suicide, which come with several tables and figures. One of those figures refers to monthly suicides in the Christchurch region (where I live) and comes with an interesting comment:

Suicides in the Christchurch region (Timaru to Kaikoura) have risen from 67 (2010/11) to 81 (2011/12). The average number of suicides per year for this region over the past four years is 74. The figure of 67 deaths last year reflected the drop in suicides post-earthquake. The phenomenon of a drop in the suicide rate after a large scale crisis event, such as a natural disaster, has been observed elsewhere. [my emphasis]

‘Provisional suicide deaths in relation to the Christchurch earthquakes’ is the original header for the graph in the report. The first earthquake was in September 2010 and is marked with red dots.

The figure highlights the earthquake and its major aftershocks using different colors. It is true that we have faced large problems following the 4th September 2010 earthquake and thousands of aftershocks, but can we really accept the coroner’s interpretation (already picked up by the media)? One could have a look at the data before the earthquake, where there are also big drops and rises (what would be the explanation for those? Nothing to do with any earthquake). In fact, the average number of suicides hasn’t really changed after the quake.

I typed the data into this file, calculated the mean number of suicides per month (~6.3) and generated a few random realizations of a Poisson process using that mean; here I’m plotting the real data in panel 1 and four other randomly generated series in panels 2 to 5.
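Roughly like this; the file and column names are placeholders:

[sourcecode lang="R"]
# Monthly suicide counts typed in from the report
suicides <- read.csv('suicides.csv')  # placeholder file name

# Real series in panel 1, four Poisson simulations below
set.seed(2012)
par(mfrow = c(5, 1), mar = c(2, 4, 1, 1))
plot(suicides$count, type = 'l', ylab = 'Real')
for (i in 1:4) {
  plot(rpois(60, lambda = 6.3), type = 'l',
       ylab = paste('Simulation', i))
}
[/sourcecode]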

Observed suicide data for Christchurch (panel 1) and four 60-month simulations (panels 2-5).

Do they really look different? We could try to fit any set of peaks and valleys to our favorite narrative; however, it doesn’t necessarily make any sense. We’ll need a larger dataset to see any long-term effects of the earthquake.

P.S. 2012-09-18. Thomas Lumley comments on this post in StatsChat.
