University rankings are released every winter, and they give the media a few free stories. One or two universities go up the rankings, and one or two go down. University presidents rush out to the media to congratulate themselves and their staff on the hard work and innovation that led to the rise, on how they are doing more with less, and so on. The stories write themselves.
Alternatively, the presidents remark gravely that the fall is the inevitable by-product of unsustainable funding cuts, and that if the government doesn’t intervene it will only get worse.
Of course, if either were true we’d expect to see trends. The universities of hard work and innovation should be improving consistently, and the funding losers dropping. In fact, Irish universities are not moving in any particular direction. There is a dramatic fall in one ranking, the Times Higher, but a hardly noticeable one in the QS ranking, and in the Shanghai ranking Irish universities have improved. Surely if a change in performance were dramatic and real, all the rankings would pick it up?
This brings us to the problems with these rankings.
The first is the use of rank at all. Rankings are inherently unstable: the rank a university comes in can change dramatically even if the performance of the university hasn’t changed much. Think of someone’s rank in a marathon with 1,000 runners. The top 10 runners will be spaced apart; the first and the tenth could have quite different performances, and both will be a long way ahead of the 100th runner. But after a while bunching happens. The 100th and the 200th runners might not be far apart in time, yet the rank suggests a big difference. In the main middle bunch, between the 250th and 750th runners, small differences in performance yield huge differences in rank order.
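To make the bunching point concrete, here is a minimal sketch in Python with invented finishing times (the distribution and the numbers are illustrative assumptions, not real race or ranking data):

```python
import random

random.seed(1)

# Invented example: 1,000 marathon finishing times (in minutes), roughly
# normally distributed around a four-hour finish. Real race times are
# skewed, but the bunching effect is the same.
times = sorted(random.gauss(240, 20) for _ in range(1000))

# Performance gap between the 1st and 10th finishers (the spread-out front).
front_gap = times[9] - times[0]
# Performance gap between the 250th and 750th finishers (the middle bunch).
middle_gap = times[749] - times[249]

print(f"1st to 10th:    {front_gap:5.1f} min over   9 places "
      f"({front_gap / 9:.2f} min per place)")
print(f"250th to 750th: {middle_gap:5.1f} min over 500 places "
      f"({middle_gap / 500:.2f} min per place)")

# Typical output: roughly two minutes separate each place at the front,
# but only a few seconds separate each place in the middle bunch, so a
# tiny improvement there jumps a runner dozens of places.
```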
What are these scores based on?
The different university rankings (there are about seven of them) use a variety of components to measure university performance, but they tend to rely heavily on one: university reputation. It accounts for between a third and a half of the score in each of the different rankings. Reputation is often based on some real differences, but it means that big, famous universities are scored higher because, well, we’ve heard of them. If you’re asked what the best university in the world is, some names come to mind, and these are the names that come out on top. Are they the best? They’re probably close to it, but our assessment is based on nothing more than name recognition. This is a bit like using opinion polls far out from an election. Well-known candidates perform best in these because voters have heard of them. After a campaign, in the real election, these advantages are reduced and the less well-known candidates perform better than expected.
One ranking, Shanghai, rejects these reputational criteria. But it replaces them with possibly more obscure ones. It counts the number of students and staff who have won a Nobel Prize. These prizes aren’t plentiful, so most universities in the world will score zero. But do they really measure anything a student would want to know? That Samuel Beckett went to Trinity and then won a Nobel Prize is as much a matter of luck (for Trinity) as anything else. Does it reflect the contemporary student experience?
Research output and citations measure less obscure things. Research is a core university activity, and it can be measured pretty reliably. A journal article that is published and relied on for further research makes some difference. But it is not clear that it makes a great deal of difference to the student experience. Students should be getting taught by genuine experts in the field, which is good. But does it mean they are taught well? The busy researcher may be less engaged with teaching pesky undergraduates, who eat into research time or, more likely, the time that could be spent chasing grant income.
And teaching itself is remarkably difficult to measure, so the rankings fall back on staff-to-student ratios. We know that good teaching relies on interaction, and it’s easier to be interactive with small groups. But it’s also possible to be mind-numbingly boring in small groups.
Students, who are the core source of funding for Irish universities, might be better off with mediocre researchers who are fantastic teachers, who give them hands-on experience doing whatever it is they are there to learn. There are few German universities in the top 100 of any ranking, but no one thinks German graduates deficient. One reason there are so few German successes is that research is often done in independent institutes; another is that Germany spreads its resources, not to create a few small elite universities that teach few students but to make all universities pretty good. The path some are suggesting of ‘picking winners’, creating at least one elite Irish university by targeting resources, may not be such a good idea.
A further criterion used by most rankings is the internationalisation of the institution. Universities with a large number of foreign staff and students are given a higher score. But measures of internationalisation tend to favour small, open countries. Ireland is one of the most globalised countries in the world mainly because it is one of the smaller developed countries, so a small number of foreign students makes a big impact here. We also benefit from speaking English, which makes it easier to move here. I’m not sure this reflects anything meaningful in the performance of our universities.
Measuring university performance is difficult. The existing rankings capture something: if Harvard is in the top ten of every ranking, it is probably because it is indeed an excellent university. But as with most things we measure in the social world, we should look at a composite of different measures and look for trends, and we should not make too much of annual changes in one measure. These measures are also subject to gaming, and universities now actively target them. The UCC president Michael Murphy rather clumsily instructed staff to ask academics they knew to think of UCC when assessing reputation. It might be more harmful if, as is possible, universities target foreign students over Irish ones.
As Goodhart’s Law warns us: when a measure becomes a target, it ceases to be a good measure.
If universities are really worried about the impact of funding cuts, they should look at how they could save money. They might think about why taxpayers pay academics to freely supply material to publishers, who repackage it and then charge the taxpayers again to read it. Academic publishing is the most profitable of businesses: one publisher, Elsevier, made over €1bn last year, with a profit margin of 39%.
Rather than using ranking changes to lobby for more funds, the HEA might find that dealing with this is a better place to start.