When considering foreign universities for higher education, how exactly do you decide where to apply? For most people, the criteria do not go beyond the essentials: cost of tuition, courses offered, application requirements, scholarship possibilities, overall living expenses, job market demand and, of course, very importantly, university rankings! The last of these is the most controversial of them all!
University rankings are controversial because there are over 10 international university ranking systems, and hardly any two produce results similar enough to be trusted or averaged out. For instance, the prestigious University of Oxford can be found anywhere from the top 10 to the top 50 universities worldwide, depending on which list you consult.
If these so-called professional systems rest on whim-based decision making, or on a focus on fame, wealth and exclusivity, can we really trust them?
For an analysis of the same issue, let’s have a look at what scientificblogging has to say:
Thousands of high school students are currently deliberating over which university to attend next year. But which are the best? A study published in the open access journal BMC Medicine warns against using international rankings of universities to answer this question. They are misleading and should be abandoned, the study concludes.
The study focuses on the published 2006 rankings of the Times Higher Education Supplement “World University Rankings” and the Shanghai Jiao Tong University “Academic Ranking of World Universities”. It found that only 133 institutions were shared between the top-200 lists of the Shanghai and Times rankings; four of the top-50 in the Shanghai list did not even appear among the first 500 universities of the Times ranking.
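The overlap figure above is just a set intersection between the two published lists. As a minimal sketch of that comparison, here is how it could be computed in Python; the university names below are hypothetical placeholders, not the actual 2006 ranking data:

```python
def overlap(ranking_a, ranking_b):
    """Count the institutions that appear in both rankings."""
    return len(set(ranking_a) & set(ranking_b))

# Hypothetical top-5 lists standing in for the real top-200 lists.
shanghai_top = ["Uni A", "Uni B", "Uni C", "Uni D", "Uni E"]
times_top = ["Uni C", "Uni D", "Uni E", "Uni F", "Uni G"]

shared = overlap(shanghai_top, times_top)
print(f"{shared} of {len(shanghai_top)} institutions appear in both lists")
```

With the real top-200 lists, the study reports this overlap to be only 133 of 200, which is why the authors treat the two systems as inconsistent rather than interchangeable.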
The study’s authors argue that such discrepancies stem from poor methodology and inappropriate indicators, making the ranking systems invalid.
The Shanghai system, for example, measures research excellence in part by the number of Nobel- and Fields-winning alumni and staff at the institution. However, few universities boast laureates on their staff, and their presence does not necessarily lead to better undergraduate education. Furthermore, the prize-winning staff have usually performed their ground-breaking work at another institution, so the measure really captures an institution’s ability to attract prestigious awardees rather than identifying where ground-breaking work is done.
The Times ranking, on the other hand, places great emphasis on the results of a survey sent out to more than 190,000 researchers. They are asked to list what they think are the top 30 universities in their field of research. Yet this survey is entirely opinion-based, and with a response rate below 1% may contain significant bias.
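The scale of that bias concern is clearer with a back-of-envelope calculation. Assuming the figures quoted above (roughly 190,000 researchers surveyed, a response rate below 1%), the survey would rest on fewer than 1,900 replies:

```python
# Back-of-envelope check of the survey numbers quoted in the study.
surveyed = 190_000
response_rate_ceiling = 0.01  # "below 1%" per the study

max_responses = int(surveyed * response_rate_ceiling)
print(f"At most about {max_responses} researchers actually responded")
```

A self-selected sample of under 1,900 opinions, spread across every field of research, is what anchors a ranking of thousands of institutions.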
“There are flaws in the way that almost every indicator is used to compile these two popular rankings,” says John Ioannidis, who led the analysis team. “I don’t disagree that excellence is important to define, measure, interpret and improve, but the existing ranking criteria could actually harm science and education.”
The study’s authors call for global collaboration to standardise data on key aspects of universities and other institutions, and urge that any flaws be openly admitted rather than underestimated. “Evaluation exercises should not force spurious averages and oversimplified rankings for the many faces of excellence,” says Ioannidis. “And efforts to improve institutions should not focus just on the numbers being watched.”
Article: “International ranking systems for universities and institutions: a critical appraisal”, John PA Ioannidis, Nikolaos A Patsopoulos, Fotini K Kavvoura, Athina Tatsioni, Evangelos Evangelou, Ioanna Kouri, Despina G Contopoulos-Ioannidis and George Liberopoulos, BMC Medicine