Correspondence

International ranking systems for universities and institutions: a critical appraisal

John PA Ioannidis 1,2,3,*, Nikolaos A Patsopoulos 1, Fotini K Kavvoura 1, Athina Tatsioni 1,3, Evangelos Evangelou 1, Ioanna Kouri 1, Despina G Contopoulos-Ioannidis 4,5 and George Liberopoulos 1

Author Affiliations

1 Department of Hygiene and Epidemiology, University of Ioannina School of Medicine, Ioannina 45110, Greece

2 Biomedical Research Institute, Foundation for Research and Technology-Hellas, Ioannina 45110, Greece

3 Institute for Clinical Research and Health Policy Studies, Department of Medicine, Tufts University School of Medicine, Boston, MA 02111, USA

4 Department of Pediatrics, University of Ioannina School of Medicine, Ioannina, Greece

5 Department of Pediatrics, George Washington University School of Medicine and Health Sciences, Washington DC, USA


BMC Medicine 2007, 5:30  doi:10.1186/1741-7015-5-30

Published: 25 October 2007

Abstract

Background

Ranking of universities and institutions has attracted wide attention recently. Several systems have been proposed that attempt to rank academic institutions worldwide.

Methods

We review the two most publicly visible ranking systems, the Shanghai Jiao Tong University 'Academic Ranking of World Universities' and the Times Higher Education Supplement 'World University Rankings', and we also briefly review other ranking systems that use different criteria. We assess each proposed ranking criterion for its construct validity with respect to educational and research excellence and for its measurement validity, and we try to identify generic challenges in the international ranking of universities and institutions.

Results

None of the reviewed criteria for international ranking appears to have very good construct validity for both educational and research excellence, and most do not have very good construct validity even for one of these two aspects of excellence. Measurement error for many items is also considerable, or cannot be determined because the relevant data and methodological details have not been published. The concordance between the 2006 rankings by Shanghai and Times is modest at best, with only 133 universities shared between their top-200 lists. Examination of the existing international ranking systems suggests that generic challenges include adjustment for institutional size, definition of institutions, the implications of measuring average excellence versus extremes of excellence, adjustment for scientific field, the time frame of measurement, and the allocation of credit for excellence.
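As an illustration of the concordance figure quoted above, the sketch below computes the overlap between two top-200 lists as the number of institutions appearing in both. The list contents are hypothetical placeholders, not the actual 2006 Shanghai or Times rankings; only the reported result of 133 shared institutions comes from the abstract.

```python
# Sketch: quantifying concordance between two top-200 ranking lists.
# The institution names below are hypothetical placeholders, not the
# actual 2006 Shanghai or Times rankings.

def top_n_overlap(list_a, list_b, n=200):
    """Return the number of institutions appearing in both top-n lists."""
    return len(set(list_a[:n]) & set(list_b[:n]))

# Hypothetical example with three institutions per list.
shanghai_top = ["University A", "University B", "University C"]
times_top = ["University B", "University C", "University D"]

shared = top_n_overlap(shanghai_top, times_top, n=200)
print(f"{shared} institutions appear in both top-200 lists")
# With the real 2006 lists, the reported count was 133 out of 200.
```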

Conclusion

Naïve lists of international institutional rankings that do not address these fundamental challenges with transparent methods are misleading and should be abandoned. We make some suggestions on how focused and standardized evaluations of excellence could be improved and placed in proper context.