More of this! Here Daniel Diermeier, chancellor of Vanderbilt University, writes about the negative impact of #rankings on students. This is not just a personal opinion: Diermeier cites a report published by NORC at the University of Chicago on the methodology used in 5 prominent #uniranking systems. Diermeier highlights findings such as unclear methodologies, unexplained rationales for the relative weights of various attributes, poorly defined concepts, inconsistent data quality, and highly subjective factors at critical points in the ranking process, all of which make definitive comparisons between institutions difficult. My own interest in the report, however, lay in its emphasis on the flaws of surveys. The report concludes that #surveys are fraught with issues that can undermine their #validity and #reliability.
1. Subjectivity and Bias: Surveys, especially those based on reputation, are inherently subjective and can be influenced by factors like media representation, historical prestige, and individual experiences.
2. Total Survey Error: The report applies the Total Survey Error framework to highlight various errors that can occur in survey data, including measurement error, processing error, coverage error, sampling error, nonresponse error, and adjustment error. Each of these errors can significantly impact the reliability of survey-based measures.
3. Sampling and Coverage Issues: Surveys often suffer from sampling and coverage errors, where certain groups may be underrepresented or excluded altogether.
4. Nonresponse and Adjustment Errors: Nonresponse error is a significant concern, as those who choose not to participate may differ systematically from those who do. Post-survey adjustments are often made to mitigate these errors, but the adjustments can introduce biases of their own.
5. Lack of Transparency: The report criticizes the lack of transparency in how survey data are collected, processed, and adjusted. This lack of clarity makes it difficult to assess the quality and reliability of the survey data used in rankings.
The last point is by far the most important to me. The report invites rankers to adopt more rigorous and transparent methodologies for survey data, but I'd say these recommendations will fall on deaf ears! There is too much money at stake for them to be taken into consideration, with governments now involved in boosting votes for their countries' institutions!
The report aligns beautifully with other efforts, such as the United Nations University - IIGH, which issued a statement on #global #university #rankings, the European University Association briefing on key considerations for the use of rankings, and the INORMS Research Evaluation Group's #morethanourrank initiative, among others. Clearly, the tide is changing!
Marcelo Knobel Elizabeth Gadd Professor Ellen Hazelkorn Darren Reisberg Hilligje van't Land Jacob Jensen Eva Hildebrandt Richard Holmes Piet Van Hove Jeroen Bosman