Opinion

Read Between The University Rankings

Rankings largely reinforce dominant impressions about institutions’ reputations and say little about students’ needs

I would strongly advise students not to take institutional rankings as the absolute criterion for their choice of where to study. To students who can dare to be imaginative and experimental, I would go one step further and say: don’t even consider them your primary criterion. In India, these rankings simply indicate traditional and popular assumptions about education and excellence, failing to take into account changes that come with time, the needs of particular students, or, for that matter, the rapidly evolving needs of the world order. There are many reasons behind this. But let me begin with a personal experience that shaped, and eventually upset, educational choices made in my own family.

In 2016, we moved from California to Delhi as part of a significant step in India’s higher education ecosystem. After nine years of teaching at Stanford University, my partner and I were to be among the early cohort of faculty at Ashoka University, with the responsibility of setting up new programmes and curricula at the new private institution. A more intimate educational decision was finding schools for our own children, aged six and two, who would be moving from a public school in Palo Alto and a Spanish-immersion playschool, respectively. We felt lucky to find places in one of the oldest and best-known schools in south Delhi. We loved its spacious grounds and the lack of “bling” that seemed to have become the social norm in most south Delhi private schools. It also happened to be a school consistently ranked high, often first or second, in the annual school rankings carried out by a popular newsmagazine that many parents consulted seriously when seeking admission for their children.

Within just a couple of years, we realised that the children were feeling stifled by the school, both emotionally and intellectually. In spite of being at the top of her class and being chosen class prefect, our daughter was underwhelmed and demoralised by both the curriculum and the pedagogy. The only thing that kept her going was the group of close friends she’d made, whose company the prolonged pandemic eventually robbed her of. Our son, only in Class II, was less perturbed, but it was clear that school had become a pointless chore for him. After watching this state of affairs for over a year, we eventually decided to transfer them to a school in the Delhi-NCR area defined by an alternative, child-centric approach to education, even though it is affiliated to a board known for greater academic rigour. It has now been several months in the new school, and neither we nor the children have ever been happier.

The fact that the new school, though far less known than the older one, is also listed among the well-ranked schools in the Delhi-NCR area indicated to us that ranking systems are not necessarily off the mark. But our dissatisfaction with the far higher-ranked, widely known, older school revealed two things: 1) an institution’s place in a general ranking list says nothing about a student’s fit with that institution, and 2) ranking systems largely reflect dominant impressions in society about the established reputations of institutions, rarely capturing changes that have affected either the institution or the larger educational climate; in this, rankings can be years, even decades, behind the times.

This is one of the most obvious limitations of ranking systems. Reputational surveys are one of their key methodologies. These largely reinforce established wisdom and severely limit the possibility of newer, alternative forms of knowledge, pedagogy and preparation that newer institutions might have to offer. With international rankings of universities, established wisdom is backed by tremendous financial prowess, which is one of the explicit criteria of excellence used by many ranking systems. This creates an endless cycle. Without the massive finances of the top Anglophone universities, mostly in the US and the UK, and certain kinds of resources, such as specific professional programmes and excellence in disciplines (such as the life sciences) that produce citations at rates far higher than others, it is virtually impossible for new institutions to break into the higher ranks. This has precious little to do with the educational experience a student can get from an institution, and even less with whether an institution is well suited to the particular needs of a student.

The ranking of universities was started by the US News and World Report for American institutions. The major international ranking systems today are the Academic Ranking of World Universities from Shanghai’s Jiao Tong University, first released in 2003, and the World University Rankings from the Times Higher Education of Britain (henceforth THES), first released in November 2004. Rankings are also done by Maclean’s in Canada, Der Spiegel in Germany and the Asahi Shimbun in Japan, alongside global MBA rankings from the Financial Times and the GreenMetric World University Ranking from Indonesia. There are also a number of international ranking systems for specific professional schools, mostly in business, law and medicine; rankings of business schools, for instance, are done by such organisations as The Economist, the Financial Times, the Wall Street Journal and Business Week.

Across the board, power and resources, which have a complex and unpredictable relation with the educational experience of students or the research and intellectual output of faculty, emerge as by far the most influential criteria for high ranking. Furthermore, as is evident in the Asian ranking systems, rankings favour universities with strength in the sciences, engineering and medicine, and the particular models of knowledge, research and citations these generate. At the same time, English-speaking institutions get a clear preference over institutions from non-Anglophone cultures.

“Many of the indicators,” Ellen Hazelkorn points out in an article titled ‘The dubious practice of university rankings’, “focus on inputs which are strongly correlated to wealth (eg. institutional age, tuition fees or endowments/philanthropy), as a proxy for educational quality.” She also critiques how QS and THES use reputational surveys to assess how an institution is perceived by its faculty peers and key stakeholders. “This methodology,” she points out, “is widely criticised as overly-subjective, self-referential and self-perpetuating in circumstances where a respondent’s knowledge is limited to that which they already know, and reputation is conflated with quality or institutional age.” The last point aptly accounts for the high rankings of certain older schools all across India, irrespective of whether or not they have evolved with the times.

Rankings create a Catch-22 situation. Without high rankings, it is difficult to attract major financial support or greater student enrolment; but without the greater financial resources that these bring, it is almost impossible for institutions to improve their ranking.

Left without a choice, many are forced to enter the competition to raise their rankings. As such, many institutions today have clear rankings strategies and institutional research units that benchmark rankings performance. But in pushing themselves this way, particularly towards certain valorised models of research output, they create tension with their original mission and values, and with the needs of the communities they were meant to serve.

The overemphasis on particular models of research and international experience as a means of climbing the rankings highlights the emergent tension between a university’s rankings ambitions and its mission and values. Universities also try to reshape student entry criteria, becoming more selective and exclusive to better meet outcome indicators such as completion rates, graduate employment or salary levels, alumni donations, and so on. All of this turns them into different animals from what they might have originally set out to become, as higher education worldwide aspires to a singular, monolithic model of excellence. Having taught both at Stanford, a wealthy, highly ranked American university with multiple professional schools, and at Ashoka, a new liberal arts university in the developing world, I have experienced this in a direct and intimate way.

Writing in The Conversation, Sioux McKenna, director of the Centre for Postgraduate Studies at Rhodes University, points out that Rhodes is included in the rankings surveys based on public data, even though the university refuses to engage with rankings organisations. “Despite having among the highest undergraduate success rates and publication rates in South Africa,” she writes, “the lack of medicine and engineering programmes works against it. So too does its strong focus on community engagement and its small size—though these might be exactly why the university is a good fit for many.”

In other words, the ranking system is already skewed against you if you are not a particular kind of university, no matter how good you are at what you do. And finally, you cannot opt out of the rankings even if you want to, as the rankings organisations are free to rank you based on publicly available information. It is well known that such rankings rarely rate non-cooperating institutions favourably.

With the widely followed US News ranking system, the disturbing stories are now well known. One is that of Reed, an elite liberal arts college in Portland, Oregon, which refused to participate in the rankings and was penalised with a ranking far lower than it merited, something that was subsequently corrected. At the other end is the somewhat questionable case of Northeastern University, which took a number of explicit measures with the direct and strategic goal of improving its US News ranking over a number of years, at the expense of everything else, as some said, to eventually achieve a ranking in the top 100.

In an article in Boston Magazine, Max Kutner has outlined the fascinating story of how Richard Freeland came to Northeastern as its president with one goal: to help the university climb up the ranks. “There’s no question that the system invites gaming,” Freeland told Kutner. “We made a systematic effort to influence [the outcome].” This started with a strategy many American schools have followed to make their admissions look more exclusive, a criterion for high rank: the adoption of the Common Application, which makes it easier for students to apply to a wider range of colleges. The more applications Northeastern received, the greater the percentage it could reject, making it look more selective. This was followed by a series of strategic measures, some of which, such as those meant to retain student enrolment, doubtless also benefited the university as a whole. Freeland got Northeastern to move up 42 spots to 120 in eight years. “We had to get into the top 100,” Kutner quotes Freeland as saying. “That was a life-or-death matter for Northeastern.” But to move into the top 100, Freeland felt he needed more insight into the magazine’s methodology, and possibly its complicity. “We were trying to move the needle,” he said, “and we felt there were a couple of ways in which the formula was not fair to Northeastern.”

Eventually, one day in 2006, when the retired Freeland was vacationing in Martha’s Vineyard, he received the news that Northeastern had reached 98 on the US News list. The trustees had already voted to award him a retirement supplement of $2 million to acknowledge his success. Years later, he would become the commissioner of higher education of the state of Massachusetts.

There are countless stories of American universities trying to game the system. In 2008, Baylor University told newly admitted students that they’d receive a $300 campus-bookstore credit if they retook their SATs, and $1,000 a year in student aid if their scores improved by more than 50 points, all of which would raise the university’s rankings. And there are instances of blatant dishonesty and suppression of information. Such behaviour is penalised if discovered, but it shows the primary and powerful impulse of schools to do anything to get ahead in the rankings game.

University rankings are still a partially viable indicator of the quality and reputation of institutions, and we cannot dismiss them entirely. But if you are a student or a parent anywhere in the world trying to decide on a college based on rankings, it is a good idea to keep in mind the loopholes and inadequacies of the system as it is practised today.

(This appeared in the print edition as "Read Between The Rankings")

(With research input by Harshita Tripathi. Views are personal)

(Saikat Majumdar is Professor of English & Creative Writing at Ashoka University)
