Stanford University professor John Ioannidis recently published the latest version of his list of the world’s top 2% scientists. Its release every year causes much excitement among scientists and their institutions. And every year, for a few weeks after, hardly a day passes without institutions, from the well-known to the obscure, announcing on various media platforms that their scientists feature on the list. They tout the honour as recognition of their scientific excellence; indeed, many of these institutes even claim the achievement wouldn’t have been possible without their role in creating and sustaining an enabling research environment.
The 2025 list was published in September and ranked around 2.3 lakh scientists worldwide — themselves filtered out of a pool of 2.2 crore. This supposedly exalted fraction, which included several Nobel laureates, also counted 6,239 scientists from India. This number has been steadily increasing over the last few years.
Link to quality
The top 10 scientists from India in the 2025 list have been ranked 288 to 952 and are with Muthayammal Engineering College (Tamil Nadu), University of Petroleum and Energy Studies (Uttarakhand), Thapar Institute of Engineering & Technology (Punjab), Indian Institute of Toxicology Research (Uttar Pradesh), Sikkim University, National Institute of Mental Health and Neurosciences (Karnataka), Saveetha School of Engineering (Tamil Nadu), Government Degree College Pulwama (Jammu & Kashmir), and S.V. National Institute of Technology (Gujarat).
This picture was more or less the same in 2024, when the top 10 Indian scientists were ranked 163 to 1,568 and lesser-known institutions dominated.
Curiously, six of the seven science Nobel laureates on the list were ranked between 1,373 and 28,782, far below even the lowest-ranked of India’s top 10 scientists. That every one of these 10 outranked all but one Nobel laureate on this list is nothing short of astounding.
Whether this gap is meaningfully linked to research quality is, however, a separate question, one also raised by the fact that the top 10 scientists are not affiliated with the Indian research centres generally associated with excellent R&D. To understand this disconnect, we need to understand what scientific research is and how it is routinely evaluated in the prevailing academic ecosystem.
Standing on shoulders
Scientific research begins when scientists have a question about some observation that has caught their interest. They formulate a hypothesis and test it with experiments. Depending on the needs of each experiment, they may devise tools, interact with other scientists to get different perspectives, gather evidence, and analyse it to draw conclusions. Then they write up their findings in reports, commonly known as papers, which are reviewed by their peers and published in scientific journals. These papers bear the names of the scientists, so the scientists are also called authors.
The vast majority of scientific research today builds on the work of others. So the authors of a paper cite older papers whose findings are relevant to their current work, an acknowledgment that also adds one more link to the chain of knowledge. When one paper has been cited by another paper once, it is said to have accrued one citation.
Scientists’ work is often evaluated by the number of citations their papers have garnered. But there is a catch: for a while now, there has been a naïve assumption that scientists only ever cite good-quality papers, which in turn has fed the notion that a paper’s citation count is indicative of its impact. This need not always be true.
Evaluating science
Prof. Ioannidis prepares his list based on a global database of published research called Scopus. It is owned by Elsevier, a publishing company often accused of taking advantage of the ‘publish or perish’ culture in academia to earn profit margins rivalling those of Google and Microsoft.
For his analysis, Prof. Ioannidis developed a composite score, called the c-score, for each scientist in Scopus and ranked them in descending order of their c-scores.
The c-score gives equal weightage to multiple parameters including the total number of citations, the h-index (the largest number h such that a scientist has published h papers cited at least h times each), the number of papers, the order of authors in papers, co-authorship, and so on.
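To make the mechanics concrete, here is a minimal sketch in Python of how such a composite might be assembled. It is illustrative only: it log-scales and averages just three made-up indicators, whereas the actual c-score combines six citation-based indicators whose exact formula appears in Prof. Ioannidis’s published methodology.

```python
import math

def h_index(citations):
    """Largest h such that the author has h papers with >= h citations each."""
    h = 0
    for i, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

def toy_c_score(papers):
    """Toy composite: equal weightage to three log-scaled indicators --
    total citations, h-index, and citations to papers where the scientist
    was first, single, or last author. Not the published c-score formula."""
    cites = [p["citations"] for p in papers]
    total = sum(cites)
    h = h_index(cites)
    key_author = sum(p["citations"] for p in papers
                     if p["position"] in ("first", "single", "last"))
    return sum(math.log(1 + x) for x in (total, h, key_author)) / 3

papers = [
    {"citations": 120, "position": "first"},
    {"citations": 45, "position": "middle"},
    {"citations": 8, "position": "last"},
]
print(h_index([p["citations"] for p in papers]))  # 3
print(round(toy_c_score(papers), 2))              # about 3.8
```

Even in this toy version, the design choice is visible: everything that enters the score is a count, so nothing about a paper’s correctness or importance can affect it.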
The ranking also includes scientists across several different fields and sub-fields. Comparing scientists in this way, across a breadth of enterprises, is generally considered problematic, like comparing apples to oranges. Also, contrary to popular perception, Stanford University neither participates in the ranking process nor endorses the list. The effort is the individual initiative of Prof. Ioannidis.
Nobel Prize v. c-score
To understand why Indian scientists from little-known research centres rank above Nobel laureates on the list, the c-score is a good starting point. While Prof. Ioannidis and some others have said it provides a more comprehensive snapshot of a scientist’s impact, it has serious limitations. It gives extra weight to papers where the scientist is the first, single, or last author, assuming these positions imply a major intellectual contribution, though this practice is not uniform across fields; it doesn’t account for differences in citation practices between fields; it assumes the Scopus database covers all disciplines equally (it doesn’t); and it ignores qualitative impact.
The net effect is that a scientist’s c-score is divorced from the actual scientific content of their work, especially in terms of its quality, validity, and contribution to science and society. Indeed, despite its own flaws, the process of identifying Nobel prize-winning scientists takes into account all that the c-score misses.
As with all metrics, the c-score can also be gamed, especially by individuals who pre-agree to cite each other’s papers irrespective of their quality or whether the citations are appropriate and warranted. This much is evident from the impossibly high productivity, one to two papers per week, of many highly ranked Indian scientists. The rankings also don’t account for papers later retracted for misconduct, say, by including a penalty in the c-score formula. Indeed, the Scopus database itself includes many dubious journals and publishers with little respect for research ethics.
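To see how easily citation-based metrics yield to such collusion, consider a hypothetical citation ring, sketched below under the simplifying assumption that each member cites every paper of every other member exactly once.

```python
def ring_h_index(k_members, n_papers, organic_cites=0):
    """h-index of one member of a hypothetical citation ring, where each
    of the member's n_papers receives organic_cites genuine citations
    plus (k_members - 1) ring citations, regardless of quality."""
    per_paper = organic_cites + (k_members - 1)
    # Every paper ends up with the same count, so h = min(n_papers, per_paper)
    return min(n_papers, per_paper)

# A scientist with 50 otherwise-uncited papers in a 30-member ring:
print(ring_h_index(k_members=30, n_papers=50))  # prints 29
```

An h-index of 29 built on zero genuine citations would flatter many an honest mid-career researcher, which is why metrics built purely on counts invite exactly this behaviour.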
Without understanding the c-score in this way, one is liable to interpret Indian scientists outranking Nobel laureates in Prof. Ioannidis’ list as evidence of the former’s overlooked greatness, and one could be forgiven for doing so. Like many other metrics that flatten the multi-dimensional enterprise that is scientific research into a one-dimensional number, the c-score is fundamentally a vanity metric. Rather than chase numbers, researchers and their employers should focus on doing good research, and the research establishment on facilitating it.
Swaminathan S. is retired professor, BITS Pilani – Hyderabad, and former scientist at ICGEB, New Delhi.
Published – October 27, 2025 03:05 pm IST