US scientific productivity is decreasing

12/11/2010 by Sylvain Hallé

A recent article in Scientific American covers an NSF study published last month and claims that "The machinery of U.S. scientific publishing [is] 29 percent less efficient than in 1990." Being a fan of these kinds of statistics, I obtained the report and started dissecting some of its results. Here are some of my observations.

Major caveat: conference proceedings are excluded

Word for word:

"In this report, we present counts of scientific and engineering articles, notes, and reviews published in scientific and technical journals tracked and indexed in the Thomson ISI SCI and SSCI. Counts exclude [...] conference proceedings [...]" (p. 22, emphasis added)

In Computer Science, this is a big hole. In our field, it is widely recognized that conferences are at least as important as journals, and sometimes more so. In the May 2009 issue of the Communications of the ACM, Moshe Vardi's Letter from the Editor mentions:

"As far as I know, we are the only scientific community that considers conference publication as the primary means of publishing our research results."


  1. Table 5 of the NSF report shows that, compared to all other S&E fields, Computer Science has the smallest number of papers per dollar and per PhD student. Medical science produces 6 times more papers per dollar than CS. So either a) CS research costs much more than medical research (oh yeah?), or b) important CS publication venues (good conferences?) have been left out of the calculation.
  2. CS exhibits the weakest correlation between the measured variables and the predicted number of publications (r² = 0.747); for all other disciplines, r² is over 0.9. This again suggests that some information is missing.

We can argue whether the special status of conferences in CS is a good thing or not (Vardi thinks not), but in the meantime this is how our field works, and hence the NSF is not measuring it correctly. I am worried that they might be carrying the same misconception about the quality of Computer Science conferences into other parts of their activities, such as funding.

Other interesting bits

Number of papers published = k × amount spent in R&D. There is a clear linear relationship between the number of published papers and academic R&D expenditures (see Figure 15 of the report). From the slope of the graph, we can conclude that each institution spends approximately $83,000 in R&D for each additional paper it publishes in a year, regardless of its total number of published papers.
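As a rough illustration of what "reading the slope" means here, the cost per paper can be recovered with a least-squares fit. The numbers below are invented for the sketch (the report's Figure 15 has the real data); I simply assume the ~$83,000-per-paper relationship holds exactly:

```python
import numpy as np

# Hypothetical institutions: annual R&D expenditures in dollars.
# (Made-up values, not the NSF report's data.)
expenditures = np.array([10e6, 25e6, 50e6, 100e6, 200e6])

# Assume papers scale linearly with spending at ~$83,000 per paper.
papers = expenditures / 83_000

# Fit a line papers = slope * expenditures + intercept;
# the inverse of the slope is the marginal cost of one paper.
slope, intercept = np.polyfit(expenditures, papers, 1)
print(f"Estimated cost per additional paper: ${1 / slope:,.0f}")
```

With real, noisy data the fit would of course not be exact, but the interpretation of the slope is the same.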

Harvard is the only outlier. Apart from Harvard, none of the top 200 institutions stands out by publishing substantially more "papers per dollar" than the rest of the pack. There is no "multiplicative effect" by which bigger or richer institutions can extract more from each dollar invested in them than smaller ones can. The clear trend is that everybody is equally productive with whatever R&D dollars they have. This is good news for small research universities.

Collaboration is a liability. Of course, the report does not state it this way. However, when trying to explain why US institutions have become less productive, it lists increased collaboration among the aggravating factors:

"[...] the data suggest that total resources per publication have increased over time, which may be attributable to a variety of factors (including changes in regulatory and administrative burden, need for more extensive or complex collaborations, decreased cross-subsidization, etc.)" (p. 29, emphasis added)

Given that the ambient discourse in funding agencies strongly favors the creation of teams and collaborative research in general, I am puzzled that empirical studies of productivity suggest the opposite.

Citation count = k × publication count. This is not the expression of a belief; this is something the NSF observed based on their data:

"[...] analysis of citation counts yields essentially identical results to analysis of publication counts." (p. 24)

This small remark is far from trivial. In the context of this report, it means that whenever they could predict the number of publications, they were equally accurate at predicting the number of citations. Combined with the previous remark that most universities are equal in "publications per dollar", this means that no university is better at "citations per dollar" either. In other words, we observe the same rate of citations per paper, regardless of the institution.
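As a back-of-the-envelope derivation (my reading of the two observations, not an equation from the report): write $E$ for R&D expenditures, $P$ for publications, $C$ for citations, and $c \approx \$83{,}000$ for the cost per paper.

```latex
P \approx \frac{E}{c}, \qquad C = kP
\quad\Longrightarrow\quad
\frac{C}{E} \approx \frac{k}{c}
```

Since both $k$ and $c$ are the same across institutions, citations per dollar is a universal constant too.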


What caught my attention in this report was not so much the decline of US academic productivity as another set of facts, summarized by the question: what is the difference between the Ivy League and the University of Maine? I am surprised that, at the end of the day, top institutions that attract so many talented students and professors do not seem to produce more per dollar (in terms of either papers or citations) than more modest ones. What makes them so prestigious? My personal conclusion: their ability to attract money, thanks to which they can accomplish more (in absolute numbers) than the rest of the plebe. Only the rich get richer.
