Conference Gossip, Part One

05/07/2010 by Sylvain Hallé

Tired of always Googling for interesting conferences, calls for papers, acceptance rates and the like, I started archiving all the information I could find about publishing venues into a database for my personal use. In the two years since I started it, this hobby project has become a relatively serious and useful tool that I use almost daily. It now contains:

The fun part is that once everything is shoved into the database, I can query it in all sorts of interesting ways, which is what I did, never wasting an opportunity to have fun. So here, for your viewing pleasure, are a few bits of trivia to feed your inquisitive mind.

NOTE: Please keep in mind that this sample is somewhat biased towards my own research interests, namely formal methods and web services; to a lesser extent, the list also includes the main database, networking and AI conferences. Do not contact me if you agree or disagree with these figures, do not ask me to add this or that to my list, and do not involve me in a budding flame war pitting X against Y or whatever. As a rule, take these figures with a grain of salt (or two).

North America vs. Europe

By taking Europe as the circle of radius 1750 km centered on Hanover, and North America as the circle of radius 3000 km centered on Winnipeg, we can query the database about North American vs. European conferences (see the sketch below for how an event can be classified this way). We get:

Average acceptance rate of events...

...the same. And the standard deviation of both samples is also similar (11% for Europe and 12% for North America).
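For the curious, the "which circle does an event fall into?" test boils down to a great-circle distance computation. Here is a minimal Python sketch, assuming you have each event's latitude and longitude; the coordinates and function names are illustrative and not something taken from the actual database:

```python
# Minimal sketch (not the actual database query): classify an event as
# "European" or "North American" using the circle definitions above,
# i.e. within 1750 km of Hanover or within 3000 km of Winnipeg.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

HANOVER = (52.37, 9.73)     # centre of the "Europe" circle
WINNIPEG = (49.90, -97.14)  # centre of the "North America" circle

def region(lat, lon):
    if haversine_km(lat, lon, *HANOVER) <= 1750:
        return "Europe"
    if haversine_km(lat, lon, *WINNIPEG) <= 3000:
        return "North America"
    return "Other"

# Example: a conference held in Paris falls inside the "Europe" circle.
print(region(48.86, 2.35))  # -> Europe
```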

According to Wikipedia, Europe has a population of 733 million and North America 443.3 million. North America is hence "overrepresented" in terms of number of events with respect to its population, by a factor of almost exactly 1.5.
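In other words, the factor is the ratio of events per capita in the two regions, where E_NA and E_EU are the event counts in the database (not reproduced here):

\[
\text{over-representation} \;=\; \frac{E_{\mathrm{NA}} / P_{\mathrm{NA}}}{E_{\mathrm{EU}} / P_{\mathrm{EU}}},
\qquad P_{\mathrm{NA}} = 443.3\ \text{M},\quad P_{\mathrm{EU}} = 733\ \text{M}.
\]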

CORE Ranking

There is a lot of debate about the CORE ranking of journals and conferences. I downloaded the PDF of the latest rankings (2008) and included it in the database. Let's compare these rankings with the acceptance rate, another common way of judging an event's "prestige" (here, "Number" stands for the number of individual events; hence PODS 2008 and PODS 2009 count as two events: if they have a ranking, it is the same, but each has its own acceptance rate):

Rank       Number   Min     Avg.    Max     Std. dev.
Unranked   588      7.0%    31.8%   84.0%   13.0%
A          692      8.0%    25.1%   60.0%   9.3%
B          242      12.0%   35.9%   73.0%   13.0%
C          152      12.0%   34.3%   85.0%   13.1%
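For the curious, the summary above is essentially a group-by on the CORE rank. Here is a minimal Python sketch of that aggregation, using a toy event list in place of the actual database contents:

```python
# Minimal sketch of the aggregation behind the table (toy data only;
# the real figures come from the database, not from this list).
from statistics import mean, pstdev

# (core_rank, acceptance_rate) for each individual event -- placeholder values
events = [
    ("A", 0.24), ("A", 0.19), ("B", 0.35),
    ("C", 0.41), ("Unranked", 0.30), ("Unranked", 0.55),
]

by_rank = {}
for rank, rate in events:
    by_rank.setdefault(rank, []).append(rate)

for rank, rates in by_rank.items():
    print(f"{rank:9s} n={len(rates):3d}  min={min(rates):.0%}  "
          f"avg={mean(rates):.0%}  max={max(rates):.0%}  sd={pstdev(rates):.1%}")
```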

One can see that, apart from category A, the ranking (or absence thereof) does not seem to be a very good predictor of an event's acceptance rate: the other three categories have surprisingly similar distribution parameters. Stated otherwise, CORE and acceptance rate appear to judge different things. Here's a graphical account of that distribution for each category, if I limit myself to the past five years (2005-2009) and take the average for each event:

[Figure: distribution of average acceptance rates (2005-2009) for each CORE rank]
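If you want to reproduce this kind of plot from your own data, a box plot of per-event averages for each rank does the job. A rough matplotlib sketch (the numbers below are placeholders, not the actual data behind the figure):

```python
# Rough sketch: distribution of average acceptance rates per CORE rank.
# The values below are placeholders standing in for the per-event
# 2005-2009 averages taken from the database.
import matplotlib.pyplot as plt

avg_rates = {
    "Unranked": [0.28, 0.35, 0.41],
    "A": [0.18, 0.24, 0.30],
    "B": [0.33, 0.38],
    "C": [0.31, 0.45],
}

fig, ax = plt.subplots()
ax.boxplot(list(avg_rates.values()), labels=list(avg_rates.keys()))
ax.set_xlabel("CORE rank")
ax.set_ylabel("Average acceptance rate (2005-2009)")
plt.show()
```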

I am surprised by the significant amount of overlap between the ranks. In particular, some "C" conferences have lower average acceptance rates over five years than some "A" conferences, and not just one.

Don't get me wrong: I am not implying that these are "good" or "bad" conferences, only that the CORE ranking and the acceptance rate seem to be pretty bad predictors of one another.

Stay tuned for the next episode: throwing the Impact Factor into the mix.
