05/08/2010 by Sylvain Hallé

In an earlier post, I explained how I ended up maintaining a personal database about CS conferences. I had taken the opportunity of a rainy day to query my database and draw nice graphs about average acceptance rates, conference rankings, etc. It's raining again today, so here are more statistics.

Before we start, please read my initial post to learn what data I use, and **remember my initial warnings** before drawing any conclusion from these graphs.

I had already explored the relationship between the CORE ranking and the average acceptance rate of conference series in my previous post. I recently added data on the Impact Factor (IF) of these conferences, as given by CiteSeer.

I don't know the exact criteria behind either the CORE ranking or the IF, but I presume that, globally, they should be positively correlated: category "A" should globally have more "impact" than category "B", and so on. Here's what I get when I plot the relative distribution of the IF for each rank:

Category A seems to discriminate more than B and C, which look relatively interchangeable. This is even more obvious if we plot the cumulative distribution of each rank:
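For the curious, a cumulative-distribution comparison like the one above can be sketched in a few lines. The rank-to-IF mapping below is invented for illustration; it is not my actual conference data.

```python
# Hypothetical impact factors grouped by CORE rank (made-up values).
ranks = {
    "A": [1.6, 1.3, 1.1, 0.9, 1.8],
    "B": [0.9, 0.7, 1.0, 0.6, 0.8],
    "C": [0.8, 0.6, 0.9, 0.7, 0.5],
}

def ecdf(values):
    """Return (x, y) points of the empirical cumulative distribution."""
    xs = sorted(values)
    n = len(xs)
    return [(x, (i + 1) / n) for i, x in enumerate(xs)]

for rank, ifs in ranks.items():
    points = ecdf(ifs)
    # Fraction of conferences in this rank with IF at most 1.0
    frac = max((y for x, y in points if x <= 1.0), default=0.0)
    print(rank, frac)
```

Plotting the `ecdf` points for each rank on the same axes reproduces the kind of graph shown here: the more a rank's curve sits to the right, the more it discriminates.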

The same trend was found in my first post, when I compared the CORE ranking with the acceptance rate. So, to summarize: B and C do not appear to differ significantly, either in their acceptance rate or in their impact factor (at least not for the sample of conferences I am interested in).

Let's see now whether IF and acceptance rate are correlated. Again, I am naturally inclined to think that people rush to conferences with the highest "impact", and that these events consequently have somewhat lower acceptance rates. In the following graph, each conference *series* is a dot, placed according to CiteSeer's IF for that series and the average acceptance rate over the editions in my database:

The correlation coefficient for this set is -0.28.
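This is a plain Pearson coefficient; here is a sketch of the computation on made-up (IF, acceptance rate) pairs, not my real dataset. Pairing higher IF with lower rates should yield a negative coefficient, like the -0.28 reported above.

```python
import math

# Hypothetical (impact factor, average acceptance rate) pairs.
pairs = [(1.6, 0.20), (1.2, 0.25), (0.9, 0.35), (0.6, 0.40), (1.4, 0.30)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ifs, rates = zip(*pairs)
r = pearson(ifs, rates)
```

With these sample values, `r` comes out negative, consistent with the intuition that high-impact venues are harder to get into.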

An interesting quadrant contains conferences that maximize both the acceptance rate and the IF: those give you the most impact for the least competition. Just for fun, is there any conference living in the *first* quartile of impact (IF > 0.97) and in the *last* quartile of average acceptance rates (i.e. rate > 34%)? This corresponds to the yellow area in the graph above. As you can see, the answer is yes; among them: AOSE (34%/1.57), COLT (43%/1.49), CONCUR (35%/1.44), SPIN (45%/1.25), ISMM (42%/1.47), RTA (40%/1.04).
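The quartile filter itself is a one-liner; here is a sketch on invented (name, IF, rate) tuples, reusing the thresholds 0.97 and 34% given above.

```python
# Hypothetical conferences: (name, impact factor, average acceptance rate).
confs = [
    ("X", 1.57, 0.34),
    ("Y", 1.49, 0.43),
    ("Z", 0.50, 0.20),
    ("W", 1.10, 0.25),
]

IF_Q1, RATE_Q3 = 0.97, 0.34  # first quartile of IF, last quartile of rate

# Conferences in the "sweet spot": high impact AND high acceptance rate.
sweet_spot = [name for name, impact, rate in confs
              if impact > IF_Q1 and rate >= RATE_Q3]
```

On real data, the two thresholds would of course be computed as the quartiles of the IF and rate columns rather than hard-coded.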

The opposite quadrant is much less appealing: it corresponds to conferences where you fight hard to get in (acceptance rate is low), while not getting too much exposure (IF is low). In orange, we find conferences in the last quartile of impact (

- Do I show these metrics (e.g. in my CV) when they support my papers? Absolutely.
- Do I use these metrics when deciding where to send a paper? Only with a grain of salt, and always in combination with other criteria.
- Do I know how my peers, potential employers and grant committees use these metrics when judging my work and that of others? No!