This is going quite off-topic, but it might be of interest to somebody.
http://mrtz.org/blog/the-nips-experiment/
A perennial question for academics is how accurate the conference review and acceptance process is. Getting papers into top conferences is hugely important for our careers, yet we all have papers rejected that we think should have gotten in. One of my papers was rejected three times before getting into SODA — as the best student paper. After rejections, we console ourselves that the reviewing process is random; yet we take acceptances as confirmation that our papers are good. So just how random is the reviewing process? The NIPS organizers decided to find out.
The NIPS Experiment
The NIPS consistency experiment was an amazing, courageous move by this year's organizers to quantify the randomness in the review process. They split the program committee down the middle, effectively forming two independent program committees. Most submitted papers were assigned to a single side, but 10% of submissions (166 papers) were reviewed by both halves of the committee. This let them measure how often the two committees agreed on which papers to accept. (For fairness, they ultimately accepted any paper that was accepted by either committee.)
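To make the design concrete, here is a minimal simulation sketch of the dual-review setup. Everything in it is hypothetical: the quality-plus-noise scoring model, the `quality_weight` knob, and the 22.5% acceptance rate are my assumptions for illustration, not the organizers' actual analysis. It just shows how one could estimate the expected disagreement rate between two independent committees under different amounts of reviewer noise.

```python
import random

def simulate_disagreement(n_papers=166, accept_rate=0.225,
                          quality_weight=0.5, n_trials=1000, seed=0):
    """Toy model (hypothetical, not the NIPS analysis): each paper has a
    latent quality; each committee scores it as a mix of that quality and
    independent reviewer noise, then accepts its top-scoring papers.
    Returns the fraction of papers on which the two committees disagree."""
    rng = random.Random(seed)
    n_accept = round(accept_rate * n_papers)
    total_disagree = 0
    for _ in range(n_trials):
        quality = [rng.gauss(0, 1) for _ in range(n_papers)]
        decisions = []
        for _ in range(2):  # two independent committees review the same pool
            scores = [quality_weight * q
                      + (1 - quality_weight) * rng.gauss(0, 1)
                      for q in quality]
            cutoff = sorted(scores, reverse=True)[n_accept - 1]
            decisions.append([s >= cutoff for s in scores])
        total_disagree += sum(a != b for a, b in zip(*decisions))
    return total_disagree / (n_trials * n_papers)

if __name__ == "__main__":
    for w in (0.0, 0.5, 1.0):
        print(f"quality weight {w:.1f}: "
              f"disagreement rate {simulate_disagreement(quality_weight=w):.1%}")
```

The two extremes bracket the possibilities: with `quality_weight=1.0` the committees score papers identically and never disagree, while with `quality_weight=0.0` decisions are pure noise and the disagreement rate approaches 2p(1-p), about 35% at a 22.5% acceptance rate. Comparing the observed disagreement on the dual-reviewed papers to this range is one way to read off how much of the process is signal versus noise.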