Collusion is a major problem in peer review: two or more participants make a deal to try to get assigned each other’s papers and give them good reviews. Many peer-review venues employ bidding, where reviewers indicate their expertise for submitted papers, and papers are assigned to reviewers based on this information. Bidding is thus an easy way for colluding reviewers to get assigned a target paper. To mitigate collusion, one may consider removing bidding entirely, but that can be problematic since other ways of computing the assignment are not yet good enough (e.g., NeurIPS 2022 did away with bidding for meta reviews and faced numerous complaints about poor assignments). There has been some work on designing techniques to analyze and/or modify bids in order to mitigate collusion; however, we still don’t have a good solution to the problem of collusion via bidding. See Section 4.2 of this survey for a more detailed discussion and references.
😩 There are two key challenges to mitigating collusion:
- Much of the data from other peer-review venues (e.g., previous editions of a conference) is private, so program chairs cannot see a reviewer’s bidding patterns across different venues.
- With thousands of submissions, it is generally infeasible for the small number of program chairs to conduct any extensive manual investigation.
💡Proposal of randomized transparency:
- After the conference, publicly release a random subset of bids made by each reviewer. Specifically, each paper-reviewer pair is given one of the following labels:
- “Conflicted”: The reviewer has a conflict of interest with the paper. (Separately, the entire list of conflicts of interests reported by each author and reviewer should also be made public.)
- “Hidden”: If not conflicted, the pair is labeled as “Hidden” with a certain probability, say, 0.5. Furthermore, each paper-reviewer pair where the reviewer actually reviewed the paper is always labeled as “Hidden”.
- “Released bid”: If neither of the above applies, the paper-reviewer pair is labeled with the bid that the reviewer made on that paper (e.g., “Eager”, “Unwilling”, “Did not bid”, etc.).
- If the conference publicly reveals the list of all submitted papers, then the labels of all paper-reviewer pairs are released. If the conference does not reveal rejected papers, then the labels of only the accepted papers are released.
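The labeling rules above can be sketched in a few lines of code. This is only an illustrative sketch: the function name, the bid strings, and the 0.5 hiding probability are assumptions for the example, not part of any conference’s actual tooling.

```python
import random

CONFLICTED = "Conflicted"
HIDDEN = "Hidden"

def label_pair(bid, is_conflict, was_reviewed, hide_prob=0.5, rng=random):
    """Label one paper-reviewer pair for public release.

    bid          -- the reviewer's bid on this paper (e.g., "Eager", "Did not bid")
    is_conflict  -- True if the reviewer has a conflict of interest with the paper
    was_reviewed -- True if the reviewer actually reviewed the paper
    hide_prob    -- probability of hiding a non-conflicted, non-assigned pair
    """
    if is_conflict:
        return CONFLICTED
    # Assigned pairs are always hidden; other pairs are hidden with hide_prob.
    if was_reviewed or rng.random() < hide_prob:
        return HIDDEN
    # Otherwise, release the actual bid.
    return bid
```

For example, `label_pair("Eager", is_conflict=True, was_reviewed=False)` always returns `"Conflicted"`, and an assigned pair is always labeled `"Hidden"` regardless of the coin flip.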
🔧 How this may mitigate collusion:
- Colluders in different research areas: People can see if a certain researcher bid on a paper that is totally irrelevant, and repeated behavior of this form can be identified.
- Colluders in the same research area: If colluders work in the same research area, it is natural for them to bid on one another’s papers, so a single bid is not suspicious by itself. However, program chairs can cross-reference past bidding patterns with any suspicious activity in the current conference.
- Importantly, even if program chairs cannot catch every instance of cheating, the public release of bids may at least act as a deterrent to colluding and exploiting the bidding process.
🔐 Possible concern — privacy:
Bidding data is considered sensitive, since it can reveal information about who reviewed which paper. Under the proposal to release some bidding data, a reviewer might be concerned that this data release will also reveal which papers they reviewed.
- It is important to note that only a random subset of bids is revealed.
- For each reviewer, the assigned papers are pooled with the randomly hidden papers, so a “Hidden” label does not reveal whether the reviewer actually reviewed that paper.
- Some peer-review venues employ a randomized assignment strategy. The randomization in the assignment of reviewers to papers may help further increase ambiguity.
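To quantify this ambiguity, a back-of-the-envelope Bayes calculation gives the probability that a “Hidden” pair was actually reviewed. The calculation assumes, purely for illustration, that a priori each of the reviewer’s non-conflicted papers is equally likely to be one of their assignments; the function name and numbers below are made up for the example.

```python
def posterior_reviewed_given_hidden(n_papers, n_reviewed, hide_prob):
    """P(reviewer reviewed the paper | the pair is labeled "Hidden").

    n_papers   -- number of non-conflicted papers for this reviewer
    n_reviewed -- number of papers the reviewer was assigned
    hide_prob  -- probability of hiding a non-assigned, non-conflicted pair
    """
    # Reviewed pairs are hidden with probability 1; the rest with hide_prob.
    return n_reviewed / (n_reviewed + hide_prob * (n_papers - n_reviewed))
```

For instance, with 1000 non-conflicted papers, 5 assigned papers, and a hiding probability of 0.5, the posterior is roughly 1%, so a “Hidden” label alone says very little about whether the reviewer reviewed that particular paper.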