In this pre-print, the authors examined data from thousands of grant reviewers to ask: how many reviewers do you need to produce reliable assessments? The answer ain't good: 3-5 reviewers produced around 0.2 reliability; 12 were needed to reach 0.5!
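For readers curious how those two numbers hang together, here is a minimal sketch using the standard Spearman-Brown prophecy formula (an assumption on my part; the preprint's exact reliability model may differ): a 3-reviewer composite reliability of about 0.2 implies a single-reviewer reliability of roughly 0.08, and extrapolating that to 12 reviewers lands right around 0.5.

```python
# Minimal sketch (assumption: the standard Spearman-Brown prophecy formula;
# the preprint may use a different reliability model).

def spearman_brown(single_rater_r: float, k: int) -> float:
    """Reliability of the mean rating of k reviewers, given single-reviewer reliability."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)

def single_rater_from_composite(composite_r: float, k: int) -> float:
    """Invert Spearman-Brown: recover single-reviewer reliability from a k-reviewer composite."""
    return composite_r / (k - (k - 1) * composite_r)

# If a 3-reviewer panel has reliability ~0.2 ...
r1 = single_rater_from_composite(0.2, 3)
print(round(r1, 3))                        # ~0.077 per reviewer

# ... then a 12-reviewer panel is predicted to reach ~0.5.
print(round(spearman_brown(r1, 12), 2))    # 0.5
```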
It's sometimes said that grant review is like a lottery. Turns out it's true. Reviewer reliability is so low that the claim that reviewers reliably separate good (or at least not-terrible) proposals from the rest seems, frankly, preposterous.
Study of 412 scientists reviewing 48 grant proposals highlights that grant peer review is depressingly unreliable
Preprint describes an experiment on reviewing R01 proposals. The current system has a reliability of 0.2; you'd need 12 reviewers to bring it up to 0.5! In other words, it's terribly underpowered. NIH funding is a lottery without the benefit of being truly random.
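To answer the title question directly under the same assumed Spearman-Brown model (again, not necessarily the preprint's exact approach), you can solve for the panel size needed to hit a target reliability; the 0.077 single-reviewer figure below is the illustrative value implied by the 3-reviewer, 0.2 result above.

```python
import math

# Sketch, assuming the Spearman-Brown model; 0.077 is the implied
# single-reviewer reliability, not a number reported directly in the preprint.
def reviewers_needed(single_rater_r: float, target_r: float) -> int:
    """Smallest panel size whose mean rating reaches the target reliability."""
    k = target_r * (1 - single_rater_r) / (single_rater_r * (1 - target_r))
    return math.ceil(k)

r1 = 0.077
print(reviewers_needed(r1, 0.5))   # 12 reviewers for a modest 0.5
print(reviewers_needed(r1, 0.8))   # ~48 for a conventionally "good" 0.8 (illustrative benchmark)
```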
New working paper with co-authors: "How many reviewers are required to obtain reliable evaluations of NIH R01 grant proposals?" Our answer: "A lot"
How many reviewers are required to obtain reliable evaluations of NIH R01 grant proposals?
Very interesting analysis of #PeerReview of R01 grant proposals (412 reviewers on 48 proposals): Reviewers were unreliable in their judgments of Significance and Innovation, but were consistent when rating the PI. #AcademicTwitter #phdlife
So… the study says using just 3 reviewers per NIH grant is dramatically underpowered. We could either dramatically increase the number of reviews per proposal, or do what we currently do: sample by dramatically increasing the number of submissions. Which is more efficient?
Have you ever received a referee report where the reviewers totally disagree with one another? Well, you're not the only one. This new study finds that 3 reviewers have a reliability of only 0.2, and you'd need 12 reviewers to reach a modest 0.5.
Is everyone OK with National Institutes of Health taking singular agreement?