CHANGES, ISSUES ADDRESSED AND CONSIDERATIONS

Improve application success rate

A low percentage of applications awarded funding following full peer review (a low success rate) may indicate that changes are needed early in the process. For example, clearer guidance could be provided to applicants, and a better triage process could filter out more applications up front. This could, in turn, help speed up the entire process.

  • Success rate is not currently recorded routinely by all funders, though some information is already available to allow comparison with other funders, e.g. the MRC and NIHR.
  • It may not be comparable across funders, depending on the budget, the theme of the call and the type of funding, e.g. projects, fellowships or clinical trials.

Simplify the review process 

Review questions could be simplified (Turner et al., 2018), reviewers could be asked to focus only on the elements of a proposal specific to their expertise, committee members could be provided with short summaries, and application forms could be simplified. A blend of approaches could help to reduce the burden of peer review across funders, reviewers and applicants.

  • It may require additional resource up front to understand the reasons behind the burden and to implement any changes.

Assess when written review is necessary 

Funders could decide to make more decisions without written review, or to limit written reviewers to two per application. Studies have shown that using more than four written reviewers' scores does not influence committee decisions (Sorrell et al., 2018).

This could speed up peer review and focus effort where it is most needed.

  • This would likely require careful thought about how it might be perceived and about the controls needed to ensure fairness and rigour.
  • With only two reviewers, disagreements are possible, but a third written reviewer could be sourced in such cases.
  • If the funder felt comfortable that written review was not necessary because the committee already had sufficient expertise, it would need to develop a consistent mechanism by which to make this decision.

Reject the bottom 50-75% of applications

The applications with the lowest scores could be automatically rejected after external peer review so that only the remaining applications are discussed by the research review committee. This could significantly reduce the burden on the committee.

  • This could decrease the thoroughness (and therefore the quality) of peer review. There is also a risk that high-quality applications are removed at triage.
  • The cut-off percentage will likely vary between funders, and even between funding streams, so there may need to be flexibility in setting it.
  • This approach could be combined with a lottery system to address criticisms that peer review struggles to distinguish between good applications. The US National Science Foundation did this for its short preliminary applications (Mervis, 2015).

Provide specific deadlines to reviewers

Scheduling reviews at specific times with external experts could improve response rates and speed up the written review process overall.

  • Many funders already employ this approach but still experience problems with response rates.
  • Funders should provide as much notice as possible and have flexibility around timeframes.

Use teleconferencing for committee meetings 

This could alleviate some of the burden on reviewers, reduce costs and speed up decision-making. It would allow reviewers to participate in the committee meeting remotely, thereby removing the need to travel.

  • Technical issues could make this impractical.
  • Committee members may prefer face-to-face meetings and value their social aspects. Teleconferencing could perhaps be offered as an option rather than mandated.

Crowd-source peer review

Social media and virtual groups (e.g. G1000) could be used to undertake peer review. This could significantly increase the speed and reduce the burden of peer review.

  • These approaches are broadly untested and would therefore require careful piloting on a small scale initially.
  • It could be very difficult to manage conflicts of interest.

Formalise a pool of experts from which to select reviewers

An official pool of willing and qualified reviewers could be established by the funder to undertake peer review. This would help to reduce the burden on the funder in finding reviewers and could speed up the process. It could also provide transparency, as well as recognition for the reviewers, if this information were published on the funder's website.

  • It would be good practice to have a strict code of conduct and terms of reference for the pool of experts that they agree to when joining.
  • This could be paired with providing reviewers with peer review training and refreshers on the charity's aims and protocols.

Use a lottery system to allocate research funding

This would still require the active involvement of a committee to ensure appropriateness and quality. One example implementation could come after the ranking of applications: the top 20% could be funded, the bottom 50% rejected, and the middle 30% subject to a random lottery, blurring the boundary between fundable and unfundable proposals (Avin, 2015).

A lottery approach reduces the influence of bias, the difficulty of ranking applications and the inconsistency of peer review. Using a lottery system could also reduce the burden on reviewers by replacing a stage of the review process.

  • A lottery system might be perceived as unfair, as an application that received a much lower ranking than others in the draw might be selected.
  • This might be inappropriate for certain types of funding streams, e.g. programme grants where funding is committed over multiple cycles.
  • A lottery system would remove subjectivity; however, some subjectivity might be desirable, e.g. in considering a researcher’s career stage or the importance of a certain topic. A lottery could be applied to riskier, more innovative research streams, such as seed funding, while applications targeting more strategic issues are dealt with via separate streams.
  • Researchers could manipulate the system by submitting multiple applications to increase their chances. A per-researcher limit or a triaging step could be put in place to help mitigate such attempts.
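
As an illustration only, the tiered allocation described above (fund the top 20%, reject the bottom 50%, draw the middle 30% at random) could be sketched in a few lines of code. This is a minimal sketch under assumed details: the tier shares, the number of lottery awards and the ‘Application’ structure are invented for the example and are not part of any funder’s actual process.

    import random
    from dataclasses import dataclass

    @dataclass
    class Application:
        ref: str       # application reference
        score: float   # external/committee review score (higher is better)

    def allocate(applications, fund_share=0.20, reject_share=0.50,
                 lottery_awards=2, seed=None):
        """Tiered allocation: fund the top band outright, reject the bottom
        band, and draw the remaining middle band at random (after Avin, 2015)."""
        ranked = sorted(applications, key=lambda a: a.score, reverse=True)
        n = len(ranked)
        n_fund = round(n * fund_share)
        n_reject = round(n * reject_share)

        funded = ranked[:n_fund]                 # top band funded outright
        middle = ranked[n_fund:n - n_reject]     # middle band enters the lottery
        rejected = ranked[n - n_reject:]         # bottom band rejected at triage

        rng = random.Random(seed)                # seeded so the draw can be audited
        drawn = rng.sample(middle, min(lottery_awards, len(middle)))
        return funded + drawn, rejected

    # Ten scored applications; two awards are drawn from the middle band
    apps = [Application(f"APP-{i:02d}", s) for i, s in
            enumerate([9.1, 8.7, 8.5, 8.2, 7.9, 7.6, 7.4, 6.8, 6.1, 5.3], start=1)]
    awarded, rejected = allocate(apps, seed=42)
    print([a.ref for a in awarded])

A seeded draw is one possible way of keeping the lottery auditable; the middle band could equally be drawn in a public session or by an independent third party.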

Have an expert programme manager

A funder could employ trained specialists with technical expertise to make funding decisions in areas within their remit. This could speed up the review process and allow the charity to concentrate funding and resources on the research it is specifically interested in.

  • This gives one individual significant influence and control; should they have conflicts of interest or biases, this would undermine fairness and, in turn, confidence in the system. It technically breaches elements of ‘independence’, a core principle of the AMRC Principles of Peer Review. As such, significant attention should be given to ensuring that these expert programme managers have the required skills and experience to make funding decisions in order for this option to be viable. AMRC will clarify this in future iterations of its guidance.
  • For funders with broad remits, a lack of breadth of expertise could be an issue with this model. One approach would be to apply it only to specific types of directed or commissioned funding that are limited to a defined research area.
  • An ‘oversight’ committee (similar to a research review committee) consisting of external reviewers could make funding recommendations to the programme manager, who would make the final funding decision.