Changes, considerations and issues addressed

Alter eligibility criteria to promote inclusion

Funders should re-examine their funding eligibility criteria to ensure equality, diversity and inclusion. For example, it should be recognised that early career researchers, particularly those submitting innovative proposals, may not yet have preliminary data to support their applications.

If bias were addressed in funding eligibility criteria, it could encourage more researchers to remain in their field.

  • Funders have not typically measured equality, diversity and inclusion and there are few tried and tested metrics that are commonly understood.
  • The equality and diversity of applications should be measured and assessed to monitor the success of these changes.
  • If effective, such changes could encourage more researchers to remain in their field.

Anonymise applications

Removing identifiers from applications could stop reviewers from making decisions based on where applications come from rather than on the research itself.
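
As a minimal sketch of this idea (the record structure and field names below are invented for illustration, not any funder's schema), identifying fields could be stripped from an application record before it reaches reviewers:

```python
# Minimal sketch: strip assumed identifying fields from an application
# record before review. Field names here are illustrative only.
IDENTIFYING_FIELDS = {"applicant_name", "institution", "email", "orcid"}

def anonymise(application: dict) -> dict:
    """Return a copy of the application with identifying fields removed."""
    return {key: value for key, value in application.items()
            if key not in IDENTIFYING_FIELDS}

redacted = anonymise({
    "applicant_name": "Dr A. Smith",
    "institution": "Example University",
    "summary": "A novel assay for ...",
})
print(redacted)  # {'summary': 'A novel assay for ...'}
```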

  • Anonymisation may never be truly possible, especially in small fields.
  • Peer review typically involves assessing the research team to ensure suitability; however, this would be impossible with anonymity.
  • Testing would be needed to understand the impact on decision making.

Assess and analyse disagreement between reviewers

Significant disagreement could be an indicator of work with high potential and high risk. Funders could assess this and discuss the reasons why divergences exist, including attitudes to risk and bias.
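
As a minimal sketch of how such divergence could be surfaced (the scoring scale and threshold below are assumptions for illustration), applications whose reviewer scores have a high spread could be flagged for committee discussion:

```python
from statistics import stdev

def flag_contentious(scores_by_application, threshold=1.5):
    """Flag applications whose reviewer scores diverge strongly.

    High spread does not prove bias; it marks proposals the committee
    should discuss, e.g. possible high-risk/high-potential work.
    """
    return {app: round(stdev(scores), 2)
            for app, scores in scores_by_application.items()
            if len(scores) > 1 and stdev(scores) >= threshold}

# Application A splits the panel; B does not.
print(flag_contentious({"A": [9, 2, 8], "B": [6, 6, 7]}))  # {'A': 3.79}
```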

  • High levels of disagreement will not necessarily be due to bias; they may reflect genuinely different views on whether something is good quality.
  • A strong research review committee chair would be needed to come to conclusions on contentious issues.

Use a lottery system to allocate research funding

This would still require active involvement of a committee to ensure appropriateness and quality. An example implementation could come after ranking of applications; the top 20% could be funded, the bottom 50% could be rejected and the middle 30% could be subject to a random lottery – blurring the boundary between fundable and un-fundable proposals (Avin, 2015).

A lottery approach reduces the influence of biases, difficulty in ranking applications and the inconsistency of peer review. Using a lottery system could reduce the burden on reviewers by replacing a stage of the review process.
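
A minimal sketch of this partial-lottery allocation, assuming applications arrive already ranked best-first and using the 20/50/30 split from the example above (the function and application IDs are illustrative):

```python
import random

def allocate_awards(ranked_apps, n_awards, fund_frac=0.2, reject_frac=0.5, seed=None):
    """Fund the top slice outright, reject the bottom slice, and fill the
    remaining awards by random draw from the middle slice."""
    n = len(ranked_apps)
    funded = ranked_apps[: int(n * fund_frac)]
    middle = ranked_apps[int(n * fund_frac) : n - int(n * reject_frac)]
    rng = random.Random(seed)  # a fixed seed makes the draw auditable
    draw = rng.sample(middle, min(len(middle), max(0, n_awards - len(funded))))
    return funded + draw

# e.g. 10 awards over 40 ranked applications: 8 funded outright, 2 drawn
apps = [f"APP-{i:03d}" for i in range(1, 41)]
print(allocate_awards(apps, n_awards=10, seed=2024))
```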

  • A lottery system might be perceived as unfair, as an application that received a much lower ranking than others in the draw might be selected.
  • This might be inappropriate for certain types of funding streams e.g. programme grants where funding is committed over multiple cycles.
  • A lottery system would remove subjectivity; however, some subjectivity might be desirable e.g. in considering a researcher’s career stage or the importance of a certain topic. It could be that a lottery system is applied to more risky and innovative research streams, such as seed funding, and that applications targeting more strategic issues are dealt with via separate streams.
  • Researchers could manipulate the system by submitting multiple applications to increase their chances. A per-researcher limit or a triaging system could be put in place to help mitigate attempts to manipulate the system.

Use the Delphi approach to triage applications

Here, iterative rounds of assessment are used: after each round of committee deliberation, the lowest-scoring applications are removed.

Triage can be used to select the top applications, which can be sent to several non-conflicted experts. Multiple Delphi rounds can be held over several weeks to score scientific merit, innovativeness and level of risk. At the end of each round, reviewers can be provided with a table of de-identified scores and an overall ranking of proposals. Reviewers can raise any objections or proceed to the next round. Once everyone is content, the two lowest-ranking proposals can be excluded. The process can be repeated until a few proposals remain; these are subsequently funded.
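
A minimal sketch of these rounds, with human reviewers stubbed out as scoring functions and the stopping rule assumed for illustration; in practice each round would involve days or weeks of deliberation:

```python
from statistics import mean

def delphi_triage(proposals, reviewers, n_finalists=3, drop_per_round=2):
    """Iteratively rank proposals by mean reviewer score, dropping the
    lowest-ranked each round until only the finalists remain.

    `reviewers` stands in for human experts: each maps a proposal ID to
    a score covering merit, innovativeness and risk. Between rounds the
    de-identified score table and ranking would be circulated so that
    reviewers can raise objections before proposals are excluded.
    """
    remaining = list(proposals)
    while len(remaining) > n_finalists:
        table = {p: [score(p) for score in reviewers] for p in remaining}
        ranking = sorted(remaining, key=lambda p: mean(table[p]), reverse=True)
        # Once reviewers are content, exclude the lowest-ranking proposals
        remaining = ranking[: max(n_finalists, len(ranking) - drop_per_round)]
    return remaining

# Stub reviewer: in reality, scores come from expert judgement
scores = {"P1": 8, "P2": 5, "P3": 9, "P4": 4, "P5": 7, "P6": 6}
print(delphi_triage(list(scores), [scores.get]))  # ['P3', 'P1', 'P5']
```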

  • This approach would increase the commitment needed from reviewers, and initial triage of applications would still be required to limit the number of applications that need to be scored.
  • It can be a transparent and impartial way of reviewing grant applications in a specialised field of research where no local expertise is available (Holliday & Robotin, 2010).

Diversify funding streams 

A wider range of streams could be set up, with each stream having its own tailored criteria targeting various issues e.g. early career researchers, non-academic applicants, high-risk research, interdisciplinary research. In turn, reviewers with expertise specific to the funding stream could be recruited.

Various mechanisms for funding could also be employed e.g. commissioning models and seed grants.

  • It could increase the complexity of the funding landscape, making it harder for researchers to know which funding streams to apply for.
  • Not all funders will necessarily have enough funding to run multiple streams. Smaller funders could, however, collaborate to share funding to support a wider range of funding streams. AMRC and ABPI are undertaking work to make it easier for charities and industry to work together.
  • Less well-practised funding stream criteria may present new challenges, so careful exploration will be needed.
  • In niche fields, finding appropriate expertise for each stream might be difficult, meaning individuals risk being over-burdened and conflicted. Where this might be the case, funders could consider collaborating and sharing a research review committee. The US National Institutes of Health does this for their Pioneer Award.
  • Funders should pay particular attention to the appropriate expertise required to chair the research review committee.

Greater focus on impact

Application forms could include questions about a variety of potential impacts (instead of publication record), examples of which can be seen in the AMRC impact report. This would put a greater emphasis on quality and move the conversation away from journal metrics. Reviewers could be asked to take this potential impact into consideration during the review process.

  • It requires culture change and would need all funders to move forward together on this issue for maximal effect.
  • The research committee chair would need to be on board with this in order to call out instances where reviewers revert to publication record as a metric for success.

Alter assessment criteria

Assessment criteria could be altered to tackle key areas that the funder wants to prioritise, e.g. reproducibility, high-quality research, innovation or interdisciplinary applications.

This could help to increase the quality and appropriateness of applications.

It also tackles bias stemming from an individual reviewer’s natural tendencies in assessment, e.g. risk aversion.

  • Clarity and guidance would be needed to define these criteria and how they should be assessed, and reviewer training would be needed to implement this.
  • Funders, including the AMRC, could work together to define these criteria.
  • The weight given to the chosen criteria could be increased to drive changes in decision making, as sketched below.
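
As a minimal sketch of the weighting idea in the final bullet (the criteria and weights below are invented for illustration; a funder would define and publish its own):

```python
# Illustrative criteria weights; raising a weight steers decisions
# toward that criterion.
WEIGHTS = {"reproducibility": 0.35, "innovation": 0.35,
           "quality": 0.20, "interdisciplinarity": 0.10}

def overall_score(criterion_scores, weights=WEIGHTS):
    """Weighted overall score from per-criterion reviewer scores (0-10)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(weights[c] * criterion_scores[c] for c in weights)

print(round(overall_score({"reproducibility": 8, "innovation": 9,
                           "quality": 7, "interdisciplinarity": 6}), 2))  # 7.95
```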

Provide training and mentoring for reviewers 

Training could be used to tackle a number of issues such as bias, application ranking and scoring, and perceived markers of ‘excellence’.

Mentoring by longer-serving reviewers could also support newer reviewers.

  • Training could be resource intensive, particularly when developing the training programme.
  • Funders could work together to develop standard training that could subsequently be tailored to each funder’s remit as required.
  • Some examples of existing training include ESRC’s online training, and NIHR’s online interactive course and guidance document for public reviewers.

Provide reviewers with a refresher on the charity’s aims and protocol

A clear summary of the charity’s views on key topics could re-emphasise the charity’s priorities. Topics could include:

  • bias
  • the type of research the charity is looking for, e.g. innovative research
  • research quality and reproducibility
  • the importance and value of the reviewer’s contribution

  • This would likely need repeating on a regular basis, as committee members could revert to their standard ways of working. It could be covered at the start of each research review committee meeting, for example.
  • Charities could develop a suite of supporting materials to summarise this information.
  • A strong chair could help to call out behaviour that is not in the spirit of the charity’s aims.

Make reviewers’ identities known to applicants

Applicants would be able to view the name of the reviewer alongside the peer review scores and feedback comments. This could encourage reviewers to be more constructive in their feedback which should, in theory, improve the quality of future applications.

It would also improve transparency and self-policing of conflicts of interest.

  • Reviewers may be less honest in their feedback.
  • Early career researchers may feel uncomfortable having their identity revealed, particularly if they are criticising senior colleagues.

Measure the confidence of reviewer scores 

In addition to scoring applications, reviewers could be asked to indicate their confidence in that score.

Where lower confidence scores result from concerns around, for example, the riskiness of the research, its quality or reproducibility, or reviewers lacking expertise in every strand of the research, this system could drive discussion around these issues.

The addition of a confidence score could allow two applications that otherwise rank very similarly to be differentiated. 
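
As a minimal sketch of one way confidence could be used alongside the raw score (the scales and tie-breaking rule are assumptions for illustration), near-tied applications could be separated by mean reviewer confidence:

```python
from statistics import mean

def rank_with_confidence(reviews):
    """Rank applications by mean score, breaking near-ties (scores equal
    after rounding to one decimal) by mean reviewer confidence in [0, 1]."""
    def key(app):
        scores, confidences = zip(*reviews[app])
        return (round(mean(scores), 1), mean(confidences))
    return sorted(reviews, key=key, reverse=True)

ranked = rank_with_confidence({
    "A": [(8, 0.9), (7, 0.8)],  # same mean score as B, high confidence
    "B": [(8, 0.4), (7, 0.3)],  # reviewers much less sure of their scores
})
print(ranked)  # ['A', 'B']
```
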
  • A written review could be requested only for those applications that received a low confidence score from the committee, rather than for all of them.
  • Reviewers may be tempted to exaggerate their confidence to avoid being perceived as less knowledgeable than others.
  • It might be beneficial to allow reviewers to comment on why they submitted certain scores – does it reflect their overall confidence in the application, or is it because there is a specific element of the application they know less about? This additional information would also prevent good applications being wrongly labelled as ‘poor’ because of reviewers’ lack of confidence.

Provide more opportunity to engage with applicants

For applications that don’t require interviews, funders could provide the opportunity for reviewers to ask questions of the applicants via teleconference to get a better understanding of the research and the applicant. This could serve to lessen pre-existing biases toward the applicant or the proposed research.

  • If this were introduced, a set protocol would be needed to ensure all applicants are treated equally and fairly.
  • It could potentially draw attention away from the quality of the research and more towards personal attributes of the applicant.
  • Interaction with applicants could conversely increase reviewers’ biases.
  • This would increase burden on both applicants and reviewers.