Reviewing peer review

Peer review is currently the preferred way for health and medical research charities to decide what research to fund. Done properly, it should allow charities to support the best research and the best researchers.

However, peer review does not come without its pitfalls. From being time intensive to potentially obstructing innovative or high-risk research, charities and researchers face many hurdles when it comes to peer review.    

This thought-piece is for charities to reflect on the future of peer review and consider changes that could be made. It is organised into three sections: the evolution of peer review, common issues of peer review, and changes to address those issues.

The evolution of peer review

Peer review is the way in which all AMRC members make funding decisions on grant applications. In peer review, the research proposal is read and commented on by experts with knowledge and interest in the subject area. This is usually done anonymously to enable open and constructive criticism.

It is currently the preferred way for health and medical research charities to decide what research to fund. Done properly, peer review should allow charities to support the best research and the best researchers. This, in turn, can help charities maximise the impact of their funding and deliver changes that really matter to their supporters and stakeholders, so that patients benefit from the fruits of research. (To further understand the reasoning and processes behind peer review, see AMRC’s Principles of Peer Review.)

Peer review can look different across our membership and may vary depending on the type of research being undertaken, who is doing it and for how long. The peer review process must be practical and adaptable – there isn’t a ‘one size fits all’ model and what has worked in the past may not work in the future.

Peer review, however, does not come without its pitfalls. From being time intensive to potentially obstructing innovative or high-risk research, charities and researchers face many hurdles when it comes to peer review.    

Our goal is for charities to support the best research and the best researchers, but this is something that is continually changing. As novel research methods emerge, new researchers enter the field, and charities increasingly develop new funding approaches, we want to ensure that the peer review process will continue to pick out the applications that will deliver the best possible outcomes, without tradition dominating in a continually evolving environment.

Alongside this, charities should seek to constantly improve the process to ease the burden for themselves, their advisory committees, their reviewers and grant applicants.

So, whilst the principles of peer review (accountability, independence, balance, rotation, impartiality) remain the same, we wanted to set out the current challenges of the processes involved, and how charities might seek to address them.

This document is a thought-piece for charities to reflect on the future of peer review and begin to consider ways of:

  • Reducing the likelihood of funding poor quality, irreproducible research
  • Funding more innovative, interdisciplinary or translational research
  • Improving the accuracy of application ranking
  • Increasing the consistency of peer review
  • Ensuring the peer review process is as unbiased as possible
  • Addressing issues such as equality, diversity and inclusion of both reviewers and researchers
  • Adapting peer review to changing research strategies and new modes of funding
  • Increasing the speed of peer review
  • Reducing the burden of peer review
  • Offering appropriate recognition and incentives for peer reviewers

Common issues of peer review

THE PEOPLE INVOLVED

Burden

Peer review is time-consuming, taking 9 to 12 months from the receipt of applications to the final funding decisions.

Burden on the funders and reviewers:

Funders report difficulties with identifying reviewers, reviewers declining invitations to review, late submission of review reports and high administrative and financial costs.

Burden on the applicants:

While efforts to reduce the burden are often focussed on funders and reviewers, data shows that approximately 75% of the burden falls on the applicants (Guthrie et al., 2017).

Recognition

Traditionally there has been little recognition or incentive for researchers to perform peer review. Peer review is generally undertaken because it is ‘the right thing to do’ and part of an academic’s duty.

However, many are questioning whether this argument still stands, as there is increasing pressure on academics to bring in funding and deliver impact.

This means that the traditional lack of incentivisation or recognition may no longer be viable. Providing recognition to reviewers could help to offset some of the burden associated with peer review and could strengthen the relationship between the funder and reviewers.

THE DECISION-MAKING PROCESS

Bias

The peer review process can be affected by different types of bias from all those involved. This can, in turn, prevent new, innovative research from being funded.

Biases in reviewer selection:

Funders may display biases in their selection of reviewers. Some rely on a small pool of reviewers they know to be dependable, which reduces diversity. Others favour UK reviewers, believing that international reviewers may not be aware of the nuances and funding policies of the UK system.

Biases towards applicants:

Reviewers may choose to assess applicants according to the number of papers they’ve published, or the funding they’ve received in the past. This may disadvantage early career researchers and those who have taken career breaks, especially when they are submitting innovative applications.

Funders and reviewers may, often unintentionally, introduce biases such as ageism, sexism, racism, cronyism (appointment of friends and associates) and institutional affiliation into eligibility and selection criteria.

Biases towards the proposed research:

Reviewers may favour proposals that align with their own ideas, or applications in areas with which they are completely unfamiliar (Wang & Sandstrom, 2015). However, others have suggested reviewers may be more critical of proposals within their own area of expertise (Gallo et al., 2016).

Reviewers often favour traditional approaches to research and may view more ground-breaking research as ‘reckless speculation’ with insufficient preceding work. This is likely contributing to the observed drop in the number of innovative or high-risk applications.

Applications which combine multiple threads of research, such as interdisciplinary and translational research, may be unfairly disadvantaged. It may be difficult to find reviewers with the necessary expertise to review an entire application in detail. Combined with reviewers’ reliance on past funding success as an assessment criterion, a negative cycle may develop for researchers attempting to undertake research in these areas.

Quality

Peer review may not necessarily fund the best quality research.

Ranking applications

Reviewers can typically identify the top 20-30% of applications, but struggle with identifying the top 10% (Fang & Casadevall, 2012).

The duration of a review committee meeting may affect ratings (e.g. reviewers may be harsher on applications reviewed later in the meeting).

Assessing the research

Assessing applications based on publication history, which is common, may not necessarily translate into the best quality research being funded. This particularly affects early career researchers and applicants who have performed high-quality research that has not been published (e.g. replication studies, negative results).

In addition, reviewers’ ratings may vary considerably due to genuine difference of opinion or differing interpretations or understanding – it is difficult to ascertain which. The risk of misinterpretation of applications may be further exacerbated by the fact that reviewers have no real opportunity to ask applicants questions to inform their decision.

On topics where they are less knowledgeable, reviewers may defer to the perceived wisdom of a few specific experts, reducing the chances of critical discussion and quality review.

Limitations

Whether reviewers’ identities are transparent or anonymous may have an effect: if reviewers’ identities are known to the applicant, reviews may be less honest, whereas anonymity may lead to harsh or poor-quality reviews.

The funder’s role in research is often limited to the application stage, meaning that projects that are under-performing will still be funded until completion.

Indeed, some research has shown that peer review is a weak predictor of commercial success of early-stage technologies in small businesses (Galbraith et al., 2010).

Are any of these issues affecting your funding processes and decisions? Reflect on the efficiency and effectiveness of your current peer review system and identify areas for improvement.

Changes to address the issues of peer review

Changes that address recognition

Develop new incentives for reviewers

This could increase participation, speed up the process and motivate reviewers.

A range of incentives could be considered: a thank you, a certificate of recognition, publishing the reviewer’s name on the charity website or annual research review, a small gift, remuneration, free access to the charity’s research conference, access to invitation-only research networking events run by the charity, linking reviews to ORCID (Open Researcher and Contributor ID) records or use of Publons.

  • This could be costly to introduce.
  • Offering material incentives could decrease the desire to voluntarily undertake peer review in the future or for other funders.

Formalise a pool of experts from which to select reviewers

An official pool of willing and qualified reviewers could be established by the funder to undertake peer review. This would help to reduce the burden on the funder in finding reviewers and could speed up the process. It could also provide transparency as well as recognition to the reviewers if this information was available on the funder’s website.

  • It would be good practice to have a strict code of conduct and terms of reference for the pool of experts that they agree to when joining.
  • This could be paired with providing peer review training and refreshers on the charity’s aims and protocols to the reviewers.

Changes that address burden

Improve application success rate

A low percentage of applications that are awarded funding following full peer review (low success rate) may indicate that changes need to be made early on in the process. For example, clearer guidance could be provided to applicants and a better triage process could filter out a greater number of applications up front. This could consequently help speed up the entire process.

  • Success rate is currently not routinely recorded by all funders, though some information is already available to allow comparisons with other funders e.g. MRC and NIHR.
  • It may not be comparable depending on the funder, budget, the theme of the call and the type of funding, e.g. projects, fellowships or clinical trials.

Simplify the review process 

Review questions could be simplified (Turner et al., 2018), reviewers could be asked to focus only on certain elements of a proposal specific to their expertise, committee members could be provided with short summaries, and application forms could be simplified. A blend of approaches could help to reduce the burden of peer review across funders, reviewers and applicants.

  • It may require additional resource upfront to understand the reasons behind the burden and to implement any changes.

Assess when written review is necessary 

Funders could decide to make more decisions without written review, or limit written reviews to two per application. Studies have shown that using more than four written reviewers’ scores does not influence committee decisions (Sorrell et al., 2018).

It could speed up peer review and focus effort where it is most needed.

  • This will likely require careful thought around how this might be perceived and the controls needed to ensure fairness and rigour.
  • It is possible that just two reviewers may disagree, but a third written reviewer could be sourced in these cases.
  • If the funder felt comfortable that written review was not necessary because the committee already had sufficient expertise, it would need to develop a consistent mechanism by which to make this decision.
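
To make the two-reviewer rule concrete, here is a minimal sketch in Python of how a funder might trigger a third written review only on disagreement; the 1-10 scoring scale and the three-point threshold are illustrative assumptions, not AMRC guidance.

    # Minimal sketch: request a third written review only when the two
    # reviewers disagree beyond a tolerance. The 1-10 scale and the
    # threshold are illustrative assumptions.
    DISAGREEMENT_THRESHOLD = 3  # points apart on a 1-10 scoring scale

    def needs_third_review(score_a: int, score_b: int) -> bool:
        """Return True if two written reviews diverge enough to
        warrant sourcing a third written reviewer."""
        return abs(score_a - score_b) >= DISAGREEMENT_THRESHOLD

    print(needs_third_review(8, 7))  # False: reviews broadly agree
    print(needs_third_review(9, 4))  # True: source a third review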

Reject the bottom 50-75% of applications

The applications with the lowest scores could be automatically rejected after external peer review so that only the remaining applications are discussed by the research review committee. This could significantly reduce the burden on the committee.

  • This could decrease the thoroughness (and therefore quality hallmark) of peer review. There is also a risk that high-quality applications are removed at triage.
  • The cut-off percentage will likely vary between funders, and even between funding streams, so there may need to be flexibility in setting it.
  • This approach could be combined with a lottery system (see below) to address criticism that peer review struggles to distinguish between good applications. The US National Science Foundation did this for their short preliminary applications (Mervis, 2015).
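
A minimal sketch of this kind of automatic cut-off, assuming numeric external review scores and a configurable rejection fraction (both illustrative):

    # Minimal sketch: automatically reject the lowest-scoring fraction
    # of applications after external peer review, so only the rest are
    # discussed by the research review committee.
    def triage(scored_apps, reject_fraction=0.5):
        """scored_apps: list of (application_id, score) pairs."""
        ranked = sorted(scored_apps, key=lambda a: a[1], reverse=True)
        keep = len(ranked) - int(len(ranked) * reject_fraction)
        return ranked[:keep]  # survivors go forward to committee

    apps = [("A", 8.2), ("B", 5.1), ("C", 9.0), ("D", 4.3)]
    print(triage(apps))  # [('C', 9.0), ('A', 8.2)]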

Provide specific deadlines to reviewers

Scheduling reviews at specific times with external experts could improve response rates and speed up the written review process overall.

  • Many funders already employ this approach but still have problems.
  • Funders should provide as much notice as possible and have flexibility around timeframes.

Use teleconferencing for committee meetings 

This could be used to alleviate some of the burden on reviewers, reduce costs and speed up decision-making. It could allow reviewers to participate in the committee meeting remotely thereby removing the need for travel.

  • Technical issues could make this impractical.
  • Committee members may prefer face-to-face meetings and value their social aspects. Teleconferencing could be offered as an option rather than mandated.

Crowd-source peer review

The use of social media and virtual groups could be maximised to undertake peer review e.g. G1000. This could significantly increase the speed and reduce the burden of peer review.

  • These approaches are broadly untested, and it would therefore require careful testing on a small scale initially.
  • It could be very difficult to manage conflicts of interest.

Formalise a pool of experts from which to select reviewers

This change is described under ‘Changes that address recognition’ above; it would equally reduce the burden on the funder in finding reviewers and could speed up the process.

Use a lottery system to allocate research funding

This would still require active involvement of a committee to ensure appropriateness and quality. An example implementation could come after the ranking of applications: the top 20% could be funded, the bottom 50% rejected, and the middle 30% subject to a random lottery – blurring the boundary between fundable and un-fundable proposals (Avin, 2015).

A lottery approach reduces the influence of biases, difficulty in ranking applications and the inconsistency of peer review. Using a lottery system could reduce the burden on reviewers by replacing a stage of the review process.

  • A lottery system might be perceived as unfair, as an application that received a much lower ranking than others in the draw might be selected.
  • This might be inappropriate for certain types of funding streams e.g. programme grants where funding is committed over multiple cycles.
  • A lottery system would remove subjectivity; however, some subjectivity might be desirable e.g. in considering a researcher’s career stage or the importance of a certain topic. It could be that a lottery system is applied to more risky and innovative research streams, such as seed funding, and that applications targeting more strategic issues are dealt with via separate streams.
  • Researchers could manipulate the system by submitting multiple applications to increase their chances. A per-researcher limit or a triaging system could be put in place to help mitigate attempts to manipulate the system.
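
As a minimal sketch of the banded allocation described above (the 20/30/50 split follows Avin, 2015; the number of awards drawn from the middle band is an assumed budget parameter):

    # Minimal sketch of a banded lottery: fund the top 20% outright,
    # reject the bottom 50%, and draw winners at random from the
    # middle 30%. Band fractions and application ids are illustrative.
    import random

    def allocate(ranked_apps, lottery_awards):
        """ranked_apps: application ids ordered best-first."""
        n = len(ranked_apps)
        funded = ranked_apps[: int(n * 0.2)]
        middle = ranked_apps[int(n * 0.2): int(n * 0.5)]
        rejected = ranked_apps[int(n * 0.5):]
        winners = random.sample(middle, min(lottery_awards, len(middle)))
        return funded, winners, rejected

    ranked = [f"app{i}" for i in range(1, 11)]  # app1 ranked best
    print(allocate(ranked, lottery_awards=2))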

Have an expert programme manager

A funder could employ trained specialists with technical expertise to make funding decisions in areas within their remit. This can speed up the review process and allow the charity to concentrate funding and resources on research it is specifically interested in.

 

  • This gives one individual significant influence and control; should they have conflicts of interest or biases, this would impact fairness and in turn undermine confidence in the system. It technically breaches elements of ‘independence’, a core principle of the AMRC Principles of Peer Review. As such, significant attention should be given to ensuring that these expert programme managers have the required skills and experience to make funding decisions in order for this option to be viable. AMRC will clarify this in future iterations of its guidance.
  • For funders with broad remits, a lack of breadth of expertise could be an issue using this model. One approach would be to apply this model to specific types of directed or commissioned funding that are limited to a defined research area.
  • An ‘oversight’ committee (similar to a research review committee) consisting of external reviewers could make funding recommendations to the programme manager who makes the final funding decision.

Changes that address bias

Alter eligibility criteria to promote inclusion

Funders should re-examine their funding eligibility criteria to ensure equality, diversity and inclusion. For example, it should be recognised that early career researchers, particularly those submitting innovative proposals, may not always have preceding preliminary data.

If bias were addressed in funding eligibility criteria, it could encourage more researchers to remain in their field.

  • Funders have not typically measured equality, diversity and inclusion and there are few tried and tested metrics that are commonly understood.
  • The equality and diversity of applications should be measured and assessed to monitor the success of these changes.

Anonymise applications

Removing identifiers from applications could stop reviewers making decisions based on where the applications come from rather than the research itself.

  • Anonymisation may never be truly possible, especially in small fields.
  • Peer review typically involves assessing the research team to ensure suitability; however, this would be impossible with anonymity.
  • Testing would be needed to understand the impact on decision making.

Assess and analyse disagreement between reviewers

Significant disagreement could be an indicator of work with high potential and high risk. Funders could assess this and discuss the reasons why divergences exist, including attitudes to risk and bias.

  • High levels of disagreement will not necessarily be due to bias but may genuinely reflect different views on whether something is of good quality.
  • A strong research review committee chair would be needed to come to conclusions on contentious issues.
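
A minimal sketch of how such disagreement could be flagged, assuming numeric reviewer scores and using the standard deviation as an illustrative measure of spread:

    # Minimal sketch: flag applications whose reviewer scores diverge
    # widely so the committee can discuss why. Scores and the
    # threshold are illustrative assumptions.
    from statistics import stdev

    def high_disagreement(reviews, threshold=2.0):
        """reviews: dict mapping application id -> list of scores."""
        return [app for app, scores in reviews.items()
                if len(scores) > 1 and stdev(scores) > threshold]

    reviews = {"A": [8, 7, 8], "B": [9, 3, 8], "C": [5, 5, 6]}
    print(high_disagreement(reviews))  # ['B']: candidate for discussion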

Use a lottery system to allocate research funding

This change is described under ‘Changes that address burden’ above; reducing the influence of bias in ranking applications is one of its main aims.

Use the Delphi approach to triage applications

The Delphi approach uses iterative rounds of assessment, in which the lowest-scoring applications are removed after each round of committee deliberation.

Triage can be used to select the top applications, which can then be sent to several non-conflicted experts. Multiple Delphi rounds can be held over several weeks to score scientific merit, innovativeness and level of risk. At the end of each round, reviewers can be provided with a table of de-identified scores and an overall ranking of proposals. Reviewers can raise any objections or proceed to the next round. Once everyone is content, the two lowest-ranking proposals can be excluded. The process can be repeated until a few proposals remain; these are subsequently funded.

  • This approach would increase the commitment needed from reviewers and initial triage of applications would still be required to limit the number of applications that needed to be scored.
  • It can be a transparent and impartial way of reviewing grant applications in a specialised field of research where no local expertise is available (Holliday & Robotin, 2010).
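
A minimal sketch of the iterative elimination, with reviewer scoring simulated purely for illustration (real scores would come from the Delphi rounds described above):

    # Minimal sketch of Delphi-style elimination: after each scoring
    # round the two lowest-ranked proposals are removed until a target
    # number remain. The simulated scoring round is an assumption.
    import random

    def delphi_rounds(proposals, score_round, keep=3, drop_per_round=2):
        """score_round: callable returning {proposal_id: score}."""
        shortlist = list(proposals)
        while len(shortlist) > keep:
            scores = score_round(shortlist)
            shortlist.sort(key=lambda p: scores[p], reverse=True)
            # exclude the lowest-ranked, never cutting below 'keep'
            shortlist = shortlist[:max(keep, len(shortlist) - drop_per_round)]
        return shortlist  # remaining proposals go forward for funding

    simulated = lambda ids: {p: random.uniform(1, 10) for p in ids}
    print(delphi_rounds([f"P{i}" for i in range(1, 10)], simulated))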

Diversify funding streams 

A wider range of streams could be set up, with each stream having its own tailored criteria targeting various issues e.g. early career researchers, non-academic applicants, high-risk research, interdisciplinary research. In turn, reviewers with expertise specific to the funding stream could be recruited.

Various mechanisms for funding could also be employed e.g. commissioning models and seed grants.

  • It could increase the complexity of the funding landscape, making it harder for researchers to know which funding streams to apply for.
  • Not all funders will necessarily have enough funding to run multiple streams. Smaller funders could, however, collaborate to share funding to support a wider range of funding streams. AMRC and ABPI are undertaking work to make it easier for charities and industry to work together.
  • Less well-practiced funding stream criteria may present new challenges, so careful exploration will be needed.
  • In niche fields, finding appropriate expertise for each stream might be difficult, meaning individuals risk being over-burdened and conflicted. Where this might be the case, funders could consider collaborating and sharing a research review committee. The US National Institutes of Health does this for their Pioneer Award.
  • Funders should pay particular attention to the appropriate expertise required to chair the research review committee.

Greater focus on impact

Application forms could include questions about a variety of potential impacts (instead of publication record), examples of which can be seen in the AMRC impact report. This would put a greater emphasis on quality and move the conversation away from journal metrics. Reviewers could be asked to take this potential impact into consideration during the review process.

  • It requires culture change, and for maximal effect all funders would need to move forward together on this issue.
  • The research committee chair would need to be on board with this in order to call out instances where reviewers revert to publication record as a metric for success.

Alter assessment criteria

To tackle key areas that the funder wants to prioritise e.g. reproducibility, high-quality research, innovation or interdisciplinary applications.

This could help to increase the quality and appropriateness of applications.

It counteracts any individual reviewer’s natural assessment biases, e.g. a tendency towards risk aversion.

  • Clarity and guidance would be needed to define these criteria and how they should be assessed, and reviewer training would be needed to implement this.
  • Funders, including the AMRC, could work together to define these criteria.
  • The weight given to the chosen criteria could be increased to drive changes in decision making.

Provide training and mentoring for reviewers 

Training could be used to tackle a number of issues such as bias, application ranking and scoring and perceived markers of ‘excellence’.

Mentoring by longer-serving reviewers could also support newer reviewers.

  • Training could be resource intensive, particularly when developing the training programme.
  • Funders could work together to develop standard training that could subsequently be tailored to each funder’s remit as required.
  • Some examples of existing training include ESRC’s online training, and NIHR’s online interactive course and guidance document for public reviewers.

Provide reviewers with a refresher on the charity’s aims and protocol

A clear summary of the charity’s views on key topics could re-emphasise the charity’s priorities. Topics could include:

  • bias
  • the type of research the charity is looking for, e.g. innovative research
  • research quality and reproducibility
  • the importance and value of the reviewer’s contribution

  • This would likely need repeating on a regular basis as committee members could revert to their standard ways of working. It could be covered at the start of each research review committee meeting, for example.
  • Charities could develop a suite of supporting materials to summarise this information.
  • A strong chair could help to call out behaviour that is not in the spirit of the charity’s aims.

Make reviewers’ identities known to applicants

Applicants would be able to view the name of the reviewer alongside the peer review scores and feedback comments. This could encourage reviewers to be more constructive in their feedback which should, in theory, improve the quality of future applications.

It would also improve transparency and self-policing of conflicts of interest.

 

  • Reviewers may be less honest in their feedback.
  • Early-career researchers may feel uncomfortable having their identity revealed, particularly if they are criticising senior colleagues.

Measure the confidence of reviewer scores 

In addition to scoring applications, reviewers could be asked to indicate their confidence in that score.

Where lower confidence scores result from concerns around, for example, the riskiness of the research, its quality or reproducibility, or reviewers lacking expertise in every strand of the research, this system could drive discussion around these issues.

The addition of a confidence score could allow two applications that otherwise rank very similarly to be differentiated. 

 

  • A written review could be requested only for those applications that received a low confidence score from the committee, rather than for all of them.
  • Reviewers may be tempted to exaggerate their confidence to avoid being perceived as less knowledgeable than others.
  • It might be beneficial to allow reviewers to comment on why they submitted certain scores – does it reflect their overall confidence in the application, or is there a specific element of the application they know less about? This additional information would also prevent good applications being wrongly labelled as ‘poor’ because of reviewers’ lack of confidence.
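
A minimal sketch of how scores and confidence could be recorded and used together; the scales, the tie-breaking rule and the written-review threshold are all illustrative assumptions:

    # Minimal sketch: pair each score with the reviewer's confidence,
    # let confidence separate similarly scored applications, and flag
    # low-confidence reviews for a written review.
    from dataclasses import dataclass

    @dataclass
    class Review:
        app_id: str
        score: float       # e.g. 1-10 quality score
        confidence: float  # e.g. 0-1 self-reported confidence

    def rank(reviews):
        # Sort by score; confidence breaks near-ties between applications.
        return sorted(reviews, key=lambda r: (r.score, r.confidence),
                      reverse=True)

    def needs_written_review(r, min_confidence=0.5):
        return r.confidence < min_confidence

    rs = [Review("A", 8.0, 0.9), Review("B", 8.0, 0.4), Review("C", 6.5, 0.8)]
    print([r.app_id for r in rank(rs)])                       # ['A', 'B', 'C']
    print([r.app_id for r in rs if needs_written_review(r)])  # ['B']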

Provide more opportunity to engage with applicants

For applications that don’t require interviews, funders could provide the opportunity for reviewers to ask questions of the applicants via teleconference to get a better understanding of the research and the applicant. This could serve to lessen pre-existing biases toward the applicant or the proposed research.

 

  • If this was introduced, there should be a set protocol in order to ensure all applicants are treated equally and fairly.
  • It could potentially draw attention away from the quality of the research and more towards personal attributes of the applicant.
  • Interaction with applicants could conversely increase reviewers’ biases.
  • This would increase burden on both applicants and reviewers.

Changes that address quality

Encourage more reproducible and high-quality research applications and reduce research waste

Funders could offer training for researchers to improve quality and reproducibility of research. Charity staff could also play a greater role in triage by checking applicants have provided sufficient information around reproducibility.

Routinely paying for replication studies, requiring researchers to undertake a systematic review before new work and obliging them to pre-register their hypotheses to avoid HARKing (hypothesising after the results are known) could all help to improve quality and rigour.

Funders could signpost training on research waste and reproducibility, and could expand guidance for potential applicants to cover waste, reproducibility and quality, e.g. MRC’s applicant guidance.

  • Infrastructure such as publishing platforms for work like protocols, replication studies and negative results is not well developed. However, funders could sign up to DORA and encourage researchers to publish through AMRC Open Research.
  • It might require significant resource to undertake this.
  • Researchers could be encouraged to make videos of their methods rather than written records to aid replication by other research teams (e.g. https://www.jove.com/).

Use peer review to evaluate on-going funded research

Funders could play a larger role in managing the whole piece of research, rather than front-loading management at the application stage. Peer review could be used to provide ‘stop-go’ decisions at mid-term evaluations. Applicants could also be asked to submit adaptable work plans so that they would be prepared to deal with unexpected developments or opportunities.

  • There would need to be a formal procedure in place to allow negative reviews to be addressed before withdrawal of funding.
  • It would place more burden on funders and reviewers and take more time.
  • It could undermine trust and strain the funder-researcher relationship.
  • There could be unintended negative consequences for research culture, and it could increase pressure on researchers.

The following changes, described in full under ‘Changes that address bias’ above, also address quality: diversify funding streams; greater focus on impact; alter assessment criteria; provide training and mentoring for reviewers; provide reviewers with a refresher on the charity’s aims and protocol; make reviewers’ identities known to applicants; measure the confidence of reviewer scores; and provide more opportunity to engage with applicants.

Published: 26th July, 2019

Updated: 19th September, 2019

Author: Leonora Neale