Changes that address quality

Encourage more reproducible and high-quality research applications and reduce research waste
Funders could offer training for researchers to improve the quality and reproducibility of research. Charity staff could also play a greater role in triage by checking that applicants have provided sufficient information around reproducibility. Routinely paying for replication studies, requiring researchers to undertake a systematic review before new work, and obliging them to pre-register their hypotheses to avoid 'HARKing' (hypothesising after the results are known) could all help to improve quality and rigour. Funders could signpost to training in research waste and reproducibility, and could expand the information provided to potential applicants to cover waste, reproducibility and quality, e.g. the MRC's applicant guidance. Infrastructure such as publishing platforms for work like protocols, replication studies and negative results is not well developed; however, funders could sign up to DORA and encourage researchers to publish through AMRC Open Research, although this might require significant resource. Researchers could also be encouraged to make videos of their methods rather than written records to aid replication by other research teams (e.g. https://www.jove.com/).

Use peer review to evaluate ongoing funded research
Funders could play a larger role in managing the whole piece of research, rather than front-loading management at the application stage. Peer review could be used to provide 'stop-go' decisions at mid-term evaluations. Applicants could also be asked to submit adaptable work plans so that they would be prepared to deal with unexpected developments or opportunities. There would need to be a formal procedure in place to allow negative reviews to be addressed before withdrawal of funding. This approach would place more burden on funders and reviewers and take more time. It could undermine trust and strain the funder-researcher relationship, and there could be unintended negative consequences for research culture and increased pressure on researchers.

Diversify funding streams
A wider range of streams could be set up, each with its own tailored criteria targeting various issues, e.g. early-career researchers, non-academic applicants, high-risk research or interdisciplinary research. In turn, reviewers with expertise specific to each funding stream could be recruited. Various funding mechanisms could also be employed, e.g. commissioning models and seed grants. This could increase the complexity of the funding landscape, making it harder for researchers to know which funding streams to apply for. Not all funders will necessarily have enough funding to run multiple streams; smaller funders could, however, collaborate and share funding to support a wider range of streams. AMRC and ABPI are undertaking work to make it easier for charities and industry to work together. Less well-practised funding stream criteria may present new challenges, so careful exploration will be needed. In niche fields, finding appropriate expertise for each stream might be difficult, meaning individuals risk being over-burdened and conflicted. Where this might be the case, funders could consider collaborating and sharing a research review committee; the US National Institutes of Health does this for its Pioneer Award. Funders should pay particular attention to the expertise required to chair the research review committee.

Greater focus on impact
Application forms could include questions about a variety of potential impacts (instead of publication record), examples of which can be seen in the AMRC impact report. This would put a greater emphasis on quality and move the conversation away from journal metrics. Reviewers could be asked to take this potential impact into consideration during the review process. It would require culture change, and all funders would need to move forward together on this issue for maximal effect. The research committee chair would need to be on board in order to call out instances where reviewers revert to publication record as a metric for success.

Alter assessment criteria
Assessment criteria could be altered to tackle key areas that the funder wants to prioritise, e.g. reproducibility, high-quality research, innovation or interdisciplinary applications. This could help to increase the quality and appropriateness of applications, and it counters the biases of any individual reviewer's natural assessment style, e.g. risk aversion. Clarity and guidance would be needed to define these criteria and how they should be assessed, and reviewer training would be needed to implement this. Funders, including the AMRC, could work together to define these criteria. The weight given to the chosen criteria could be increased to drive changes in decision making.

Provide training and mentoring for reviewers
Training could be used to tackle a number of issues such as bias, application ranking and scoring, and perceived markers of 'excellence'. Mentoring by longer-serving reviewers could also support newer reviewers. Training could be resource intensive, particularly when developing the training programme; funders could work together to develop standard training that could subsequently be tailored to each funder's remit as required. Some examples of existing training include ESRC's online training, and NIHR's online interactive course and guidance document for public reviewers.

Provide reviewers with a refresher on the charity's aims and protocol
A clear summary of the charity's views on key topics could re-emphasise the charity's priorities. Topics could include: bias; the type of research the charity is looking for, e.g. innovative research; quality and reproducibility; and the importance and value of the reviewer's contribution. This would likely need repeating on a regular basis, as committee members could revert to their standard ways of working; it could be covered at the start of each research review committee meeting, for example. Charities could develop a suite of supporting materials to summarise this information. A strong chair could help to call out behaviour that is not in the spirit of the charity's aims.

Make reviewers' identities known to applicants
Applicants would be able to view the name of the reviewer alongside the peer review scores and feedback comments. This could encourage reviewers to be more constructive in their feedback, which should, in theory, improve the quality of future applications. It would also improve transparency and the self-policing of conflicts of interest. However, reviewers may be less honest in their feedback, and early-career researchers may feel uncomfortable having their identity revealed, particularly if they are criticising senior colleagues.

Measure the confidence of reviewer scores
In addition to scoring applications, reviewers could be asked to indicate their confidence in that score. Where lower confidence scores result from concerns around, for example, the riskiness of the research, its quality or reproducibility, or reviewers lacking expertise in every strand of the research, this system could drive discussion of those issues. A confidence score could also allow two applications that otherwise rank very similarly to be differentiated, and a written review could be requested only for those applications that received a low confidence score from the committee, rather than for all of them. Reviewers may, however, be tempted to exaggerate their confidence to avoid being perceived as less knowledgeable than others. It might therefore be beneficial to allow reviewers to comment on why they submitted certain scores: does the score reflect their overall confidence in the application, or is there a specific element of the application they know less about? This additional information would also prevent good applications being wrongly labelled as 'poor' because of reviewers' lack of confidence.

Provide more opportunity to engage with applicants
For applications that don't require interviews, funders could give reviewers the opportunity to ask questions of the applicants via teleconference to get a better understanding of the research and the applicant. This could serve to lessen pre-existing biases toward the applicant or the proposed research. If this were introduced, there should be a set protocol to ensure all applicants are treated equally and fairly. It could potentially draw attention away from the quality of the research and towards the personal attributes of the applicant; interaction with applicants could conversely increase reviewers' biases. It would also increase the burden on both applicants and reviewers.