Within a couple of decades, digitally enabled practices have become pervasive across our work and lives and are now the norm. These transformations have largely been driven and controlled by commercial organisations; however, there is increasing interest and participation from health and medical research charities, which see the potential benefit to patients.
Charities hold a unique position of trust, and therefore have a mandate to embody best practice in the use of digital technologies and any interactions these may have with people’s data. To enable this, the AMRC commissioned DataKind UK to develop an ethics framework for members to reference when developing and deploying digital products and services. The result is this paper, as well as a guide for enacting this framework when collaborating with industry partners through a series of questions: ‘Navigating the Digital Health Ethics Landscape: Questions for charities to ask digital technology company partners’.
There is a wealth of existing ethical principles that are to some extent applicable to, but not specific to, the digital health work of charities. These various frameworks were therefore collated, and the relevant aspects from across all of them were developed into this single framework.
The first step in the process was to characterise the environment in which health and medical research charities work when undertaking digital health research.
This can be represented as the sectors they might interact with:
Existing ethical principles in each of these areas were identified and parsed to tease out consistencies across them that are relevant to charities developing health technologies.
Nine key concepts materialised:
Beneficence
Do work that is to the benefit, not detriment of people. The benefits of the work should outweigh the potential risks.
Non-maleficence
Avoid harm. This is closely related to beneficence.
Autonomy
Enable people to make choices. This requires people to have sufficient knowledge and understanding to decide.
Justice
The benefits and risks should be distributed fairly.
Explicability
Transparency around how and why digital health solutions generate the outcomes they do. This is particularly relevant to AI, for which the assumptions, workings, and outputs should be explicable.
Sustainability (financial and operational)
Minimise the risk of developing digital products and services on which users become dependent but which cannot be sustained.
Open research
Commitment to make research freely open and accessible for reuse.
Community mindedness
Willingness to collaborate within the digital health community, such as sharing platforms applicable across medical conditions.
Proportionality
Being proportionate to the relevant risk and potential benefit.
There is a plethora of ethical principles and codes of conduct that have been created in relation to the development of new technologies, as well as principles for more traditional health interventions. As part of this work, we reviewed over 60 principles and codes of conduct developed from a range of disciplines and perspectives, including legal, data science, computer science, social science, biomedicine, public health, government, and the charity sector. Indeed, several of the ethical guidelines included were published during the course of the project, demonstrating the timeliness of this assessment (see Appendix D for a list of principles reviewed for this project). So how should a health and medical research charity navigate this ever-increasing array of principles? The next section details the process by which the nine core principles in this guidance were distilled, navigating the various interacting systems encapsulated by the diagram below.
This desk research was supplemented by interviews with six medical research charities and eight expert stakeholders in the field of ethics, which helped to steer and underpin the research into existing ethical principles across the other sectors.
The field of biomedical research has an established set of ethical principles - the ‘Georgetown Mantra’[2] - which has played an important role in the contemporary development of bioethics. It sets out four key principles for assessing ethical good practice: beneficence, non-maleficence, autonomy, and justice.
These four principles are viewed as a cornerstone of ethical practice and have more recently also been applied to digital technologies.
Case study: Assessing beneficence and non-maleficence in the context of unmet need
An ethics analysis of using a machine-learning model to predict patient outcomes in psychotic episodes highlighted the severity of the illness, the lack of other options for improving recovery rates, and the potential for a more accurate prognosis to have a meaningful impact on patient recovery. The authors found that there was a real prospect of benefit for patients, and that the risks were acceptable relative to those of the underlying condition.[3]
As a result of technologies being pushed out without necessarily undergoing appropriate evaluation, a large corpus of ethical guidelines, toolkits, and principles has been developed for tech R&D. Floridi et al. (2018)[4] undertook a review of the core opportunities and risks that AI brings to society. By synthesising six ethical frameworks, they found that the same four bioethics principles highlighted above were commonly incorporated, but argued that an additional principle should be added - explicability.
Case study: Understanding what the algorithm is doing
Zech, J. (2018)[5] trained a convolutional neural network (a type of image classification algorithm) to diagnose an enlarged heart from chest X-rays. Some X-rays had the word ‘portable’ in the top corner, indicating that the patient scanned had been bedbound - a correlate of increased rates of a positive diagnosis. The AI therefore, rather than using only the heart image itself as intended, started to incorporate the presence (or absence) of the word into its diagnosis calculations. The researchers were able to identify this because they had encoded the algorithm’s risk-level ratings as numbers across a grid overlaying the X-ray; they could see that it was giving unusually high ratings to an unexpected area outside the heart. This illustrates the importance of a clear understanding of an algorithm’s decision-making to ensure it is working as intended.
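The kind of grid-overlay inspection described above is commonly known as a class activation map. The sketch below is a generic illustration of that technique, not the study’s own code: the model choice, layer names, and input are assumptions for the example. It weights the final convolutional feature maps by the classifier’s weights for the predicted class and upsamples the result to the image grid, so that unexpectedly ‘hot’ regions (such as a corner label) become visible.

```python
# Illustrative class activation map (CAM) sketch - not the original
# study's code. Model and input are placeholder assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

activations = {}

def save_activation(module, inputs, output):
    # Capture the feature maps of the final convolutional block.
    activations["features"] = output.detach()

model.layer4.register_forward_hook(save_activation)

# Placeholder for a pre-processed chest X-ray (batch, channels, H, W).
image = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    logits = model(image)
predicted_class = logits.argmax(dim=1).item()

# Weight each feature map by the classifier weights for the predicted
# class, then upsample to image size. High values show where the model
# 'looked' - e.g. a text label in the corner rather than the heart.
features = activations["features"]          # shape [1, 512, 7, 7]
weights = model.fc.weight[predicted_class]  # shape [512]
cam = F.relu(torch.einsum("c,bchw->bhw", weights, features))
cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:],
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalise 0..1
```

Overlaying this normalised grid of ratings on the X-ray is what makes a misplaced focus, such as the ‘portable’ marker, immediately visible to a reviewer.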
Increasingly, non-profit organisations, particularly in the international development sector, are developing ethical principles for working with tech companies. For example, Monitoring, Evaluation, Research and Learning (MERL) Tech[6] has collaborated with donors to create a live set of Principles for Digital Development.[7] Five of these principles encompassed ideas distinct from those identified thus far (see Appendix D), broadly typifying the themes of sustainability, open research, and community mindedness.
The theme of sustainability, specifically financial and operational, was also raised in Promoting More Effective Partnerships, a discussion document in which seven medical research charity chief executives and the AMRC discussed four principles for effective tech partnerships. It is likewise raised in the recently developed UK Code of Conduct for data-driven health and care technology, under Principle 10: Define the Commercial Strategy.[8]
Developing digital technology products and services that are then discontinued for financial reasons would likely be unethical: patients may have grown to rely on a tool or service, and limiting use to an initial pilot population may increase inequality and breach the ethical requirement for Justice. In the UK, this may mean that charity partners should have a plan for ongoing value generation within the NHS and commissioning system.
Case study: Collaborative platforms?
In 2018, Samsung removed all compatibility features from its health app, blocking third parties from delivering services alongside it.[9] In the private sector, it is accepted that organisations seek to be market leaders, an environment that disincentivises collaboration. In the health charity sector, such exclusionary practices should not be an acceptable norm, given the need to prioritise maximal benefit to patients. Interoperability should be standard, particularly given the increasing need to tackle multimorbidities.
Public health is primarily concerned with the health and wellbeing of society, and balances respect for individual choice (an important principle within bioethics) against wider societal benefit, sometimes resulting in individual autonomy being overridden for the greater good. This is distinct from the approach taken in traditional bioethics.
There is no standard set of public health principles (like those used within bioethics), but public health ethics frameworks commonly share a commitment to achieving the greater good and addressing social inequalities[10]. These frameworks typically take the form of a set of goals to be achieved[11] and/or apply ethics to real-life case studies, considering various stages of design and deployment. One such example is the Department of Health’s ethical framework for responding to pandemic influenza[12], which proposes eight principles, largely reflecting concepts already covered. One notion that surfaced, however, and that is a useful addition, is proportionality.
This is particularly relevant to digital health in the context of data. Proportionality around data usage is a core principle within the Data Protection Act 2018 (covered under data minimisation), and is discussed within a number of ethical frameworks. It can concern proportionality across individuals (as in the case study below) or proportionality of data about a specific individual (e.g. is it necessary to collect and store a patient’s location data as part of an activity tracking study?).
Case study: Disproportionate data access
Royal Free London NHS Foundation Trust shared data on 1.6 million patients with Google DeepMind in order to develop and validate an app to help clinicians detect acute kidney injury in its earliest stages[13]. This was deemed a breach of the Data Protection Act, as the Information Commissioner “[wasn’t] persuaded that it was necessary and proportionate to disclose 1.6 million patient records to test the application.”[14]
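In practical terms, proportionality of data about an individual can be enforced at the point of collection. The sketch below is a hypothetical illustration of data minimisation for the activity-tracking example above; the schema and field names are assumptions for illustration, not part of this guidance.

```python
# Hypothetical data-minimisation sketch: retain only the fields the
# study protocol justifies; discard everything else before storage.
from dataclasses import dataclass

# Fields justified by the (hypothetical) activity-tracking protocol.
ALLOWED_FIELDS = {"participant_id", "step_count", "recorded_at"}

@dataclass
class ActivityRecord:
    participant_id: str
    step_count: int
    recorded_at: str  # ISO 8601 timestamp

def minimise(raw: dict) -> ActivityRecord:
    """Keep only protocol-justified fields, dropping e.g. GPS location."""
    kept = {k: v for k, v in raw.items() if k in ALLOWED_FIELDS}
    return ActivityRecord(**kept)

raw_reading = {
    "participant_id": "P-001",
    "step_count": 5400,
    "recorded_at": "2019-06-01T09:30:00Z",
    "gps_location": (51.5, -0.12),  # disproportionate to the study aim: discarded
}
record = minimise(raw_reading)  # stored record contains no location data
```

Filtering at the point of ingestion, rather than after storage, means disproportionate data never enters the system in the first place.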
All of the research around existing principles culminated in a consensus-building workshop with eight AMRC member charities and AMRC staff (for more information see Appendix A) to agree the final core set of principles, which are summarised in the table below. The nine principles were derived from existing principles across five key areas, which characterise the environment in which health and medical research charities operate when developing and deploying digital technology products and services.
The exercise of identifying key ethical principles against which charities could assess the ethical impacts of digital health projects has necessarily been a subjective one. The nature of ethics is such that there is no objective truth; judgements depend on the context and circumstances in which decisions are being made.
Note: the need to be evidence-driven is an important enabler for all principles. See Appendix A for further discussion.
[2] Developed by James Childress from the University of Virginia and Thomas Beauchamp from Georgetown University
[3] Martinez-Martin, Nicole & Dunn, Laura B. & Roberts, Laura Weiss. (2018). Is it ethical to use prognostic estimates from machine learning to treat psychosis?
[4] Floridi, Luciano & Cowls, Josh & Beltrametti, Monica & Chatila, Raja & Chazerand, Patrice & Dignum, Virginia & Lütge, Christoph & Madelin, Robert & Pagallo, Ugo & Rossi, Francesca & Schafer, Burkhard & Valcke, Peggy & Vayena, Effy. (2018). An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations.
[5] Zech, J. (2018). What are radiological deep learning models actually learning?
[6] http://merltech.org
[7] https://digitalprinciples.org/principles/
[8] UK Department of Health & Social Care (2019). Code of Conduct for data-driven health and care technology.
[9] https://gadgetsandwearables.com/2018/08/28/samsung-health-update/
[10] Public Health England, HSC Public Health Agency, Public Health Wales, NHS Scotland (2017) Public Health Ethics In Practice. A background paper on public health ethics for the UK Public Health Skills and Knowledge Framework.
[11] Nuffield Council on Bioethics (2007). Public health: ethical issues.
[12] Department of Health (2007) Responding to Pandemic Influenza: The Ethical Framework for Policy and Planning
[13] https://www.royalfree.nhs.uk/patients-visitors/how-we-use-patient-information/our-work-with-deepmind/
[14] ICO (2017) Blog: Four Lessons NHS Trusts Can Learn From the Royal Free Case.