Within a couple of decades, digitally enabled practices have become pervasive across our work and lives, and are now the norm. These transformations have largely been driven and controlled by commercial organisations; however, there is increasing interest and participation from health and medical research charities, which see the potential benefit to patients.
Charities hold a unique position of trust, and therefore have a mandate to embody best practice in the use of digital technologies and in any interactions these may have with people’s data. To enable this, the AMRC commissioned DataKind UK to develop an ethics framework for members to reference when developing and deploying digital products and services. The result is this paper, together with a guide for enacting the framework when collaborating with industry partners through a series of questions: ‘Navigating the Digital Health Ethics Landscape: Questions for charities to ask digital technology company partners’.
There is a wealth of existing ethical principles that are to some extent applicable to, but not specific to, the digital health work of charities. These various frameworks were therefore collated, and the relevant aspects from across them were developed into this single framework.
The first step in the process was to characterise the environment in which health and medical research charities work when undertaking digital health research.
This can be represented as the sectors they might interact with:
Existing ethical principles in each of these areas were identified and parsed to tease out consistencies across them that are relevant to charities developing health technologies.
Nine key concepts materialised:
Beneficence
Do work that is to the benefit, not the detriment, of people. The benefits of the work should outweigh the potential risks.
Non-maleficence
Avoid harm. This is closely related to beneficence.
Autonomy
Enable people to make choices. This requires people to have sufficient knowledge and understanding to decide.
Justice
The benefits and risks should be distributed fairly.
Explicability
Transparency around how and why digital health solutions generate the outcomes they do. Particularly relevant to AI, for which the assumptions, working and outputs should be explicable.
Sustainability (financial and operational)
Minimise the risk of developing digital products and services that users come to depend on but that cannot be sustained.
Open research
Commitment to make research freely open and accessible for reuse.
Community mindedness
Willingness to collaborate within the digital health community, such as sharing platforms applicable across medical conditions.
Proportionality
Act in a way that is proportionate to the relevant risk and potential benefit.
Digital technology is transforming the ways in which business, government and civil society operate. Much of this innovation has been led by industry and academia. However, many health and medical research charities recognise the benefits that digital products and services can bring, ranging from improvements in treatment and disease management through to innovative research methods. These technologies can give patients access to products and services sooner, and potentially at a lower cost.
In the commercial sector, new digital innovations are quickly tested and deployed by technology organisations without necessarily completing a full assessment of their ethical implications. There is, however, increasing awareness of and concern about the potential negative impacts technologies can cause. For example, there is recognition that the data fed into AI algorithms is often not neutral, and can exacerbate existing inequalities or create new ones[1]. This greater understanding of such implications has led to an increasing number of ethical principles and codes of practice being developed.
These kinds of consequences are not acceptable for charities, whose very purpose is to provide public benefit. In entering the relatively new field of digital technologies, charities should seek ways in which their values can be embedded within their approach to innovation, and must actively work to minimise potential harm. This places greater responsibility on the shoulders of charities, which traditionally have had limited direct ethical accountability, with clinical trials overseen by ethics review boards (often those of academic partners).
This report provides a framework for health and medical research charities to develop ethical guidelines when undertaking projects involving digital technologies. It is not the intention of this report to set out a mandatory set of principles to which all health and medical research charities must adhere: the context and aims of each project, together with each charity’s organisational values, will inform that charity’s own principles and bring nuanced meaning and implications to them. Instead, this report acts as a guide enabling charities to access current thinking on ethical principles and to consider how it relates to their own work. This exercise can be further facilitated by a second resource, ‘Navigating the Digital Health Ethics Landscape: Questions for charities to ask digital technology company partners’, which provides example questions that charities could use to establish whether a project raises potential ethical issues and to work towards a solution. Charities undertaking industry partnerships might also like to refer to our resource ‘Digital Essential Partnerships: a guide for charities working with digital technology companies’.
[1] Williams, J. and Gunn, L. (2018). Math Can’t Solve Everything: Questions We Need To Be Asking Before Deciding an Algorithm is the Answer. Electronic Frontier Foundation.