We have created an interactive version of the executive summary that you can use to click through to the sections of the guidelines you would like to read more about:
Within a couple of decades, digitally enabled practices have become widespread across our work and lives and are now the norm. These transformations have largely been driven and controlled by commercial organisations; however, there is growing interest and participation from health and medical research charities, which see the potential benefit to patients.
Charities hold a unique position of trust, and therefore have a mandate to embody best practice in the use of digital technologies and in any interactions these may have with people’s data. To enable this, the AMRC commissioned DataKind UK to develop an ethics framework for members to reference when developing and deploying digital products and services. The result is this paper, together with a guide for enacting the framework when collaborating with industry partners through a series of questions: ‘Navigating the Digital Health Ethics Landscape: Questions for charities to ask digital technology company partners’.
There is a wealth of existing ethical principles that are to some extent applicable to, but not specific to, the digital health work of charities. These various frameworks were therefore collated, and relevant aspects from across all of them were developed into this single framework.
The first step in the process was to characterise the environment in which health and medical research charities work when undertaking digital health research.
This can be represented as the sectors they might interact with:
Existing ethical principles in each of these areas were identified and parsed to tease out consistencies across them that are relevant to charities developing health technologies.
Nine key concepts materialised:
Do work that is to the benefit, not detriment of people. The benefits of the work should outweigh the potential risks.
Avoid harm. This is closely related to beneficence.
Enable people to make choices. This requires people to have sufficient knowledge and understanding to decide.
The benefits and risks should be distributed fairly.
Transparency around how and why digital health solutions generate the outcomes they do. This is particularly relevant to AI, for which the assumptions, workings and outputs should be explicable.
Sustainability (financial and operational)
Minimise the risk of developing digital products and services on which users become dependent but which cannot be sustained.
Commitment to make research freely open and accessible for reuse.
Willingness to collaborate within the digital health community, such as sharing platforms applicable across medical conditions.
Being proportionate to the relevant risk and potential benefit.
For a PDF version of this report, or if you have any questions, please contact our Digital Project Manager, Lotte.
Algorithm: a set of rules that are followed when making calculations or solving problems.
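To make the definition above concrete, here is a minimal sketch of an algorithm as a fixed set of rules applied step by step. The example (averaging daily step counts from an activity tracker) is purely illustrative and is not drawn from the report itself:

```python
def mean_daily_steps(step_counts):
    """A simple algorithm: two rules applied in order.

    Rule 1: add up every daily reading.
    Rule 2: divide the total by the number of days.
    """
    if not step_counts:
        raise ValueError("no readings supplied")
    return sum(step_counts) / len(step_counts)

# Example: three days of (hypothetical) step counts.
average = mean_daily_steps([4200, 8100, 6550])
```

However simple, this already has the defining property of an algorithm: the same inputs, passed through the same rules, always produce the same output.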
Artificial intelligence (AI): an area of computer science in which machines can perform tasks which require human intelligence.
Bioethics: the study of the ethical issues that may emerge from advances in biology and medicine.
Biomedical research: research into the prevention and treatment of illness and disease.
Digital: applying the culture, practices, processes & technologies of the Internet-era to respond to people’s raised expectations.
Digital products and services: products and services built through advances in computer science, data processing and data storage capabilities. Examples include activity-tracking devices such as smartwatches, predictive modelling, and augmented reality (an interactive experience in which computer-generated images are superimposed onto real-world environments).
Ethics: a set of moral principles, or ways in which we determine right from wrong.
Machine learning: a branch of artificial intelligence in which algorithms learn from existing data in order to automate data analysis.
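As a minimal illustration of "learning from existing data", the sketch below fits a straight line to a handful of data points using ordinary least squares; the fitted line can then make predictions for new inputs. The data and variable names are hypothetical, and real machine-learning systems are far more complex, but the principle (parameters are estimated from data rather than hand-coded) is the same:

```python
def fit_line(xs, ys):
    """Learn the slope and intercept of y = slope * x + intercept
    from example (x, y) pairs, via ordinary least squares."""
    n = len(xs)
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    slope = (
        sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
        / sum((x - x_mean) ** 2 for x in xs)
    )
    intercept = y_mean - slope * x_mean
    return slope, intercept

# Learn from four example points lying on y = 2x + 1 ...
slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])

# ... then predict for an input the model has never seen.
prediction = slope * 5 + intercept
```

The rules for making a prediction were not written by a programmer; they were derived from the example data, which is the essential idea behind machine learning.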