Navigating the Digital Health Ethics Landscape: A framework for understanding ethical principles for digital health

Executive summary

We have created an interactive version of this executive summary that you can use to click through to the sections of the guidelines you would like to read more about:


Within a couple of decades, digitally enabled practices have become the norm across our work and lives. These transformations have largely been driven and controlled by commercial organisations; however, there is increasing interest and participation from health and medical research charities, which see the potential benefit to patients.

Charities hold a unique position of trust and therefore have a mandate to embody best practice in the use of digital technologies and any interactions these may have with people’s data. To enable this, the AMRC commissioned DataKind UK to develop an ethics framework for members to reference when developing and deploying digital products and services. The result is this paper, together with a guide for enacting the framework when collaborating with industry partners through a series of questions: ‘Navigating the Digital Health Ethics Landscape: Questions for charities to ask digital technology company partners’.

There is a wealth of existing ethical principles that are to some extent applicable to, but not specific to, the digital health work of charities. These various frameworks were therefore collated, and relevant aspects from across all of them were developed into this single framework.

Understanding the context 

The first step in the process was to characterise the environment in which health and medical research charities work when undertaking digital health research.  

This can be represented as the sectors they might interact with: 

Nine core principles 

Existing ethical principles in each of these areas were identified and analysed to tease out consistencies across them that are relevant to charities developing health technologies.

Nine key concepts materialised: 


Beneficence

Do work that is to the benefit, not the detriment, of people. The benefits of the work should outweigh the potential risks.


Non-maleficence

Avoid harm. This is closely related to beneficence.


Autonomy

Enable people to make choices. This requires people to have sufficient knowledge and understanding to decide.


Justice

The benefits and risks should be distributed fairly.


Transparency

Be transparent about how and why digital health solutions generate the outcomes they do. This is particularly relevant to AI, for which the assumptions, workings, and outputs should be explicable.

Sustainability (financial and operational) 

Minimise the risk of developing digital products and services that users become dependent on but that cannot be sustained.

Open research 

A commitment to making research freely open and accessible for reuse.

Community mindedness 

A willingness to collaborate within the digital health community, for example by sharing platforms applicable across medical conditions.


Proportionality

Be proportionate to the relevant risk and potential benefit.