Key points
Tech companies have repeatedly accessed and used people’s data – including medical data – without explicit consent.
People’s data may not always be used in their best interests.
If there is any public gain to be had from medical data-sharing, a different approach is needed.
Recent revelations over Cambridge Analytica’s covert harvesting of millions of users’ Facebook data have spurred public outrage over the oblique ways in which big tech gathers and monetizes information about us. The long overdue realisation that we do indeed all pay for big tech’s free services – by recording our lives in ways marketers could only have dreamt of a couple of decades ago – has reaffirmed data privacy’s place in mainstream political discussions.
In this context, it is worth shining a light on the ways in which big tech is involved in the healthcare industry. Surely, if we’re outraged over a third party knowing which memes and cat videos we clicked on, we care even more if such a party has information about our medical predispositions and treatment histories.
In July 2015, DeepMind Technologies Limited, the Google company specialising in artificial intelligence (AI), began its first major healthcare project. Through a data-sharing agreement with the Royal Free Hospital London, one of the largest healthcare providers in Britain’s National Health Service (NHS), the company received access to Royal Free patient data. In exchange, DeepMind was to develop a smartphone app to help clinicians treat acute kidney injury and to engage in the experimental development of real-time diagnosis and decision support services.
In November 2015, the Royal Free granted DeepMind access to 1.6 million patient records. These were identifiable on an individual level, and included sensitive information such as blood test results, diagnoses, and admission and discharge records.
UK medical data privacy law requires that explicit consent be obtained from each patient whose identifiable data is passed on to a third party. There is one exception: if the third party is in a direct care relationship with the patient, no explicit consent is needed. However, DeepMind’s experimental development of diagnosis and decision support services hardly qualifies as “direct care”. By failing to obtain consent, the Royal Free and DeepMind therefore violated the data privacy of 1.6 million UK individuals.
For a firm specialising in the development of machine learning software, which requires large troves of data for the training of algorithms, the Royal Free’s patient records constitute a strategic resource. Thus, the two organisations’ data-sharing partnership is not only questionable from a legal perspective, but also an economic one. A government organisation giving valuable resources to a conglomerate already under fire for monopolistic practices hardly fosters fair market competition.
This case raises issues from a political standpoint, too. DeepMind’s sister company Google already holds large amounts of behavioural data on its users, potentially revealing medically relevant knowledge such as an individual’s interest in sports. Google has also invested in the popular genomic sequencing company 23andMe. A private organisation mining these three buckets in tandem – medical records, behavioural online data, and genomic data – and selling the results to third parties such as health insurance or political consulting firms, has the potential to supercharge existing issues of market exploitation and racial discrimination. As a consequence, public organisations sharing information with private ones need to explicitly define the contexts in which their data may be used.
The collaboration between DeepMind and the Royal Free finally came under public scrutiny in 2017, and both sides have since vowed to improve their practices. Further, the EU’s General Data Protection Regulation is poised to make vast improvements to the transparency and fair use of individuals’ data once its enforcement begins in May 2018.
However, the case of the DeepMind-Royal Free collaboration shows that even with clear regulation in place, government organisations can find themselves torn between data protection and public service improvement, potentially becoming the first to violate existing regulations. Thus, we need to develop new governance approaches to these partnerships – ones that incorporate both sides’ underlying incentives and find ways of aligning them without raising new risks.
Further Reading
Julia Powles’ study of the NHS-DeepMind case, published in Health and Technology: https://link.springer.com/article/10.1007/s12553-017-0179-1
Guardian article on the NHS-DeepMind case: https://www.theguardian.com/technology/2017/jul/03/google-deepmind-16m-patient-royal-free-deal-data-protection-act
Economist article on the question of big tech monopolism: https://www.economist.com/news/leaders/21735021-dominance-google-facebook-and-amazon-bad-consumers-and-competition-how-tame
Website on EU’s General Data Protection Regulation: https://www.eugdpr.org/
About Me
Final year MPhil student in Technology Policy, fascinated by the societal dimensions of advances in science and technology. I enjoy reading sci-fi, because it has a lot to say about my field of study.