How One Globally-Centered Group is Tackling the Ethical Implications of Image-Based mHealth in Low Resource Settings
May 5, 2019
As our world becomes increasingly digital, the technology of mobile health applications, or mHealth, is proliferating, helping to bring health care interventions and resources to communities across the globe. While people in developed nations might experience mHealth as an app to track their wellness or a kind of insurance app to make filing and tracking claims more convenient, mHealth takes an entirely different form in low-resource settings.
In countries with few or no health-care resources, mHealth options seek to balance numerous socioeconomic challenges, such as a shortage of medical professionals, poverty, lack of facilities and various cultural barriers that contribute to poor health outcomes. In these cases, mHealth is used for more critical health matters, like getting second opinions on diagnostic images or sharing patient data to more accurately diagnose and treat illness. This kind of real-time data sharing could bring better care to people who have little access to medical facilities or other standard health care technologies.
However, along with this kind of access, mHealth brings with it a slew of concerning ethical issues. I explored a number of these in a 2014 essay for the Center for Digital Ethics and Policy — among them poor stewardship of private data, lack of data security, and conflicts of interest between app developers and patients.
Fortunately, this year a group of medical professionals primarily based in low-resource settings, ethicists, app developers, World Health Organization (WHO) representatives and other high-profile individuals met in Geneva, Switzerland at the Brocher Fondation to work intensively on getting to the core of the ethical concerns of this burgeoning technology.
Led by Lee Wallis, Head of Emergency Medicine for the Western Cape Government, University of Cape Town and Stellenbosch University in South Africa, and Lucie Laflamme, former Head of the Department of Health Sciences at the Karolinska Institutet in Sweden, this ground-breaking workshop sought to address ethical concerns through shared perspectives and free discourse. The final product of the workshop will be presented later this year, but as a participant I was allowed a first-hand look at how this group tackled the complexity of image-based mHealth.
Sorting Out Ethical Issues Posed by Image-Based mHealth
Speakers representing different parts of the mHealth equation — caregivers, patient advocates, app developers, ethicists and others — gave more than thirty workshop attendees insight into the technology's many facets. Every stage of mHealth was explored, from app development and implementation all the way through follow-up and scale-up. The workshop tackled issues related to patient safety, autonomy and justice, as well as other concerns generated by the various presentations and open-room discussions.
While there are a number of ethical issues surrounding the transfer of image-based data from the treatment site to other medical professionals for processing and/or opinions, one stood out beyond the others — patient privacy.
And, within the many-faceted discussions regarding protection of private data, debate centered around the use of the controversial, yet in many cases critical, communication application, WhatsApp.
Doctors on the ground in areas with few health resources are exchanging information through WhatsApp, an app that is fraught with privacy concerns. Since being purchased by Facebook nearly five years ago, the app has undergone a number of iterations — the latest being the addition of targeted ads, which of course are enabled through data harvesting.
Doctors in remote settings find WhatsApp to be a simpler way to communicate patient data with colleagues who have access to more elaborate resources. The genius of WhatsApp for medical providers is that it doesn’t require them to have a phone designated for medical use only. Providers in low-resource settings often don’t have the funding for additional phones and juggling a personal and a work phone in the field can become cumbersome and problematic. Since WhatsApp was thought to be relatively secure when coupled with touch or face identification, doctors could simply download the app to their personal phone.
However, in February of this year, just weeks after the workshop ended, news broke of a security bug that lets anyone bypass WhatsApp's touch or face identification features to grab data. While WhatsApp developers are working to close the vulnerability, the episode points to inherent weaknesses in the app, and to the frailty of technology that the health care community must take into consideration when examining mHealth solutions.
Another selling point of WhatsApp for medical settings was that sensitive patient information could be deleted within seven minutes of sending, minimizing the chance that it would remain active on someone's personal or business phone. However, a hack discovered earlier this year showed that data sent on WhatsApp could be easily retrieved — even after deletion.
These issues, coupled with Mark Zuckerberg's announced plan to integrate Instagram, Facebook Messenger and WhatsApp later this year, will fold Facebook's iffy privacy practices into WhatsApp's bag of problems and pose serious issues for doctors in low-resource settings who are trying to leverage the ease and power of the app to bring better care to patients.
All of the problems we're seeing with WhatsApp go back to the workshop's most critical insights on patient autonomy and safety. As Lucie Laflamme stated during one of the sessions, "How many more cases do we need to see before we protect patient autonomy? It's our responsibility as authorities in the health care sector to help prevent this from happening."
She is, of course, correct.
But there is so much to consider when balancing the need to bring sufficient health care into remote areas with the need to protect privacy. The question often becomes, Will we allow patients to go without basic health care services because we can’t adequately protect data? It’s a crucial — and pressing — issue in many parts of the world. And yet, it’s difficult to reach consensus on some of these most basic concerns, partially because there are so many different perspectives on data safety and privacy throughout the world.
The Output: Finding Ethical Consensus on Global Privacy Issues
Keymanthri Moodley, Professor in the Department of Medicine and Director for the Centre for Medical Ethics & Law at Stellenbosch University, mentioned during a workshop discussion session that autonomy is viewed in various ways in different countries. Because of this spectrum of perspectives, she cautioned that we must be careful not to impose a specifically North American view of this quality on a global level.
Her point is well taken.
It's easy for homogenous groups to forget to look beyond their own limited perspectives. That's why groups like the one convened at the Brocher Fondation are critical to any exploration of mHealth ethics on a global scale. Attendees came from around the world, including Finland, Sweden, Switzerland, Germany, the United States and Canada, along with countries in Africa and South America. Discussion groups were assigned specifically to include as much worldwide representation as possible within each group.
Exploring mHealth ethics from a global perspective is the right way to manage the ethical lifecycle of mHealth products, beginning with initial app development and continuing through scale-up.
Bringing together the entire chain of people affected by mHealth technology, from those who develop it to those who use and benefit from it, allows for a broad-spectrum analysis of issues along the entire lifecycle of the technology. Ensuring that those people are a diverse — and global — population means subsequent investigations will draw on varying perspectives, yielding a stronger, more complete set of protective standards.
We must face facts: Our world is no longer defined by local, state, or even continental borders. When we deploy technology like mHealth, whether for the benefit of low-resource or high-resource areas, we must do so with ethical guidelines and frameworks that can exist as comfortably on one side of the globe as another.
Knowing the amount of insight that the Brocher workshop generated makes the final product from this multinational team greatly anticipated. Whatever document is created from the hard work and perspectives of the attendees is sure to be a pivotal springboard to an even more inclusive global conversation about the ethical implications of image-based mHealth.
Nikki Williams is a bestselling author based in Houston, Texas. She writes about fact and fiction and the realms between, and her nonfiction work appears in both online and print publications around the world. Follow her on Twitter @williamsbnikki or at nbwilliamsbooks.com.