By Dr John Sargent, co-founder of Vantage Technologies and the BroadReach Group
Open any news source today and no doubt you will read about the risks or rewards of Generative AI models like ChatGPT. There seem to be frenzied predictions about how this single technology will change the world for good or evil. My work over the past 20 years has been in health equity and how we can harness technology to supercharge health systems and empower healthcare workers to support their patients better. Through this lens, I take a cautiously optimistic, sceptically excited view of this and other emerging technologies. Here is why.
Data, analytics, artificial intelligence, machine learning and other technologies have been used to manage health systems and population-level health outcomes for years. Within Vantage Health Technologies alone, our AI-enabled solutions have been embedded within health systems and supporting healthcare workers around the world for nearly a decade. What we are seeing today is the popularisation of ChatGPT – an exciting AI model – rising to the mainstream. Within healthcare – where lives are on the line – we need to assess these emerging technologies in a thoughtful and balanced manner and apply the most appropriate ones to help us achieve our true north: a world where access to good health enables people to flourish.
1. It is not about technology; it never was
Our first principle must always be: does this improve the lives of patients and increase the healthcare workers’ capacity to provide care? As health tech enthusiasts we must always remember that technology is just a tool in service of that goal, rather than getting caught up in the next shiny new thing. We need to be grounded in this principle and thoughtful in how to avoid the pitfalls and, instead, leverage the potential of the emerging technology. In the case of generative AI, I think both pitfalls and potential exist.
2. Defining ‘Ethical AI’
Health data is some of the most sacred and private data we have custodianship of and, combined with fast-evolving AI, presents both terrifying opportunities for misuse and game-changing opportunities for care advancements. We must make it our mission to build trustworthy AI tools as health-tech moves more and more into the mainstream.
Fundamentally, I think it is important to have a common understanding of the risks, limitations and pitfalls of unleashing AI on health data. The concept of ‘ethical AI’ is currently being widely debated by diverse thought leaders. For example, Microsoft looks at the topic across six principles: accountability, inclusiveness, reliability and safety, fairness, transparency, and privacy and security. A recent PwC paper prepared for the World Economic Forum defined it as: ‘Ethical AI should promote and reflect the common good such as sustainability, cooperation and openness. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.’ Mira Murati, Chief Technology Officer of OpenAI, in turn says, ‘AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values’.
For me, therefore, ethical AI for healthcare means keeping our true north – of patient care – at the forefront, ensuring that we have guardrails in place to protect their health, privacy and integrity so that technologies really can empower human action. As an industry, we should acknowledge that we are in the early phases of what must be a multi-phase approach in implementing these tools into everyday life. Industry collaboration across all types of stakeholders is being led by groups such as the Coalition for Health AI which brings together a community of academic health systems, organizations, and expert practitioners of artificial intelligence (AI) and data science. It is essential to have standards and frameworks in place that are consistently reviewed and updated to ensure adherence and to educate end-users on how to evaluate these technologies to drive their adoption.
3. Garbage in, garbage out
All data is inherently biased. Data is the product of capturing, transforming and storing information for human consumption, and things can go wrong at any of these steps. As the old adage goes: garbage in, garbage out – and with health data these blind spots can literally be life or death. The quality of the data we collect and store is critical. How and on what we train our algorithms to mine within the data can also be tainted by oversights, prejudices or errors. Biased and distorted health data can perpetuate discrimination and health inequity in society.
We should all learn how to identify and name the many forms of bias. Neil deGrasse Tyson, in his Masterclass on the types of bias, tells us that distortion can be created by wilfulness, but it can also be created by our own limitations of training, knowledge, vocabulary and assumptions. Intentional or not, biases can hurt people. Poor management of data privacy can also hurt people, causing patients to be exposed and potentially exploited. It is therefore essential to have guardrails and mechanisms to clean the data before it is used, so that it accurately represents the population it is describing!
In the case of the commercially ‘free to use’ Generative AI models, this bias can proliferate unchecked. In the example of ChatGPT, its model was trained on digital content found across the internet up until September 2021. So, this would not be suitable data upon which to draw conclusions for today’s health challenges. For context, the data utilized is from before mass COVID vaccinations were available within many developing health systems. Populations change, data is dynamic, and this particular model has not yet caught up. It simply will not have the nuanced and specific information needed to generate valid conclusions for certain use cases. So, while it can write convincing instructions, the output could be too biased to be valid and ultimately dangerous.
4. Now you are speaking my language
However, add the underlying natural language processing power of OpenAI’s model within a discrete, validated and secure data set and there is real opportunity for pinpoint efficiency and true empowerment of the health worker. We are exploring this within our own Vantage Workforce Empowerment solution. The AI-enabled solution (developed prior to ChatGPT) is used by thousands of healthcare workers in Africa. Weekly, over 24 000 emails are sent from our system to guide both frontline and management healthcare workers with their next best actions, personally derived from their health system data. This data is housed securely within Vantage, where it is quality checked and managed for reliability. We are experimenting with Generative AI to enhance this work with more personally crafted language so that its guidance is more precise and empowering for each unique user.
By adding Generative AI natural language capabilities, we can send even more tailored, personalised messages in the tone that we’ve learnt the recipient will respond to best, with the kinds of prompts that resonate with the recipient. Generative AI is exceptionally good at mimicking how humans present written information and can even imbue it with a real sense of confidence and persuasiveness. This could be key to motivating already stretched healthcare workers and guiding them to action with fidelity – and across many languages.
Additionally, we can harness the insights we gain about a patient’s unique circumstances, and craft tailored messages to the patient and their healthcare workers to improve care. We can use everything we know about a patient’s social determinants of health, such as language barriers (e.g., if Spanish or isiZulu is their first language), transport issues (not having a car or money for public transport) or food security (not being able to eat before taking their meds), to tailor solutions for them to keep them healthy. For instance, we use our algorithm to predict when a patient is likely to stop treatment and intervene before it is too late, in language that is likely to resonate with them. This allows us to improve patient retention rates which ensures better health outcomes and lower costs.
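To make the "predict, then tailor" idea concrete, here is a minimal sketch of that pattern. Everything in it is hypothetical: the field names, risk weights, threshold and messages are illustrative assumptions, not Vantage's actual model or data.

```python
# Hypothetical sketch of a "predict drop-off, then tailor the outreach" flow.
# All field names, weights and thresholds below are illustrative assumptions.

def dropout_risk(patient: dict) -> float:
    """Toy risk score for a patient stopping treatment,
    built from a few social determinants of health."""
    score = 0.0
    if patient.get("missed_pickups", 0) >= 2:
        score += 0.4  # recent missed medication pickups are a strong signal
    if not patient.get("has_transport", True):
        score += 0.3  # transport barriers make clinic visits harder
    if patient.get("food_insecure", False):
        score += 0.2  # some medications are hard to take without food
    return min(score, 1.0)

# Tailored prompts per first language (illustrative only).
MESSAGES = {
    "en": "We noticed it may be hard to get to the clinic. Can we help arrange transport?",
    "es": "Notamos que puede ser dificil llegar a la clinica. Podemos ayudar con el transporte?",
}

def next_best_action(patient: dict, threshold: float = 0.5):
    """Return a tailored outreach message if predicted risk crosses the threshold,
    otherwise None (no intervention needed yet)."""
    if dropout_risk(patient) >= threshold:
        return MESSAGES.get(patient.get("language", "en"), MESSAGES["en"])
    return None

# Example: a patient with two missed pickups and no transport, Spanish-speaking,
# crosses the threshold and receives a message in their first language.
patient = {"missed_pickups": 2, "has_transport": False, "language": "es"}
print(next_best_action(patient))
```

In a production system the hand-written weights would be replaced by a trained model and the static message templates by generated, personally crafted language – but the shape of the intervention is the same: score the risk, then reach out before it is too late, in words likely to resonate.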
Where to from here?
AI can help us revolutionise the delivery of very useful “next best action” prompts to health workers and patients, enabling them to make better decisions and save more lives. Generative AI is here to stay – so let’s harness its potential for good to create a world where health equity is a reality.
If we can meet people where they are, it can be a real game-changer for how we keep cancer, tuberculosis or HIV patients on treatment. If we can keep people on treatment, we can prevent the drug resistance that can develop when a patient misses a dose.
Going forward, the key is to ensure that these interventions can carry on with proper data and AI management guardrails in place, while never losing sight of our true north – our patients. This is a top priority I will continue to pursue in my conversations with other global health leaders who are committed to AI for Good and who continue to strive, like us, to create a world where access to good health enables people to flourish.