GenAI can revolutionise public healthcare but ‘guardrails by design’ are needed to protect patients

The next generation of ethical Generative Artificial Intelligence (GenAI) provides new hope for an equitable healthcare revolution – but advances in technology must never come at the cost of patient rights.

This was the consensus among top African and American health AI experts.

“The fundamental issue in healthcare, whether you are in Sub-Saharan Africa, Western Europe, or the USA, is that demand outstrips supply in terms of health services, doctors, nurses, and medications. In Sub-Saharan Africa, for instance, there are 0.2 doctors per 1,000 people,” explains Dr John Sargent, co-founder of the BroadReach Group, a social impact business that has worked in over 30 countries to support governments, international NGOs, and the public and private sectors to improve health outcomes for their populations. This includes implementing one of the largest HIV treatment programmes in the world, right here in South Africa.

He says we are trying to deliver on an antiquated model of “sick care”, where there is a fixed ratio of doctors to patients. “We need to change this paradigm to be more effective by matching the supply and demand sides of our health systems in new digital ways.” Dr Sargent, who is a Harvard alumnus and former World Economic Forum Social Entrepreneur of the Year, says that while GenAI has the potential to revolutionise how healthcare supply and demand are balanced, it is not the be-all and end-all of health tech. “The aim is not to get distracted by a shiny new toy: we need to put the patient first by protecting privacy and training our models against bias. We must always remember that technology is just a tool in service of patient care and of helping the healthcare workforce improve health outcomes.”

Using GenAI to tackle specific diseases such as HIV and AIDS

Jaya Plmanabhan, chief scientist at innovation consultancy Newfire Global who trains health AI models for a living, says he is particularly excited about how large language models could be trained to revolutionise virtual expertise on diseases such as HIV and AIDS. “We call these ‘Role Specific Domain Models’ and they have the potential to be programmed to know everything about a particular disease, to better guide healthcare professionals on how to treat patients. This is a tremendously exciting prospect in the mission to end new HIV infections by 2030.”

These Private Language Models (PLMs) become oracles on a subject and are especially useful in helping solve hard problems in HIV management, such as loss to follow-up – a term for patients who drop off treatment. “Trying to find patients is critical to ensure that they don’t become resistant to drugs due to skipping doses. We can make our outreach much more engaging through conversational messages in their mother tongue and this can help us get people back into the clinic and back into care,” explains Ruan Viljoen, Chief Technology Officer of the BroadReach Group.

Start with the problem, not the solution

“There is a quote that says we should fall in love with the problem, not the solution, which in this case is AI,” says Viljoen. “I believe the biggest challenge is still health inequity – healthcare access can vary depending on race, location, or age.”

Viljoen said GenAI can help solve practical problems, such as frontline healthcare workers being overburdened and not having enough time. “What are the repetitive, administrative tasks that are stealing their time? For instance, GenAI can help nurses with automated note-taking in patient interviews, relieving an administrative burden. The goal is not to replace the role but to free up their time for value-added work.”

One of the greatest uses of AI in health is to help healthcare workers focus on the next best action. “We can use large datasets and extract insights to help healthcare workers, delivered via easy-to-digest and secure messaging like emails or text messages. This is nothing new – we’ve done this in some form for nearly a decade using our AI-enabled platform, Vantage. What I’m most excited about, is how we can augment the quality of the interactions to bring together human and artificial intelligence.”

Heeding the risks and creating guardrails

Vedantha Singh, an AI ethics in healthcare researcher and virologist from the University of Cape Town, said the top ethical considerations for AI in healthcare are privacy, accuracy, and fairness. She urged that all AI systems should start with guardrails and ethics within their foundational design.

“There is a perception that there are no regulations for the use of AI in healthcare, but to assume we are operating in the wild west is not true. International bodies are sharing guidelines and regulation is slowly evolving – including in Africa. Egypt, Rwanda and Mauritius already have strong AI policies,” says Singh. This includes an emphasis on human labour not being completely replaced and giving patients agency over how their data is used.

Singh says that companies must embed ethical guardrails, or ‘guardrails by design’, in their health products from the start. Plmanabhan adds that GenAI can reduce costs and personalise care, but it must be used carefully. “For example, if the data is biased, the model will be biased. GenAI can also be used to create fake patient profiles to commit fraud.” Unbiased, quality data which complies with regulations such as HIPAA, POPIA, and GDPR must be prioritised.

Plmanabhan emphasises the importance of patients giving informed consent, knowing how GenAI is being used on their data. “We need to stay committed to immovable core principles – we cannot compromise on the human in the middle of it all.”

Reaching the hardest to reach patients

Viljoen says GenAI is not just improving healthcare for urban patients. Those in deeply rural areas could benefit too.

“Internet connectivity and satellite communication are becoming more ubiquitous. A few things provide hope: big cloud providers are providing more ‘edge computing’ for rural areas, the mobile phone is becoming a very powerful computer in the pockets of people all around the world, and small rural clinics can use smaller GenAI models which require smaller amounts of data and computing power – they don’t need to use ChatGPT,” says Viljoen.

Plmanabhan explains that there are secondary GenAI models that can function offline. The primary models remain online, with the secondary models sending information back to the primary model once connectivity is restored.
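This offline/online split can be sketched as a simple store-and-forward pattern: the secondary model runs locally and queues its results until the primary is reachable again. The sketch below is purely illustrative; the class and function names are our own assumptions, not any real product’s API.

```python
import json
import queue


class EdgeModelClient:
    """Hypothetical store-and-forward client for an offline-capable
    secondary model: inference runs locally, and results queue up
    until the primary model is reachable again."""

    def __init__(self, local_model, primary_send):
        self.local_model = local_model    # runs fully offline
        self.primary_send = primary_send  # callable; raises ConnectionError when offline
        self.outbox = queue.Queue()       # results awaiting sync

    def infer(self, record):
        # Local inference needs no network; the result is queued for later sync.
        result = self.local_model(record)
        self.outbox.put(json.dumps({"input": record, "output": result}))
        return result

    def sync(self):
        """Flush queued results to the primary model; returns how many were sent."""
        sent = 0
        while not self.outbox.empty():
            payload = self.outbox.get()
            try:
                self.primary_send(payload)
                sent += 1
            except ConnectionError:
                self.outbox.put(payload)  # still offline; retry on the next sync
                break
        return sent
```

A clinic device would call `infer` throughout the day and `sync` whenever connectivity returns; real deployments would add persistence and retry policies on top of this skeleton.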

Hope for an equitable healthcare revolution

GenAI can increase affordable and equitable healthcare through the automation of routine tasks. To create a world where more equitable healthcare exists, it is critical to establish strong partnerships between donors, policy makers, researchers, and healthcare implementers.

Viljoen concludes, “We need to experiment rapidly with AI, and deploy cautiously. It’s an incredible time to work in health technology and to see how we can use it to at last achieve health equity.”


Opinion Piece: Why now is our moment to leverage Generative AI for Good in Public Healthcare

By Dr John Sargent, co-founder of Vantage Technologies and the BroadReach Group

Open any news source today and no doubt you will read about the risks or rewards of Generative AI models like ChatGPT. There seem to be frenzied predictions about how this single technology will change the world for good or evil. My work, over the past 20 years, has been in health equity and how we can harness technology to supercharge health systems and empower healthcare workers to support their patients better. Through this lens, I take a cautiously optimistic, sceptically excited view of this and other emerging technologies. Here is why.

Data, analytics, artificial intelligence, machine learning and other technologies have been used to manage health systems and population-level health outcomes for years. Within Vantage Health Technologies alone, our AI-enabled solutions have been embedded within health systems and supporting healthcare workers around the world for nearly a decade. What we are seeing today is the popularisation of ChatGPT – an exciting AI model – rising to the mainstream. Within healthcare – where lives are on the line – we need to assess these emerging technologies in a thoughtful and balanced manner and apply the most appropriate ones to help us achieve our true north: a world where access to good health enables people to flourish.

1. It is not about technology, it never was

Our first principle must always be: does this improve the lives of patients and increase the healthcare workers’ capacity to provide care? As health tech enthusiasts we must always remember that technology is just a tool in service of that goal, rather than getting caught up in the next shiny new thing. We need to be grounded in this principle and thoughtful in how to avoid the pitfalls and, instead, leverage the potential of the emerging technology. In the case of generative AI, I think both pitfalls and potential exist.

2. Defining ‘Ethical AI’

Health data is some of the most sacred and private data we have custodianship of and, combined with fast-evolving AI, presents both terrifying opportunities for misuse and game-changing opportunities for care advancements. We must make it our mission to build trustworthy AI tools as health-tech moves more and more into the mainstream. 

Fundamentally, I think it is important to have a common understanding of the risks, limitations and pitfalls of unleashing AI on health data. The concept of ‘ethical AI’ is currently being widely debated by diverse thought leaders. For example, Microsoft looks at the topic across six principles: accountability, inclusiveness, reliability and safety, fairness, transparency, and privacy and security. A recent PwC paper prepared for the World Economic Forum defined it as: ‘Ethical AI should promote and reflect the common good such as sustainability, cooperation and openness. As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides.’ Mira Murati, Chief Technology Officer for OpenAI, in turn says, ‘AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values’.

For me, therefore, ethical AI for healthcare means keeping our true north – of patient care – at the forefront, ensuring that we have guardrails in place to protect their health, privacy and integrity so that technologies really can empower human action. As an industry, we should acknowledge that we are in the early phases of what must be a multi-phase approach in implementing these tools into everyday life. Industry collaboration across all types of stakeholders is being led by groups such as the Coalition for Health AI which brings together a community of academic health systems, organizations, and expert practitioners of artificial intelligence (AI) and data science. It is essential to have standards and frameworks in place that are consistently reviewed and updated to ensure adherence and to educate end-users on how to evaluate these technologies to drive their adoption.

3. Garbage in, garbage out

All data is inherently biased. Data is the product of transforming, capturing and storing information for human consumption, and things can go wrong during any of these steps. As the old adage goes: garbage in, garbage out – and with health data these blind spots can literally be life or death. The quality of the data we collect and store is critical. How and on what we train our algorithms to mine within the data can also be tainted by oversights, prejudices or errors. Biased and distorted health data can perpetuate discrimination and health inequity in society.

We should all learn how to identify and name the many forms of bias. Neil deGrasse Tyson, in his Masterclass on the types of bias, tells us that distortion can be created by wilfulness, but it can also be created by our own limitations of training, knowledge, vocabulary and assumptions. Intentional or not, biases can hurt people. Poor management of data privacy can also hurt people, causing patients to be exposed and potentially exploited. It is therefore essential to have guardrails and mechanisms to clean the data before it is used, so that it accurately represents the population it is describing.

In the case of the commercially ‘free to use’ Generative AI models, this bias can proliferate unchecked. In the example of ChatGPT, its model was trained on digital content found across the internet up until September 2021[1].  So, this would not be suitable data upon which to draw conclusions for today’s health challenges.  For context, the data utilized is from before mass COVID vaccinations were available within many developing health systems. Populations change, data is dynamic, and this particular model has not yet caught up. It simply will not have the nuanced and specific information needed to generate valid conclusions for certain use cases. So, while it can write convincing instructions, the output could be too biased to be valid and ultimately dangerous.

4. Now you are speaking my language

However, add the underlying natural language processing power of OpenAI’s model within a discrete, validated and secure data set and there is real opportunity for pinpoint efficiency and true empowerment of the health worker. We are exploring this within our own Vantage Workforce Empowerment solution. The AI-enabled solution (developed prior to ChatGPT) is used by thousands of healthcare workers in Africa. Weekly, over 24 000 emails are sent from our system to guide both frontline and management healthcare workers with their next best actions, personally derived from their health system data. This data is housed securely within Vantage, where it is quality checked and managed for reliability. We are experimenting with Generative AI to enhance this work with more personally crafted language so that its guidance is more precise and empowering for each unique user.

By adding Generative AI natural language capabilities, we can send even more tailored, personalised messages in the tone that we’ve learnt the recipient will respond to best, with the kinds of prompts that resonate with the recipient. Generative AI is exceptionally good at mimicking how humans present written information and can even imbue it with a real sense of confidence and persuasiveness. This could be key to motivating already stretched healthcare workers and guiding them to action with fidelity – and across many languages.

Additionally, we can harness the insights we gain about a patient’s unique circumstances, and craft tailored messages to the patient and their healthcare workers to improve care. We can use everything we know about a patient’s social determinants of health, such as language barriers (e.g., if Spanish or isiZulu is their first language), transport issues (not having a car or money for public transport) or food security (not being able to eat before taking their meds), to tailor solutions for them to keep them healthy. For instance, we use our algorithm to predict when a patient is likely to stop treatment and intervene before it is too late, in language that is likely to resonate with them. This allows us to improve patient retention rates, which ensures better health outcomes and lower costs.
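To make this kind of prediction concrete: a loss-to-follow-up model combines recency-of-care and social-determinant signals into a risk score that triggers outreach above a threshold. The sketch below is a toy illustration under our own assumptions; the feature names and weights are invented for the example and are not Vantage’s actual algorithm.

```python
from dataclasses import dataclass


@dataclass
class PatientVisit:
    """A few illustrative signals a retention model might use."""
    days_since_last_visit: int
    missed_appointments: int
    has_transport: bool
    food_secure: bool


def follow_up_risk(v: PatientVisit) -> float:
    """Toy risk score in [0, 1] for loss to follow-up. The weights here
    are illustrative only; a real system would learn them from
    programme data rather than hard-code them."""
    score = 0.0
    score += min(v.days_since_last_visit / 90, 1.0) * 0.4  # overdue for care
    score += min(v.missed_appointments / 3, 1.0) * 0.3     # missed visits
    score += 0.0 if v.has_transport else 0.15              # transport barrier
    score += 0.0 if v.food_secure else 0.15                # food insecurity
    return round(score, 3)


def should_intervene(v: PatientVisit, threshold: float = 0.5) -> bool:
    """Flag the patient for tailored outreach when risk crosses the threshold."""
    return follow_up_risk(v) >= threshold
```

In practice the flagged patients would then receive the conversational, mother-tongue outreach described earlier, before they drop out of care entirely.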

Where to from here?

AI can help us revolutionise the delivery of very useful “next best action” prompts to health workers and patients, enabling them to make better decisions and save more lives. Generative AI is here to stay – so let’s harness its potential for good to create a world where health equity is a reality.

If we can meet people where they are at, it can be a real game-changer for how we keep cancer, tuberculosis or HIV patients on treatment. If we can keep people on treatment, we can prevent the drug-resistance that can develop when a patient misses a dose.

Going forward, the key is to ensure that these interventions can carry on with proper data and AI management guardrails in place, while never losing sight of our true north – our patients. This is a top priority I will continue to pursue in my conversations with other global health leaders who are committed to AI for Good and who continue to strive, like us, to create a world where access to good health enables people to flourish.
