AI is here for healthcare

Healthcare workers who embrace digital interventions to support equitable healthcare will be the drivers of clinically significant outcomes

by Alana Nash

It’s no secret that navigating the healthcare system has become increasingly confusing. The healthcare team used to consist of clinical staff and administrators working together to care for patients. Over time, electronic medical records have become more complex and tedious, and compliance standards have grown to require coursework and consultants to fully comprehend – leaving us with a cumbersome and inefficient bureaucracy. 

First, healthcare workers are asked to do more with fewer resources, and insurance reimbursements are declining. They are paid less to do more and struggle to keep up with workload demands and the complexity of patients’ health. At the same time, with the introduction of artificial intelligence, healthcare workers see improved efficiency, diagnosis, outcomes, and patient experience. For example, artificial intelligence helps healthcare workers detect patterns in diagnostic data, collect data from home monitoring devices, and assist with electronic medical record dictation and billing. 

Second, we are seeing a push from the patient side, where those with technology expertise feel compelled to contribute to better healthcare solutions. I’ve spoken to software engineers, data scientists, product managers, and many others in the tech realm, and they report having seen firsthand how the patient experience has become more complex. They are eager to apply their talents to make our communities a healthier place.

For example, I spoke with Rafael M Lopes, Founder & Chief Digital Health Product Strategist at Bold Digital Health, who told me that he saw how the most vulnerable patients struggle to receive quality care. The system is not inherently fair, and he could not stand by without doing something to advance health equity.

❝As a family caregiver for my disabled mom, I accompanied her to hundreds of doctor, hospital and ER visits. The opportunities to enhance health equity, digitize workflows, innovate or simplify a patient experience were endless. The system is ripe with opportunities to extract business value and humanize the experience for patients and families.❞ –Rafael M Lopes

And third, industry adoption is moving rapidly. A survey conducted by The Health Management Academy found that 47.5% of healthcare executives are currently using artificial intelligence in their workforce, and 52.5% report their health system is evaluating or considering its use. The top uses of artificial intelligence relate to revenue cycle management, clinical operations, and clinical care. Nursing is the biggest target for AI use: 82.5% of executives report their health system is currently evaluating or considering AI for nursing, as nursing remains the biggest labor challenge. Those in a patient-facing role must understand what artificial intelligence is and how they can make a positive impact. 

These three sectors – healthcare workers, patients, and industry – all come together in addressing and adapting to the advancements of artificial intelligence. 

Mitigating implicit bias in AI leads to better healthcare outcomes

The boom of AI in health tech brings solutions as well as a new set of problems that must be acknowledged and addressed; depending on how it is used, AI can either escalate or improve underlying problems in the healthcare system, particularly in providing equitable care. AI will not replace healthcare workers, but if utilized properly, it can help them improve patient outcomes. 

Healthcare organizations (hospitals, outpatient centers, wellness centers, homecare, telehealth companies) are rapidly adopting AI products. They have a responsibility to mitigate the effect of implicit bias in all interactions and points of contact with patients.

Implicit bias has the potential to impact:

  • Data collection and outcomes of care

  • Whether patients will return for services and/or seek care at the organization in the first place


For example, a white physician may describe an African-American patient who didn't take medications as prescribed as “nonadherent,” yet that same physician might say a white patient who didn’t follow instructions for taking the medication “forgot the timing” or “needs additional instruction.” These subtle attributions may cumulatively affect all future encounters with those patients, how other clinicians view that patient, and how their medical record reads. 

Implicit bias in healthcare has been a long-studied topic, but its role in AI requires discussion and oversight to ensure equitable care. Health tech relies on machine learning, in which algorithms are trained to find patterns in data sets. Clinicians will be among the main data collectors, as they spend the most time interacting with patients and documenting those interactions. AI presents itself as a solution for healthcare, but the implicit biases of our data collectors cannot be ignored. If bias plays a role in a healthcare professional’s decision, it will be reflected in the output an AI algorithm provides. Healthcare workers can and should address their own implicit biases before AI models can help them achieve health equity. 
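One concrete way the point above plays out in practice is a subgroup audit: comparing a model’s error rates across patient groups to see whether biased inputs have surfaced in its outputs. The following is a minimal, hypothetical sketch – the data, group labels, and helper function are illustrative only and do not come from any system discussed in this article:

```python
from collections import defaultdict

def false_negative_rate_by_group(records):
    """Per-group false-negative rate from (group, true_label, prediction) triples.

    A false negative is a true case (label 1) the model predicted as 0 –
    the kind of miss that matters most when screening for conditions like sepsis.
    Hypothetical audit helper; field names and groups are illustrative.
    """
    positives = defaultdict(int)  # true cases seen per group
    misses = defaultdict(int)     # true cases the model missed per group
    for group, label, pred in records:
        if label == 1:
            positives[group] += 1
            if pred == 0:
                misses[group] += 1
    return {g: misses[g] / positives[g] for g in positives}

# Toy data: the model misses more true cases in group "B" than in group "A".
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 1, 1), ("B", 0, 0),
]
rates = false_negative_rate_by_group(records)
# Group A misses 1 of 3 true cases; group B misses 2 of 3 – a disparity worth investigating.
```

A gap like this does not by itself identify the cause, but it flags where biased documentation or care patterns may have leaked into the training data, which is exactly what the Duke team later found in their sepsis work.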


Companies taking the lead in addressing implicit bias

Two programs hold promise in addressing implicit bias where their peers are reluctant or unprepared to act.

Duke University Hospital’s emergency department openly acknowledges the risks of implicit bias. In 2019, the department developed an algorithm to help predict childhood sepsis. Three years into the project, the team at Duke discovered that bias had slipped into their algorithm: doctors took longer to order blood tests for Hispanic children who were eventually diagnosed with sepsis. The team acknowledged this oversight and immediately began a new round of testing that showed success in combating underlying biases. They are now building a more diverse team – anthropologists, sociologists, community members, and patients working together – to root out bias in Duke’s algorithms.

Another notable organization is MedCognetics, which uses AI for cancer detection. It combines medical imaging technology and AI to assist radiologists in reviewing mammograms for early signs of breast cancer. Ron Nag, CEO and cofounder, has focused on building this AI system as an unbiased support tool for physicians in identifying signs of breast cancer across all ethnicities. This is done by training its AI program on diverse patient populations, allowing the system to learn what is normal and what is not in every patient population. Last year, MedCognetics received a grant from the National Institutes of Health’s “Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity” program. The grant has allowed the company to expand its cancer-detecting capabilities for all patients, protecting against racial bias.

Initial recommendations

As of now, the FDA does not require health tech companies to submit subpopulation-specific data as part of device applications, but it highly encourages them to do so. The FDA has released a recommended protocol on how to collect and submit data and flag disparities. 

At the same time, hospitals and researchers have formed the Center for Applied AI at Chicago Booth to share best practices and have developed “playbooks” to prevent bias from slipping through. 

The Algorithmic Bias Playbook, for instance, addresses the need for product labeling: clear labeling of potential disparities in a health tech product could promote health equity. If a company seeks FDA approval for an AI-enabled product but has not tested it in diverse populations, the agency could require the developer to note this omission in the labeling. This would alert healthcare systems that the product could be inaccurate for some patient populations and could result in disparities in care or outcomes. Healthcare providers could then take steps to mitigate the risk or choose a different product.

But protocols and playbooks fall short when they remain mere recommendations; a lack of meaningful regulation leaves room for unintentional bias.

Moving Forward

The integration of technology and healthcare presents both opportunities and challenges. As healthcare systems adopt AI to improve patient care, it’s important to recognize implicit biases within the healthcare system. This requires assessing yourself, your colleagues, and the organization you are part of. Collaboration among healthcare workers, health tech workers, and regulatory bodies is essential to develop strategies to mitigate bias. 

By acknowledging and actively combating biases, we can use AI as a way to improve outcomes and equitable care, creating a healthcare system for all members of our community.


From the author, Alana Nash

“As a healthcare worker myself, I am experiencing these changes firsthand and understand the need to be a leader in the advancements that are taking place. I am passionate about learning how technology can improve the pain points in our current healthcare model. I believe expanding our network to collaborate with professionals beyond a healthcare background is the next step in making our communities a healthier place and have recently presented a webinar on Healthcare Data Equity with Rafael M Lopes and Brandy Wilkins.”
