
Exploring AI in Health Care and the Promise and Peril for Patients and Society

Quote from Guadalupe Hayes-Mota on Ethics and AI in Health Care: “The future of AI in health care isn’t AI versus doctor; it should be thought of as doctors with the support of AI...By using AI to predict diseases and help accelerate treatment, patients will be able to better take control of their health.”


The Markkula Center for Applied Ethics’ new director of bioethics, Guadalupe Hayes-Mota, presented as part of SCU’s Grand Reunion on October 10, 2025. Hayes-Mota spoke about AI’s impact on health care, both its wonders and its challenges.

Artificial intelligence in medicine is no longer a distant concept. It is already diagnosing illnesses, monitoring symptoms, and keeping patients informed about what is happening inside their bodies.

Hayes-Mota touched on a wide range of topics surrounding AI, emphasizing that its impact on health care could either be transformative or harmful, depending on how it’s designed and implemented. Since many people still have a mixed, or unclear, understanding of the technology, Hayes-Mota explained what AI is currently doing right, what it will do in the future, and its perils and ethical risks. He also introduced the CARES framework, an approach he created to ensure AI has a positive impact on health care. 

Hayes-Mota emphasized that AI is helping to catch diseases earlier and empowering patients to take control of their health without second-guessing their condition. AI can help doctors see things that they couldn’t before. He cited an example from Google: “Google’s DeepMind AI detects breast cancer with greater accuracy and far earlier than expert radiologists, cutting false negatives by 9%. That means thousands of women who might have gone undiagnosed are now receiving timely treatment.”

This doesn’t replace radiologists; it extends what they can see and gives them a tool to complete their work more efficiently.

Hayes-Mota also touched on the integration of genomics, lifestyle, and wearables. Since we already wear smartwatches, rings, and other devices that can track our health care data, what if we had an AI system that used this information to learn from our bodies? 

Think about it: AI-powered systems could merge our data, such as DNA, information from sleep and diet trackers, biosensor readings, and medical records, to build a full picture of a patient’s health. This information could predict disease years in advance, using algorithms to spot risk markers. These future care models would continuously update, using the data to generate recommendations for maintaining wellness.

“The future of AI in health care isn’t AI versus doctor; it should be thought of as doctors with the support of AI,” Hayes-Mota said. By using AI to predict diseases and help accelerate treatment, patients will be able to better take control of their health.

Hayes-Mota then turned to the perils and ethical risks of AI, noting that AI systems are effective but not empathetic. “They can mimic compassion through words, but they cannot feel or imagine it,” he emphasized. If we turn to AI to handle too much patient interaction, we risk creating a health care system that is fast and flawless, but ultimately hollow.

The "CARES" framework, created by Hayes-Mota, outlines five principles he believes companies should consider when using AI to answer medical questions. First, Clinical Accuracy must be guaranteed, with systems trained on verified medical data and containing language that patients understand. 

The second principle is Accessibility Matters. AI should serve everyone, across all languages and cultures, so no patient is left behind. “Accessibility also means designing for low-resource settings, low bandwidth systems, multilingual interfaces, and simple mobile tools that can reach rural areas or marginalized communities,” Hayes-Mota stressed.

Responsibility and Human Oversight states that AI can support care, but it should never lead it. The human doctor, along with the patient, always stays in charge and makes the decisions. Accountability must remain with clinicians, researchers, and institutions, not with the algorithms themselves.

Ethics in Data Privacy states that privacy is non-negotiable. Consent, transparency, and security must come first, and every patient should have the right to understand how their data fuels AI. Hayes-Mota noted that protecting personal data, in these cases, is protecting human dignity.

The last principle is Social Accountability, which states that companies should be accountable for their actions, including their mistakes. Transparent reporting is critical, and companies must undergo independent audits.

Following the presentation, I had the opportunity to talk with Hayes-Mota about additional insights on AI today and in the future.

“By merging the power of algorithms with the empathy and experience of physicians, AI isn’t replacing the human element of care but enhancing it, ensuring that decisions are informed by both data and compassion. Medicine will evolve from treating disease to preserving well-being, guided by AI systems that reflect our best human values: accuracy, equity, and accountability,” said Hayes-Mota.

Hayes-Mota invites us all to think differently. “Build systems that explain, not just compute. Collect data that protects, not exploits. Design with inclusion, not assumption. Measure success by the lives improved, not just by the metrics achieved.” 

The future of medicine won’t be written in code; it will be written in values. Hayes-Mota emphasized that this isn’t just a technical project; it’s a moral one.

The event recording can be accessed for replay on the Ethics Center’s YouTube Channel.

 

Diya Chaudhary ’28, a sophomore in the College of Arts and Sciences and a 2025-26 marketing and communications intern at the Markkula Center for Applied Ethics, authored this story.

 

Oct 21, 2025