Episode 163: "What If AI Gets It Wrong? Patients at Risk" with Dr. Jay Anders

Jay Anders, MS, MD - LinkedIn

Medicomp Systems

🔍 Executive Summary:

In this powerful episode, host Michael Mann sits down with Dr. Jay Anders, Chief Medical Officer at Medicomp, to dissect the evolving relationship between AI and clinical decision-making. With AI advancing at breakneck speed, the conversation centers on trust, accuracy, and responsibility in medical applications. Dr. Anders emphasizes the importance of keeping physicians in the loop, warns against the risks of unchecked AI hallucinations, and critiques the current over-reliance on generative AI tools not tailored for real clinical settings. Together, they call for thoughtful integration, robust regulation, and a recommitment to human-centered care.

🧠 Key Takeaways:

  1. AI is helpful — but not a replacement for clinicians.
    AI tools can augment diagnosis and patient care, but they lack the nuance, context, and accountability of a trained medical professional.

  2. "Hallucinations" in AI can be dangerously misleading.
    Generative AI systems may fabricate plausible-sounding—but incorrect—information. These errors can end up in medical records, affecting treatment, billing, and even insurance eligibility.

  3. Trust and transparency are critical.
    Physicians currently lack the trust needed to rely on AI-generated diagnoses. Without understanding how AI arrives at its recommendations, clinicians hesitate to fully embrace it.

  4. Ambient listening and AI scribes are not plug-and-play.
    While some clinicians like AI scribes, others find they introduce errors and require more time for review and correction — defeating their purpose.

  5. AI must adapt to real-world clinical and economic constraints.
    Many hospitals, especially smaller ones, can’t afford advanced AI infrastructure. Integration must be cost-effective and scalable.

  6. Validation and regulation are essential.
    Dr. Anders suggests a regulatory framework akin to medical device approvals, with graded certification levels to ensure safety and reliability.

  7. Clinicians must lead the way.
    Doctors should be at the center of AI development and deployment. Asking them what they need — not dictating from the top down — is the key to building tools that work in the real world.

✍️ Article: When AI Gets It Wrong: The Hidden Risk to Patients and the Sacred Role of Clinicians

By Michael Mann, Host of Planetary Health First Mars Next

Artificial Intelligence is revolutionizing every corner of society — and healthcare is no exception. But when it comes to treating real human beings, the stakes couldn’t be higher. In this candid conversation with Dr. Jay Anders, Chief Medical Officer at Medicomp, we explore one of the most pressing questions of our time: What happens when AI gets it wrong?

It’s not just a technical glitch — it’s a matter of life and death.

A Tool, Not a Replacement

Dr. Anders is no stranger to technology. With over 20 years in clinical practice and another 20 in digital health, he’s seen the evolution firsthand. Yet his message is clear: AI is a tool to assist — not replace — physicians.

From diagnosing diseases to writing notes, AI is being thrown at everything in healthcare. But is it actually helping? "We need to stop assuming and start asking clinicians what they really need help with," says Dr. Anders.

The Problem with "Hallucinations"

One of the biggest red flags in today’s AI systems? Hallucinations — or, as Michael Mann bluntly puts it, "AI lies." These tools can fabricate information that seems accurate but is completely false. When these errors sneak into a patient’s chart — say, attributing liver cancer to someone whose parent had it — the consequences can be severe, from mistreatment to denial of insurance.

Worse still, many clinicians are lulled into trusting these systems, only to find out too late that something was incorrect.

Where AI Shines

AI’s potential isn’t all doom and gloom. According to Dr. Anders, AI can excel at pattern recognition and alerting clinicians to anomalies — like a sudden change in blood glucose from wearable data, or surfacing the top 5 patients who need urgent follow-up.

Used wisely, AI can streamline care, reduce clinician burden, and improve outcomes. But only if the right data is surfaced, at the right time, in the right context.

Regulation and Innovation: A Balancing Act

Dr. Anders advocates for regulation similar to that used for medical devices. Just as a pacemaker undergoes rigorous testing, AI tools in healthcare should be validated, certified, and overseen.

"Innovation without validation is dangerous," he notes. "We’re dealing with people’s lives."

The UK’s NHS has begun certifying select AI systems, treating them as regulated tools. The U.S. is behind but making slow progress. The message? We need standards that foster innovation without sacrificing patient safety.

The Sacred Role of the Clinician

Despite all the hype around AI superintelligence, one truth remains unshaken: physicians are irreplaceable. No algorithm can replicate the human connection, intuition, and long-term relationship a doctor builds with their patient.

As Dr. Anders says: "The computer isn’t responsible for patient care. The physician is."

Find us & follow us:

Michael Mann's LinkedIn

Follow us on YouTube

TikTok

Instagram

Twitter / X

Apple

Spotify

😍 💕 🌍 💜 😊 🚀 The most important thing is to subscribe so you stay updated with our latest podcasts, newsletters, etc.

Planetary Health First Mars Next is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

We might just lift off to Mars if the orbit is right! 😍 💕 🌍 💜 😊 🚀

  • In love & kindness,

  • Michael Mann, (😍 💕)

Disclaimer: The views of the participants are their own and do not reflect the views of other participants, their organizations, Planetary Health First Mars Next, or the host.

This podcast is for informational purposes only and should not be considered professional or medical advice.

In addition, if there are any mistakes or facts that need correcting, please reach out to us so we can correct the record.

Please understand that we are a self-published entity and do the best we can.

If you have an idea, an inspiring topic, or know anyone who would be a great guest for our show, please reach out to info@planetaryhealthfirstmarsnext.org
