
June 15, 2025 · 6 min read

AI Ethics in Law and Healthcare:

How Smarter Tools Can Lead to Riskier Results Without Oversight

By Nikki Mehrpoo | The MedLegal Professor™


In my work with leaders across law, medicine, and insurance, I’ve seen firsthand the rush to adopt artificial intelligence. While the promise of efficiency is alluring, I must caution you: a dangerous gap is widening between what these powerful tools can do and how well we, as professionals, understand them. Relying on AI without rigorous oversight isn't just embracing innovation—it's exposing your license, your clients, and your reputation to profound risk.

I want to share some critical insights into these risks, not to create fear, but to empower you with the knowledge to lead responsibly in this new era. The conversation begins with a story that is becoming all too common.


It Looked Like a Real Case. It Wasn’t.

I recently learned of a partner at a law firm who submitted a demand letter drafted by AI. The citations it produced were confident and precise. The problem? They were also completely fabricated.

The client relied on it. The opposing counsel reviewed it. But when the case moved to litigation and those citations made it into the official complaint, the court spotted the error. One case cited didn’t exist. Neither did the others. The fallout was swift and severe. This isn’t a fictional tale; this is happening right now in our industry.

This is more than a technical error; it’s a professional risk event. And it points to a startling statistic from a 2024 MIT study: over 64% of professionals using AI cannot explain how their tools generate answers. Let me be clear: that is not innovation. That is exposure.


Speed Without Oversight Is Malpractice in Motion

To understand the mechanics of this risk, I want to draw your attention to a critical study released in March of this year by researchers from the University of Minnesota and the University of Michigan. In their paper, AI-Powered Lawyering, they tested AI-assisted legal work under real-world conditions. Their key takeaway confirms what I have been teaching for years: speed without oversight is malpractice in motion. Oversight is no longer a best practice—it is an absolute legal and ethical requirement.

The study compared two main types of AI, and the difference between them is something every professional must understand.


What the Study Reveals About AI Hallucinations and Legal Accuracy

The researchers assigned complex legal tasks to three groups of law students: one with no AI, one using a Retrieval-Augmented Generation (RAG) model, and one using a legal reasoning model.

Here are the results:

  • Vincent AI (RAG) users were fastest and most accurate.

  • o1-preview (reasoning model) users showed advanced logic but created more hallucinations.

  • No-AI users underperformed both groups.

  • RAG models consistently produced verifiable results with lower risk.

The key difference lies in how these models work. A Retrieval-Augmented Generation (RAG) model functions like a highly trained legal assistant. It retrieves information from a closed set of real documents and then generates answers based only on that verified data. As I like to say, “It’s the difference between quoting a law book and making one up.” Because it pulls from trusted data, the risk of an AI hallucination—inventing a fake case name, a repealed statute, or a made-up policy—drops significantly.
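To make that difference concrete, here is a minimal Python sketch of the RAG pattern described above. Everything in it is illustrative: the corpus, the keyword retriever, and the document names are placeholders, and a production system would use vector search plus a language model call. The principle is the same, though: the draft answer is built only from documents pulled out of a closed, verified set, and if nothing verified is found, the system says so instead of guessing.

```python
from dataclasses import dataclass

@dataclass
class Document:
    citation: str  # e.g., a case name or statute section (placeholder values below)
    text: str      # verified source text

# A closed, trusted corpus: only documents a human has verified go in.
TRUSTED_CORPUS = [
    Document("Sample v. Example (2021)", "Holding text pulled from a verified reporter."),
    Document("Labor Code Sec. 4600", "Verified statutory text on medical treatment."),
]

def retrieve(question: str, corpus: list[Document], top_k: int = 3) -> list[Document]:
    """Toy keyword retrieval; real RAG systems use vector (semantic) search."""
    scored = [
        (sum(word.lower() in doc.text.lower() for word in question.split()), doc)
        for doc in corpus
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def grounded_answer(question: str) -> str:
    """Draft an answer using ONLY retrieved, verified documents."""
    sources = retrieve(question, TRUSTED_CORPUS)
    if not sources:
        return "No verified source found. Escalate to a human; do not guess."
    context = "\n".join(f"[{doc.citation}] {doc.text}" for doc in sources)
    # In a real system, `context` goes to the language model with an instruction
    # to answer only from these sources and to cite them by name.
    return f"Draft answer grounded in:\n{context}"

print(grounded_answer("What does the statute say about medical treatment?"))
```

The design choice that matters is the one in the last function: when retrieval comes back empty, the system refuses to answer rather than filling the silence with something plausible.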

A Legal Reasoning Model, on the other hand, is trained to simulate legal logic based on patterns. It doesn’t search or verify anything in real time. While the output can sound polished and authoritative, a smart-sounding answer is not the same as a legally reliable one. A hallucination isn't a glitch; it's a legal hazard.


Your License is on the Line

These findings are not academic. In our fields, AI tools now handle legal briefs, claims summaries, and clinical assessments. But the truth is simple: if the tool is wrong, the risk is yours. The tool doesn’t carry a license. You do.

This is why I teach a simple but powerful framework at The MedLegal Professor™:

AI + HI™ = Artificial Intelligence + Human Intelligence

  • AI can help.

  • But you are in charge.

  • Your name is on it. Your judgment must back it.

You cannot ethically or legally rely on tools you do not understand. The AI + HI™ framework means never outsourcing your license to a black box. The single greatest risk in this new landscape is not the technology itself, but our own overtrust in it. When an AI sounds right, we believe it’s right, and that can lead to unintentional misrepresentation.
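In practice, AI + HI™ can be enforced in the workflow itself, not only in policy. The sketch below is a simplified, hypothetical illustration of that idea: an AI draft is treated as unreviewed until a licensed professional verifies its citations and signs it, and nothing unsigned can be released. The class and field names are my own illustration, not a reference to any particular platform.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIDraft:
    """An AI-generated work product waiting for human review (illustrative)."""
    content: str
    citations: list[str]
    reviewed_by: str | None = None
    reviewed_at: datetime | None = None

    def sign_off(self, reviewer: str, citations_verified: bool) -> None:
        """A licensed human verifies the citations and attaches their name."""
        if not citations_verified:
            raise ValueError("Citations not verified: the draft cannot be signed.")
        self.reviewed_by = reviewer
        self.reviewed_at = datetime.now(timezone.utc)

def release(draft: AIDraft) -> str:
    """Only a human-signed draft may be filed, sent, or relied on."""
    if draft.reviewed_by is None:
        raise PermissionError("No human sign-off: AI output cannot be released.")
    return f"Released under {draft.reviewed_by}'s signature at {draft.reviewed_at:%Y-%m-%d %H:%M} UTC"

# Usage: the gate fails closed.
draft = AIDraft(content="Demand letter draft...", citations=["Sample v. Example (2021)"])
draft.sign_off(reviewer="A. Attorney", citations_verified=True)
print(release(draft))
```

The point of the sketch is simply that the human signature is not a formality bolted on at the end; it is the condition the workflow checks before anything leaves the building.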


Five Questions to Ask Before Using AI

Before you trust any AI with your professional work, I urge you to ask these five questions:

  1. Do I know where this model gets its information?

  2. Can I tell if it is citing real sources or just guessing?

  3. Would I override it if something feels off?

  4. Could I explain this to a judge, client, or compliance officer?

  5. Would I sign this with my name and reputation?

If you answer “no” to any of these, then either the AI tool or your own training in using it is not ready for professional work.
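Question 2 is also the easiest to back up with a process. The sketch below is a hedged, simplified illustration of a first-pass citation check: every case-style citation in an AI draft is compared against a closed list of sources you have already verified, and anything outside that list is flagged before a human signs. The citation pattern, the source list, and the function name are all hypothetical; a real check would query an official reporter or your research platform.

```python
import re

# Hypothetical allow-list: citations a human has already verified as real.
VERIFIED_CITATIONS = {
    "Sample v. Example (2021)",
    "Another v. Matter (2018)",
}

# Deliberately crude pattern for "Party v. Party (Year)" citations; real
# citation formats are far messier and need a proper parser.
CASE_PATTERN = re.compile(r"[A-Z]\w+ v\. [A-Z]\w+ \(\d{4}\)")

def unverified_citations(draft: str) -> list[str]:
    """Return every cited case NOT in the trusted set: a guess until proven otherwise."""
    return [c for c in CASE_PATTERN.findall(draft) if c not in VERIFIED_CITATIONS]

draft = "Under Sample v. Example (2021) and Invented v. Nowhere (2019), the claim fails."
flagged = unverified_citations(draft)
if flagged:
    print("STOP and verify before signing:", flagged)  # -> ['Invented v. Nowhere (2019)']
```

An automated flag like this does not replace your review; it only tells you where to start looking before you put your name on the document.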

Using a tool you don’t understand is not bold; it’s negligence in motion. Using an AI that hallucinates is not innovation; it’s liability in disguise. AI + HI™ is not just a framework. It is the future of ethical work. This is how we preserve trust, ensure compliance, and protect our credibility.


💡 Want to Lead Safely in the Age of AI?

Stay connected with The MedLegal Professor™ and join a growing movement of legal, medical, and insurance professionals rethinking compliance, claims, and care through AI + HI™.


📅 Join Us Live – Every Monday at Noon (PST)

We host two powerful webinar formats each month:

  • 🧠 Monthly Webinar (First Monday of the Month)
    Explore a signature concept, compliance strategy, or MedLegal tool designed to empower professionals across law, medicine, and insurance.
    🔗 Register Here 

  • 🛠️ Office Hours (All Following Mondays)
    Dive deeper into real-world case studies, platform walkthroughs, and AI-powered workflows—often featuring guest experts or sponsored solutions.
    🔗 Access Office Hours Schedule 


💡 Want more from The MedLegal Professor™?

  • 📰 Subscribe to the Blog
    Get fresh insight on compliance, ethics, AI + HI™, and system transformation.
    🔗 Subscribe Now 

  • 🧰 Explore the TMP Client Portal
    Access exclusive tools, courses, and guided frameworks for transforming your practice.
    🔗 Log In or Request Access 

  • 📬 Get MedLegal Alerts
    Be the first to know when new content drops, webinars launch, or industry shifts happen.
    🔗 Join the Mailing List 

  • 📱 Text “TMP” to +1(888) 976-1235
    Get exclusive compliance resources and direct invites delivered to your phone.
    🔗 Dial to meet The MedLegal Professor AI


👉 Visit MedLegalProfessor.ai to learn more and take the next step.

Nikki Mehrpoo is The MedLegal Professor™—a former California Workers’ Compensation Judge turned LegalTech Strategist, AI Ethics Advisor, and national educator shaping the future of compliance.

She leads as Courts Functional Lead for the EAMS Modernization Project and created the AI + HI™ Framework to guide responsible, defensible AI use in law, medicine, and insurance. Her work connects courtroom-tested judgment with cutting-edge system design, helping professionals use AI without compromising legal integrity or care quality.

As the only California attorney dual-certified in Workers’ Compensation and Immigration Law, Nikki brings 27+ years of frontline experience into every conversation. Through The MedLegal Professor™, she equips lawyers, doctors, and insurers with tools, trainings, and tech to modernize how we serve the injured—without losing what matters most.

Nikki Mehrpoo, Esq.


