AI Ethics in Law and Healthcare:
How Smarter Tools Can Lead to Riskier Results Without Oversight
By Nikki Mehrpoo | The MedLegal Professor™
In my work with leaders across law, medicine, and insurance, I’ve seen firsthand the rush to adopt artificial intelligence. While the promise of efficiency is alluring, I must caution you: a dangerous gap is widening between what these powerful tools can do and how well we, as professionals, understand them. Relying on AI without rigorous oversight isn't just embracing innovation—it's exposing your license, your clients, and your reputation to profound risk.
I want to share some critical insights into these risks, not to create fear, but to empower you with the knowledge to lead responsibly in this new era. The conversation begins with a story that is becoming all too common.
It Looked Like a Real Case. It Wasn’t.
I recently learned of a partner at a law firm who submitted a demand letter drafted by AI. The citations it produced were confident and precise. The problem? They were also completely fabricated.
The client relied on it. The opposing counsel reviewed it. But when the case moved to litigation and those citations made it into the official complaint, the court spotted the error. One case cited didn’t exist. Neither did the others. The fallout was swift and severe. This isn’t a fictional tale; this is happening right now in our industry.
This is more than a technical error; it’s a professional risk event. And it points to a startling statistic from a 2024 MIT study: over 64% of professionals using AI cannot explain how their tools generate answers. Let me be clear: that is not innovation. That is exposure.
Speed Without Oversight Is Malpractice in Motion
To understand the mechanics of this risk, I want to draw your attention to a critical study released in March 2025 by researchers from the University of Minnesota and the University of Michigan. In their paper, AI-Powered Lawyering, they tested AI-assisted legal work under real-world conditions. Their key takeaway confirms what I have been teaching for years: speed without oversight is malpractice in motion. Oversight is no longer a best practice—it is an absolute legal and ethical requirement.
The study compared two main types of AI, and the difference between them is something every professional must understand.
What the Study Reveals About AI Hallucinations and Legal Accuracy
The researchers assigned complex legal tasks to three groups of law students: one with no AI, one using a Retrieval-Augmented Generation (RAG) model, and one using a legal reasoning model.
Here are the results:
Vincent AI (RAG) users were fastest and most accurate.
o1-preview (reasoning model) users showed advanced logic but created more hallucinations.
No-AI users underperformed both groups.
RAG models consistently produced verifiable results with lower risk.
The key difference lies in how these models work. A Retrieval-Augmented Generation (RAG) model functions like a highly trained legal assistant. It retrieves information from a closed set of real documents and then generates answers based only on that verified data. As I like to say, “It’s the difference between quoting a law book and making one up.” Because it pulls from trusted data, the risk of an AI hallucination—inventing a fake case name, a repealed statute, or a made-up policy—drops significantly.
A Legal Reasoning Model, on the other hand, is trained to simulate legal logic based on patterns. It doesn’t search or verify anything in real time. While the output can sound polished and authoritative, a smart-sounding answer is not the same as a legally reliable one. A hallucination isn't a glitch; it's a legal hazard.
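For readers who want to see the distinction concretely, the retrieval-then-generate pattern can be sketched in a few lines of Python. This is a simplified illustration, not any vendor's actual implementation: the corpus entries and function names here are invented for demonstration. The key behavior is that the "generator" may only quote documents actually retrieved from a closed, verified corpus, and when nothing is retrieved it declines to answer rather than inventing a citation.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The corpus below is hypothetical, for illustration only.
VERIFIED_CORPUS = {
    "Labor Code § 3600": "An employer is liable for injuries arising out of and in the course of employment.",
    "Smith v. Jones (1999)": "Held that the exclusive remedy rule bars a separate civil suit.",
}

def retrieve(query: str, corpus: dict) -> list:
    """Return (source, text) pairs whose source or text matches any query term."""
    terms = query.lower().split()
    return [(src, txt) for src, txt in corpus.items()
            if any(t in src.lower() or t in txt.lower() for t in terms)]

def answer(query: str, corpus: dict) -> str:
    """Compose an answer grounded only in retrieved documents.

    If nothing is retrieved, decline instead of guessing -- this is
    the step that suppresses hallucinated citations.
    """
    hits = retrieve(query, corpus)
    if not hits:
        return "No supporting authority found in the verified corpus."
    return "; ".join(f"{src}: {txt}" for src, txt in hits)
```

A reasoning model, by contrast, skips the `retrieve` step entirely and produces the most plausible-sounding continuation from its training patterns, which is exactly where fabricated case names come from.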
Your License is on the Line
These findings are not academic. In our fields, AI tools now handle legal briefs, claims summaries, and clinical assessments. But the truth is simple: if the tool is wrong, the risk is yours. The tool doesn’t carry a license. You do.
This is why I teach a simple but powerful framework at The MedLegal Professor™:
AI + HI™ = Artificial Intelligence + Human Intelligence
AI can help.
But you are in charge.
Your name is on it. Your judgment must back it.
You cannot ethically or legally rely on tools you do not understand. The AI + HI™ framework means never outsourcing your license to a black box. The single greatest risk in this new landscape is not the technology itself, but our own overtrust in it. When an AI sounds right, we believe it’s right, and that can lead to unintentional misrepresentation.
Five Questions to Ask Before Using AI
Before you trust any AI with your professional work, I urge you to ask these five questions:
Do I know where this model gets its information?
Can I tell if it is citing real sources or just guessing?
Would I override it if something feels off?
Could I explain this to a judge, client, or compliance officer?
Would I sign this with my name and reputation?
If you answer “no” to any of these, then your AI tool or your internal training is not ready.
Using a tool you don’t understand is not bold; it’s negligence in motion. Using an AI that hallucinates is not innovation; it’s liability in disguise. AI + HI™ is not just a framework. It is the future of ethical work. This is how we preserve trust, ensure compliance, and protect our credibility.
💡 Want to Lead Safely in the Age of AI?
Stay connected with The MedLegal Professor™ and join a growing movement of legal, medical, and insurance professionals rethinking compliance, claims, and care through AI + HI™.
📅 Join Us Live – Every Monday at Noon (PST)
We host two powerful webinar formats each month:
🧠 Monthly Webinar (First Monday of the Month)
Explore a signature concept, compliance strategy, or MedLegal tool designed to empower professionals across law, medicine, and insurance.
🔗 Register Here
🛠️ Office Hours (All Following Mondays)
Dive deeper into real-world case studies, platform walkthroughs, and AI-powered workflows—often featuring guest experts or sponsored solutions.
🔗 Access Office Hours Schedule
💡 Want more from The MedLegal Professor™?
📰 Subscribe to the Blog
Get fresh insight on compliance, ethics, AI + HI™, and system transformation.
🔗 Subscribe Now
🧰 Explore the TMP Client Portal
Access exclusive tools, courses, and guided frameworks for transforming your practice.
🔗 Log In or Request Access
📬 Get MedLegal Alerts
Be the first to know when new content drops, webinars launch, or industry shifts happen.
🔗 Join the Mailing List
📱 Text “TMP” to +1(888) 976-1235
Get exclusive compliance resources and direct invites delivered to your phone.
🔗 Dial to meet The MedLegal Professor AI
👉 Visit MedLegalProfessor.ai to learn more and take the next step.