From Infusion Pumps to Algorithms: Why Clinical AI Needs a Medical Device-Grade Safety Assurance Mindset

When I first stepped into the world of infusion pumps back in 2017, I was handed an unexpected assignment: build a Safety Assurance Case for an upcoming FDA submission. At the time, I didn’t understand the value of a Safety Assurance Case. We already had design documentation, risk management files, and verification reports, basically the full suite of artifacts every medical device development team produces. What additional value could a Safety Assurance Case possibly add on top of the rigor we had already poured into development and design controls?

I began researching and reading everything I could find about the FDA’s push for assurance cases, especially for infusion pumps. Slowly, a pattern emerged. It wasn’t about paperwork or about satisfying a new regulatory checkbox. It was about the growing number of field events and customer complaints in the real world – the subtle use‑related failures, the unexpected interactions between complex software and even more complex clinical workflows. These were signals that the industry needed a more transparent, structured way to explain why a device was safe, not just how it was built.

That realization changed everything for me. It was the moment I understood that a Safety Assurance Case wasn’t just another deliverable for a regulatory submission; it was a different mindset, the implementation of a concept of ‘first-principle of patient safety’. And as I began working on the Safety Assurance Case during product design and development, I saw how powerful a structured safety argument could be when technology, human behavior, and clinical reality collide.

AI is now standing at that same inflection point.

Instead of mechanical components and embedded firmware, we’re dealing with probabilistic models, opaque decision pathways, and algorithms that evolve with new data. Yet the underlying challenge is identical: How do we demonstrate safety when the system’s behavior cannot be fully predicted through testing alone?

Clinicians feel this uncertainty too! They’re being asked to trust recommendations from models they cannot interrogate or completely understand. Manufacturers feel it as well, as they try to translate model outcomes into meaningful performance metrics. Regulators, meanwhile, are tasked with evaluating systems whose risks don’t fit neatly into traditional categories.

This is precisely the environment where Safety Assurance Cases are most valuable!

Why Safety Assurance Cases Fit AI So Well

A safety assurance case is not a document. It’s a structured argument, a living, evolving narrative that connects claims about safety to the evidence that supports those claims. It draws boundaries around system architecture, scope, clinical environment, and indications for use, and in doing so it forces clarity and exposes gaps. AI desperately needs that kind of structure.
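
To make that concrete, here is a minimal sketch of what a claims-to-evidence structure can look like in code, loosely inspired by Goal Structuring Notation. Every class, field, and report ID below is a hypothetical illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str   # e.g., a verification report or clinical study ID
    summary: str  # what the evidence actually shows

@dataclass
class Claim:
    statement: str  # the safety claim being argued
    evidence: list[Evidence] = field(default_factory=list)
    subclaims: list["Claim"] = field(default_factory=list)

    def unsupported(self) -> list["Claim"]:
        """Return every leaf claim with no evidence: the gaps the case exposes."""
        gaps = [] if self.evidence or self.subclaims else [self]
        for sub in self.subclaims:
            gaps.extend(sub.unsupported())
        return gaps

# Hypothetical top-level claim with one supported and one unsupported subclaim.
top = Claim("The model is acceptably safe within its intended use")
top.subclaims.append(Claim("Sensitivity meets the premarket threshold",
                           evidence=[Evidence("VER-104", "Verification report")]))
top.subclaims.append(Claim("Outputs integrate safely into the clinical workflow"))

for gap in top.unsupported():
    print("Unsupported claim:", gap.statement)
```

The point isn’t the code itself; it’s that the structure makes missing evidence impossible to hide.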

  1. Assurance cases make the invisible visible

AI models often behave like black boxes. Even when explainability tools exist, they rarely translate into clinical intuition. An assurance case compels manufacturers to articulate:

  • What the model is intended to do
  • What it is not intended to do
  • What performance assumptions it relies on
  • What evidence demonstrates that those assumptions hold

This is the kind of transparency that we, as manufacturers, must be able to offer our clinicians and regulators.
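
As a hedged illustration, the four questions above can be captured in a single reviewable artifact. Every field name and value here is a made-up example, not a regulatory template.

```python
from dataclasses import dataclass

@dataclass
class IntendedUse:
    intended_for: list[str]              # what the model is intended to do
    not_intended_for: list[str]          # explicit exclusions
    assumptions: dict[str, str]          # performance assumptions, by name
    supporting_evidence: dict[str, str]  # assumption name -> evidence reference

# Hypothetical declaration for an ICU early-warning model.
sepsis_alert = IntendedUse(
    intended_for=["Early warning of sepsis risk in adult ICU patients"],
    not_intended_for=["Pediatric patients", "Autonomous treatment decisions"],
    assumptions={"input_completeness": "Vitals are sampled at least hourly"},
    supporting_evidence={"input_completeness": "Data audit DA-0032 (hypothetical)"},
)
```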

  2. They unify engineering, human factors, cybersecurity, and real‑world evidence

An assurance case forces the cross-functional domains within a design and development team to converge and work toward ‘engineering for safety’:

  • Engineering provides model architecture, training data structure, requirements traceability, and performance metrics.
  • Human factors ensure the model’s outputs integrate safely into clinical workflows.
  • Cybersecurity evaluates adversarial risks, data integrity, and resilience.
  • Post-market evidence validates that the model behaves as expected in the wild.

  3. They reduce post-market surprises

AI models fail in ways that traditional devices do not. They drift, degrade, and encounter unanticipated edge cases. Assurance cases create a mechanism to:

  • Track assumptions
  • Monitor real‑world performance against the evolving state of the art
  • Trigger updates when evidence diverges from expectations

This is essential for adaptive AI, where safety is not a one‑time certification but an ongoing commitment.
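
One way to picture that mechanism: each assumption the safety argument relies on becomes a monitored quantity with an explicit threshold, and divergence triggers a review. The metric names, thresholds, and observed values below are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class MonitoredAssumption:
    name: str
    expected_min: float  # the performance level the safety argument relies on

    def still_holds(self, observed: float) -> bool:
        """Return True if real-world evidence still supports the assumption."""
        return observed >= self.expected_min

assumptions = [
    MonitoredAssumption("sensitivity", expected_min=0.90),
    MonitoredAssumption("specificity", expected_min=0.85),
]

# In practice these values would come from prospective post-market monitoring.
observed = {"sensitivity": 0.87, "specificity": 0.91}

for a in assumptions:
    if not a.still_holds(observed[a.name]):
        # Evidence has diverged from the safety argument: trigger an update review.
        print(f"Assumption '{a.name}' violated: observed "
              f"{observed[a.name]:.2f} < expected {a.expected_min:.2f}")
```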

The Clinician’s Perspective: Trust Is Earned, Not Assumed

Clinicians don’t need to understand the math behind a model. Instead, they need to understand its boundaries and limitations.

When we talk to clinicians about AI, their concerns are remarkably consistent:

  • What happens when the model is wrong?
  • How do I know when not to trust it?
  • What evidence shows this works for patients like mine?
  • Who is accountable if something goes wrong?

Assurance cases answer these questions in a way that is accessible and understandable. They translate technical complexity into clinical clarity. They also give clinicians a seat at the table by embedding workflow analysis, usability evidence, and human‑AI interaction risks directly into the safety argument.

When clinicians see their real‑world challenges reflected in the safety narrative, manufacturer-clinician trust grows.

The Manufacturer’s Perspective: A Blueprint for Responsible AI

Manufacturers are under immense pressure to innovate quickly, differentiate in a crowded market, and satisfy regulators and customers who are still defining the rules of the game. Assurance cases help by providing:

A structured development pathway

Instead of retrofitting safety at the end, teams build the safety argument as they go. This reduces rework, accelerates regulatory readiness, and improves engineering for safety.

A mechanism for documenting model evolution

AI models change through retraining, new data, or algorithmic updates. Assurance cases provide a traceable, auditable way to show:

  • What changed
  • Why it changed
  • How safety was preserved

This is invaluable for internal governance, clinical workflow integrity, and regulatory submissions alike.
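
Here is a sketch of what one auditable entry in such a change history might look like; the field names and the example values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class ModelChangeRecord:
    changed_on: date
    what_changed: str         # e.g., retraining, new data, algorithm update
    why_it_changed: str       # the driver behind the change
    safety_preserved_by: str  # evidence that the safety claims still hold

change_log: list[ModelChangeRecord] = [
    ModelChangeRecord(
        changed_on=date(2025, 3, 1),
        what_changed="Retrained on 12 months of additional ICU data",
        why_it_changed="Sensitivity drift detected in post-market monitoring",
        safety_preserved_by="Re-verification report RV-0211 (hypothetical)",
    ),
]
```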

A competitive advantage

Manufacturers who can demonstrate transparent, evidence‑based safety will stand out in a market increasingly defined by trust.

The Regulator’s Perspective: Clarity in a Landscape of Ambiguity

Regulators are constantly asking for clarity in the logic behind a submission.

Assurance cases give them:

  • A structured argument they can interrogate
  • A clear mapping between claims and evidence
  • A way to evaluate whether risks are understood and controlled
  • A framework that scales across diverse AI applications

Most importantly, assurance cases reduce the burden on reviewers by presenting information in a format designed for critical evaluation; done well, that same clarity can double as strategic marketing.

Practical Steps: Bringing MedTech‑Grade Safety Thinking to AI

If the industry wants to adopt assurance cases for AI, we need a pragmatic roadmap. Here’s where I recommend starting:

  1. Define the clinical intent with precision

AI systems often suffer from vague or overly broad intended uses. Narrow the scope, interview Key Opinion Leaders in healthcare, and be explicit about the following (a short sketch follows this list):

  • Clinical context
  • User population
  • Decision boundaries
  • Known limitations
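
As one hypothetical way of pinning this down, those four items can become a single structured declaration that clinicians, reviewers, and engineers all see the same way. The values here are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class ClinicalIntent:
    clinical_context: str
    user_population: str
    decision_boundaries: str
    known_limitations: list[str]

# Illustrative values; a real declaration would come from clinical input.
intent = ClinicalIntent(
    clinical_context="Adult ICU, continuous bedside monitoring",
    user_population="Critical-care nurses and intensivists",
    decision_boundaries="Advisory alerts only; no autonomous actions",
    known_limitations=["Not validated for patients on ECMO"],
)
```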

  2. Build the safety argument early

Don’t wait until the end of the development process. Start drafting the argument during model development. Let the Safety Assurance Case help guide:

  • Data selection
  • Model architecture choices
  • Human factors studies
  • Risk controls

The argument becomes the backbone of the development process.

  3. Integrate real‑world evidence from day one

AI safety cannot rely solely on premarket testing. Plan for the following (a minimal drift-check sketch follows this list):

  • Prospective monitoring
  • Drift detection
  • Performance thresholds
  • Feedback loops with clinical partners
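
Here is a minimal drift-check sketch using the Population Stability Index (PSI), one common way to compare a live input distribution against the training baseline. The binning, the 0.2 cutoff, and the sample data are illustrative assumptions that a real deployment would tune with clinical partners.

```python
import math

def psi(expected: list[float], observed: list[float], bins: int = 10) -> float:
    """Population Stability Index between two samples of a single feature."""
    lo = min(min(expected), min(observed))
    hi = max(max(expected), max(observed))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def fractions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # A small floor avoids log-of-zero in empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, o = fractions(expected), fractions(observed)
    return sum((oi - ei) * math.log(oi / ei) for ei, oi in zip(e, o))

baseline = [0.1 * i for i in range(100)]    # stand-in for training-time inputs
live = [0.1 * i + 2.0 for i in range(100)]  # a shifted live distribution

if psi(baseline, live) > 0.2:  # common rule-of-thumb cutoff for action
    print("Input drift detected: escalate per the assurance case")
```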

  4. Align cross‑functional teams around shared claims

Bring engineering, clinical, regulatory, cybersecurity, and quality teams into the same room. Let the assurance case be the common language.

What the Industry Must Align On

For assurance cases to become the norm in clinical AI, manufacturers, clinicians, and regulators must align on a few key principles and priorities:

  • Transparency over opacity
  • Evidence over intuition
  • Lifecycle safety over point‑in‑time narrative
  • Clinical workflow integration over a siloed algorithmic view

If we can agree on these, AI will not just be innovative; it will be equally trustworthy.

Closing Thoughts: A Familiar Path Forward

When infusion pumps became too complex for traditional safety frameworks, we didn’t abandon innovation. We built better tools to understand and manage risk. Safety assurance cases emerged from that necessity, and they transformed how we evaluate high‑risk medical products.

Safety assurance cases are also used consistently in other high-risk industries, such as aviation and transport. AI in MedTech is now at the same crossroads, with potentially high-risk applications and many unknowns. So why not use the concept of a Safety Assurance Case to our advantage in developing AI models for clinical workflows as well?

We can either continue treating AI as a mysterious black box, hoping that performance metrics alone will give clinicians and regulators the clarity they need, or we can adopt a structured, transparent, logical, and evidence‑driven approach that has already proven its value in MedTech and other high-risk industries.

I believe the choice is clear, and I believe the time is now!