UX · medtech · regulatory · usability · product design · CE marking

Why Your Product's UX Is a Regulatory Risk: What Medtech Founders Miss

Jonas Weiss · 4 September 2024

When medtech founders think about regulatory risk, they think about clinical evidence. They think about classification, conformity assessment routes, notified body timelines, and the gap between their algorithm’s performance on a validation dataset and what regulators will accept as proof of clinical utility. These are real risks and they deserve serious attention. But there is a category of regulatory risk that receives far less attention and that surfaces, often at the worst possible moment, in almost every medtech product we work with. It is the risk embedded in the product’s user interface.

This is not a design observation. It is a regulatory one. Under the EU Medical Device Regulation, usability engineering is not optional and it is not a box to be ticked at the end of the development process. IEC 62366-1, the international standard for usability engineering for medical devices, is harmonised under MDR and directly referenced in the conformity assessment process. The FDA's human factors guidance, Applying Human Factors and Usability Engineering to Medical Devices, is equally demanding. What both frameworks require is a structured, documented, and evidence-based process for designing and validating the interface between the user and the device. If your technical file does not contain a complete usability engineering file, your submission is incomplete, regardless of how strong the clinical evidence is.

Most founders discover this late. Some discover it during notified body review. A few discover it in a gap analysis they commission before formal regulatory engagement. Almost none of them have integrated usability engineering into their development process from the beginning, which is the only approach that does not create expensive rework.

What usability engineering actually requires

IEC 62366-1 is more demanding than most non-specialists expect. It requires the identification of the intended users of the device and a thorough characterisation of their knowledge, capabilities, and limitations. It requires an analysis of the intended use environment, because a device designed to be used in a well-lit, quiet clinical office performs very differently in an emergency department, an operating theatre, or a patient’s home. It requires the systematic identification of use-related hazards: the specific ways in which the design of the interface could lead to use error, and the clinical consequences of those errors.

This last requirement is the one that most consistently surprises product teams who encounter the standard for the first time. Use error, in the regulatory sense, is not a software bug or a system failure. It is a situation in which the user interacts with the device in a way that the design makes possible but that produces an unintended and potentially harmful clinical outcome. The device worked as designed. The user did what the interface allowed or implied they should do. The patient was harmed. Under MDR, the manufacturer is responsible for identifying these failure modes in advance, designing them out where possible, and providing evidence that residual risks have been adequately mitigated.
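To make the shape of a use-related risk analysis concrete, here is a minimal sketch of what one entry might look like. The field names, severity scale, and the example scenario are illustrative only; IEC 62366-1 does not prescribe a data format, and your own analysis will follow the structure of your risk management file.

```python
from dataclasses import dataclass, field

@dataclass
class UseRelatedHazard:
    """One illustrative row of a use-related risk analysis.

    Field names and the 1-5 severity scale are examples,
    not terms mandated by IEC 62366-1.
    """
    task: str                  # user task during which the error can occur
    use_error: str             # interaction the interface allows but should not
    hazardous_situation: str   # potential clinical consequence of the error
    severity: int              # e.g. 1 (negligible) .. 5 (catastrophic)
    mitigations: list = field(default_factory=list)  # design controls first, then labelling/training

    def is_critical(self) -> bool:
        # Hazards with severe clinical consequences must be probed
        # explicitly in summative testing, not just noted on file.
        return self.severity >= 4

# Example: the device "worked as designed", but the interface
# made a harmful interaction possible.
entry = UseRelatedHazard(
    task="Confirm dose recommendation",
    use_error="Clinician accepts a pre-filled dose without noticing the unit (mg vs mcg)",
    hazardous_situation="Thousand-fold overdose administered",
    severity=5,
    mitigations=["Require explicit unit confirmation", "Display dose in both units"],
)
assert entry.is_critical()
```

The point of structuring the analysis this way, whatever the actual format, is that each identified use error carries its clinical consequence and its mitigations with it, so the summative test protocol can be derived directly from the critical entries.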

The standard distinguishes between formative and summative usability evaluation. Formative evaluation happens iteratively during development: observational studies, think-aloud protocols, task analysis with representative users in realistic use environments. Summative evaluation happens at the end: a formal, protocol-driven study with a statistically adequate sample of representative users, demonstrating that the device can be used safely and effectively by its intended user population. Both are required. The summative study is the one that goes into the technical file. The formative studies are the evidence that the design process was rigorous enough to produce a device worth submitting to summative evaluation.
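The bookkeeping behind both kinds of evaluation can be sketched in a few lines. This is illustrative only: it tallies critical use errors per task from observation records, but the acceptance criteria, sample sizes, and task definitions come from the study protocol, not from code like this.

```python
from collections import defaultdict

def summarise_use_errors(observations):
    """Compute the critical use-error rate per task from
    (participant, task, error_class_or_None) records gathered
    during a usability study. Illustrative bookkeeping only."""
    attempts = defaultdict(int)
    critical = defaultdict(int)
    for participant, task, error in observations:
        attempts[task] += 1
        if error == "critical":
            critical[task] += 1
    return {task: critical[task] / attempts[task] for task in attempts}

# Hypothetical observation log from two participants and two tasks.
obs = [
    ("P01", "confirm dose", None),
    ("P02", "confirm dose", "critical"),
    ("P01", "silence alarm", None),
    ("P02", "silence alarm", None),
]
rates = summarise_use_errors(obs)
# A critical use-error rate above zero on a safety-critical task is
# exactly the kind of result that sends a summative study back
# through redesign, which is why formative rounds exist.
```

In a formative round, numbers like these steer the next design iteration; in a summative study, they are compared against pre-specified protocol criteria with a representative sample of users.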

A summative study that fails, because users make critical use errors at an unacceptable rate, does not just delay a regulatory submission. It sends the product back through a redesign cycle with a documented failure on the record. Regulators will ask what formative evaluation was conducted that failed to identify the use errors observed in summative testing. The answer to that question determines how the path forward is navigated.

The clinical workflow problem

The deeper UX risk in most medtech products is not that the interface is poorly designed in the conventional sense. It is that the interface was designed without sufficient understanding of the clinical workflow it is intended to support. This is a systems problem, and it is one that architectural training makes visible in a way that conventional product design thinking often does not.

A clinical environment is a complex system with its own load paths, its own coordination mechanisms, its own tolerance for friction, and its own failure modes. A device that is introduced into that system without a thorough understanding of how it will interact with existing workflows, existing roles, existing information flows, and existing pressures will generate friction that was not anticipated, resistance that was not predicted, and use patterns that were not designed for. Some of that friction is inconvenient. Some of it is a patient safety risk.

I have worked on projects at extraordinary levels of operational complexity, coordinating multidisciplinary teams across multiple geographies on buildings where the failure of a single system had consequences for every adjacent system. The discipline that experience instils is a particular sensitivity to the interfaces between systems: the junctions where one set of components meets another, where design assumptions made in isolation are tested by reality, and where the most consequential failures typically originate. In medtech products, the interface between the device and the clinical workflow is exactly this kind of junction.

Designing for clinical workflow integration requires going further than user research in a controlled setting. It requires sustained observation of clinical practice in the actual environments where the device will be used. It requires an understanding of the cognitive load that clinicians are already carrying, the interruptions they routinely manage, the time constraints they operate under, and the error-recovery strategies they have developed to cope with systems that do not work as intended. A device that demands attentional resources that clinicians do not have, or that introduces a new step into a workflow without accounting for the steps it displaces, will not be adopted regardless of its clinical performance. And a device that is not adopted does not generate the real-world evidence that post-market surveillance requires.

The early decisions that constrain the regulatory pathway

The UX decisions that create the most significant regulatory risk are not the ones made late in development, when the interface is being refined. They are the ones made at the beginning, when the fundamental architecture of the product is being defined. These decisions are made quickly, under pressure, often without regulatory input, and they are very difficult to reverse once the product has been built around them.

The most consequential of these decisions is the classification of the device’s output and the role that output is intended to play in the clinical decision-making process. A device that generates a recommendation that a clinician is expected to act on without independent verification is a very different regulatory proposition from a device that provides information that a clinician uses to inform their own judgment. The difference determines the risk classification, the clinical evidence requirements, and the usability evaluation protocol. It also determines the human factors risk profile: a device whose output is treated as authoritative creates a specific category of automation bias risk that needs to be designed out or mitigated explicitly.

These are not decisions that can be made by the product team alone. They require the involvement of regulatory expertise, clinical expertise, and design expertise simultaneously, because each perspective constrains the others in ways that are not visible from any single vantage point. The product architecture, the regulatory pathway, and the usability engineering strategy are not three separate workstreams that converge at submission. They are a single integrated design problem, and they need to be treated as such from the beginning.

What good looks like

The medtech companies that move through regulatory processes most efficiently share a characteristic that is less common than it should be. They have treated usability engineering as a core product discipline rather than a regulatory compliance activity. Their formative evaluation studies have been running since early in the development process, generating insights that have shaped product decisions rather than validating decisions already made. Their intended user characterisation is specific and evidence-based rather than generic. Their use environment analysis reflects genuine fieldwork in clinical settings rather than assumptions about how clinical environments work.

The practical implication for founders is not that UX should receive a larger budget or a higher headcount, though both are often warranted. It is that usability engineering needs to be planned and resourced at the same time as the clinical evidence strategy and the regulatory pathway, not after them. The timeline for a complete usability engineering file, from initial task analysis through formative evaluation to summative testing, is longer than most product teams expect, and it is not compressible without increasing risk.

The regulatory framework is, in this respect, pointing in the right direction. A device that cannot be used safely and effectively by its intended users in their actual working environment is not a safe device, regardless of how well it performs in controlled conditions. The requirement to demonstrate usability through a structured, evidence-based process is not a bureaucratic imposition. It is a design standard, and meeting it produces better products as well as better submissions.

Jonas Weiss is Director of Product and Operations at GoldWhite. Trained at the Bartlett School of Architecture at UCL and Bauhaus University Weimar, he co-founded and operated a technology company as COO and applies systems-design thinking to startup operations, product strategy, and UX design for regulated environments.

Enjoyed this? Book a strategy call.
