When Outliers Matter Most: Human Factors and User Safety

Feb 18, 2026


This article was written by Amanda Bakkum, Human Factors Consultant at ClariMed.

Human factors (HF) design and fundamental research share the same intellectual roots. They rely on similar methods, draw on the same psychological theories, and often ask overlapping questions about how people perceive, think, and act. At first glance, they can appear to be different applications of the same discipline.

The distinction emerges, however, in what each ultimately seeks to optimise.

  • Fundamental research focuses on what is typical (i.e., what generally holds true across people and contexts).

  • Human factors design focuses on what is consequential (i.e., what makes a meaningful difference to performance, safety, or outcomes in real-world settings).

This distinction shapes how each field treats averages, variability, and, most importantly, outliers.

Both researchers and human factors practitioners use experiments, statistics, cognitive models, and observation. Many of the concepts central to HF work, such as reaction time, workload, attention limits, and signal detection, originated in tightly controlled laboratory studies. The separation between the two fields is not about rigour or scientific quality. It is about purpose.

Fundamental research tends to ask: What is generally true about human behaviour?

Human factors design asks: What happens when real people interact with real systems under real conditions?

Those questions lead to very different priorities:

  • In research, averages are essential. Means, medians, and distributions allow us to generalise findings, test theories, and build explanatory models. Variability is something to be controlled, reduced, or statistically managed. This approach is entirely appropriate when the goal is understanding how cognition works in general.

  • In applied systems, however, averages can be dangerously reassuring. A system that works well for the “typical” user may still fail in precisely the situations that matter most. This includes moments when users are fatigued, distracted, or interrupted; when conditions are time-pressured or ambiguous; or when assumptions about attention, memory, and interpretation quietly break down.

Human factors design starts from a simple but uncomfortable premise: real systems are not used under average conditions for very long.
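
To make that premise concrete, here is a minimal simulation in Python. Every number in it is an assumption invented for illustration (the log-normal shape, the 15-second limit, the sample size), not data from any real study:

```python
# A minimal simulation of "usable on average, unsafe at the edges".
# All numbers are invented for illustration, not drawn from any real study.
import random
import statistics

random.seed(42)

# Hypothetical task-completion times (seconds) for one safety-critical step.
# A log-normal shape is a common assumption for response-time data:
# most users are quick, but the right tail is long.
times = [random.lognormvariate(2.0, 0.6) for _ in range(1000)]

SAFE_LIMIT_S = 15.0  # assumed threshold for completing the step safely

print(f"mean:            {statistics.mean(times):5.1f} s")
print(f"95th percentile: {statistics.quantiles(times, n=100)[94]:5.1f} s")
print(f"slowest user:    {max(times):5.1f} s")
print(f"over the limit:  {sum(t > SAFE_LIMIT_S for t in times)} of {len(times)}")
```

Under these assumptions the mean sits comfortably below the limit, yet the 95th percentile does not: the average says “usable”, while the tail says otherwise.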

The importance of outliers

This is where outliers become critical. In academic research, outliers are often treated as statistical inconveniences. They are trimmed from datasets, excluded under predefined criteria, or explained away as noise. When the aim is to uncover general mechanisms, this is usually the right thing to do.

In human factors work, however, those same outliers often provide the most critical insights.

They may represent a novice encountering the system for the first time, an experienced user operating under extreme workload, a fatigued clinician on a night shift, or someone who misinterprets an interface in a way that is entirely plausible. Human factors design does not ask whether these behaviours are common. It asks whether they are possible, and what happens when they occur.

From this perspective, outliers are not inconvenient anomalies. They are stress tests for the design.
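
Continuing the simulation above, the sketch below reads the same kind of data two ways. The 3-standard-deviation cut-off is one common exclusion convention, used here purely as an illustration:

```python
# The same kind of simulated data, read two ways: a conventional
# exclusion rule, and a human-factors look at what that rule discards.
import random
import statistics

random.seed(7)

times = [random.lognormvariate(2.0, 0.6) for _ in range(1000)]
cutoff = statistics.mean(times) + 3 * statistics.stdev(times)

kept = [t for t in times if t <= cutoff]
excluded = [t for t in times if t > cutoff]

# Research framing: the trimmed dataset is cleaner and easier to generalise.
print(f"trimmed mean: {statistics.mean(kept):.1f} s across {len(kept)} trials")

# Human factors framing: each excluded trial is a plausible user who took
# far longer than the design assumed. These are the stress tests.
for t in sorted(excluded, reverse=True):
    print(f"excluded outlier: {t:.1f} s (what was happening here?)")
```

The trimmed dataset answers the research question more cleanly; the discarded trials are exactly where a human factors review would begin.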

The idea of an “average user” begins to unravel here. Designing for the average assumes stable abilities, clear mental models, and ideal conditions. In reality, no system is used exclusively by average people, in average states, on average days. Stress reduces cognitive capacity, fatigue slows perception and response, and interruptions fundamentally reshape decision-making. Over time, everyone becomes an outlier. Human factors design treats this variability not as a problem to be eliminated, but as a fundamental reality to be accommodated.

Medical devices make the consequences of this especially clear. Consider an infusion pump, a patient monitor, or a ventilator interface. In usability testing, most clinicians may navigate menus correctly, interpret alarms as intended, and complete key tasks within acceptable time limits. From an average-performance perspective, the device appears usable. Clinical environments, however, are rarely average.

The clinicians who struggle (those who respond more slowly, misinterpret an alert, or select the wrong setting) are often at the end of a long shift, managing several patients at once, interrupted mid-task, working under intense time pressure, or using a rarely accessed function in an emergency. In research terms, these users might look like outliers. In healthcare, they are entirely expected.

This is how “rare” errors become patient safety risks. A device may perform well for most users and still fail when an alarm is silenced incorrectly during a high-workload moment, when two settings look similar but have very different consequences, or when a confirmation step relies on memory rather than visibility. After an incident, these events are often described as “user error”. From a human factors perspective, they instead point to designs that did not adequately account for realistic variability in human performance.
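
A back-of-the-envelope calculation shows how quickly “rare” scales up. Both figures below are invented for the example, not drawn from any real device or hospital:

```python
# Back-of-the-envelope arithmetic for how a "rare" use error scales up.
# Both figures below are assumptions chosen purely for illustration.
p_error = 0.001       # assumed chance of the error on any single use
uses_per_day = 200    # assumed device uses per day across a hospital

p_at_least_one_daily = 1 - (1 - p_error) ** uses_per_day
expected_per_year = p_error * uses_per_day * 365

print(f"chance of at least one error on a given day: {p_at_least_one_daily:.0%}")
print(f"expected occurrences per year: {expected_per_year:.0f}")
```

Under these assumed rates, an error that 999 uses in 1,000 never produce would still be expected roughly seventy times a year across the hospital.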

As a result, HF work in medical devices focuses less on making expert users faster and more on making systems resilient. The aim is to make errors difficult to commit, to limit the consequences when they do occur, and to support clinicians when they are tired, rushed, or interrupted.
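
As a rough sketch of what that can look like in software, the snippet below shows two such patterns: a hard limit that makes the unsafe action impossible, and a soft limit whose confirmation displays the value itself rather than trusting the user’s memory. The pump, limits, and prompt wording are all hypothetical:

```python
# A deliberately simplified sketch of two resilience patterns.
# The pump, limits, and prompt wording are hypothetical, invented for
# illustration; real devices implement such guards in their own ways.

HARD_LIMIT_ML_H = 999.0   # assumed absolute ceiling the pump will accept
SOFT_LIMIT_ML_H = 200.0   # assumed threshold that forces re-confirmation

def set_rate(requested: float, confirm) -> float:
    """Guard a rate change; `confirm` is shown the actual value on screen."""
    if requested > HARD_LIMIT_ML_H:
        # Pattern 1: make the error impossible, not merely unlikely.
        raise ValueError(f"{requested} mL/h exceeds the device maximum")
    if requested > SOFT_LIMIT_ML_H:
        # Pattern 2: the confirmation displays the value itself, so the
        # check relies on what is visible, not on the user's memory.
        if not confirm(f"Set HIGH rate of {requested} mL/h. Proceed?"):
            raise RuntimeError("high-rate change was not confirmed")
    return requested

# In a real interface `confirm` would be a dialogue; here it is a stand-in
# that always declines, so the risky change cannot slip through.
try:
    set_rate(450.0, confirm=lambda prompt: False)
except RuntimeError as err:
    print(err)
```

The design choice is that the guard lives in the system rather than in the training material: the safe path is the default, and the risky path requires an explicit, visible decision.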

This difference does not mean research is wrong to disregard outliers. Fundamental research must isolate variables, control confounds, achieve statistical power, and produce generalisable findings. Human factors design operates under a different set of responsibilities. It must accept variability as inevitable, anticipate misuse and misunderstanding, design systems that degrade gracefully, and prevent rare events from becoming catastrophic.

Research asks whether a behaviour is typical. Human factors asks whether a system is safe when it is not.

Ultimately, the two fields are complementary:

  • Fundamental research helps us understand humans as they generally are.

  • Human factors design ensures systems work with humans as they actually behave: variable, distracted, fatigued, and imperfect.

Outliers may be statistically inconvenient, but they are operationally inevitable. In many safety-critical domains, success is not defined by how well a system supports the centre of the distribution, but by how well it protects people at the edges. That is where systems are most likely to fail, and where good design matters most.


Let's work together!

We’re always looking for new opportunities. If you would like to partner with us, please get in touch.
