Safety-Critical Usability: Pattern-based Reuse of Successful Design Concepts

Andrew Hussey
Software Verification
Research Centre
The University of Queensland
Brisbane, Qld, 4072, Australia
email: ahussey@svrc.uq.edu.au

Michael Mahemoff
Department of Computer Science
and Software Engineering
The University of Melbourne
Parkville, Vic, 3052, Australia
email: moke@cs.mu.oz.au

Users of safety-critical systems are expected to effectively control or monitor complex systems, with errors potentially leading to catastrophic consequences. For such high-consequence systems, safety is of paramount importance and must be designed into the human-machine interface. There are many case studies available which show how inadequate design practice led to poor safety and usability, but concrete guidance on good design practices is scarce. This paper argues that the pattern language paradigm, which is widely used in the software design community, is a suitable means of documenting appropriate design strategies. We discuss how typical usability-related properties (e.g., flexibility) need some adjustment to be used for assessing safety-critical systems and document a pattern language which is based on corresponding "safety-usability" principles.

Keywords: Safety-critical, user-interface, usability, design

1 Introduction
Almost every safety-critical system involves some element of human-machine interaction. When an accident occurs, incorrect behaviour by operators is often blamed. Although some operators really are guilty of irresponsible actions (or inactions), a more frequent scenario is that the designers did not pay enough attention to user abilities and limitations. For example, humans are not good at monitoring automated systems (e.g., autopilots) [2]. When, after several hours of monitoring the system, an operator fails to respond adequately to a system malfunction, the true accident cause may be the system design, which did not account for human limitations.
While this sort of advice is straightforward, in practice producing systems that are both usable and safe can be a juggling act. In any case, while it may be clear to a designer how not to design, the characteristics of good design are more elusive. The present paper explains how design patterns can provide a step in the right direction, enabling designers to reuse ideas which have been proven successful over time.

1.1 Background
A system's usability can be divided into five sub-properties: robustness (likelihood of user error and ease with which users can correct errors), task efficiency, reusability of knowledge, effectiveness of user-computer communication, and flexibility [17].¹ In the safety-critical context, we are primarily concerned with the robustness component; the designer of a safety-critical interactive system should strive to reduce system-induced user error. At the same time, designers should not ignore the other usability properties, because they can also influence the system's overall robustness. This is a consequence of the non-orthogonality among traditional usability attributes. For example, a system with poor user-computer communication could lead users to misinterpret indications of a hazard. Furthermore, the normal motivations for usability, such as productivity and worker satisfaction, still hold and should be achieved to the maximum extent possible, within the safety constraint. When pure safety concerns are combined with user psychology, a complex design task emerges. This paper applies the pattern language concept to deal with the overall issue of safety-usability. Safety-usability defines usability properties and principles for safety-critical interactive systems.

¹ For the purposes of this paper, comprehensibility has been removed from Mahemoff and Johnston's original list in [17] because it is essentially a subset of user-computer communication.

Safety-usability is particularly challenging because in the field of safety, disaster reports and other negative case studies far outweigh public documentation of best practice. Reason [28, p. 484] has commented that "just as in medicine, it is probably easier to characterise sick systems rather than healthy ones. Yet we need to pursue both of these goals concurrently if we are to understand and then create the organisational bases of system reliability". There is no guarantee that showing someone how something was performed incorrectly means that they will be able to infer how to do it correctly. We should consider how we may reuse knowledge which has been gained while designing systems which have stood the test of time.

1.2 The Pattern Paradigm
Patterns describe common problems designers encounter and document solutions which have proven to be useful. Considerable attention has been focused on design patterns in recent years, from industry and academia alike. Alexander formulated the idea of a pattern language as a tool for documenting features of architecture and town-planning [1]. More recently, the software community has captured numerous patterns. These include patterns for software design [10], user-interfaces [30], and development process [5]. In this paper, we look at the specific domain of safety-critical systems, and consider the usability of such systems, rather than their detailed design. We refer to the resulting patterns as safety-usability patterns.

An illustrative example is the software design pattern, "Facade" [10]. This pattern suggests that it is sometimes useful to create a class which acts as the only interface to a multiple-class module. The pattern promotes the principle of reuse. However, as with all patterns, it is not a principle itself, but an observation of a feature which frequently occurs in software written with this principle in mind. In this way, patterns are an excellent way to demonstrate what is implied by underlying principles or goals. In the present context, we are concerned with patterns which show how to build usable safety-critical systems.

Together, a collection of related patterns is said to form a pattern language: a hierarchical structure where patterns provide basic solutions and delegate details and variants to other patterns. This guidance boosts developer productivity and also contributes to a better overall understanding of the patterns. Patterns provide several benefits to system developers:

Communication: a common language for discussing design issues and for communicating principles of design;

Documentation: a common basis for understanding a system;

Reuse: a storehouse of available solutions for common design scenarios.

As well as inheriting the regular benefits of software pattern languages, usability-oriented patterns enjoy additional advantages. They are closer to Alexander's original concept of patterns, which focused on the ways users interact with their environments. Furthermore, our inability to accurately predict human cognition means that it is even more important to reuse existing knowledge about usability than it is to reuse detailed design information. There is now a substantial amount of material in the field of interaction pattern languages [8], but as Section 3 discusses, certain usability issues exist which are peculiar to safety-critical systems; hence the need to provide a domain-specific pattern language.
1.3 Methodology for Pattern Discovery
To capture safety-usability patterns, the following tasks
were conducted in an iterative fashion:

1. We considered how the five sub-properties of usability must be adjusted to fit our concept of safety-usability. One sub-property, robustness, was considered in depth, and we have described several principles which improve robustness. However, we were also interested in the relevance of traditional usability properties such as task efficiency to the safety-critical domain.

2. Case studies were located in which the system at least partially supported safety-usability. For example, a system might provide feedback to the user in a manner that improved safety. The sub-properties of usability are an explicit statement of the criteria we used to determine how appropriate these features are. The case studies were derived from the literature and from industrial clients.

3. The set of appropriate features which occur in the various systems were informally placed into groups. Each group contained a family of system features which addressed the same sort of problem. Most of these groups evolved into design patterns.

4. We looked for relationships among the patterns, and this led to the formation of a well-structured pattern language.
1.4 Organisation of this Paper
Section 2 lists several principles which facilitate robust human-computer interaction. Section 3 considers the implications of usability-related properties other than robustness. In that section, we summarise the distinguishing characteristics of safety-critical systems and consider the extent to which usability criteria can be satisfied in the safety-critical domain. Section 4 discusses the structure of our pattern language for safety-critical interactive systems and gives an example pattern. Section 5 considers the outcomes of this paper and future work. Appendix A gives the complete pattern language including examples drawn from case studies. Appendix B summarises several illustrative case studies (e.g., the Druide air-traffic control system) that we use to demonstrate good design practices.

2 Robust Human-Computer Interaction: Design Principles

Hussey [12] provides high-level principles that summarise
the content of design guidelines for safety-critical systems,
as given by Leveson [16, Ch. 12], Reason [27] and Redmill
and Rajan [29]. The principles can be grouped according to
the error mitigation strategy [13] that they fall within.

2.1 Error Prevention
Where possible, the user should be prevented from placing
the system in an unsafe state. Ideally, unsafe states can be
identified in advance and the system can be designed such
that the unsafe state is impossible (unsafe actions or inputs
are not accepted by the system). Other error prevention
mechanisms include hardware interlocks that prevent hazardous system actions and automation of user tasks.
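As a minimal sketch (not from the paper), prevention-by-construction can be expressed as a guard that refuses any command that would take the system out of its safe envelope. The power range, sensor names, and function below are invented for illustration:

```python
# Hypothetical illustration of error prevention: an actuator command is
# validated against a safe envelope and an interlock condition before it is
# accepted, so unsafe requests are refused rather than flagged afterwards.

SAFE_POWER_RANGE = (0, 100)          # assumed safe operating envelope (percent)
INTERLOCK_DOOR_CLOSED = "door_closed"

def request_power(level: int, sensors: dict) -> bool:
    """Accept a power-level request only if it cannot create a hazard."""
    lo, hi = SAFE_POWER_RANGE
    if not lo <= level <= hi:
        return False                 # input outside the safe envelope: refuse
    if level > 0 and not sensors.get(INTERLOCK_DOOR_CLOSED, False):
        return False                 # interlock: no power while the door is open
    return True                      # request is safe to carry out

assert request_power(50, {"door_closed": True})
assert not request_power(150, {"door_closed": True})   # unsafe level refused
assert not request_power(50, {"door_closed": False})   # interlock blocks it
```

The unsafe states are enumerated up front, so the "unsafe actions or inputs are not accepted" property holds by construction.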

2.2 Error Reduction
Errors can be reduced by addressing causes of autonomous execution errors and rule selection errors (principles for Slips and Rule-based Mistakes, Clear Displays and User Awareness) and planning errors (Knowledge-based Reasoning).

Slips and Rule-based Mistakes: Rather than forcing the user to choose safe actions, the user procedures, interface and training may be designed so that the chance of the user making an unsafe choice is low. Such a design approach is weaker than forcing safe actions, which Norman [22] refers to as forcing functions.

Clear Displays: Provide the user with a clear representation of the system's modes and state. The display of information to the user provides the user's view of the state of the system. The user's ability to diagnose and correct failures and hazards depends on the clarity and correctness of the information that is displayed.
User Awareness: The user should be involved in the routine operation of the system to ensure the user's mental model is current and de-skilling does not occur [2]. When a failure occurs, the user will be better able to respond to the failure in a constructive and correct manner that restores the system to a safe state.

Knowledge-based Reasoning: Provide the user with memory aids and training to support knowledge-based construction of plans. If the user does not have an existing rule for the situation with which they have been confronted, they will have to construct a rule from their broader knowledge base.

2.3 Error Recovery
Provide mechanisms for users to recover from errors (where possible) by providing feedback, checking procedures, supervision and automatic monitoring of performance (Warn and Independent Information/Interrogation) and the ability to return the system to a safe state (Safe State).

Safe State: Tolerate user errors by enabling the user to
recognise potential hazards and return the system
from a potentially hazardous state to a safe state.

Warn: If the system has entered (or is about to enter) a hazardous state, the user should be notified of the state that the system is currently in and the level of urgency with which the user must act.

Independent Information/Interrogation: The user should be able to check the correctness of warnings and double-check the status of the system through multiple independent sources. If the state of the system can only be judged from one source, the system has a single point of failure and the user may either believe there to be a hazard when there is none or believe there to be no hazard when one exists.
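One hypothetical way to realise this principle in software is to combine two independent readings and surface an explicit "conflict" result when they disagree, prompting the user to investigate further. The function name, thresholds, and tolerance below are assumptions for illustration, not drawn from a real system:

```python
# Sketch of independent-information checking: two independent sensors must
# agree before a hazard assessment is trusted; disagreement is reported as a
# distinct condition rather than silently picking one source.

def assess(primary: float, secondary: float, limit: float, tol: float) -> str:
    """Combine two independent readings into a single assessment."""
    if abs(primary - secondary) > tol:
        return "conflict"        # sources disagree: user should interrogate
    if max(primary, secondary) > limit:
        return "hazard"          # both sources agree a limit is exceeded
    return "ok"

assert assess(10.0, 10.2, 15.0, 1.0) == "ok"
assert assess(16.0, 16.3, 15.0, 1.0) == "hazard"
assert assess(16.0, 9.0, 15.0, 1.0) == "conflict"
```

Reporting "conflict" separately avoids the single point of failure: neither a false alarm nor a missed hazard is silently accepted when the sources diverge.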

3 Safety-Usability
Safety-critical interactive systems should be both safe (acceptable risk) and usable. In this section we examine the effect on usability of applying the robustness-enhancing principles given in the previous section. We explore the differences between conventional usability and safety-usability.

We assume an appropriate development process occurs and we assume users are trained where appropriate, even though this may not be the case in reality. This is because the patterns in the next section are intended to facilitate design of safe, usable systems, rather than act as a snapshot of the present reality (which is nevertheless another valid application of patterns [4]).

To produce safety-usability patterns, it is necessary to consider exactly how usability is constrained by safety considerations. In safety-critical systems, safety must always be given first priority. Ideally, safety-enhancing design decisions should also improve usability, but in practice this is not always possible. The following section considers the differences between safety-critical and conventional systems, motivating the need for special treatment of usability in the safety-critical context.

3.1 Distinguishing Features of Safety-Critical Systems

Following are several ways by which safety-critical systems differ from conventional desktop applications (e.g., word-processors, browsers), and the implications of this difference for safety-usability:

- Aspects of usability relating to robustness and reliability (as measured by failures, mistakes, accidents, etc.) take preference over other usability attributes such as efficiency, reusability of knowledge (learnability and memorability), effectiveness of communication and flexibility, relative to their importance in non-safety-critical systems. Of course, these other attributes are still important, especially where they enhance or have a neutral effect on safety.

- Safety-critical systems often involve monitoring and/or control of physical components (aeroplanes, nuclear reactors, etc.). These systems usually operate in real-time. Another implication is that physical interaction environments may be quite diverse, and a model of the user quietly sitting at their desk is often inadequate. Furthermore, the systems may also be distributed across many users and/or locations; e.g., a plane trip involves co-ordination between the plane itself, its start and end locations, as well as other planes and flight controllers along the way.

- Safety-critical systems often involve automation. Cruise control and auto-pilots are well-known examples. Automation alone is not sufficient to remove human error. Bainbridge describes the following ironies of automation [2]:

Designers may assume that the human should be eliminated due to unreliability and inefficiency, but many operating problems come from designer errors.

Automated systems are implemented because they can perform better than the operator, yet the operator is expected to monitor their performance.

The operator is reduced mostly to monitoring, but this leads to fatigue, which in turn reduces vigilance.

In the long-term, automation reduces the physical and cognitive skills of workers, yet these skills are still required when automation fails. In fact, the skills may be in more demand than usual when automation fails, because there is likely to be something wrong. De-skilling also affects workers' attitudes and health adversely.

- Users are generally trained. Although the user interface should be as intuitive as possible, it is usually not necessary to assume novice usage will take place. This contrasts with the principles which might be required for a mobile phone or a web browser. Because users are trained, the diminished flexibility that may arise from enhanced robustness is of less consequence than would be the case in a conventional system.

- Likewise, we assume that a quality ethic is prevalent in safety-critical projects, to achieve robustness and to ensure that the system functions as it is designed to. Developers should be highly qualified and the process should allow for steps which might not normally be considered, such as formal methods and parallel design [20].

3.2 From Usability to Safety-Usability
To produce safety-usability properties, we have adjusted the existing set of usability properties given in Mahemoff and Johnston [17] (robustness, task efficiency, reuse, user-computer communication, and flexibility). Robustness has been described sufficiently in section 2 and is essential for safety-critical systems. Each remaining property will now be summarised, followed by a consideration of how it is affected by the robustness principle discussed in section 2, and the distinguishing characteristics of safety-critical systems, listed in section 3.1 above.
Task Efficiency: Software should help users of varied experience levels to minimise effort in performing their tasks.

Implications: In real-time systems involving co-ordination among different people and systems, there is a large risk of overloading users' cognitive capabilities. This makes it important to carry out task analyses, and to develop systems that are compatible with users' tasks and ensure that users are given appropriate workloads. Matching the system to the task has implications for both software and hardware design. Automation is another tool which can be used to improve system efficiency. At the same time, the overall experience of users must be considered to ensure that automation is effective. Some systems over-automate and this has led to accidents in the past (e.g., [21]).

Error reduction techniques enhance task efficiency because they reduce the likelihood of errors occurring, rather than requiring that users correct errors after they have occurred. Error prevention techniques (forcing functions) prevent errors rather than reduce their likelihood and therefore also enhance efficiency. Error recovery techniques may reduce overall efficiency if the likelihood of user error is relatively low. For example, incremental and redundant inputs enable detection of errors but slow user input. Efficiency might be a good thing if it reduces user fatigue and improves their motivation to work with the system. On the other hand, efficiency generally implies that the user only has to enter information once (as indicated by principles such as "Provide output as input" [24]), but in some situations it is safer to force a user to enter the same information twice; a familiar example is changing one's password on most Unix systems. Redundant inputs catch many execution slips (autonomous errors such as mistyping), but reduce the efficiency with which a user can perform the input task.
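The double-entry trade-off can be sketched as follows (a hypothetical dose-entry check, not an example from the paper): the value is accepted only when two independent entries agree, sacrificing a little efficiency to catch typing slips:

```python
# Redundant-input sketch: a safety-critical value must be entered twice and
# the entries must match before the value is accepted.

from typing import Optional

def accept_dose(first_entry: str, second_entry: str) -> Optional[float]:
    """Accept a dose only when two independent entries agree (catches slips)."""
    if first_entry != second_entry:
        return None                  # mismatch: likely a slip, ask user to re-enter
    return float(first_entry)

assert accept_dose("2.5", "2.5") == 2.5
assert accept_dose("2.5", "25") is None   # a missed decimal point is caught
```

The mechanism mirrors the Unix password-change example: a single mistyped entry cannot silently become the accepted value.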

Reuse: Ensure that users can reuse existing effort and knowledge. Knowledge reuse is usually achieved via consistent interfaces.

Implications: The safety requirement means that techniques for reuse, such as consistency, can be a double-edged sword. It is certainly an important way to reduce cognitive workload; safety can be improved by making routine actions become subconscious so that users can concentrate on higher-level issues [19]. But reuse of previous erroneous inputs and reuse of actions from one task in another task can also be a source of error. Reuse circumvents error reduction techniques such as redundancy and checking, incremental tasks and avoidance of capture errors. Capture errors occur whenever two different action sequences have their initial stages in common, with one sequence being unfamiliar and the other being well practised. It is rare for the unfamiliar sequence to capture the familiar one, but common for the familiar sequence to capture the unfamiliar; for example, driving to the office rather than the store on a Sunday (the intent to go to the store remains unchanged). By providing one distinct action sequence for each task, the likelihood of capture errors can be diminished. Reuse can also enable a user to rapidly move through a dialogue, entering reused inputs, potentially inappropriately. Another common example of error arising from reuse is when a user habitually clicks "Yes" when asked if they wish to remove a file. One cause of the Therac-25 disaster (in which several people were overdosed by a medical radiation therapy machine and subsequently died) was a feature which let users continually hit the Return key to request the default [16].
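A hedged sketch of a countermeasure to this habitual default-accept failure mode: hazardous confirmations can demand a distinct, non-habitual action (typing a specific token) so that a reflexive Return-press no longer commits the operation. The token and function name are illustrative only:

```python
# Sketch: instead of letting a bare Return accept a hazardous default (the
# failure mode seen in the Therac-25 example), the dialogue demands a
# distinct action that habit cannot supply.

def confirm_removal(typed_response: str, required_token: str = "REMOVE") -> bool:
    """Hazardous confirmations require typing a specific token, not just Return."""
    return typed_response.strip() == required_token

assert not confirm_removal("")       # a habitual Return-press no longer confirms
assert not confirm_removal("yes")    # a habitual "yes" no longer confirms
assert confirm_removal("REMOVE")     # only the deliberate, distinct action does
```

By breaking the shared initial action sequence, the design removes the capture error rather than relying on user vigilance.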

User-Computer Communication: Facilitate collaboration between humans and computers by appropriately representing changes to the system (which have been instigated by humans or computers).

Implications: Real-time systems represent the state of dynamic systems, and if the state changes too frequently, users can become confused. The representation should account for such variances, and show only those changes that are necessary. An example of designing the appropriate representation is a car speedometer, which is analogue because a changing digital meter would be incomprehensible [23]. The physical environment should also take into account human input and output needs. Leveson cites an example where this principle was clearly not applied: a nuclear power plant where controls are placed thirty feet from the panel meters, yet users must be within a few feet to read the display. Automation should be clearly displayed, as many accidents have arisen when users were unaware of the extent and effects of computer control (e.g., [21]). The system also needs an appropriate way for dynamically distributing control between the various human and computer agents working in the system. The diversity of human agents is also an issue here: the system must provide a representation compatible with their needs and capabilities. A rail control system should show the driver upcoming conditions, indicate the status of trains in the vicinity to traffic controllers, and provide passengers with the information they desire: when the next train is coming.

Flexibility: Account for users of different backgrounds and experience by allowing for multiple ways to perform tasks and future changes to the system.

Implications: Error prevention techniques such as forcing functions essentially introduce additional modes to a system; moded systems are, by definition, less flexible: the user must follow the order of actions to perform the task that was envisaged by the designer of the system. Error reduction mechanisms may diminish flexibility because capture errors can arise from autonomous competition between different action sequences for performing a task.
User-initiated flexibility is often unacceptable for safety-critical systems, because frequent changes to the user-interface may lead to errors if users forget the change was made, and this is certainly possible in emergency situations. Another possibility is one user taking over another user's environment; this could be a source of error if each user has customised their interface. In contrast, a system like a word-processor should help users to customise their interface "on the fly", i.e., without switching to a special Options mode.

4 Safety-Usability Patterns
In this section, we document the safety-usability pattern language. The patterns are based on elements of safety-critical systems which are considered usable according to our safety-usability criteria defined in the previous two sections. Therefore, designing safety-critical systems according to the patterns should improve safety-usability. Readers unfamiliar with patterns should note that the technique is not supposed to be foolproof or strictly algorithmic. Instead, designers should learn from the patterns and feel free to alter them according to their own views or needs.

4.1 Pattern Template
The patterns contain several fields. The convention for pre-senting
each pattern is shown below.

Name of Pattern
The preconditions which define situations where
the pattern is relevant.

Problem: The problem to which the pattern provides a
solution. This section motivates why the pattern is
likely to prove useful to designers. Note that a given
context might have more than one problem to solve.

Forces: Principles that influence the decision-making process when a designer is confronted with the problem in this context. Forces can conflict with one another.

Solution: The solution describes a way to take the forces
into account and resolve them.

Examples: Real-life examples of successful system features that embody the solution. The validity of the pattern can be enhanced by considering relevant examples from situations not directly concerned with safety-usability. For this reason, we sometimes consider patterns of detailed software design or non-safety-critical applications. At the same time, each pattern contains at least one directly-relevant example. The examples are described in detail in Appendix B.

Design Issues: The decisions which may arise in implementing the solution (optional).

Resulting Context: The post-state of the system, after applying the pattern. In particular, this section discusses what other patterns may be applicable after this pattern has been applied. If a specific pattern name is cited, it will appear in Sans-serif font.

4.2 Overview
The Safety-Usability pattern language is given in full in Appendix A. This section provides an overview of the language, explaining what patterns are contained in the language and how they relate to one another. Figure 1 shows the patterns and pattern groups within the language and the references to other patterns in the resulting context for each pattern.

There are four groups of patterns, each concerned with reducing user error or the consequences of user error by considering the following aspects of the human-computer interaction:

Task Management: the control flow of the human-computer interaction;

Task Execution: the physical mechanisms by which users perform tasks;

Information: the information that is presented to users;

Machine Control: removing responsibility for the correct operation of the system from the user, and instead placing such responsibility on machines.

Within each group there are several patterns. The following sections give brief summaries of the problem addressed by each pattern and the solution provided. Appendix A gives full explanations, including examples for six of the patterns.

4.2.1 Task Management
Recover: If the system enables users to place the system in a hazardous state, then users need a facility to recover to a safe state (when this is possible).
[Figure 1 depicts the pattern groups (Task Management, Task Execution, Information, Machine Control) and the patterns within them, including Behaviour Constraint, Reality Mapping, Abstract Mapping, Redundant Information, Memory Aid, Warning, Automation, Shutdown, Recover, Stepladder, Transaction, Task Conjunction and Distinct Interaction, with the resulting-context references between patterns.]
Figure 1. Pattern Language Structure
Stepladder: Systems that require operators to perform complex tasks should explicitly split the tasks into a hierarchy of simpler tasks.

Task Conjunction: The system can check that a user's action matches their intention by requiring that the user repeat tasks and checking for consistency.

Transaction: The need for recovery facilities in the system is lessened if related task steps are bundled into transactions, so that the effect of each step is not realised until all the task steps are completed and the user commits the transaction.
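The Transaction idea can be sketched in code (a hypothetical message-composition example loosely modelled on the Druide description later in the paper, with invented names): staged steps have no effect until the user commits, and cancelling discards them:

```python
# Transaction sketch: task steps are staged and take effect only on commit.

class MessageTransaction:
    """Bundle task steps so nothing takes effect until the user commits."""

    def __init__(self, send):
        self._send = send            # callback that performs the real effect
        self._steps = []

    def add(self, field, value):
        self._steps.append((field, value))   # staged only, no effect yet

    def cancel(self):
        self._steps.clear()                  # discard everything staged

    def commit(self):
        message = dict(self._steps)
        self._steps.clear()
        self._send(message)                  # effects realised atomically

sent = []
tx = MessageTransaction(sent.append)
tx.add("callsign", "QF32")
tx.add("clearance", "descend FL240")
tx.cancel()                          # controller changes mind: nothing was sent
assert sent == []
tx.add("callsign", "QF32")
tx.add("clearance", "descend FL280")
tx.commit()
assert sent == [{"callsign": "QF32", "clearance": "descend FL280"}]
```

Because no step escapes the transaction before commit, the need for downstream recovery facilities is correspondingly reduced.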

4.2.2 Task Execution
Affordance: User-interfaces should be designed so that affordances (user-interface design features) are provided that reduce the likelihood that an error (an unintended action) will occur when the user executes a task.

Separation: Components of the user-interface that are operated similarly and for which incorrect operation could be hazardous should be physically or logically separated.

Distinct Interaction: Components of the user-interface that could be confused if they were operated similarly should be operated by distinct physical actions.

Preview: Where the consequences of an action could be undesirable, the user may obtain a preview of the outcome of an action.

Behaviour Constraint: Users should be prevented from requesting hazardous actions by anticipating such actions and allowing only those that are safe.

4.2.3 Information
Reality Mapping: To facilitate user understanding of the state of the system, the system should provide a close mapping to reality where possible and supplement it with abstract representations that enable a rapid assessment to be made where necessary.

Abstract Mapping: Where reality mapping is infeasible, unnecessary to ensure safety, or would detract from safety, abstract representations should be used that enable rapid assessment of the system state.

Redundant Information: Where information presented to the user is complex or could be misinterpreted, provide multiple views so that the likelihood of error is reduced.

Trend: Humans are not good at monitoring, so when the system state changes, the system should compare and contrast the current state with previous states.

Interrogation: Presenting the entire state of the system to the user would be overwhelmingly complex in many cases. The user should be presented only with the most salient features of the system state, but be able to interrogate the system for more information where necessary.

Memory Aid: If users have to perform interleaved tasks, mechanisms (memory aids) should be provided for recording information about the completion status of each task.

4.2.4 Machine Control
Interlock: Interlocks provide a hardware-based way of detecting and blocking hazards, so that even if errors occur, they cannot lead to harmful outcomes.

Automation: If a task is unsuitable for human performance, because it involves continuous or close monitoring or exceptional skill, or would be dangerous for a user to perform, then it should be automated.

Shutdown: When shutting down the system is simple and inexpensive, and leads to a safe, low-risk state, the system should shut down in the event that a hazard arises.

Warning: To ensure that users will notice hazardous system conditions when they have arisen and take appropriate action, warning devices should be provided that are triggered when identified safety-critical margins are approached.

4.3 Example Pattern
In this section we give the complete text of an example pattern from the Task Management group, to illustrate instantiation of the pattern template from section 4.1.

Recover

A task has been designed which could lead to a hazardous state, and it is possible to recover to a safe state:

- Risk is known;

- Risk can be effectively reduced by providing recovery paths for users, rather than reducing error likelihood;

- Risk is relatively low compared to the cost of prevention.
Problem: How can we reduce the likelihood of accidents arising from hazardous states?

Forces:

- Hazardous states exist for all safety-critical systems; it is often too complex and costly to trap every state by modelling all system states and user tasks;

- Risk can be effectively reduced by reducing the consequence of error rather than its likelihood;

- When a hazardous state follows a non-hazardous state, it may be possible to return to a non-hazardous state by applying some kind of recovery operation.

Solution: Enable users to recover from hazardous actions
they have performed.
Recovering a task is
similar to undoing it, but whereas undo promises to return
the system to a state that is essentially identical to the one
prior to the incorrect action, in many safety-critical
systems this is impossible. If a pilot switched to a
new hydraulic reservoir five minutes ago, then it is
impossible to undo a loss of fluid in the meantime if
the associated servodyne for that reservoir is leaking
(see Figure 3). However, it may be useful to provide
a Recover operation giving a fast, reliable mechanism
to return to the initial reservoir. Recovering a
task undoes as much of the task as is necessary (and
possible) to return the system to a safe state.

This function can be assisted by:

1. helping users to anticipate effects of their actions,
so that errors are avoided in the first place;

2. helping users to notice when they have made an
error (provide feedback about actions and the
state of the system);

3. providing time to recover from errors;
4. providing feedback once the recovery has taken place.

If this solution is not feasible, a more extreme way
to deal with unwanted system states is to perform a Shutdown.
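The mechanism can be sketched in code. This is an illustrative outline only; the class, method, and state names below (RecoverableTask, apply, recover) are invented for the example rather than taken from any of the case-study systems.

```python
class RecoverableTask:
    """Sketch of Recover: return to the last known-safe state.

    Unlike a strict undo, recovery does not promise to reverse
    real-world side effects (e.g. hydraulic fluid already lost);
    it only rolls the controllable state back to a safe configuration.
    """

    def __init__(self, safe_state):
        self.safe_state = dict(safe_state)  # last state known to be safe
        self.state = dict(safe_state)
        self.journal = []                   # actions applied since then

    def apply(self, action, changes):
        """Perform an action, journalling it for possible recovery."""
        self.journal.append(action)
        self.state.update(changes)

    def recover(self):
        """Undo as much as is necessary to return to the safe state,
        and report what was rolled back, for user feedback."""
        self.state = dict(self.safe_state)
        rolled_back, self.journal = self.journal, []
        return rolled_back


# A pilot switches reservoirs, then recovers to the initial one.
tank = RecoverableTask({"reservoir": "primary"})
tank.apply("switch-reservoir", {"reservoir": "standby"})
print(tank.recover())  # ['switch-reservoir']
print(tank.state)      # {'reservoir': 'primary'}
```

Reporting the rolled-back actions supports the feedback steps listed above: the user can see what was recovered and confirm the system is in the expected safe state.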

Examples: The Druide system (see Appendix B) implements
message-sending via a Transaction mechanism.
Air-traffic controllers can compose a message
to an aircraft but then cancel it without sending (see
Figure 5). In the Hypodermic Syringe case study,
users can recover from an error by using the +/- button
to alter the entered value to a safe value.

Resulting Context: After applying this pattern, it should
be possible for users to recover from some of their
hazardous actions. The Stepladder pattern facilitates
recovery by breaking tasks into sub-steps, each
of which may be more easily recovered than the original
task. The user should be informed of the previous
state to which the system will revert; hence
Trend may help users execute Recover.

5 Conclusions
Conventional principles of usability require adjustments in
the safety-critical context. We have presented patterns that
promote safety of interactive systems, with minimal detrimental
impact on the "raw" usability of a system. The patterns
should help developers of safety-critical systems to
learn from past successes during design. They are not just a
catalogue of patterns; they form a well-connected language.
Developers can apply a pattern and its Resulting Context
will give clues as to which patterns might then be relevant.
The development process becomes a sequence of pattern applications
that can be documented for future reference by
system maintainers. In addition, we have described examples
that illustrate each pattern. Patterns can also be used to
justify a design decision and might therefore be integrated
into traditional safety rationale methods, e.g., safety cases.

There are four groups of patterns in our language, each concerned
with reducing user error. Task management patterns
consider the control flow and task structure of the human-computer
interaction; task execution patterns deal with the
mechanisms by which users perform tasks; information patterns
are concerned with the information that is presented to
users; machine control patterns promote removing responsibility
for the correct operation of the system from the user,
and instead placing such responsibility on machines. In total
there are 14 patterns.

We encourage more research aimed at uncovering the ways
in which successful systems are built. It is tempting to
only spot faults in systems, especially when well-publicised
catastrophes have occurred. Certainly, it is vital to do so.
However, design knowledge is built on good examples as
well as bad, and currently documentation of bad examples
is far more prevalent. Successful features do not have to be
breathtaking works of art; they are simply design concepts
which: (a) are based on valid principles, and (b) have been
applied enough times with success to make us confident that
we can learn from them.

Acknowledgements
The authors thank George Nikandros (Queensland Rail) for
permitting the Railway Control system case study to be used
for research purposes and David Tombs (Software Verification
Research Centre) for his time in explaining how the
system worked. We also thank Philip Dart (The University
of Melbourne) and Anthony MacDonald (Software Verification
Research Centre) for their suggested improvements
to drafts of this paper.

A Pattern Language
This Appendix gives the entire safety-usability pattern language,
as described in section 4, except Recover from the
Task Management and Control Flow group, which has already
been given in section 4.3.

A.1 Task Management Patterns
Stepladder
The system is defined by a set of tasks that are
decomposed into logically simpler tasks and the effect/consequence
of misperforming a task cannot be
readily diminished.

Problem: How can we guide the user through complex tasks?

 It is desirable for the user to remain familiar with low-level tasks, so they are capable of dealing
with novel situations;
 When performing a complex task, it is easy for users to forget what they have and have not
done. This is especially true when there are
other distractions;

 Users may eventually see the task sequence as a single, aggregate, task.

Solution: Identify complex tasks and explicitly split
them into sequences of simpler tasks.
In some
cases, the task sequence may form a new task itself;
for example, a Wizard in MS-Windows is considered
a separate task which enables several smaller
tasks to be performed in a sequence. In other
cases, there is no explicit representation; it is simply
a design consideration which has led to the creation
of several individual tasks. Even in this case,
though, the user's tasks may be controlled by the system's
mode, and Behaviour Constraint and
Affordance can be applied to help the user identify
what task comes next.
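As a rough sketch of how the system might track a stepladder and indicate the next task, consider the following; all names here are invented for illustration, and enforcing step order is only one possible design choice.

```python
class Stepladder:
    """Minimal sketch: a complex task split into an explicit
    sequence of simpler steps, with the system tracking progress
    so it can indicate which task comes next."""

    def __init__(self, steps):
        self.steps = list(steps)
        self.completed = 0  # number of steps done so far

    def next_step(self):
        """The step the user should perform next, or None if finished."""
        if self.completed < len(self.steps):
            return self.steps[self.completed]
        return None

    def complete(self, step):
        """Record a completed step, enforcing the suggested order."""
        if step != self.next_step():
            raise ValueError(f"expected {self.next_step()!r}, got {step!r}")
        self.completed += 1


# An ordered sequence of sub-tasks for buying a ticket.
ticket = Stepladder(["choose zone", "choose concession",
                     "choose expiry", "insert money", "collect ticket"])
ticket.complete("choose zone")
print(ticket.next_step())  # choose concession
```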

Examples: The concept of explicit procedures is well-established
in safety-critical systems design. Aircraft
crew are provided with reference cards explaining
step-by-step what to do in emergencies [9]. Sometimes,
controls on machines are arranged in a way
which suggests a certain execution order. As an
example from the non-safety-critical domain, Melbourne's
public transport ticket machines require the
user to specify three variables (zone, concession, expiry
time), place money in the machine, and then collect
the ticket. Even though the three variables can be
entered in any order, the design of the machine suggests
a particular order, arbitrary though it may be.
The overall left-to-right ordering of controls provides
an Affordance as to the appropriate sequence and
suggests to users what to do next.

In the Hypodermic Syringe case study, several
simpler actions are usually required to equate to
the corresponding single keypad action. The
positioning of the +/- buttons affords the appropriate sequence.
Resulting Context: After applying this pattern, the user
should have an improved idea of what tasks need to
be performed at any given time. The pattern works
best in tandem with Transaction. Each few rungs
of the stepladder forms a Transaction. This way,
many of the individual tasks will be easily recovered
from, because they will not be conveyed immediately
to the broader system. By splitting the original task
into sub-tasks, the consequence of each step may be
less than for the original task and Recover may become
easier to apply.

The stepladder can be used to structure

Task Conjunction
A task has been designed which has a relatively
high risk of producing a hazardous state and error
cannot be prevented, e.g., because the task involves
data-entry. The task is not time-critical, so it can be
replicated as duplicate tasks, all of which must be
completed before the user's actions take effect.

Problem: How can we check whether a user's action
matches their intention?


 Redundancy is widely used in the safety industry to avoid hazards arising due to a component
of the system failing. The system is said to have
no single point of failure;

 Entry fields or screens in a user-interface can be regarded as components of the system.

Solution: Reduce errors by requiring that the user perform
tasks multiple times.
The user's performances on
each iteration of the task are compared and the outcome
used to determine whether the task has been
correctly performed.

Redundant tasks are an error detection technique. Redundancy
reduces the efficiency with which users can
perform tasks and therefore the "raw" usability of the
system, but often enhances system safety by enabling
error detection. Another variant is requiring the same
task to be performed by two different users, as in a
detonator which can only be activated by two people.

Task Conjunction is similar
to Transaction, in that it requires several actions
before any commitment is made. However, the intention
differs. In Task Conjunction, there is only
one fundamental change, but it is subject to verification
actions. In Transaction, each action provides
new information about desired changes.

The conjunction must not be able to be circumvented;
e.g., on the Therac-25 machine, users could press
"Enter" to confirm a value rather than re-type the
value. Pressing "Enter" soon became automatic for
the users [16].
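The comparison step can be sketched in a few lines. The function name and the string-valued entries below are illustrative assumptions, not drawn from the paper's case studies; note that, per the Therac-25 lesson, the second performance must be a genuine re-entry, not a single confirming keypress.

```python
def conjoined_entry(first, second):
    """Accept a safety-critical value only when two independent
    performances of the entry task agree (Task Conjunction)."""
    if first != second:
        raise ValueError("entries disagree; the task must be repeated")
    return first


# The value takes effect only because both entries match.
dose = conjoined_entry("2.5", "2.5")
```

The same shape covers the two-user variant: the two arguments simply come from different operators rather than from two performances by the same operator.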

Examples: Redundancy in software has a long history.
Communication protocols, for example, use check
bits and other data to enable error detection and/or correction.

The Railway Control system requires that codes be
exchanged between controllers and train drivers three
times before a command is issued. The redundancy
helps reduce the likelihood of an inconsistency between
the controller's perception of the command
that is issued and the driver's perception of the command.

Resulting Context: The original task is redefined as
a conjunction of redundant sub-tasks, to which
Transaction may be applied.

Transaction
Actions are not time-critical and hence can be
"stored up" before being applied, and:

 Sub-steps can be undone;
 The effect of the task as a whole is difficult to undo;

 Risk is relatively low compared to cost of prevention.
Problem: For many real-time systems, it is difficult to
provide a Recover action which has any practical
value, because the system usually changes irreversibly
by the time the user tries to recover from the
unwanted action.

How can we improve reversibility?

 It is relatively easy to recover tasks that do not impact on real-world objects.

 Often, reversal is useful to iteratively define a task's parameters.
 Transactions are used in data-processing to en-able the effect of a sequence of actions to be
"rolled-back" to the state at the commencement
of the transaction;

 Transactions bundle a sequence of task steps into a single task, hence they are ideal for structuring
interaction in terms of overall goals and sub-tasks.

Solution: Bundle several related task steps into a transaction,
such that the effect of each step is not realised
until all the task steps are completed and
the user commits the transaction.
By grouping task
steps in this way, it becomes very easy to Recover
the effect of a sub-step before the transaction as a
whole has been committed. In addition, because errors
are deferred until the transaction is committed,
users have more time to consider their actions and to
recover from them if appropriate.

Each transaction should involve task executions and
information that is physically grouped on the user's
console or display. For example, a data entry transaction
might be implemented as a pop-up window with
commit and abort buttons to either accept or reject the
information entered by the user.
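The commit/abort grouping might look like the following sketch. The Transaction class and its callback are illustrative; no particular toolkit is assumed.

```python
class Transaction:
    """Sketch of the Transaction pattern: steps are buffered and
    have no effect on the wider system until the user commits."""

    def __init__(self, commit_effect):
        self.commit_effect = commit_effect  # applies a step for real
        self.pending = []

    def add(self, step):
        self.pending.append(step)  # no external effect yet

    def undo_last(self):
        """Recovering a sub-step is trivial before commitment."""
        if self.pending:
            self.pending.pop()

    def commit(self):
        for step in self.pending:
            self.commit_effect(step)  # effects realised only now
        self.pending = []

    def abort(self):
        self.pending = []  # discard everything; nothing was sent


# Composing a message to an aircraft, in the style of Druide.
sent = []
msg = Transaction(sent.append)
msg.add("climb FL330")
msg.add("heading 270")
msg.undo_last()        # still recoverable: nothing has been sent
msg.commit()
print(sent)            # ['climb FL330']
```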

Examples: In the Druide system, messages to aircraft are
first constructed, and then sent to the aircraft using the
"SUBMIT" button. The user can cancel a message
before they submit it. A similar facility is available
in some email systems which actually hold mail for
several minutes before sending it.

A standard dialogue box supports this pattern. The
user can easily enter data via text fields and buttons,

but none of these choices matter until they hit a confirmation
button (e.g., one labeled "OK").

Resulting Context: Task steps are grouped into transactions
with commit and abort options for each group.
The commit step in a transaction can quickly become
automatic for the skilled user. To reduce the
chance of users committing transactions that they
meant to abort, the Affordance, Separation
and Distinct Interaction patterns should be
applied. If it is appropriate for the transaction's sub-tasks
to be constructed iteratively, then the transaction
can be viewed as a form of Stepladder. A
sequence of transactions themselves can also form a Stepladder.

A.2 Task Execution Patterns
Affordance
A task has a limited range of valid ways in which
it can be performed and failure to perform the task
correctly has hazardous consequences. For many systems,
it is possible for the user to perform a variety of
physical actions at each point in performing the task;
not all will produce the required actions for the task.

Problem: How can we enhance assurance that the physical
actions performed by users produce the desired outcomes?

 It is possible for the user to have the right intentions, but perform the wrong action due to a slip;

 Slips can be reduced by providing appropriate affordances [22].

Solution: Provide cues to an operator that enhance the
likelihood that the operator will perform the physical
actions appropriate to performing a particular task.
The cues are effectively memory aids that remind the
user to perform an action by a particular sequence
of executions, avoiding slips. Such cues include distinctive
executions for distinct actions and distinctive
identifiers for distinct objects that the user can manipulate.
The actions performed are matched to the outcomes
of the actions and the user's physical expectations
and capabilities. In addition, the same execution
should not have several different outcomes according
to the model or type of equipment that the user is using;
incorrect executions should clearly indicate that
the operation has not been performed successfully.
Examples: Norman [22] gives several examples of doors
that provide affordances: the physical characteristics
of the door indicate the way in which the door should
be used. For example, a door knob may indicate by
its shape that it should be twisted, whereas a flat bar
on a door indicates that the door should simply be
pushed. Failure to perform the operation of opening
the door is indicated clearly because the door remains
shut. Similarly, the landing gear on an aircraft is operated
by a lever that is pulled downwards, mimicking
the effect of the operation on the position of the
aircraft's undercarriage.

Graphical toolkit components such as tabbed dialogues
afford correct action by the user, because they
mimic "real-world" entities that offer particular operations
and indicate the availability of those operations
by physical cues. Similarly, data blocks in Druide
afford clicking because that is the characteristic operation
to be applied to regions of marked text in a
graphical display.

Design Issues: Selection of cues depends on the user's perceptual
limitations. A user may not notice small differences
in shape if they are occupied by other activities.
General user characteristics must also be considered,
e.g., distinguishing between red and green
might not be appropriate if colour-blind males interact
with the system. Customisation of the interface
may be necessary to accommodate the needs of all users.
Resulting Context: Certain types of user error ("slips" as
described by Norman [22]) are less likely. Warning
should be used to notify the user if an operation
has not succeeded. There should be a Reality
Mapping to provide the user with the state of the
system, so they can determine whether the operation
has occurred correctly. Because error may occur,
Recover should be applied to enable the effects
of error to be recovered from where possible.
For those errors for which risk is too high for this
pattern to be applied, Behaviour Constraint
or Interlock should be considered. If affordance
involves use of toolkit components, Separation
should be considered. A Preview is a simple way
of affording correct actions by showing what the outcome
of an action will be.

Separation
The system provides several actions for which
the corresponding executions are very similar. Alternatively,
several information displays are provided
for which the layout and presentation are very similar.
When one is appropriate, the other is not, and possibly
vice versa. In addition, it is not feasible to predict
hazards and remove the potentially hazardous action.

Problem: A system is constructed from components that
limit the scope for distinct executions and presentations
(e.g., a graphical toolkit), so that the potential
for confusion between components in different contexts
is increased.

How can we reduce the likelihood that users will
inadvertently perform the wrong action or misinterpret
information displays?

 We would like to reuse components because modern systems usually incorporate graphical
interfaces and it is impracticable not to use them
where safety will not be compromised;

 Even if a toolkit is custom-built, the widgets and interaction mechanisms must then be reused in
the design;
 Systems are always built within a budget;
 When widgets are reused, unless the customisation is extensive, components will often appear
similar to users;
 Reusing commercial components means we cannot as easily customise them;

 The potential for user error increases as the similarity and proximity of controls increases.
Solution: Separate two controls (physically or logically) if
they are operated in a similar way.

Examples: In the Druide system, the pop-up menu separates
the buttons corresponding to the "SEND" and
"ABORT" actions. Most style guides recommend
separation for distinct operations that are accessed by
similar controls (e.g., most Windows programs separate
the "OK" and "Cancel" buttons in dialogues [6]).

Resulting Context: Affordance and Distinct
Interaction may also be used to reduce operator
execution errors.

Preview
The same physical action has different outcomes
according to the system mode. The user cannot be
reasonably expected to recall the current mode.

Problem: How can we provide hints as to the outcomes
of physical executions within the constraints of
graphical toolkits?
 Graphical toolkits diminish the extent to which affordances can be incorporated into the design
of a system;
 Use of toolkits enhances the consistency of a system design and economic viability of a system;

 Affordances provide cues that indicate to the user the likely outcome of physical executions.
Solution: Provide an explicit preview of the outcome of
a physical execution for a system mode.
This pattern
only works well when there is only one execution that
can be performed and the issue is whether the user
should perform the execution or not, rather than what
execution they should perform.

Examples: Changing the mouse cursor according to the
effect of physical executions for the screen region that
the mouse is over; postage-stamp-sized pictures of
screen shots for viewer software.

Distinct Interaction
Two or more tasks are performed by a similar sequence
of physical actions, and confusion between
the tasks (e.g., by performing the terminating steps
of task B following the commencing steps of task A)
may result in a hazard. In addition, it is not possible to
predict hazards and remove the potentially hazardous
action at the point at which it might be erroneously
performed.

Problem: How can we reduce the likelihood that users
will confuse similar tasks?

 Reuse of graphical toolkit components enhances consistency and makes a user-interface
easier to learn;
 Reuse of such components also increases the consistency of the operations required to perform
tasks, so that distinct tasks may be accessed
by similar physical executions;

 Tasks that have similar execution sequences are likely to be confused by users;

 Users confuse tasks because of memory failures.
Solution: Distinct actions that can be confused, leading
to hazardous consequences, should be accessed
by distinct physical executions.
However, distinct
physical executions reduce reuse, making the system
harder to learn. Training and memory aids can help
overcome errors arising from users not remembering
the correct execution to perform a task.

Examples: The Hypodermic Syringe system uses alignment
of +/- buttons with the corresponding display
digit to reduce motor errors in which a wrong button
is pressed.

Resulting Context: Two or more controls are operated by
distinct physical interactions. The interactions required
to operate a control should be afforded by
the control (see Affordance). Separation and
Preview are alternative solutions.

Behaviour Constraint
It is possible to determine system states where
certain actions would lead to an error.

Problem: The most direct solution to improving assurance
through diminished user error is to prevent users from
making errors.

How can we prevent users from requesting undesirable
actions?

 Even if we provide appropriate information and guide users by providing Affordances,
users will inevitably make some slips and mistakes;

 The system has a large amount of data available concerning the state of objects;
 In some circumstances, designers will be aware that certain tasks will be undesirable in certain
states;
 It is preferable to trap user errors before they impact on the system, rather than detecting incorrect
system states and then attempting to rectify
the situation.

Solution: For any given system state, anticipate erroneous
actions and disallow the user from performing
them.
The idea here is to prevent the action
from occurring in the first place, rather than dealing
with whatever the user does. The logic could be programmed
directly, e.g., "If the plane is in mid-air,
disable Landing Gear button". It could also be implemented
via some intelligent reasoning on the machine's
behalf, which would require the machine to
understand what tasks do and where they are appropriate.
This is a very fragile approach which should be used
with extreme caution. It assumes that the condition is
measured accurately and that ignoring the constraints
always implies a worse hazard than following them.
It is therefore risky in unforeseen circumstances. This
can be somewhat alleviated by a mechanism allowing
an appropriate authority to override the constraint.

A further disadvantage is that this pattern leads to
a less flexible user-interface (i.e., the interface is
moded) and the user may become frustrated with the
structured form of interaction if they do not properly
understand the tasks they are performing. In a safety-critical
domain, this should be a less severe problem
than normal, because the user should be highly trained.

Behaviour constraints are usually implemented as an
additional system mode, e.g., by "greying out" menu
items or buttons where appropriate. As such, behaviour
constraints require close automatic monitoring
of system state and therefore a frequently used
partner pattern is Automation.
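In code, a behaviour constraint often reduces to a predicate that enables or disables ("greys out") a control, plus an override for an appropriate authority. The following sketches the landing-gear example from the text; the function and parameter names are invented for illustration.

```python
def landing_gear_button_enabled(in_mid_air, authority_override=False):
    """Grey out the Landing Gear button while the plane is in mid-air,
    unless an appropriate authority overrides the constraint
    (alleviating the fragility discussed above)."""
    return (not in_mid_air) or authority_override


assert landing_gear_button_enabled(in_mid_air=False) is True
assert landing_gear_button_enabled(in_mid_air=True) is False
assert landing_gear_button_enabled(in_mid_air=True,
                                   authority_override=True) is True
```

The override path matters precisely because the predicate may be wrong in unforeseen circumstances; without it, the constraint itself becomes a hazard.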

Examples: Kletz [14] gives everyday examples, such as
components that will only fit together in one way,
making incorrect assembly impossible. Norman [22]
also gives such everyday examples, such as fire stairs
that have a rail blocking access to the basement stairs;
the error of proceeding all the way to the basement is
prevented. In the Druide system, users are only given
access to menus to change the system state, preventing
errors associated with entering invalid data. In the
Railway Control system, users cannot direct a train to
move to an occupied section of track.

Resulting Context: Some user errors are no longer possible.
The system will usually be more heavily moded
than prior to applying the pattern. The system may
inform the user that a requested operation is unavailable
(i.e., applying the Warning pattern).

A.3 Information Patterns
Abstract Mapping
The system enables users to interact with complex
real-world objects.

Problem: Modern computer-based systems may have extremely
complex internal state. To operate the system,
the user needs to be able to understand what the
state of the system is. How can we reveal to the user
what they need to know about the state of the system
without swamping them with information?

 Humans have a limited capacity for processing information.

 For many safety-critical systems, the amount of information that is relevant to the user's operation
of the system exceeds the processing capacity
of the user.

 Many decisions that the user must make in ensuring safe operation of the system depend on
the overall trend of the system state, rather than
precise details.

 Many systems allow some margin for error so that precise representation of the system state is
not necessary for safe operation.

Solution: Provide an abstract representation of complex
information so that it is comprehensible by
the user.
Abstract representations can be used when
it is not feasible to directly represent the real-world
objects. One benefit of computers is that we can
construct summaries (indirect mappings) which aid
human decision-making. More abstract representations
can also be used in situations where we are
prevented from showing real-world objects. For instance,
bandwidth might be constrained. Displaying
unnecessary parameters increases system load, but
redundant parameters can be used to automatically
check the consistency of information, which can then
be displayed as a single value [19].
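The last point, cross-checking redundant parameters and displaying a single abstract value, can be sketched as follows. The speed cross-check, its tolerance, and all names are hypothetical, invented only to illustrate the mechanism.

```python
def displayed_speed(reported_speed, distance_travelled, interval,
                    tolerance=5.0):
    """Cross-check a reported speed against one derived from position,
    then present a single abstract value, or None if the redundant
    parameters disagree (the caller might then raise a Warning)."""
    derived_speed = distance_travelled / interval
    if abs(derived_speed - reported_speed) > tolerance:
        return None  # inconsistent sources: do not display a single value
    return (reported_speed + derived_speed) / 2


print(displayed_speed(100.0, 990.0, 10.0))  # 99.5
print(displayed_speed(100.0, 500.0, 10.0))  # None
```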

Examples: The Druide interface provides the plane's
speed, an abstract mapping. This is a variable which
could be derived from the reality mapping, but it is
better to simply show it to the user, because a user's
estimate would lack accuracy, consume time, and distract
them from other tasks.

Resulting Context: Some information will not be suited
to abstract representation. In that case, Reality
Mapping should be considered.

Redundant Information
The user is required to perceive and interpret information
and act on that information in maintaining
the safe function of the system. The information is
complex, or could be misperceived or misinterpreted.

Problem: How can we enhance the likelihood that the
user correctly perceives and interprets safety-critical information?

Forces:
 Safety-critical systems may be complex, with large amounts of information that needs to be
available to the user;
 The amount of display space available is limited;

 Providing too much information will swamp the user; the information that is displayed needs to
be chosen carefully.
Solution: Provide more than one way of viewing the information,
so that the likelihood that it is misunderstood
or ignored is lessened.
Redundancy in
user-interface designs can be viewed as an extension
of the usual safety-critical viewpoints advocating redundant
hardware and "n-version programming" (but
note that such approaches are often unsuccessful in
software design). The field of communication studies
has looked at situations such as combination of audio
and visual cues in television (e.g., both auditory and
visual warnings).

Examples: The use of the international phonetic alphabet
("Alfa, Bravo, Charlie ..." instead of "A, B, C ...")
[14] is a form of redundant input that has been in
use for many years to improve human-human communication.
Similarly, the purpose of levers can
be more readily comprehended if there are adequate
cues beyond just a label, e.g., shape, position, colour
(see [14]).

Resulting Context: Redundant Information will
usually take the form of an Abstract Mapping.

Reality Mapping
The system enables users to interact with real-world objects.
Problem: Digital systems usually create a degree of separation
between users and physical objects. In some
aircraft, for example, the visual feedback is entirely
virtual. Benefits notwithstanding, there is a risk that
vital information may be unavailable.

How can the user check that objects under their
own control, or under the system's control, are
aligned with their expectations?

 Information concerning the current state of a safety-critical system needs to be displayed
clearly and succinctly, so that the user can
quickly ascertain whether a hazard has arisen
or could arise and can take appropriate action.
While low-level idioms are useful for improving
display clarity, a holistic approach to display
design is necessary to ensure understandability
of the information that is being presented.

 When parts of the system have been automated, there is a temptation to assume that the user
does not need to see the state of certain objects.
However, failures do arise, and human intervention
is often necessary [21]. Information about
the environment is necessary to help humans
monitor the system, and, if necessary, intervene.

Solution: Provide a close mapping to reality where possible
and supplement it with abstract representations.
Users should not have to perform complicated
mental operations to assess the margins of system operation
[19]. To help the user build an accurate model
of the domain, it is important to maintain a close mapping
between physical objects and their virtual counterparts.
A close mapping to reality will help the user
observe and diagnose problems effectively.

Examples: Druide provides an accurate,
directly manipulable display of the airspace and the
aircraft in it. The Oil Pressure system provides analogue
displays of oil pressure and uses proximity of
components to convey relationships.

Design Issues: When mapping to reality, the appropriate
level of detail will be guided by knowledge of the
user's characteristics and tasks. Object-orientation
provides a good way to reduce semantic distance (distance
between the human's model and the real world)
because each object in the display for the system represents
a corresponding object in the task domain.
Analogue displays also reduce semantic distance because
they enable direct comparison of magnitudes
(e.g., [19]). Similarly, articulatory distance should be minimised
so that physical actions mimic their meanings.

In situations when the representation is complex, abstract
representations can be used to extract from reality
any information which is likely to help users in
their task.

Resulting Context: The result is a mapping from reality
into suitable display objects. Since the display will
not be optimal for all cases, the Interrogation
pattern can be used to help the user refine the information
provided. An Abstract Mapping may be
used when detailed representation of state is not necessary
for safety.
Trend
Users need to formulate and follow task plans
that involve attention to the change in state of the system,
e.g., where an action must occur if the state is in
a certain configuration, or when a state change occurs.

Problem: How can the user be notified that the state has
changed (i.e., the trend of the system is towards a
hazardous state)?

 Many user errors stem from memory limitations. Users may not notice that the state of the
system has changed and that they should take action;
 Memory-based errors may occur even when the user has previously formulated a plan to perform
a particular action when the state of the
system reaches a particular configuration. For
example, in air-traffic control, the user may
need to change the altitude of an aircraft before
it reaches a particular waypoint but may not be
able to do so immediately because of other more
pressing concerns; a hazard arises when the user
fails to return to the original aircraft and change
its altitude, after resolving the immediate concern.

Solution: Allow the user to compare and contrast the
current state with previous states.
This will help users assess the current situation and
formulate plans for the future.

Examples: The Druide system displays aircraft as a trail
of locations, with the most prominent location displayed
being the immediate location (see Figure 5).
The Oil Pressure system displays oil pressures in the
left and right ailerons and a shaded region that indicates
the oil pressure in the previous 5-minute interval
(see Figure 2).

Design Issues: One common technique is to overlay previous
states on the same display, showing the previous
state in a less visually striking manner (e.g., muted
colours). This is particularly effective for motion, as
a trail of past motion can be formed. If this technique
causes too much clutter, a replica of the past state can
be placed alongside the current state. This, however,
occupies valuable screen real estate, and may hamper
direct comparison.
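The trail technique can be sketched as a small state-history buffer; the class and parameter names below are our own illustration, not taken from Druide or any cited system:

```python
from collections import deque

class TrailDisplay:
    """Keeps a bounded history of past positions so the current state can be
    compared with previous states (hypothetical sketch of a trend display)."""

    def __init__(self, trail_length=5):
        # deque with maxlen discards the oldest position automatically
        self.history = deque(maxlen=trail_length)

    def update(self, position):
        self.history.append(position)

    def render(self):
        """Return (position, opacity) pairs: the most recent state is fully
        opaque, older states are progressively more muted."""
        n = len(self.history)
        return [(pos, (i + 1) / n) for i, pos in enumerate(self.history)]

trail = TrailDisplay(trail_length=3)
for p in [(0, 0), (1, 1), (2, 3), (4, 5)]:
    trail.update(p)
# The oldest retained state is most muted; the newest is fully opaque.
print(trail.render())
```

The bounded buffer also addresses the clutter concern above: only the last few states are drawn, so the trail cannot grow without limit.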

Resulting Context: State changes are explicitly displayed
to the user. Display of the state change is a Reality
Mapping. The change in state may also be brought
to the user's attention via a Warning, if it has a readily
identifiable safety implication.

Interrogation

 The system is complex, with much information of potential use to the user in performing their
work, and not all attributes can be displayed at
one time;

 Some of the information is more salient, or more often necessary, than other components of
the information;

 Some of the information is more readily displayed than other components of the information.

Problem: Most safety-critical interactive systems display
to the user a representation of the system state. For
many such systems, the state of the system is complex
and cannot be represented in a comprehensible way.
For such systems, displaying the entire system state
at one time may obscure the most important components
of the state and represent a potential source of
user error.

How can the user have access to the entire state of
the system without being overwhelmed by information?

 Users have limited attentional capacity. For example, Halford [11] has shown that an upper
limit of about 4 items can be processed in parallel;

 Display devices have limited resolution and capacity and can quickly become cluttered with
information. Users have difficulty locating specific
features on cluttered screens, and this is particularly
problematic when urgent information
or control is desired;

 Designers cannot realistically envisage every possible information requirement.

Solution: Provide ways for the user to request additional
information.
This way, not all information
needs to be shown at once.

If the user is monitoring automatic systems, provide
independent information on the state of the system in
case instrumentation fails; instrumentation meant to
help users deal with a malfunction should not be able
to be disabled by the malfunction itself.
The capability to interrogate should be obvious or
intuitive; menus at the top of the screen or window
are preferable to pop-up menus or mouse actions performed
on the object of interest. The results of the
interrogation should be able to be saved and reviewed
at a later time.
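As a rough sketch of this solution (the attribute names and logging scheme are hypothetical, not Druide's actual design), a display object can expose only the salient attributes by default and log each interrogation so its result can be reviewed later:

```python
class AircraftDisplay:
    """Shows only the most salient attributes by default; further detail is
    available on request, and each interrogation is logged for later review."""

    SALIENT = ("call_sign", "altitude", "speed", "heading")

    def __init__(self, **attributes):
        self._attributes = attributes
        self.interrogation_log = []  # saved results, reviewable later

    def summary(self):
        # The default display: salient attributes only
        return {k: v for k, v in self._attributes.items() if k in self.SALIENT}

    def interrogate(self, attribute):
        # On-demand detail; the query and its answer are recorded
        value = self._attributes.get(attribute)
        self.interrogation_log.append((attribute, value))
        return value

ac = AircraftDisplay(call_sign="QF1", altitude=35000, speed=480,
                     heading=270, flight_plan=["SYD", "LAX"])
print(ac.summary())                    # flight_plan is hidden by default
print(ac.interrogate("flight_plan"))   # shown only when requested
```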

Examples: The Druide system displays aircraft locations,
relative speed, direction of travel, position trail, radio
frequency, call-sign, altitude and beacon. Aircraft
have many other attributes, such as their flight plan,
which are not displayed at all times because the result
would produce unmanageable display complexity.
Instead, controllers may query aircraft for such
additional information when it is needed.

Resulting Context: The mapping of system state to the
display is now a mapping of only part of the state; the
remainder of the state is hidden. The result is more
efficient use of the display.

Memory Aid
The task being performed by a user enables arbitrary
interleaving of actions, with the possibility that
an action will be forgotten.

Problem: Some safety-critical systems, such as air-traffic
control, require the user to perform several tasks concurrently,
with interleaving of task actions. In such
systems, the potential for actions to be forgotten, leading
to hazards, is much greater than in non-interleaved
systems.

How can users reliably perform interleaved tasks?

 The user must remember to finish all the tasks, including interrupted tasks;

 The user must not inadvertently perform a task step more often than is required (for some systems
and steps, a hazard may result).
Solution: Provide ways to record the completion status
of steps.
This will help the user to recommence later
on without omitting or repeating tasks. Such memory
aids may be either components of the computer system
itself, or adjuncts to the computer system. Memory
aids may be proactive, cuing the user to perform
an action at a particular point in time, or when the
system reaches a particular state; such memory aids
may be warnings.
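A minimal sketch of such a memory aid, combining passive completion tracking with a proactive cue (the step names and trigger condition are invented for illustration):

```python
class MemoryAid:
    """Records completion status of interleavable task steps and proactively
    cues the user when a trigger condition on the system state is met."""

    def __init__(self, steps):
        self.pending = dict.fromkeys(steps, False)  # step -> completed?
        self.cues = []  # (condition, message) pairs set by user or system

    def complete(self, step):
        self.pending[step] = True

    def outstanding(self):
        """Passive aid: the steps not yet done, visible at all times."""
        return [s for s, done in self.pending.items() if not done]

    def add_cue(self, condition, message):
        self.cues.append((condition, message))

    def check(self, state):
        """Proactive aid: cue messages whose condition holds right now."""
        return [msg for cond, msg in self.cues if cond(state)]

aid = MemoryAid(["issue climb", "confirm readback", "change altitude"])
aid.complete("issue climb")
aid.add_cue(lambda s: s["distance_to_waypoint"] < 10,
            "Change altitude before waypoint")
print(aid.outstanding())
print(aid.check({"distance_to_waypoint": 8}))
```

Unlike the paper strips in the example below, a computerised aid of this kind can cue the user actively when the system reaches a particular state.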

Examples: The Druide system uses paper strips to record
flight details and the instructions that have been given
to aircraft; such paper strips are commonly used in
air-traffic control systems. The paper strips provide
context for the controllers: they enable controllers to
determine whether an aircraft is currently behaving
in accordance with the instructions that have previously
been issued, and they enable the last action performed
for a particular aircraft to be recorded. However,
because paper strips are an adjunct to the computerised
ATC system, they cannot actively cue the
user to perform an action at a particular point in time,
or when the system reaches a particular state. Fields
and Wright [9] describe checklists as a simple memory
aid to ensure that all the steps in a safety-critical
procedure are completed (e.g., piloting an aircraft).

Design Issues: Proactive memory aids may be set by the
user or by the system. However, system-initiated
warnings require that the system be aware that a user
task has not been completed. Memory aids should
cue the user with an urgency corresponding to that
set by the user or, in the case of system-initiated warnings,
with an urgency corresponding to risk. Passive
memory aids should be visible to the user at all times,
e.g., tags associated with an object in the display.

Resulting Context: The user is provided with active and
passive memory aids. Passive memory aids may require
Reality Mapping. Active memory aids
may use Warning to notify the user that a condition
(user or system defined) has been reached (and that
the user should take appropriate action). Trend displays
are a form of passive memory aid (see the Trend pattern).

A.4 Machine Control Patterns
Interlock
Risk is sufficiently high that measures to block
the occurrence of error do not give sufficient assurance,
and the bounds of acceptable system outputs can
be defined.

Problem: How can we be sure that errors cannot lead
to high-risk hazards, even if they occur?

 Risk is sufficiently high that measures to diminish user errors are not necessarily sufficient assurance;

 Behavioural Constraints may not prevent all incorrect commands because systems
are too complex to predict all possible states and
events;
 Measures to diminish user errors should not necessarily be regarded as sufficient evidence of
system safety, and additional evidence may be
required if risk is sufficiently high.

Solution: Anticipate errors and place interlocks in the
system to detect and block the hazards that would
otherwise arise.
Interlocks can be embodied in hardware
or software, preferably both. However, there is no
point creating an interlock if the system failure causes
the interlock itself to work incorrectly.
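A software interlock of this kind can be sketched as a bounds check that blocks out-of-range commands regardless of how the erroneous value arose upstream; the dose limit here is purely illustrative, not taken from any real device:

```python
MAX_SAFE_DOSE = 10.0  # illustrative bound on acceptable system output

def deliver_dose(requested, interlock=True):
    """Software interlock sketch: commands outside the bounds of acceptable
    outputs are blocked, whether they came from a slip or a software fault."""
    if interlock and not (0 < requested <= MAX_SAFE_DOSE):
        raise ValueError(f"interlock: dose {requested} outside safe bounds")
    return requested  # would drive the actuator in a real system

print(deliver_dose(5.0))
try:
    deliver_dose(50.0)  # e.g., an order-of-magnitude entry error
except ValueError as e:
    print(e)
```

Note how the design issue below shows up directly in the sketch: calling the function with `interlock=False` (removal of the interlock) reopens the hazardous path, which is why interlocks should be one layer of defence, not the only one.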

Examples: Many modern motor cars come equipped with
Anti-Lock Braking Systems (ABS). Such systems are
interlocks, screening aberrant driver behaviour. If
the driver presses too hard on the brake pedal, the
ABS will override the driver's actions and maintain
brake pressure at an optimum rate. The Therac-20
and Therac-25 machines are medical equipment designed
to administer radiation doses to patients [15].
The Therac-20 machine has a hardware interlock that
prevents extreme doses of radiation but the Therac-25
machine does not. The Therac-25 machine was involved
in several well-publicised accidents that arose
because of an error in the software, which failed to
correctly update the dose after the user had corrected
it on screen. Detection of errors assumes that the hazard
situation can be formulated in a straightforward
and error-free way. As an example in which this was
not so, consider the Warsaw accident, in which A320
braking was prevented because the braking logic required
both wheels to be on the ground (e.g., see [15]). These
issues are considered further in Automation.

Design Issues: If a system is designed using only interlocks
to prevent hazards arising from user error,
removal of the interlocks opens the system to
the possibility of hazardous operation. Interlocks
therefore should always be used with Behavioural
Constraints and Intended Action to provide
'defence in depth'.

Automation
Consider this pattern if performing a function involves
danger to a user, if performing the function requires
exceptional skill (e.g., when the response time is far
shorter than a human can normally achieve), or if
performing the function requires tedious or repetitive
work.

Problem: Many safety-critical processes, such as nuclear
power generation or aircraft control, also require manipulation
of a large number of parameters to keep the
system within safe margins of operation. However,
humans are not very good at monitoring and controlling
a large number of parameters in real time. Machines
are good at such monitoring and control but
typically cannot detect and correct all possible abnormal
component failures.

How can system parameters be reliably maintained
within safety margins even in the presence
of component failure?

 Part of the operation of the system involves maintaining parameters within defined safety
margins;

 Users become fatigued when monitoring parameters to ensure they stay within safety margins;

 Users need to remain informed of what the system is doing so they can intervene in the event
of component failure.
Bainbridge [2] maintains that it is not possible for
even a highly motivated user to maintain attention toward
a source of information on which little happens
for more than half an hour. Hence it is humanly impossible
to carry out the basic monitoring function
needed to detect unlikely abnormalities. In addition,
the user will rarely be able to check in real time the
decisions made by the computer, instead relying on a
meta-level analysis of the 'acceptability' of the computer's
actions. Such monitoring for abnormalities
must therefore be done by the system itself and abnormalities
brought to the user's attention via alarms.

Solution: Automate tasks which are either too difficult
or too tedious for the user to perform.
Mill suggests
that a function should be automated if [19]:

 performing the function involves danger to a user;

 performing the function requires exceptional skill, e.g., when the response time is far
shorter than a human can normally achieve;

 performing the function requires tedious or repetitive work;

and that a function should not be fully automated (i.e., a human
should be included in the control loop with responsibility
for decisions) if a decision must be made that:

 cannot be reduced to uncomplicated algorithms;

 involves fuzzy logic or qualitative evaluation;

 requires shape or pattern recognition.
Appropriate design of automatic systems should assume
the existence of error; it should continually
provide feedback, continually interact with
users in an effective manner, and allow for
the worst situations possible [21].
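The feedback and intervention principles can be illustrated with a toy control-loop step; the proportional scheme, gain and margin values are our own simplification for illustration, not taken from [21]:

```python
def control_step(value, setpoint, margin, gain=0.5):
    """One iteration of an automated control loop that keeps a parameter near
    a setpoint, emits continuous feedback, and flags when the automation is
    losing control so user intervention can be sought (illustrative only)."""
    correction = gain * (setpoint - value)      # simple proportional control
    new_value = value + correction
    feedback = f"value={new_value:.2f} setpoint={setpoint}"  # always reported
    needs_user = abs(new_value - setpoint) > margin          # seek intervention
    return new_value, feedback, needs_user

v = 20.0
for _ in range(5):
    v, feedback, needs_user = control_step(v, setpoint=10.0, margin=2.0)
print(feedback, "needs_user:", needs_user)
```

The key design point is that feedback is produced on every step, not only on failure, and the `needs_user` flag models the system actively seeking intervention rather than silently struggling.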

Examples: Typical examples of automation are cruise control
in cars and autopilots. Autopilots reduce pilot
workload; without autopilots, pilot fatigue would be a
very significant hazard in flight. Further, some highly
sophisticated aircraft (e.g., military aircraft) could not
be flown without automatic control of some aircraft
functions. However, experience from case studies
of crashes indicates that there is not always enough
feedback and that this can lead to accidents [21].

Design Issues: The automated system needs to provide
continuous feedback on its behaviour in a non-obtrusive
manner. Careful consideration is therefore
required of which aspects of the controlled process are
most salient to safety, to ensure the prominence of
their display. If the system has been given sufficient
intelligence to determine that it is having difficulty
controlling the process, then actively seeking user
intervention is appropriate.

Resulting Context: Automation of a system will usually
require that the system also notify the user of failures
(see Warning).

Shutdown
Shutting down the system leads to a fail-safe state
and the service provided by the system can be halted
for at least a temporary period, and:

 failure can lead to a serious accident within a very short time;

 shutdown is simple and low cost;
 reliable automatic response is possible.

Problem: How can safety be assured if a serious hazard
has occurred?

 In the event of component failure, a safety-critical system may continue to function, with
graceful degradation of service; however, such
a malfunctioning system is inherently less safe
than a fully-functional system.

 Systems such as factory plant can often be shut down in a safe state;

 Systems where failure of service is a hazard usually cannot be shut down, e.g., aircraft, Air-Traffic-Control
services, missiles.
Solution: When shutdown is simple and inexpensive,
and leads to a safe, low-risk state, the straightforward
solution is to shut down automatically.

When a system cannot be shut down automatically
because of cost or complexity, then the system must
instead be stabilised, either manually or automatically
[2]. If a failure can lead to a serious accident
within a very short time without shutdown, then reliable
automatic response is necessary; if this is not
possible, then the system should not be built if risk is
too high.

Examples: McDermid and Kelly [18] give an example of
an industrial press that automatically moves to a safe
failure state (i.e., press closed) when the sensors for
press movement are inconsistent (and therefore
mitigations against user error are not operative, so that
a hazard exists).

Design Issues: Shutdown should be annunciated by a
warning to the user that the system is no longer operating.

Resulting Context: If a system cannot simply be shut down,
then Automation might be used to stabilise
the system and Warning is indicated to bring the
failure to the user's attention.

Warning
Identifiable safety margins exist so that likely
hazards can be determined automatically and warnings
provided.
Problem: Computers are better than humans at monitoring
the environment to check if a certain set of conditions
is true. This is because humans become fatigued
performing a tedious task repetitively. Furthermore,
their attention can be distracted by other tasks, which
might lead to them missing a critical event or, upon
returning, forgetting what they were monitoring.

How can we be confident the user will notice new
system conditions and take appropriate action?

 Computers are good at monitoring changes in state;

 Computers are good at maintaining a steady state in the presence of minor external aberrations
that would otherwise alter system state;
 Computers are not good at determining the implications of steady state changes and appropriate
hazard recovery mechanisms;

 Although users should still be alert for failures, their workload can be lightened, and the overall
hazard response time decreased, if searching
for failures is at least partially automated. This
requires a mechanism to inform users when a
failure has occurred;

 Conditions may be user defined.
Solution: Provide warning devices that are triggered
when identified safety-critical margins are approached.
The warnings provided by the system
should be brief and simple. Spurious signals should
be minimised and the number of alarms reduced
to a minimum [19]. Users
should have access to straightforward checks to distinguish
hazards from faulty instruments. Safety-critical
alarms should be clearly distinguishable from routine
alarms. The form of the alarm should indicate
the degree of urgency and which condition is responsible.
The user should be guided to the disturbed part
of the system and aided in the location of disturbed
parameters in the affected system area [3]. In addition
to warning the user if identifiable hazards may
occur, the system also should inform the user when
a significant change has taken place in the system
state: see the Trend pattern. The absence of an
alarm should not be actively presented to users [19].
Warnings should be provided well in advance of the
point at which a serious accident is likely. Warnings
should be regarded as supplementary information; a
system should never be designed to operate by answering
alarms [19].
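A minimal sketch of margin-based warning generation with graded urgency; the parameter names, thresholds, and the 10% "approach" band are assumptions for illustration, not taken from any cited system:

```python
def raise_warnings(parameters, margins):
    """Trigger brief, urgency-graded warnings as safety margins are
    approached or violated (illustrative thresholds)."""
    warnings = []
    for name, value in parameters.items():
        low = margins[name]
        if value < low:
            warnings.append((name, "ALERT"))    # margin already violated
        elif value < low * 1.1:
            warnings.append((name, "CAUTION"))  # margin being approached
    return warnings

margins = {"oil_pressure": 30.0, "fuel": 100.0}
print(raise_warnings({"oil_pressure": 25.0, "fuel": 105.0}, margins))
```

The two urgency levels reflect the guidance above: warn well before a serious accident is likely (CAUTION) and distinguish safety-critical alarms (ALERT) from routine ones; silence, rather than an "all clear" signal, represents the absence of an alarm.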

Examples: The Oil Pressure system raises an alarm when
the oil pressure in the current aileron falls below a
threshold. The HALO system (discussed in [3]) uses
logical expressions to generate a reduced number of
alarms from a total array of alarm signals. HALO
alarms are formed on the basis of a combination
of signals, in contrast to conventional systems where
alarms arise from violation of a single parameter. The
Druide system alerts the user when a separation conflict
develops between two or more aircraft.

Design Issues: For serious failures constituting actual hazards,
warnings may be moded, requiring the user to
acknowledge the warning before proceeding further.
Such moded warnings are called alerts. Alerts will
often be auditory because hearing is a primary sense
and sounds are detected automatically; auditory alerts are less
prone to being ignored due to "tunnel vision" [26].

Patterson [26] has shown that only four to six different
auditory alerts can be distinguished reliably and
that careful attention must be paid to ensuring sounds
do not conflict, making combinations of alerts difficult
to distinguish. Auditory alerts should be loud
and urgent initially, softer for an intervening period
allowing user action and cancelling of the alert, then
loud again if no action has occurred after the intervening
period.

Resulting Context: Hazards are identified and logic installed
in the system to warn users when hazard margins
are approached. Warnings should be replaced
by Automation where possible. Warnings may be
structured hierarchically, so that only primary failures
that are responsible for a system or subsystem
failure are displayed initially, with secondary failures
appearing only at the user's request (i.e., applying the
Interrogation pattern) [19]. If a warning is triggered,
the user should have access to mechanisms for
recovery (Recover) where possible.

B Case Studies
In this Appendix we give summaries of the case studies that
are used as examples for the patterns described in section 4
and Appendix A.

B.1 Oil Pressure System
Fields and Wright [9] describe an aircraft hydraulic monitoring
system. The system consists of two dials which show
the current hydraulic fluid level in each of two reservoirs
and two switches (one for each reservoir) which indicate
which of two control surfaces the reservoir currently supplies.
A reservoir can supply both rudder and aileron but a
rudder or aileron can be supplied by only one reservoir. The
design of the system is shown in Figure 2.

Figure 2. UI design for the hydraulic system (adapted from Fields and Wright [9]).
When confronted with a loss of fluid from either reservoir,
the pilot of the aircraft must select a setting of the switches
that minimises fluid loss and simultaneously determine the
parts of the system that are leaking. The structure of the
system is represented in Figure 3.

Figure 3. The physical hydraulics system environment (adapted from Fields and Wright [9]).

We represent the display informally as shown in Figure 2.
In the worst correctable case, there may be a leak in a servodyne
of one colour and a reservoir. For example, both
the blue rudder servodyne and also the blue reservoir may
be leaking. To correct the leaks, the pilot must switch both
control surfaces to the green reservoir.

B.2 Hypodermic Syringe
Dix [7, p. 6] describes an automatic (computerised) syringe.
The primary task engaged in by a user is to enter a dose into
the syringe before applying the device to a patient. The
original user-interface for the system has a calculator-style
interface that enables doses to be rapidly entered (see Figure
4(a)).

Figure 4. Syringe design: (a) original; (b) revised (adapted from Dix [7, p. 6]).

However, because the syringe could be injecting
pharmaceuticals that are lethal outside a safe range, the
original design does not sufficiently consider risk. When
risk is taken account of, a better design is given in Figure
4(b).

In the modified design, the user cannot enter doses as
quickly and more effort is required to do so (so usability
is reduced), but the system is safer because a single extra
key press is less likely to produce an order-of-magnitude
dose error. Additionally, the modified system provides error
tolerance by allowing the dose to be changed (which
also enhances usability).
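The difference between the two designs can be shown in miniature: with a keypad, one extra key press multiplies the dose by ten, whereas with per-digit up/down controls a single slip changes one digit by one. Both functions below are our own sketch of the idea, not Dix's implementation:

```python
def keypad_entry(keys):
    """Original design: digits appended directly, so one extra key press
    can change the dose by an order of magnitude."""
    dose = 0
    for k in keys:
        dose = dose * 10 + k
    return dose

def up_down_entry(digits, presses):
    """Revised design sketch: each digit position has its own up/down
    control, so a single extra press changes one digit by one."""
    digits = list(digits)
    for position, delta in presses:
        digits[position] = (digits[position] + delta) % 10
    return int("".join(map(str, digits)))

print(keypad_entry([5]))                # intended dose: 5
print(keypad_entry([5, 0]))             # one slip: 50, ten times the dose
print(up_down_entry([0, 5], [(1, 1)]))  # one slip: 6 instead of 5
```

The sketch makes the safety argument concrete: the worst single-press error in the revised design is bounded, while in the original it is multiplicative.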

B.3 Druide
The Druide system is a prototype Air-Traffic Control (ATC)
system under development by the French aviation authority,
CENA (Centre d'Études de la Navigation Aérienne).
The prototype has formed the basis for a CHI (Computer-Human
Interaction Conference) workshop on designing
user-interfaces for safety-critical systems [25]. Our analysis
of Druide in this paper is based on the descriptions of
Druide given in [25]. A typical instantiation of the interface
for Druide is shown in Figure 5.

The Druide ATC system is based on a data-link channel that
is accessed from a menu-driven graphical user-interface, a
radar screen annotated with flight information for each aircraft
(call-sign, speed, heading and next beacon on its route)
and paper strips that describe the flight plan for each aircraft.
The paper strips are originally produced from flight
plans submitted by the airlines. A controller is responsible
for a sector of airspace, including maintaining separation
between aircraft. When changes are made to the flight plan for
an aircraft in a sector, the changes are recorded on the corresponding
paper strip. When an aircraft enters a sector,
its pilot must communicate with the controller. The controller
manages that aircraft while it is in their sector before
"shooting" the aircraft to a new sector (notifying the next
controller of the handover of control). Managing an aircraft
involves communicating via the radar view and manipulating
the paper strips. The controller may request that an aircraft
change altitude, beacon, frequency, heading or speed.

In Figure 5, the user is shown communicating a new altitude
(CFL or Current Flight Level) to an aircraft. The selected
aircraft with which the user is communicating is displayed
in a distinguishable colour; selecting an aircraft produces
a pop-up menu. When the user clicks on the "CFL" entry
in the menu, a further menu of altitude possibilities appears.
The user selects from this menu and then clicks the "SEND"
button (alternatively, if a mistake has been made, the user may click "ABORT").

B.4 Railway Control
The Railway Control system is an Australian system, currently
under development, for controlling the movement of




Figure 5. The Druide radar display (adapted from [25]); because the picture is greyscale in this paper, we have circled the selected aircraft for ease of identification.

trains on inland freight and passenger lines, preventing collisions
between trains and maximising usage. The system
is similar to an ATC system (with rail-traffic controllers and
train drivers in lieu of air-traffic controllers and pilots) but
is concerned with train movement, for which traffic movement
and hence collision risk is more restricted. Figure 6
shows the controller's screen for the system.

The controller and driver are in communication via a mobile
phone or radio. For example, controllers may request
drivers to move their trains from one location to another.
When a controller issues such a command to a driver, the
sequence is as follows:

1. controller -(command)-> driver
2. controller <-(confirm)- driver
3. controller -(reissue)-> driver

The first step sends the command in coded form. The second
step sends it back in coded form using a simple transformation
of the original message. The third step sends auxiliary
information that is compared against the command
sent in the first step as a double-check of correctness. In
each case, the recipient of a message types the code that
is received into an entry field on their display. If all three
steps are passed, the driver and controller will be presented
with a text version of the command, which is checked by the
driver reading the text to the controller and the controller
confirming that the command received is correct. Finally,
if agreement is reached that the message has been correctly
received, the driver commences execution of the command
(e.g., by moving their train to a new location).
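The three-step exchange can be sketched as follows; the coding and confirmation transformations here are invented stand-ins for the real (unspecified) scheme, and `channel` models a radio link that may corrupt messages:

```python
def encode(command):
    """Toy reversible coding of a command (illustrative only)."""
    return "".join(chr(ord(c) + 1) for c in command)

def confirm_transform(code):
    """The recipient echoes the code back under a simple transformation."""
    return code[::-1]

def three_step_exchange(command, channel=lambda s: s):
    """Sketch of the command / confirm / reissue double-check. Each party
    re-derives what it expects; any mismatch aborts the command."""
    sent = encode(command)
    received = channel(sent)                     # 1. controller -> driver
    echo = channel(confirm_transform(received))  # 2. driver -> controller
    if echo != confirm_transform(sent):
        return False                             # controller detects mismatch
    reissue = channel(sent)                      # 3. auxiliary reissue
    return reissue == received                   # driver double-checks step 1

print(three_step_exchange("MOVE TO WARREAH"))                 # uncorrupted link
print(three_step_exchange("MOVE", channel=lambda s: s[:-1]))  # lossy link
```

The point of the redundancy is that a corruption on any single leg produces a mismatch at one of the two checks, so the command is never executed on the strength of one transmission alone.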
Commands to move a train to a new location are formulated
by the train controller double-clicking on the train that is
the subject of the command and then on the location that
the train is to move to (see Figure 6). In Figure 6, the train
is at Torrens Creek station and has been cleared through to
just after Warreah station. Once a train has been instructed
to move to a new location, all the track that the train must
occupy in the course of executing that command becomes
unavailable for use by any other trains. As the train moves
along the track, the driver of the train contacts the controller
to release sections of the track, so that other trains
may move onto that track.


References

[1] C. Alexander, S. Ishikawa, M. Silverstein, M. Jacobson, I. Fiksdahl-King, and S. Angel. A Pattern Language. Oxford University Press, New York, 1977.
[2] L. Bainbridge. New Technology and Human Error, chapter 24, pages 271–283. John Wiley and Sons Ltd., 1987.
[3] L. Bainbridge and S. A. R. Quintanilla, editors. Developing Skills with Information Technology. John Wiley and Sons Ltd., 1989.
[4] E. Bayle, R. Bellamy, G. Casaday, T. Erickson, S. Fincher, B. Grinter, B. Gross, D. Lehder, H. Marmolin, B. Moore, C. Potts, G. Skousen, and J. Thomas. Putting it all together: Towards a pattern language for interaction design: A CHI 97 workshop. SIGCHI Bulletin, 30(1):17–23, Jan. 1998.
[5] J. O. Coplien. A generative development-process pattern language. In J. O. Coplien and D. C. Schmidt, editors, Pattern Languages of Program Design, pages 183–237. Addison-Wesley, Reading, MA, 1995.
[6] Microsoft Corporation. The Windows Interface Guidelines for Software Design. Microsoft Press, Redmond, WA, 1995.
[7] A. Dix, J. Finlay, G. Abowd, and R. Beale. Human-Computer Interaction. Prentice Hall, 1998.
[8] T. Erickson. Interaction design patterns. http://www.pliant.org/personal/Tom_Erickson/InteractionPatterns.html.
[9] R. Fields and P. Wright. Safety and human error in activity systems: A position. CHI'98 Workshop (5) on Designing User Interfaces for Safety Critical Systems, 1998.
[10] E. Gamma, R. Helm, R. Johnson, and J. Vlissides. Design Patterns: Elements of Reusable Object-Oriented Software. Addison-Wesley, Reading, MA, 1995.
[11] G. S. Halford, W. H. Wilson, and S. Phillips. Processing capacity defined by relational complexity: Implications for comparative, developmental, and cognitive psychology. Behavioural and Brain Sciences, 21(6):803–831, 1998.
[12] A. Hussey. Patterns for safer human-computer interfaces. To appear in SAFECOMP'99, 1999.
[13] B. Kirwan. Human reliability assessment. In Evaluation of Human Work, chapter 28. Taylor and Francis, 1990.
[14] T. Kletz. Plant Design for Safety: A User-Friendly Approach. Hemisphere, 1991.
[15] N. Leveson. Final Report: Safety Analysis of Air Traffic Control Upgrades. www.cs.washington.edu/homes/leveson/CTAS/ctas.html.
[16] N. G. Leveson. Safeware: System Safety and Computers. Addison-Wesley, 1995.
[17] M. J. Mahemoff and L. J. Johnston. Principles for a usability-oriented pattern language. In P. Calder and B. Thomas, editors, OZCHI '98 Proceedings, pages 132–139. IEEE Computer Society, Los Alamitos, CA, 1998.
[18] J. McDermid and T. Kelly. Industrial Press: Safety Case. High Integrity Systems Engineering Group, University of York, 1996.
[19] R. C. Mill, editor. Human Factors in Process Operations. Institution of Chemical Engineers, 1992.
[20] J. Nielsen. Usability Engineering. AP Professional, New York, 1993.
[21] D. Norman. The 'problem' with automation: inappropriate feedback and interaction, not 'over-automation'. Philosophical Transactions of the Royal Society of London, Series B, 327(1241):585–593, 1990.
[22] D. A. Norman. The Design of Everyday Things. Doubleday.
[23] D. A. Norman. Things That Make Us Smart. Addison-Wesley, Reading, MA, 1993.
[24] Open Software Foundation. OSF/Motif Style Guide. Prentice Hall International, Englewood Cliffs, NJ.
[25] P. Palanque, F. Paterno, and P. Wright. CHI'98 Workshop (5) on Designing User Interfaces for Safety Critical Systems. ACM SIGCHI Conference on Human Factors in Computing Systems: "Making the Impossible Possible", 1998.
[26] R. D. Patterson. Auditory warning sounds in the work environment. Philosophical Transactions of the Royal Society of London, Series B, 327(1241):485–492, 1990.
[27] J. Reason. Human Error. Cambridge University Press, 1990.
[28] J. Reason. The contribution of latent human failures to the breakdown of complex systems. Philosophical Transactions of the Royal Society of London, Series B, 327(1241):475–484, 1990.
[29] F. Redmill and J. Rajan. Human Factors in Safety-Critical Systems. Butterworth Heinemann, 1997.
[30] J. Tidwell. Interaction patterns, 1998. http://jerry.cs.uiuc.edu/~plop/plop98/final_submissions.
