Chapter 2
Human Factors in System Design

Human Factors

Human Factors and Ergonomics has over 100 years of history in the UK and USA, from humble beginnings at the turn of the last century to the current day. A detailed account of the historical developments in both the USA and UK may be found in Meister (1999); this account even covers the pre-history of the discipline. To cut a long story short, the discipline emerged from the recognition that analysing the interaction between people and their working environment revealed how work could be designed to reduce errors, improve performance, improve quality of work and increase the work satisfaction of the workers themselves. Two figures stand out at the early beginnings of the discipline in the 1900s: Frank and Lillian Gilbreth (Stanton, 2006). The Gilbreths sought to discover more efficient ways to perform tasks. In a famous example of their work, they observed that bricklayers tended to use different methods of working. With the aim of finding the best way to perform the task, they developed innovative tools, job aids and work procedures. As a result of these changes, the number of movements needed to lay a brick fell dramatically, from approximately 18 to some four, and the task was performed much more efficiently. This analysis, amongst others, led to the discovery of ‘laws of work’, or ‘Ergonomics’ as it was called (Oborne, 1982). Although the discipline has become much more sophisticated in the way it analyses work (as indicated in the next section), the general aims of improving system performance and quality of working life remain the principal goals.

Human Factors and Ergonomics has been defined variously as ‘the scientific study of the relationship between man and his working environment’ (Murrell, 1965), ‘a study of man’s behaviour in relation to his work’ (Grandjean, 1980), ‘the study of how humans accomplish work-related tasks in the context of human-machine systems’ (Meister, 1989), ‘applied information about human behaviour, abilities, limitations and other characteristics to the design of tools, machines, tasks, jobs and environments’ (Sanders & McCormick, 1993), and ‘that branch of science which seeks to turn human-machine antagonism into human-machine synergy’ (Hancock, 1997). From these definitions, it may be gathered that the discipline of Human Factors and Ergonomics is concerned with: human capabilities and limitations, human-machine interaction, teamwork, tools, machines and material design, environments, and work and organisational design. The definitions also place some implied emphasis on system performance, efficiency, effectiveness, safety and well-being. These remain important aims for the discipline.

The role Ergonomics has to play in the design of displays of information and input controls is particularly pertinent to the contents of this book, as the main focus is on the design of digital Mission Planning and Battlespace Management (MP/BM) systems. The genesis of Ergonomics in military systems display and control design can be traced back to the pioneering works of Paul Fitts and Alphonse Chapanis in aviation. Chapanis (1999) recalls his work at the Aero Medical Laboratory in the early 1940s, where he was investigating the problem of pilots and co-pilots retracting the landing gear instead of the landing flaps after landing. His investigations in the B-17 (known as the ‘Flying Fortress’) revealed that the toggle switches for the landing gear and the landing flaps were identical and located next to each other. Chapanis’s insight into human performance enabled him to understand how the pilot might have confused the two toggle switches, particularly after the stresses of a combat mission. He proposed coding solutions to the problem: separating the switches (spatial coding) and/or shaping the switches to represent the part they control (shape coding), so that the landing flap switch resembles a ‘flap’ and the landing gear switch resembles a ‘wheel’. Thus the pilot can tell, by looking at or touching the switch, what function it controls. In his book, Chapanis also proposed that the landing gear switch could be deactivated if sensors on the landing struts detected the weight of the aircraft.

Grether (1949) reports on the difficulties of reading the traditional three-needle altimeter, which displays the height of the aircraft in three ranges: the longest needle indicates 100s of feet, the broad pointer indicates 1,000s of feet and the small pointer indicates 10,000s of feet. The work of Paul Fitts and colleagues had previously shown that pilots frequently misread the altimeter, and numerous fatal and non-fatal accidents had been attributed to this error. Grether devised an experiment to see if different designs of altimeter had an effect on interpretation time and error rate. If misreading altimeters really was a case of ‘designer error’ rather than ‘pilot error’, then different designs should reveal different error rates. Grether tested six different variations of the dial-and-needle altimeter, containing combinations of three, two and one needles with and without an inset counter, as well as three types of vertically moving scale (similar to a digital display). Pilots were asked to record the altimeter reading. The results of the experiment showed marked differences in the error rates for the different designs of altimeter. The data also show that the displays that took longer to interpret produced more errors. The traditional three-needle altimeter took some 7 seconds to interpret and produced over 11 per cent errors of 1,000 feet or more. By way of contrast, the vertically moving scale altimeters took less than 2 seconds to interpret and produced less than 1 per cent errors of 1,000 feet or more.

Both of these examples, one from control design and one from display design, suggest that it is not ‘pilot error’ that causes accidents; rather, it is ‘designer error’. This notion of putting the blame on the last person in the accident chain (for example, the pilot) has lost credibility in modern Ergonomics. Modern-day researchers take a systems view of error, seeking to understand the relationships between all the moving parts in a system, both human and technical, from concept and design, through manufacture, operation and maintenance (including system mid-life upgrades), to the dismantling and disposal of the system. These stages map nicely onto the UK MoD’s CADMID life cycle stages (Concept – Assessment – Demonstration – Manufacture – In-service – Disposal).

The term ‘Human Factors’ seems to have come from the USA to encompass any aspect of system design, operation, maintenance and disposal that has a bearing on human input or output. The terms Human Factors and Ergonomics are often used interchangeably or together. In the UK, Ergonomics is mostly used to describe physiological, physical, behavioural and environmental aspects of human performance, whereas Human Factors is mostly used to describe cognitive, social and organisational aspects of human performance. Human Factors and Ergonomics encompass a wide range of topics in system design, including: Manpower, Personnel, Training, Communications Media, Procedures, Team Structure, Task Design, Allocation of Function, Workload Assessment, Equipment Design, System Safety and Health Hazards. The term Human Factors will be used throughout this book, although this may also mean Ergonomics. Modern-day Human Factors focuses on integration with other aspects of Systems Engineering. According to the UK MoD, Human Factors Integration is about ‘…providing a balanced development of both the technical and human aspects of equipment provision. It provides a process that ensures the application of scientific knowledge about human characteristics through the specification, design and evaluation of systems.’ (MoD, 2000, p. 6). This book focuses on the examination of a digital command and control system that was developed for both mission planning and battlespace management. Human Factors methods have been developed over the past century to help design and evaluate new systems. These methods are considered in the following section.

Human Factors Methods

Human Factors methods are designed to improve product design by understanding or predicting user interaction with the devices (Stanton & Young, 1999); these approaches have a long tradition in system design and tend to have greater impact (as well as reduced cost) when applied early on in the design process (Stanton & Young, 1999), long before the hard-coding and hard-build has begun. The design life cycle has at least ten identifiable stages from the identification of product need up to product release, namely: identification of design need, understanding the potential context of product use, development of concepts, presentation of mock-ups, refinement of concepts, start of coding and hard-build, iterative design process, release of a prototype, minor refinements and then release of the first version of the product. As illustrated in Figure 2.1 by the brown shaded area, there is often far too little Human Factors effort involved far too late in the design process (the dark shaded area is a caricature of the Human Factors effort involved in developing the digital MP/BM system). Ideally, the pattern should be reversed, with the effort front-loaded in the project (as illustrated by the light shaded area). Such a strategy would undoubtedly have led to a much improved digital MP/BM system, at considerably reduced cost and with more timely delivery.

As a rough heuristic, the more complex a product is, the more important Human Factors input becomes. Complexity is not a binary state of either ‘complex’ or ‘not complex’; rather, the level of complexity lies on a non-numerical scale that can be defined through a set of heuristics (Woods, 1988):

• Dynamism of the system: To what extent can the system change states without intervention from the user? To what extent can the nature of the problem change over time? To what extent can multiple ongoing tasks have different time spans?

• Parts, variables and their interconnections: The number of parts and the extensiveness of interconnections between the parts or variables. To what extent can a given problem be due to multiple potential causes and to what extent can it have multiple potential consequences?

• Uncertainty: To what extent can the data about the system be erroneous, incomplete or ambiguous – how predictable are future states?

• Risk: What is at stake? How serious are consequences of users’ decisions?

The environment that the command and control system operates within can be seen to be highly complex. The system is dynamic as it frequently changes in unexpected ways, there are a huge number of interconnected parts within the system, data within the system is frequently incorrect or out of date, and the risk inherent in the system is ‘life or death’.
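By way of illustration only, Woods’ four heuristics can be treated as a simple qualitative checklist. The Python sketch below records such a profile for a command and control environment of this kind; the dimension names follow Woods (1988), but the scoring scheme and the ratings shown are hypothetical assumptions rather than measured values.

```python
# Illustrative sketch only: a qualitative complexity checklist based on
# Woods' (1988) four heuristics. The ratings below are hypothetical
# assumptions for a digital MP/BM system, not measured values.
from dataclasses import dataclass

@dataclass
class ComplexityProfile:
    dynamism: str          # can the system change state without user intervention?
    interconnections: str  # number of parts/variables and extent of their coupling
    uncertainty: str       # how erroneous, incomplete or ambiguous the data can be
    risk: str              # how serious the consequences of users' decisions are

mp_bm_profile = ComplexityProfile(
    dynamism="high",          # the battlespace changes frequently and unexpectedly
    interconnections="high",  # a huge number of coupled human and technical parts
    uncertainty="high",       # data are frequently incorrect or out of date
    risk="high",              # consequences are literally 'life or death'
)

print(mp_bm_profile)
```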

It is to the British Army’s immense credit that even the most recalcitrant of equipment issues ‘can be made to work’, but the new era of networked interoperability (and the complexity this brings) challenges even this ability. Whilst convoluted ‘workarounds’ may appear to overcome some of the system design problems in the short term, opportunities to gain greater tempo, efficiency, effectiveness, flexibility and error reduction may be lost. Indeed, this practice is arguably fast becoming an optimum strategy for increasing error potential and reducing tempo, efficiency and operational effectiveness. A new era of networked interoperability requires a new approach, one that confronts the challenges of harnessing human capability effectively using structured methodologies.

There are a wide range of Human Factors methods available for the analysis, design and evaluation of products and systems. A detailed description of the most commonly used approaches can be found in Stanton et al. (2005a, b). The choice of method used is influenced by a number of factors; one of these factors is the stage in the design process. By way of an example of how structured approaches to human/system integration can be employed, Figure 2.2 relates a number of specific methods to the design life cycle.


Figure 2.1 Illustration showing Human Factors effort is better placed in early stages of the design process

Figure 2.2 shows at least 11 different types of Human Factors methods and approaches that can be used through the design life cycle of a new system or product (Stanton & Young, 1999; Stanton et al., 2005a, 2005b). As the figure shows, many of these are best applied before the software coding and hard-build of a system starts. The approach places emphasis on the analysis and development of early prototypes. The assessment described within this book has come very late in the design process, far too late to make fundamental changes in system design; even small design modifications would be costly at this stage. It is extremely likely that an early involvement of Human Factors expertise and methodology would have resulted in a better implementation of the digital MP/BM system. The Human Factors methods advocated within the design life cycle are described further below. The methods starred (thus*) were applied to the case study presented within this book.

Cognitive Work Analysis (CWA)* is a structured framework for considering the development and analysis of complex socio-technical systems. The framework leads the analyst to consider the environment the task takes place within and the effect of the imposed constraints on the system’s ability to perform its purpose. The framework guides the analyst through the process of answering the questions of why the system exists and what activities are conducted within the domain, as well as how this activity is achieved and who is performing it. The analysis of constraints provides the basic formulation for development of the early concept for the system and the likely activities of the actors within it. Thus CWA offers a formative design approach.
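As a purely illustrative sketch of the kind of constraint-based description that CWA produces, the Python fragment below encodes a fragment of a Work Domain Analysis abstraction hierarchy for a hypothetical mission planning domain. The level names follow common CWA usage; the node contents and means-ends links are invented for illustration and are not drawn from the case study.

```python
# Minimal sketch of a Work Domain Analysis abstraction hierarchy (one part of CWA).
# Level names follow common CWA usage; node contents are hypothetical examples.
abstraction_hierarchy = {
    "functional_purpose": ["Plan and manage the battlespace mission"],
    "values_and_priorities": ["Tempo", "Accuracy of the plan", "Safety of own forces"],
    "purpose_related_functions": ["Gather intelligence", "Develop courses of action",
                                  "Disseminate orders"],
    "object_related_processes": ["Display map overlays", "Transmit messages",
                                 "Store plans"],
    "physical_objects": ["Digital MP/BM terminal", "Radio network", "Paper map"],
}

# Illustrative means-ends links: each entry reads 'node -> the higher-level
# node(s) it supports', i.e. why the node exists in the work domain.
means_ends_links = {
    "Display map overlays": ["Develop courses of action"],
    "Transmit messages": ["Disseminate orders"],
    "Develop courses of action": ["Plan and manage the battlespace mission"],
}

for node, supports in means_ends_links.items():
    print(f"{node} supports: {', '.join(supports)}")
```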

Systems design methods are often used to provide structure to the design process, and also to ensure that the end-user of the product or system in question is considered throughout the design process. For example, allocation of function analysis is used by system designers to determine whether jobs, tasks and system functions are allocated to human or technological agents within a particular system. Focus group approaches use group interviews to discuss and assess user opinions and perceptions of a particular design concept. In the design process, design concepts are evaluated by the focus group and new design solutions are offered. Scenario-based design involves the use of scenarios or storyboard presentations to communicate or evaluate design concepts. A set of scenarios depicting the future use of the design concept is proposed and performed, and the design concept is evaluated. Scenarios typically use how, why and what-if questions to evaluate and modify a design concept.


Figure 2.2 Application of Human Factors methods by phase of the design process

Workload, Error, Situation Awareness, Time and Teamwork (WESTT) is a Human Factors tool produced under the aegis of the HFI DTC. The aim of the tool is to integrate a range of Human Factors analyses around a tripartite, closely coupled network structure. The three networks – Task, Knowledge and Social – are analysed to identify their likely effects on system performance. The tool models potential system performance and therefore enables the analyst to consider alternative system structures.
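The tripartite network idea can be illustrated with a small graph model. The Python sketch below (using the networkx library) is not the WESTT tool itself; the node names, layer labels and cross-links are hypothetical, and it shows only the general principle of analysing linked task, social and knowledge networks.

```python
# Illustrative sketch only: representing linked task, social and knowledge
# networks as a single graph. This is not the WESTT tool itself; node names
# and links are hypothetical.
import networkx as nx

g = nx.Graph()

# Task network: tasks linked by sequence dependencies.
g.add_edge("Receive orders", "Develop plan", layer="task")
g.add_edge("Develop plan", "Issue orders", layer="task")

# Social network: agents linked by communication.
g.add_edge("Commander", "Planning staff", layer="social")

# Knowledge network: information objects linked by shared use.
g.add_edge("Enemy locations", "Course of action", layer="knowledge")

# Cross-links tie the three networks together, e.g. who performs which task
# and which knowledge a task requires.
g.add_edge("Planning staff", "Develop plan", layer="cross")
g.add_edge("Develop plan", "Enemy locations", layer="cross")

# A simple structural question an analyst might ask: which nodes are most
# highly connected, and hence potentially most critical to system performance?
print(sorted(g.degree, key=lambda kv: kv[1], reverse=True))
```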

Usability testing* methods are used to consider the usability of software on three main dimensions from ISO 9241-11: effectiveness (how well does the product’s performance meet the tasks for which it was designed?); efficiency (how much resource, for example time or effort, is required to use the product to perform these tasks?); and attitude (for example, how favourably do users respond to the product?). It is important to note that it is often necessary to conduct separate evaluations for each dimension rather than using one method and hoping that it can capture all aspects.
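A minimal sketch of how the three dimensions might be summarised from trial data is given below. The pass/fail, time and rating data are invented, and the particular scoring conventions (completion rate for effectiveness, mean time on task for efficiency, a 1–5 rating for attitude) are illustrative choices rather than requirements of ISO 9241-11.

```python
# Illustrative sketch: summarising a usability trial on the three ISO 9241-11
# dimensions. The data and scoring conventions are hypothetical examples.
completed = [True, True, False, True, True, True, False, True]   # task success per user
times_s = [310, 275, 540, 290, 305, 330, 600, 280]                # time on task (seconds)
attitude = [4, 5, 2, 4, 4, 3, 2, 5]                               # 1-5 rating per user

effectiveness = sum(completed) / len(completed)   # proportion of tasks completed
efficiency = sum(times_s) / len(times_s)          # mean time on task (lower is better)
mean_attitude = sum(attitude) / len(attitude)     # mean subjective rating

print(f"Effectiveness: {effectiveness:.0%} of tasks completed")
print(f"Efficiency: mean time on task {efficiency:.0f} s")
print(f"Attitude: mean rating {mean_attitude:.1f} / 5")
```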

Human Error Identification (HEI)* methods can be used either during the design process to highlight potential design induced error, or to evaluate error potential in existing systems. HEI works on the premise that an understanding of an employee’s work task and the characteristics of the technology being used allows us to indicate potential errors that may arise from the resulting interaction (Baber and Stanton, 1996). The output of HEI techniques usually describes potential errors, their consequences, recovery potential, probability, criticality and offers associated design remedies or error reduction strategies. HEI approaches can be broadly categorised into two groups, qualitative and quantitative techniques. Qualitative approaches are used to determine the nature of errors that might occur within a particular system, whilst quantitative approaches are used to provide a numerical probability of error occurrence within a particular system. There is a broad range of HEI approaches available to the HEI practitioner, ranging from simplistic External Error Mode (EEM) taxonomy-based approaches to more sophisticated human performance simulation techniques.
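The sketch below illustrates the sort of record a taxonomy-based HEI technique produces for each credible error. The field names mirror the outputs listed above, while the example values (a hypothetical grid-entry error) are invented for illustration and are not taken from the case study.

```python
# Illustrative sketch: the kind of record a taxonomy-based HEI technique
# produces for each credible error. Field values are hypothetical.
from dataclasses import dataclass

@dataclass
class PredictedError:
    task_step: str
    error_mode: str        # e.g. an External Error Mode such as 'wrong information entered'
    description: str
    consequence: str
    recovery: str          # how/where the error could be recovered
    probability: str       # low / medium / high (qualitative in many techniques)
    criticality: str
    remedy: str            # associated design remedy or error reduction strategy

example = PredictedError(
    task_step="Enter target grid reference",
    error_mode="Wrong information entered",
    description="Operator transposes digits in the grid reference",
    consequence="Orders refer to the wrong location",
    recovery="Read-back check before transmission",
    probability="medium",
    criticality="high",
    remedy="Echo the entered grid on a map preview for confirmation",
)

print(f"{example.task_step}: {example.error_mode} -> {example.consequence}")
```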

Hierarchical Task Analysis (HTA)* is used to describe systems in terms of their goals and sub-goals. HTA works by decomposing activities into a hierarchy of goals, subordinate goals, operations and plans, which allows systems to be described exhaustively. There are at least 12 additional applications to which HTA has been put, including interface design and evaluation, training, allocation of functions, job description, work organisation, manual design, job aid design, error prediction and analysis, team task analysis, workload assessment and procedure design. These extensions make HTA particularly useful in system development when the design has begun to crystallise.
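As a purely illustrative example of the goal decomposition HTA produces, the sketch below represents a small hierarchy of goals, sub-goals and a plan as nested Python dictionaries; the goals and plan shown are hypothetical and are not taken from the case study.

```python
# Illustrative sketch: a goal hierarchy of the kind HTA produces, expressed as
# nested dictionaries. The goals, sub-goals and plan are hypothetical.
hta = {
    "goal": "0. Plan a mission",
    "plan": "Do 1, then 2, then 3; repeat 2-3 until the plan is approved.",
    "subgoals": [
        {"goal": "1. Receive and analyse orders", "subgoals": []},
        {"goal": "2. Develop courses of action", "subgoals": [
            {"goal": "2.1 Assess terrain and enemy", "subgoals": []},
            {"goal": "2.2 Draft candidate plans", "subgoals": []},
        ]},
        {"goal": "3. Brief and refine the plan", "subgoals": []},
    ],
}

def print_hta(node, indent=0):
    """Walk the hierarchy, printing each goal and its plan (if any)."""
    print("  " * indent + node["goal"])
    if "plan" in node:
        print("  " * indent + f"  Plan: {node['plan']}")
    for sub in node.get("subgoals", []):
        print_hta(sub, indent + 1)

print_hta(hta)
```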

Interface Evaluation* methods are used to assess the human-machine interface of a particular system, product or device. These methods can be used to assess a number of different aspects associated with a particular interface, including user performance, user satisfaction, error, layout, labelling, and the controls and displays used. The output of interface analysis methods is then typically used to improve the interface through redesign. Such techniques are used to enhance design performance, through improving the device or system’s usability, user satisfaction, and reducing user errors and interaction time.

Design and Test studies are needed to determine whether any measured differences between the new systems and their baselines are real, statistically significant differences that are likely to generalise beyond the cases studied. There are two broad approaches for Design and Test studies: quantitative and qualitative. Quantitative testing is a formal, objective, systematic process in which numerical data are used to obtain information; it tends to produce data that compare one design with another, or that compare a design against a benchmark. Qualitative testing considers opinions and attitudes towards designs. Whilst these attitudes can be measured on scales, the approach often involves developing an in-depth understanding of the reasons underlying human behaviour. Whilst quantitative studies are concerned with relative differences in performance, qualitative studies are concerned with the reasons for those differences. Typically, quantitative research requires larger random samples, whereas qualitative research requires smaller, purposely selected samples.
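To illustrate the quantitative route, the sketch below compares hypothetical planning times for a baseline process and a new digital system using an independent-samples t-test (via scipy). The data and the choice of test are illustrative assumptions only; the appropriate statistical test depends on the study design.

```python
# Illustrative sketch of the quantitative route: testing whether planning times
# differ between a baseline process and a new digital system.
# The data are invented for illustration.
from scipy import stats

baseline_minutes = [52, 47, 60, 55, 49, 58, 51, 54]   # baseline trials
digital_minutes = [44, 50, 41, 47, 43, 49, 45, 46]    # digital system trials

t_stat, p_value = stats.ttest_ind(baseline_minutes, digital_minutes)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value < 0.05:
    print("Difference unlikely to be due to chance at the 5% level.")
else:
    print("No statistically significant difference detected.")
```

In practice an analyst would also check the assumptions of the chosen test and report effect sizes alongside significance.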

Teamwork Assessment* methods are used to analyse those instances where actors within a team or network coordinate their behaviour in order to achieve tasks related to the team’s goals. Team-based activity involves multiple actors with multiple goals performing both teamwork and task-work activity. The activity is typically complex (hence the requirement for a team) and may be dispersed across a number of different geographical locations. Consequently there are a number of different team performance techniques available to the Human Factors practitioner, each designed to assess certain aspects of team performance in complex systems. The team performance techniques can be broadly classified into the following categories: team task analysis techniques; team cognitive task analysis techniques; team communication assessment techniques; team Situation Awareness (SA) measurement techniques; team behavioural assessment techniques; and team Mental Work Load (MWL) assessment techniques.

Workload Assessment should be used throughout the design life cycle, to inform system and task design as well as to provide an evaluation of workload imposed by existing operational systems and procedures. There are a number of different workload assessment procedures available. Traditionally, using a single approach to measure workload has proved inadequate, and as a result a combination of the methods available is typically used. The assessment of workload may require a battery of techniques, including primary task performance measures, secondary task performance measures (reaction times, embedded tasks), physiological measures and subjective rating techniques.
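One well-known subjective rating technique is the NASA Task Load Index (TLX), in which six dimension ratings are combined using weights derived from pairwise comparisons. The sketch below assumes a TLX-style calculation purely as an example; the ratings and weights are hypothetical, and the text above does not prescribe this particular technique.

```python
# Illustrative sketch of one subjective workload technique (a NASA-TLX-style
# weighted rating). Ratings (0-100) and weights (from pairwise comparisons,
# summing to 15) are hypothetical.
ratings = {
    "Mental demand": 75, "Physical demand": 20, "Temporal demand": 80,
    "Performance": 40, "Effort": 70, "Frustration": 65,
}
weights = {
    "Mental demand": 4, "Physical demand": 0, "Temporal demand": 4,
    "Performance": 2, "Effort": 3, "Frustration": 2,
}

assert sum(weights.values()) == 15  # the 15 pairwise comparisons in total

overall = sum(ratings[d] * weights[d] for d in ratings) / sum(weights.values())
print(f"Overall weighted workload: {overall:.1f} / 100")
```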

Situation Awareness (SA)* refers to an individual’s, team’s or system’s awareness of ‘what is going on’ (Endsley, 1995a). SA measures are used to assess the level of awareness during task performance. The assessment of SA can be used throughout the design life cycle, either to determine the levels of SA provided by a novel technology or design, or to assess SA in existing operational systems. SA measures are necessary in order to evaluate the effect of new technologies and training interventions upon SA, to examine factors that affect SA, to evaluate the effectiveness of processes and strategies for acquiring SA, and to investigate the nature of SA itself. There are a number of different SA assessment approaches available; in a review of SA measurement techniques, Salmon et al. (2006) describe various approaches, including performance measures, external task measures, embedded task measures, subjective rating techniques (self and observer rating), questionnaires (post-trial and on-line) and the freeze technique.
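As an illustration of the freeze technique mentioned above, the sketch below scores a set of freeze-probe queries as the proportion answered correctly, in the style of Endsley’s SAGAT. The queries, correct answers and operator responses are hypothetical.

```python
# Illustrative sketch of freeze-technique scoring: during simulation freezes the
# displays are blanked and operators answer queries about 'what is going on'.
# The queries and answers below are hypothetical.
queries = [
    {"query": "Location of lead call sign?", "correct": "grid 1234", "answer": "grid 1234"},
    {"query": "Current enemy axis of advance?", "correct": "north-east", "answer": "east"},
    {"query": "Time to next phase line?", "correct": "15 min", "answer": "15 min"},
]

score = sum(q["answer"] == q["correct"] for q in queries) / len(queries)
print(f"SA score: {score:.0%} of queries answered correctly")
```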

An integrated design approach is advocated, in which the project team includes Human Factors expertise throughout the design life cycle and is thereby able to translate user requirements into design specifications. As indicated by the stars (thus*), a variety of methods were used in the case study presented in the following chapters. The Human Factors approach also favours an ‘interface first’ approach, within a rapid prototyping, development and testing cycle. As the interface concepts crystallise, hard-coding and hard-build may begin. Details of all the methods discussed may be found in Stanton et al. (2005a, 2005b). This should be the starting point for any future work.