On this page you will find a selection of Human Factors methods used in our research and consultancy activities.
The NASA Task Load Index (NASA-TLX) is a multidimensional rating procedure that measures perceived workload in order to assess a task, system, or team's effectiveness or other aspects of performance. It evaluates subjects' workload on six subscales: mental demand, physical demand, temporal demand, own performance, effort, and frustration. The instrument is widely used in a variety of domains, including aviation, healthcare and other complex socio-technical domains. Originally developed as a paper-and-pencil questionnaire by Sandra Hart at NASA Ames Research Center (ARC) in the 1980s, NASA-TLX has had a major influence on human factors research and has become the gold standard for measuring subjective workload across a wide range of applications.
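The six subscale ratings are commonly combined into a single weighted workload score, with each subscale weighted by the number of times it is chosen in the 15 pairwise comparisons. A minimal sketch of that calculation, with made-up ratings and weights for illustration:

```python
# Sketch of the weighted NASA-TLX score. Subscale names follow the
# instrument; the ratings and pairwise-comparison weights are invented
# illustration values, not real data.

SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_score(ratings: dict, weights: dict) -> float:
    """Weighted workload: each 0-100 rating is weighted by how many times
    its subscale was chosen in the 15 pairwise comparisons."""
    total_weight = sum(weights[s] for s in SUBSCALES)
    assert total_weight == 15, "pairwise-comparison tallies must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / total_weight

ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 35}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(tlx_score(ratings, weights))  # 58.0
```

Some practitioners skip the pairwise weighting and simply average the six ratings ("Raw TLX"), which keeps administration shorter at the cost of subscale sensitivity.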
Instantaneous self-assessment (ISA) is a technique intended to measure the intensity of workload and stress by eliciting immediate subjective ratings of work demands during the performance of primary work tasks such as air traffic control. Each working position is provided with a keyboard with five buttons in a vertical line. When an LED lights up at the position, the controller presses the button corresponding to the current level of work state: very low, low, fair, high or very high.
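Because ISA ratings are collected on prompt throughout the task, analysis usually works on a time-stamped log of button presses. A minimal sketch, assuming nothing beyond the five-level scale described above (the timestamps are invented):

```python
# Sketch: logging ISA button presses. The five workload levels come from
# the technique; the example timestamps are invented for illustration.
from datetime import datetime

ISA_LEVELS = {1: "very low", 2: "low", 3: "fair", 4: "high", 5: "very high"}

def record_rating(log: list, button: int, when: datetime) -> None:
    """Append the controller's button press as a (time, level label) pair."""
    log.append((when, ISA_LEVELS[button]))

log = []
record_rating(log, 4, datetime(2024, 1, 1, 10, 0))
record_rating(log, 2, datetime(2024, 1, 1, 10, 2))
print(log[0][1])  # high
```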
The modified Cooper-Harper scale is a unidimensional measure that uses a decision tree to elicit operator mental workload. It is based on the Cooper-Harper Handling Qualities Rating Scale (HQRS), a pilot rating scale built on a set of criteria designed by flight test engineers to evaluate the handling characteristics of aircraft while performing a task during a flight test.
The scale ranges from 1 to 10, with 1 indicating the best characteristics and 10 the worst.
Administered post-trial, the modified version gives an indication of the workload perceived by the pilot by analysing the tasks performed on the aircraft.
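The decision-tree structure can be sketched as a series of gating questions that narrow the admissible ratings down to a band before the final 1-10 choice is made. The question wording below is a paraphrase of the scale's structure, not the published text:

```python
# Hedged sketch of the Modified Cooper-Harper decision logic: three gating
# questions narrow the 1-10 scale to a band of admissible ratings. The
# question wording is paraphrased for illustration.

def mch_band(impossible: bool, errors_unacceptable: bool, workload_high: bool) -> range:
    """Walk the decision tree top-down and return the admissible ratings."""
    if impossible:                # task cannot be accomplished most of the time
        return range(10, 11)      # rating 10
    if errors_unacceptable:       # major deficiencies: errors large or frequent
        return range(7, 10)       # ratings 7-9
    if workload_high:             # workload high but task still completed
        return range(4, 7)        # ratings 4-6
    return range(1, 4)            # acceptable workload: ratings 1-3

print(list(mch_band(False, False, True)))  # [4, 5, 6]
```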
The Situation Awareness Global Assessment Technique (SAGAT) was proposed by Endsley in 1988, on the premise that pilot-vehicle interface designs must be driven by the goal of establishing and maintaining high pilot situation awareness; SAGAT was developed to assist in this process. It is considered to represent a substantial improvement in the evaluation of pilot-vehicle interface designs, facilitating the development of cockpits that assist the pilot in surviving combat.
Hierarchical Task Analysis (HTA) is the most popular task analysis method and is perhaps the most widely used Human Factors method available. It involves describing the activities under analysis in terms of a hierarchy of goals, sub-goals, operations and plans. The final result is a detailed description of specific task activities. HTA acts as an input for many advanced Human Factors methods, which is the reason for its enduring popularity. It has been applied across a variety of domains such as the process control and power generation industries, emergency services, military applications, civil aviation, driving, public technology and retail.
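The goal/sub-goal hierarchy lends itself naturally to a tree representation. A minimal sketch, using an invented "make a phone call" task for illustration (HTA conventionally numbers the top-level goal 0):

```python
# Sketch of an HTA hierarchy as a simple tree. The example task and its
# plan are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Task:
    description: str
    plan: str = ""                      # e.g. "do 1, then 2, then 3"
    subtasks: list = field(default_factory=list)

    def outline(self, number: str = "0") -> list:
        """Flatten the hierarchy into numbered lines, HTA-style."""
        lines = [f"{number} {self.description}"]
        for i, sub in enumerate(self.subtasks, start=1):
            child = f"{number}.{i}" if number != "0" else str(i)
            lines.extend(sub.outline(child))
        return lines

goal = Task("Make a phone call", plan="Do 1, then 2, then 3",
            subtasks=[Task("Lift handset"),
                      Task("Dial number"),
                      Task("Speak to callee")])
print("\n".join(goal.outline()))
```

The `plan` field captures HTA's key distinction from a plain task list: it records when and in what order the sub-goals are carried out.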
The Applied Cognitive Task Analysis (ACTA) procedure was developed by Militello and Hutton in 2000, as part of a Navy Personnel Research and Development Centre funded project, as a solution to the inaccessibility and difficulty associated with the application of existing cognitive task analysis methods. ACTA offers a toolkit of interview methods that can be used to analyse the cognitive demands associated with a particular task or scenario. The main purpose of the procedure is to allow system designers to extract the critical cognitive elements of a particular task.
The Questionnaire for User Interaction Satisfaction (QUIS) is a measurement tool designed to assess a computer user's subjective satisfaction with the human-computer interface.
The QUIS elicits user opinions and evaluates user acceptance of a computer interface. The questionnaire asks the user to rate the interface in areas such as ease of use, consistency, system capability, and learning. The questions relate to human-computer interfaces and responses are normally measured on an ascending scale from 1 to 10.
QUIS is available in paper-and-pencil and PC software versions for administration. Operators use a 10-point scale to rate 21 items relating to the system's usability; these ratings produce data on the overall reaction to the system's usability across six factors.
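Summarising the item ratings per factor is a simple averaging exercise. In the sketch below, only the 10-point scale and the idea of factor-level summaries come from the description above; the grouping of items into factors and the ratings themselves are invented:

```python
# Sketch: summarising QUIS-style item ratings per factor. The item-to-factor
# grouping and the ratings are invented illustration values.
from statistics import mean

def factor_scores(ratings: dict, factors: dict) -> dict:
    """Average the 1-10 item ratings within each factor."""
    return {name: round(mean(ratings[i] for i in items), 2)
            for name, items in factors.items()}

ratings = {1: 8, 2: 7, 3: 9, 4: 5, 5: 6, 6: 8}
factors = {"overall reaction": [1, 2],
           "learning": [3, 4],
           "system capability": [5, 6]}
print(factor_scores(ratings, factors))
```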
Task-Centered System Design (TCSD) is a method that helps developers design and evaluate interfaces based on users' real-world tasks. As part of design, it becomes a user-centered requirements analysis, with the requirements being the tasks that need to be satisfied. Effective task and user analysis is based on close personal contact between members of the design team and the people who will actually be using the system. The process is structured around specific tasks that the user will want to accomplish with the system being developed. These tasks are chosen early in the design effort, then used to raise issues about the design, to aid in making design decisions, and to evaluate the design as it is developed. The evaluator can also do a walk-through of the prototype, using the tasks to generate a step-by-step scenario of what a user would have to do with the system.
Event tree analysis (ETA) is an inductive procedure that shows all the possible outcomes resulting from an accidental (initiating) event, taking into account whether the installed safety barriers are functioning or not, along with additional events and factors. The technique is used to analyse the effects of functioning or failed systems given that an event has occurred. It may be applied to a system early in the design process to identify potential issues before they arise, rather than correcting them after they occur. By studying all relevant accidental events, ETA can be used to identify all potential accident scenarios and sequences in a complex system.
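Quantitatively, an event tree branches at each barrier into a success path and a failure path, and the frequency of each outcome is the initiating-event frequency multiplied along the branch. A minimal sketch, with an invented initiating event, barriers and probabilities:

```python
# Hedged sketch of an event tree calculation. The initiating event, the
# two barriers and all probabilities are invented for illustration.

def event_tree(init_freq: float, barriers: list) -> dict:
    """Enumerate every success/failure path through the barriers and
    return the frequency of each outcome sequence."""
    outcomes = {"": init_freq}
    for name, p_fail in barriers:
        nxt = {}
        for path, freq in outcomes.items():
            nxt[path + f"{name}:ok "] = freq * (1 - p_fail)
            nxt[path + f"{name}:fail "] = freq * p_fail
        outcomes = nxt
    return outcomes

# Initiating event at 0.1/yr; sprinkler fails with p=0.05, alarm with p=0.1.
paths = event_tree(0.1, [("sprinkler", 0.05), ("alarm", 0.1)])
print(round(paths["sprinkler:fail alarm:fail "], 6))  # 0.0005
```

With two barriers the tree has four end states; the worst-case sequence (both barriers fail) carries frequency 0.1 × 0.05 × 0.1 = 0.0005 per year.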
The Keystroke-Level Model (KLM), proposed by Card, Moran and Newell, predicts the task execution time for a specified design and specific task scenario. The model is an 11-step method that can be used by individuals or companies seeking to estimate the time it takes to complete simple data input tasks using a computer and mouse. In essence, the analyst lists the sequence of keystroke-level actions the user must perform to accomplish a task, then adds up the times required by each action. It is not essential to have a mocked-up or implemented design; KLM requires only that the user interface be specified in enough detail to determine the sequence of actions required to perform the tasks. The calculations, and the number of steps required to accurately compute the overall task time, grow quickly as the number of tasks involved increases.
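The summing step is mechanical once the action sequence is written down. The sketch below uses commonly cited operator times in seconds (actual values vary with user skill), and the action sequence is an invented example:

```python
# Sketch of a KLM estimate. The operator times are commonly cited values
# in seconds (they vary with user skill); the action sequence below is an
# invented example of selecting a menu item and typing a short word.

OPERATOR_TIMES = {
    "K": 0.28,  # keystroke (average skilled typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(sequence: str) -> float:
    """Sum the operator times for a string of KLM operators, e.g. 'MPB'."""
    return round(sum(OPERATOR_TIMES[op] for op in sequence), 2)

# Think, point at a menu, click, think, then type a 4-letter word.
print(klm_time("MPBM" + "K" * 4))  # 5.02
```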
The Systematic Human Error Reduction and Prediction Approach (SHERPA) was developed by Embrey in 1986. Its first application was in the nuclear reprocessing industry, and it is now the most widely used Human Error Identification approach, with further applications in domains such as aviation, public technology, in-car devices, and many more. It starts with a Hierarchical Task Analysis of the scenario under consideration, then applies an error-mode taxonomy, linked to a behavioural taxonomy, to the resulting task description. The literature has shown that SHERPA is among the most successful Human Error Identification methods in terms of accuracy of error prediction.
The critical incident technique (CIT), developed by John Flanagan, is a research method in which participants are asked to describe a situation in which an occurrence, behaviour or action impacted (positively or negatively) a specified outcome (for example, the accomplishment of a given goal). CIT is a systematic procedure for obtaining rich, qualitative information about significant incidents from observers with direct experience, helping researchers to understand the critical requirements and needs of systems, individuals or processes. The method consists of a set of procedures designed to collect observations of human behaviour, simplify their translation into decision-making processes and, in particular, improve problem-solving.
Human error assessment and reduction technique (HEART) is a technique proposed by J.C. Williams in 1985. Since its development, HEART has been applied in several non-healthcare applications and has more recently been applied in a few different healthcare settings, for the purpose of evaluating the probability of a human error occurring during the completion of a specific task. HEART is based on the principle that every time a task is performed there is a possibility of failure, and that the likelihood of failure is affected by one or more Error Producing Conditions (EPCs), such as distraction, tiredness or cramped conditions. Each EPC is assumed to have a constant effect on human reliability, and this effect is always to reduce reliability, i.e. to increase the likelihood of error.
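In HEART, the assessed error probability is a nominal probability for the generic task type, multiplied by a factor for each applicable EPC that scales its maximum effect by the assessed proportion of affect. A minimal sketch of that calculation; the nominal probability and the EPC multipliers and proportions below are invented illustration values, not figures from Williams's tables:

```python
# Hedged sketch of the HEART calculation. The generic-task probability and
# EPC values are invented illustration numbers, not Williams's published
# table entries.

def heart_hep(nominal_hep: float, epcs: list) -> float:
    """Assessed HEP = nominal HEP x product, over each EPC, of
    ((max_effect - 1) x assessed proportion of affect + 1)."""
    hep = nominal_hep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1) * proportion + 1
    return hep

# Generic task with nominal HEP 0.003, two EPCs applied at partial strength:
# one with max effect x11 assessed at 0.4, one with max effect x3 at 0.5.
print(round(heart_hep(0.003, [(11, 0.4), (3, 0.5)]), 4))  # 0.03
```

Note that each factor is at least 1, so applying EPCs can only raise the assessed error probability, consistent with the principle that EPCs always degrade reliability.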