This study aimed to examine the effects of accumulating nursing work on maximal and rapid strength characteristics in female nurses and compare these effects in day versus night shift workers.
Nurses have among the highest nonfatal injury rates of any occupation, which may be a consequence of long, cumulative shift schedules. Fatigue may accumulate across multiple shifts and lead to performance impairments, which in turn may be linked to injury risk.
Thirty-seven nurses and aides performed isometric strength testing of three muscle groups (the knee extensors, knee flexors, and wrist flexors [hand grip]), as well as countermovement jumps, at baseline and following exposure to three 12-hour work shifts in a four-day period. Variables included peak torque (PT) and rate of torque development (RTD) from the isometric strength tests, as well as jump height and power output from the countermovement jumps.
The rigorous work period resulted in significant decreases (–7.2% to –19.2%) in a large majority (8/9) of the isometric strength-based measurements. No differences were noted between the day and night shift workers except for RTD at 200 ms, for which the night shift workers showed greater work-induced decreases than the day shift workers. No changes were observed for jump height or power output.
A compressed nursing work schedule resulted in decreases in strength-based performance abilities, indicative of performance fatigue.
Compressed work schedules involving long shifts lead to functional declines in nurse performance capacities that may pose risks for both the nurse and patient quality of care. Fatigue management plans are needed to monitor and regulate increased levels of fatigue.
To investigate how people’s sequential adjustments to their position are impacted by the source of the information.
There is an extensive body of research on how the order in which new information is received affects people’s final views and decisions as well as research on how they adjust their views in light of new information.
Seventy college-aged students, 60% of whom were women, each completed one of eight randomly distributed booklets corresponding to the eight between-subjects treatment conditions formed by crossing the two levels of information source with the four order conditions. Based on the information provided, participants estimated the probability of an attack, the dependent measure.
Confirming information from an expert intelligence officer significantly increased the attack probability from the initial position more than confirming information from a longtime friend. Conversely, disconfirming information from a longtime friend decreased the attack probability significantly more than the same information from an intelligence officer.
As hypothesized, confirming and disconfirming evidence had different effects depending on the information source, either an expert or a close friend. The difference appears to be due to the existence of two kinds of trust: cognitive-based trust imbued in an expert and affective-based trust imbued in a close friend.
Purveyors of information need to understand that it is not only the content of a message that counts; other forces are at work, such as the order in which information is received and the characteristics of the information source.
Four studies were conducted to assess bicyclist conspicuity enhancement at night by the application of reflective tape (ECE/ONU 104) to the bicycle rear frame and to pedal cranks.
Previous studies have tested the benefits of reflective markings applied to bicyclist clothing. Reflective jackets, however, need to be available and worn, whereas reflective markings on the bicycle enhance conspicuity without any active behavior by the bicyclist.
In the first study, reflective tape was applied to the rear frame. Detection distance was compared in four conditions: control, rear red reflector, high visibility jacket, and reflective tape. In the second study, the same conditions were studied with night street lighting on and off. In the third study, detection and recognition distances were evaluated in rainy conditions. In the fourth study, visibility was assessed with the reflective tape applied to pedal cranks.
In the first study, the application of reflective markings resulted in a detection distance of 168.28 m. In the second study, the detection distance with reflective markings was 229.74 m with public street lighting on and 256.41 m with public street lighting off. In rainy conditions, detection distance using the reflective markings was 146.47 m. Reflective tape applied to pedal cranks resulted in a detection distance of 168.60 m.
Reflective tape applied to the rear bicycle frame can considerably increase bicyclist conspicuity and safety at night.
Reflective tape is highly recommended to complement front and rear lights when riding a bicycle at night.
The objective for this study was to investigate the effects of prior familiarization with takeover requests (TORs) during conditional automated driving on drivers’ initial takeover performance and automation trust.
System-initiated TORs are one of the biggest concerns for conditional automated driving and have been studied extensively in the past. Most, but not all, of these studies have included training sessions to familiarize participants with TORs. This makes them hard to compare and might obscure first-failure-like effects on takeover performance and automation trust formation.
A driving simulator study compared drivers’ takeover performance in two takeover situations across four prior familiarization groups (no familiarization, description, experience, description and experience) and automation trust before and after experiencing the system.
As hypothesized, prior familiarization with TORs had a more positive effect on takeover performance in the first than in a subsequent takeover situation. In all groups, automation trust increased after participants experienced the system. Participants who were given no prior familiarization with TORs reported the highest automation trust both before and after experiencing the system.
The current results extend earlier findings suggesting that prior familiarization with TORs during conditional automated driving will be most relevant for takeover performance in the first takeover situation and that it lowers drivers’ automation trust.
Potential applications of this research include different approaches to familiarize users with automated driving systems, better integration of earlier findings, and sophistication of experimental designs.
The aim of this study was to determine whether a sequence of earcons can effectively convey the status of multiple processes, such as the status of multiple patients in a clinical setting.
Clinicians often monitor multiple patients. An auditory display that intermittently conveys the status of multiple patients may help.
Nonclinician participants listened to sequences of 500-ms earcons that each represented the heart rate (HR) and oxygen saturation (SpO2) levels of a different simulated patient. In each sequence, one, two, or three patients had an abnormal level of HR and/or SpO2. In Experiment 1, participants reported which of nine patients in a sequence were abnormal. In Experiment 2, participants identified the vital signs of one, two, or three abnormal patients in sequences of one, five, or nine patients, where the interstimulus interval (ISI) between earcons was 150 ms. Experiment 3 used the five-patient sequence condition of Experiment 2, but the ISI was either 150 ms or 800 ms.
Participants reported which patient(s) were abnormal with a median accuracy of 95%. Identification accuracy for vital signs decreased as the number of abnormal patients increased from one to three, p < .001, but accuracy was unaffected by the number of patients in a sequence. Overall, identification accuracy was significantly higher with an ISI of 800 ms (89%) than with an ISI of 150 ms (83%), p < .001.
A multiple-patient display can be created by cycling through earcons that represent individual patients.
The principles underlying the multiple-patient display can be extended to other vital signs, designs, and domains.
The goal of the present study was to examine the effects of domain-relevant expertise on running memory and the ability to process handoffs of information. In addition, the role of active or passive processing was examined.
Currently, there is little research that addresses how individuals with different levels of expertise process information in running memory when the information is needed to perform a real-world task.
Three groups of participants differing in their level of clinical expertise (novice, intermediate, and expert) performed an abstract running memory span task and two tasks resembling real-world activities, a clinical handoff task and an air traffic control (ATC) handoff task. For all tasks, list length and the amount of information to be recalled were manipulated.
Regarding processing strategy, all participants used passive processing for the running memory span and ATC tasks. The novices also used passive processing for the clinical task. The experts, however, appeared to use more active processing, and the intermediates fell in between.
Overall, the results indicated that individuals with clinical expertise and a developed mental model rely more on active processing of incoming information for the clinical task while individuals with little or no knowledge rely on passive processing.
The results have implications for how training should be developed to help less experienced personnel identify what information should be included in a handoff and what should not.
A computational process model could explain how the dynamic interaction of human cognitive mechanisms produces each of multiple error types.
With increasing capability and complexity of technological systems, the potential severity of consequences of human error is magnified. Interruption greatly increases people’s error rates, as does the presence of other information to maintain in an active state.
The model executed as a software-instantiated Monte Carlo simulation. It drew on theoretical constructs such as associative spreading activation for prospective memory, explicit rehearsal strategies as a deliberate cognitive operation to aid retrospective memory, and decay.
The model replicated the 30% effect of interruptions on postcompletion error in Ratwani and Trafton’s Stock Trader task, the 45% interaction effect on postcompletion error of working memory capacity and working memory load from Byrne and Bovair’s Phaser Task, as well as the 5% perseveration and 3% omission effects of interruption from the UNRAVEL Task.
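For readers unfamiliar with the mechanisms named above, the following is a minimal, hypothetical sketch (not the authors' model) of how base-level activation with decay plus retrieval noise can be run as a Monte Carlo simulation of forgetting a pending step after an interruption; the decay rate, noise scale, and retrieval threshold are illustrative assumptions only.

```python
# Hedged sketch: Monte Carlo simulation of a single memory trace whose
# activation decays over time, with Gaussian noise approximating ACT-R's
# logistic activation noise. Parameters are illustrative, not fitted.
import math
import random

def base_level_activation(presentations, now, decay=0.5):
    """ACT-R-style base-level learning: A = ln(sum over uses of (now - t)^-d)."""
    return math.log(sum((now - t) ** -decay for t in presentations if now > t))

def retrieval_succeeds(activation, threshold=0.0, noise_s=0.3):
    """Add noise and compare the trace's activation against a threshold."""
    noise = random.gauss(0.0, noise_s * math.pi / math.sqrt(3))
    return activation + noise > threshold

def simulate_omission_rate(interruption_s, n_runs=10_000):
    """Estimate how often a pending step is forgotten after an interruption."""
    omissions = 0
    for _ in range(n_runs):
        presentations = [0.0, 2.0, 4.0]        # step encoded/rehearsed early
        resume_time = 6.0 + interruption_s     # retrieval attempted after resuming
        a = base_level_activation(presentations, resume_time)
        if not retrieval_succeeds(a):
            omissions += 1
    return omissions / n_runs

for gap in (0, 5, 15, 45):
    print(f"interruption {gap:>2}s -> omission rate {simulate_omission_rate(gap):.3f}")
```

Longer interruptions lower the trace's activation at resumption and therefore raise the simulated omission rate, which is the qualitative pattern the model-based account relies on.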
Error classes including perseveration, omission, and postcompletion error fall naturally out of the theory.
The model explains post-interruption error in terms of task state representation and priming for recall of subsequent steps. Its performance suggests that task environments providing more cues to current task state will mitigate error caused by interruption. For example, interfaces could provide labeled progress indicators or facilities for operators to quickly write notes about their task states when interrupted.
This paper presents findings from two studies addressing the effects of employees' intention to take rest breaks on rest-break frequency and the change in well-being during a workday.
Rest breaks are effective in avoiding an accumulation of fatigue during work. However, little is known about individual differences in rest-break behavior.
In Study 1, the association between rest-break intention and the daily number of rest breaks recorded over 4 consecutive workdays was determined using a generalized linear model in a sample of employees (n = 111, 59% women). In Study 2, professional geriatric nurses (n = 95, all women) who worked two consecutive 12-hour day shifts recorded their well-being (fatigue, distress, effort motivation) at the beginning and end of their shifts. The effect of rest-break intention on the change in well-being was determined using multilevel modeling.
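As an illustration of the count-model analysis named for Study 1, the sketch below fits a Poisson generalized linear model to synthetic data; the variable names and values are assumptions for demonstration, not study data.

```python
# Hedged sketch with synthetic data: Poisson GLM relating a standardized
# rest-break intention score to the daily count of rest breaks.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 111
intention = rng.normal(0, 1, n)                 # standardized intention score
lam = np.exp(0.8 + 0.3 * intention)             # assumed true rate structure
breaks_per_day = rng.poisson(lam)               # observed daily break counts

X = sm.add_constant(intention)
model = sm.GLM(breaks_per_day, X, family=sm.families.Poisson()).fit()
print(model.summary())
# exp(coefficient) is the multiplicative change in expected breaks per 1-SD
# increase in intention.
print("rate ratio per SD of intention:", np.exp(model.params[1]))
```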
Rest-break intention was positively associated with the frequency of rest breaks (Study 1) and attenuated the increase in fatigue and distress over the workday (Study 2).
The results indicate that individual differences account for the number of breaks an employee takes and, as a consequence, for variations in work-related fatigue and distress.
Strengthening rest-break intentions may help to increase rest-break behavior to avoid the buildup of fatigue and distress over a workday.
The goals of this study were to assess the risk-identification component of mental models using standard elicitation methods and to examine how university campus alerts related to these mental models.
People fail to follow protective action recommendations in emergency warnings. Past research has yet to examine cognitive processes that influence emergency decision-making.
Study 1 examined 2 years of emergency alerts distributed by a large southeastern university. In Study 2, participants listed emergencies in a thought-listing task. Study 3 measured participants’ time to decide if a situation was an emergency.
The university distributed the most alerts about an armed person, theft, and fire. In Study 2, participants most frequently listed fire, car accident, heart attack, and theft. In Study 3, participants quickly decided that a bomb, murder, fire, tornado, and rape were emergencies. They were slowest to decide that a suspicious package and identity theft were emergencies.
Recent interaction with warnings was only somewhat related to participants’ mental models of emergencies. Risk identification precedes decision-making and applying protective actions. Examining these characteristics of people’s mental representations of emergencies is fundamental to further understand why some emergency warnings go ignored.
Someone must believe a situation is serious in order to categorize it as an emergency before taking the protective action recommendations in an emergency warning. Research must continue to examine the problem of people ignoring warning communications, as important cognitive factors had not been explored prior to the present research.
The overall purpose was to understand the effects of handoff protocols using meta-analytic approaches.
Standardized protocols have been required by the Joint Commission, but meta-analytic integration of handoff protocol research has not been conducted.
The primary outcomes investigated were handoff information passed during transitions of care, patient outcomes, provider outcomes, and organizational outcomes. Sources included Medline, SAGE, Embase, PsycINFO, and PubMed, searched from the earliest date available through March 30, 2015. Initially, 4,556 articles were identified, and 4,520 were removed during screening. This process left a final set of 36 articles, all of which used pre-/postintervention designs implemented in live clinical/hospital settings. We also conducted a moderation analysis based on the number of items contained in each protocol to examine whether the length of a protocol led to systematic changes in effect sizes of the outcome variables.
Meta-analyses were conducted on 34,527 pre- and 30,072 postintervention data points. Results indicate positive effects on all four outcomes: handoff information (g = .71, 95% confidence interval [CI] [.63, .79]), patient outcomes (g = .53, 95% CI [.41, .65]), provider outcomes (g = .51, 95% CI [.41, .60]), and organizational outcomes (g = .29, 95% CI [.23, .35]). We found protocols to be effective, but there is significant publication bias and heterogeneity in the literature. Due to publication bias, we further searched the gray literature through greylit.org and found another 347 articles, although none were relevant to this research. Our moderation analysis demonstrates that for handoff information, protocols using 12 or more items led to a significantly higher proportion of information passed compared with protocols using 11 or fewer items. Further, there were numerous negative outcomes found throughout this meta-analysis, with trends demonstrating that protocols can increase the time for handover and the rate of errors of omission.
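The following minimal sketch illustrates the effect-size arithmetic behind values such as g = .71, 95% CI [.63, .79]; the group statistics are invented for demonstration and are not drawn from the meta-analysis.

```python
# Hedged sketch: Hedges' g for a pre/post comparison with the small-sample
# correction and a normal-approximation 95% confidence interval.
import math

def hedges_g(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction."""
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (mean1 - mean2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # correction factor J
    return j * d

def ci_95(g, n1, n2):
    """Approximate 95% CI using the large-sample variance of g."""
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    se = math.sqrt(var)
    return g - 1.96 * se, g + 1.96 * se

g = hedges_g(mean1=0.82, sd1=0.15, n1=400, mean2=0.71, sd2=0.16, n2=380)
lo, hi = ci_95(g, 400, 380)
print(f"g = {g:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```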
These results demonstrate that handoff protocols tend to improve results on multiple levels, including handoff information passed and patient, provider, and organizational outcomes. These findings come with the caveat that publication bias exists in the literature on handoffs. Instances where protocols can lead to negative outcomes are also discussed.
Significant effects were found for protocols across provider types, regardless of expertise or area of clinical focus. It also appears that more thorough protocols lead to more information being passed, especially when those protocols consist of 12 or more items. These findings are qualified, however, by the apparent publication bias in this literature base. Recommendations to reduce this publication bias include changing the way articles are screened and published.
To analyze the effect of mental fatigue on a cognitive task and to determine the appropriate start time for rest breaks in work environments.
Mental fatigue has been recognized as one of the most important factors influencing individual performance. Subjective and physiological measures are popular methods for analyzing fatigue, but they are restricted to physical experiments. Computational cognitive models are useful for predicting operator performance and can be used for analyzing fatigue in the design phase, particularly in industrial operations and inspections where cognitive tasks are frequent and the effects of mental fatigue are crucial.
A cyclic mental task is modeled in the ACT-R architecture, and the effect of mental fatigue on response time and error rate is studied. The task involves visual inspections in a production line or control workstation, where an operator has to check products' conformity to specifications. Simulated and experimental results are first compared using correlation coefficients and paired t-test statistics. After validation of the model, the fatigue effects are examined using human and simulated results obtained from 50-minute tests.
During the last 20 minutes of the tests, the response time increased by 20%, and during the last 12.5 minutes, the error rate increased by 7% on average.
The proper start time for the rest period can be identified by setting a limit on the error rate or response time.
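A minimal sketch of this thresholding idea, using synthetic response times and an assumed 10% limit (not parameters from the study), is shown below.

```python
# Hedged sketch: flag the first minute at which a rolling average of response
# time exceeds the baseline by a chosen limit. Data and limit are illustrative.
import numpy as np

rng = np.random.default_rng(1)
minutes = np.arange(50)
# Flat response times for ~30 min, then a gradual fatigue-related increase.
rt = 0.9 + np.where(minutes > 30, 0.006 * (minutes - 30), 0.0) + rng.normal(0, 0.02, 50)

baseline = rt[:10].mean()
window = 5
rolling = np.convolve(rt, np.ones(window) / window, mode="valid")
limit = 1.10  # allow a 10% increase over baseline before calling for a break

exceed = np.flatnonzero(rolling > baseline * limit)
if exceed.size:
    print(f"suggest starting the rest break around minute {exceed[0] + window - 1}")
else:
    print("response time never exceeded the limit; no break needed")
```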
The proposed model can be applied early in production planning to decrease the negative effects of mental fatigue by predicting the operator performance. It can also be used for determining the rest breaks in the design phase without an operator in the loop.
We investigated the effects of automatic target detection (ATD) on the detection and identification performance of soldiers.
Prior studies have shown that highlighting targets can aid their detection. We provided soldiers with ATD that was more likely to detect one target identity than another, potentially acting as an implicit identification aid.
Twenty-eight soldiers detected and identified simulated human targets in an immersive virtual environment with and without ATD. Task difficulty was manipulated by varying scene illumination (day, night). The ATD identification bias was also manipulated (hostile bias, no bias, and friendly bias). We used signal detection measures to analyze the identification results.
ATD presence improved detection performance, especially under high task difficulty (night illumination). Identification sensitivity was greater for cued than uncued targets. The identification decision criterion for cued targets varied with the ATD identification bias but showed a "sluggish beta" effect.
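A minimal sketch of the signal detection computations referred to above (sensitivity d' and criterion c) is given below; the counts and bias conditions are hypothetical illustrations, not the soldiers' data. "Sluggish beta" refers to criterion shifts that are smaller than the optimal shift implied by the cue bias.

```python
# Hedged sketch: d' and c from hit and false-alarm counts, with a log-linear
# correction to avoid infinite z-scores at rates of 0 or 1.
from statistics import NormalDist

def dprime_and_c(hits, misses, false_alarms, correct_rejections):
    z = NormalDist().inv_cdf
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hr) - z(far)
    criterion = -0.5 * (z(hr) + z(far))
    return d_prime, criterion

# Hypothetical "identify as hostile" decisions for cued targets under a
# hostile-biased aid versus a friendly-biased aid.
for label, counts in {"hostile-biased ATD": (42, 8, 14, 36),
                      "friendly-biased ATD": (36, 14, 8, 42)}.items():
    d, c = dprime_and_c(*counts)
    print(f"{label}: d' = {d:.2f}, c = {c:+.2f}")
```

In this toy example sensitivity is identical in both conditions while the criterion shifts in opposite directions, which is the kind of pattern the criterion analysis above is designed to reveal.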
ATD helps soldiers detect and identify targets. The effects of biased ATD on identification should be considered with respect to the operational context.
Less-than-perfectly-reliable ATD is a useful detection aid for dismounted soldiers. Disclosure of known ATD identification bias to the operator may aid the identification process.
To propose a driver attention theory based on the notion of driving as a satisficing and partially self-paced task and, within this framework, present a definition for driver inattention.
Many definitions of driver inattention and distraction have been proposed, but they are difficult to operationalize, and they are either unreasonably strict and inflexible or suffer from hindsight bias.
Existing definitions of driver distraction are reviewed and their shortcomings identified. We then present the minimum required attention (MiRA) theory to overcome these shortcomings. Suggestions on how to operationalize MiRA are also presented.
MiRA describes the role that driver attention plays in the shared "situation awareness of the traffic system." A driver is considered attentive when sampling sufficient information to meet the demands of the system, that is, when he or she fulfills the preconditions for forming and maintaining a good enough mental representation of the situation. A driver should be considered inattentive only when information sampling is insufficient, regardless of whether the driver is concurrently executing an additional task.
The MiRA theory builds on well-established driver attention theories. It goes beyond existing driver distraction definitions by first defining what a driver needs to be attentive to, by being free from hindsight bias, and by allowing the driver to adapt to the current demands of the traffic situation through satisficing and self-pacing. MiRA has the potential to provide a stepping stone for unbiased and operationalizable inattention detection and classification.
The purpose was to determine if Soldier rucksack load, marching distance, and average heart rate (HR) during shooting affect the probability of hitting the target.
Infantry Soldiers routinely carry heavy rucksack loads and are expected to engage enemy targets should a threat arise.
Twelve male Soldiers performed two 11.8 km marches in forested terrain at 4.3 km/hour on separate days (randomized, counterbalanced design). The Rifleman load consisted of protective armor (26.1 kg); the Rucksack load included the Rifleman load plus a weighted rucksack (48.5 kg). Soldiers performed a live-fire shooting task (48 targets) prior to the march, in the middle of the march, and at the end of the march. HR was collected during the shooting task. Data were assessed with multilevel logistic regression controlling for the multiple observations on each subject and shooting target distance. Predicted probabilities for hitting the target were calculated.
There was a three-way interaction among rucksack load, average HR, and march (p = .02). Graphical assessment of predicted probabilities indicated that, regardless of load, marching increased shooting performance. Increases in shooting HR after marching resulted in a lower probability of hitting the target, and rucksack load had inconsistent effects on marksmanship.
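The following sketch illustrates how predicted hit probabilities are derived from a fitted logistic model; the coefficients and predictor coding are invented placeholders rather than the study's estimates.

```python
# Hedged sketch: inverse-logit of a linear predictor with a load x HR x march
# interaction term. All coefficients are hypothetical.
import math

def hit_probability(rucksack_load: int, hr_bpm: float, post_march: int,
                    b0=-0.2, b_load=-0.15, b_hr=-0.012, b_march=0.25,
                    b_interaction=-0.02) -> float:
    """Predicted probability of hitting the target for one shot."""
    hr_centered = hr_bpm - 120.0
    eta = (b0 + b_load * rucksack_load + b_hr * hr_centered
           + b_march * post_march
           + b_interaction * rucksack_load * hr_centered * post_march)
    return 1.0 / (1.0 + math.exp(-eta))

for hr in (110, 130, 150):
    p_rifleman = hit_probability(rucksack_load=0, hr_bpm=hr, post_march=1)
    p_rucksack = hit_probability(rucksack_load=1, hr_bpm=hr, post_march=1)
    print(f"HR {hr}: P(hit) rifleman={p_rifleman:.2f}, rucksack={p_rucksack:.2f}")
```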
Early evidence suggests that rucksack load and marching may not uniformly decrease marksmanship but that an inverted-U phenomenon may govern changes in marksmanship.
The effects of load and marching on marksmanship are not linear; the abilities of Soldiers should be continuously monitored to understand their capabilities in a given scenario.
The objective of the present research was to understand drivers’ interaction patterns with hybrid electric vehicles’ (HEV) eco-features (electric propulsion, regenerative braking, neutral mode) and their relationship to fuel efficiency and driver characteristics (technical system knowledge, eco-driving motivation).
Eco-driving (driving behaviors performed to achieve higher fuel efficiency) has the potential to reduce CO2 emissions caused by road vehicles. Eco-driving in HEVs is particularly challenging due to the systems’ dynamic energy flows. As a result, drivers are likely to show diverse eco-driving behaviors, depending on factors like knowledge and motivation. The eco-features represent an interface for the control of the systems’ energy flows.
A sample of 121 HEV drivers who had consistently logged their fuel consumption prior to the study completed an online questionnaire.
Drivers’ interaction patterns with the eco-features were related to fuel efficiency. A common factor was identified in an exploratory factor analysis, characterizing the intensity of actively dealing with electric energy, which was also related to fuel efficiency. Driver characteristics were not related to this factor, yet they were significant predictors of fuel efficiency.
From the perspective of user–energy interaction, the relationship of the aggregated factor to fuel efficiency emphasizes the central role of drivers’ perception of and interaction with energy conversions in determining HEV eco-driving success.
To arrive at an in-depth understanding of drivers’ eco-driving behaviors that can guide interface design, future research should address the psychological processes that underlie drivers’ interaction patterns with eco-features.
The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors.
Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players’ attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes.
The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented.
A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity.
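As an illustration of the internal-consistency evidence mentioned above, the sketch below computes Cronbach's alpha for one hypothetical subscale from synthetic ratings; it is not based on GUESS data.

```python
# Hedged sketch: Cronbach's alpha from an items-by-respondents matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, shape (n_respondents, n_items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(0, 1, 300)                          # shared satisfaction factor
items = latent[:, None] + rng.normal(0, 0.8, (300, 5))  # 5 correlated items
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```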
The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience.
The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users.
We examine how transitions in task demand are manifested in mental workload and performance in a dual-task setting.
Hysteresis has been defined as the ongoing influence of demand levels prior to a demand transition. Authors of previous studies predominantly examined hysteretic effects in terms of performance. However, little is known about the temporal development of hysteresis in mental workload.
A simulated driving task was combined with an auditory memory task. Participants were instructed to prioritize driving or to prioritize both tasks equally. Three experimental conditions with low, high, and low task demands were constructed by manipulating the frequency of lane changing. Multiple measures of subjective mental workload were taken during experimental conditions.
Contrary to our prediction, no hysteretic effects were found after the high-to-low-demand transition. However, a hysteretic effect in mental workload was found within the high-demand condition, which diminished toward the end of that condition. Priority instructions were not reflected in performance.
Online assessment of both performance and mental workload demonstrates the transient nature of hysteretic effects. An explanation for the observed hysteretic effect in mental workload is offered in terms of effort regulation.
An informed arrival at the scene is important in safety operations, but peaks in mental workload should be avoided to prevent buildup of fatigue. Therefore, communication technologies should incorporate the historical profile of task demand.
The aim of this study was to evaluate the long-lasting effects of prolonged standing work on a hard floor or floor mat and slow-pace walking on muscle twitch force (MTF) elicited by electrical stimulation.
Prolonged standing work may alter lower-leg muscle function, which can be quantified by changes in the MTF amplitude and duration related to muscle fatigue. Ergonomic interventions have been proposed to mitigate fatigue and discomfort; however, their influences remain controversial.
Ten men and eight women simulated standing work in 320-min experiments under three conditions: standing on a hard floor, standing on an antifatigue mat, and walking on a treadmill, each including three seated rest breaks. MTF in the gastrocnemius-soleus muscles was evaluated through changes in signal amplitude and duration.
The significant decrease in MTF amplitude and increase in MTF duration after standing work on a hard floor and on a mat persisted beyond 1 hr postwork. In the walking condition, significant changes in MTF metrics appeared 30 min postwork. The decrease in MTF amplitude was not significant after the first 110 min in any of the conditions; however, MTF duration was significantly higher than baseline in the standing conditions.
Similar long-lasting weakening of MTF was induced by standing on a hard floor and on an antifatigue mat. However, walking partially attenuated this phenomenon.
Mostly static standing is likely to contribute to alterations of MTF in lower-leg muscles and potentially to musculoskeletal disorders regardless of the flooring characteristics. Occupational activities including slow-pace walking may reduce such deterioration in muscle function.
The objectives were to (a) implement theoretical perspectives regarding human–automation interaction (HAI) into model-based tools to assist designers in developing systems that support effective performance and (b) conduct validations to assess the ability of the models to predict operator performance.
Two key concepts in HAI, the lumberjack analogy and black swan events, have been studied extensively. The lumberjack analogy describes the effects of imperfect automation on operator performance: in routine operations, an increased degree of automation supports performance, but in failure conditions, increased automation results in more severely impaired performance. Black swans are the rare and unexpected failures of imperfect automation.
The lumberjack analogy and black swan concepts have been implemented in three model-based tools that predict operator performance in different systems: a flight management system, a remotely controlled robotic arm, and an environmental process control system.
Each modeling effort included a corresponding validation. In one validation, the software tool was used to compare three flight management system designs, which were ranked in the same order as predicted by subject matter experts. The second validation compared model-predicted operator complacency with empirical performance in the same conditions. The third validation compared model-predicted and empirically determined time to detect and repair faults in four automation conditions.
The three model-based tools offer useful ways to predict operator performance in complex systems.
The three tools offer ways to predict the effects of different automation designs on operator performance.
Our aim was to test if highlighting and placement of substance name on medication package have the potential to reduce patient errors.
An unintentional overdose of medication is a major health issue that may be linked to medication package design. In two experiments, the placement, background color, and presentation of the active ingredient name on generic medication packages were manipulated according to best human factors guidelines to reduce causes of labeling-related patient errors.
In two experiments, we compared the original packaging with packages in which we varied the placement of the name, the dose, and the background of the active ingredient. Age-related differences and the effect of color on medication recognition errors were tested. In Experiment 1, 59 volunteers (30 elderly adults and 29 young students) participated. In Experiment 2, 25 volunteers participated.
The most common error was the inability to identify that two different packages contained the same active ingredient (young, 41%, and elderly, 68%). This kind of error decreased with the redesigned packages (young, 8%, and elderly, 16%). Confusion errors related to color design were reduced by two thirds in the redesigned packages compared with original generic medications.
Prominent placement of substance name and dose with a band of high-contrast color support recognition of the active substance in medications.
A simple modification including highlighting and placing the name of the active ingredient in the upper right-hand corner of the package helps users realize that two different packages can contain the same active substance, thus reducing the risk of inadvertent medication overdose.
I introduce the automation-by-expertise-by-training interaction in automated systems and discuss its influence on operator performance.
Transportation accidents that, across a 30-year interval, demonstrated identical automation-related operator errors suggest a need to reexamine traditional views of automation.
I review accident investigation reports, regulator studies, and literature on human–computer interaction, expertise, and training, and discuss how failing to attend to the interaction of automation, expertise level, and training has enabled operators to commit identical automation-related errors.
Automated systems continue to provide capabilities exceeding operators’ need for effective system operation and provide interfaces that can hinder, rather than enhance, operator automation-related situation awareness. Because of limitations in time and resources, training programs do not provide operators the expertise needed to effectively operate these automated systems, requiring them to obtain the expertise ad hoc during system operations. As a result, many do not acquire necessary automation-related system expertise.
Integrating automation with expected operator expertise levels, and within training programs that provide operators the necessary automation expertise, can reduce opportunities for automation-related operator errors.
Research to address the automation-by-expertise-by-training interaction is needed. However, such research must meet challenges inherent to examining realistic sociotechnical system automation features with representative samples of operators, perhaps by using observational and ethnographic research. Research in this domain should improve the integration of design and training and, it is hoped, enhance operator performance.
This article describes a closed-loop, integrated human–vehicle model designed to help understand the underlying cognitive processes that influenced changes in subjects’ visual attention, mental workload, and situation awareness across control mode transitions in a simulated human-in-the-loop lunar landing experiment.
Control mode transitions from autopilot to manual flight may cause total attentional demands to exceed operator capacity. Attentional resources must be reallocated and reprioritized, which can increase the average uncertainty in the operator’s estimates of low-priority system states. We define this increase in uncertainty as a reduction in situation awareness.
We present a model built upon the optimal control model for state estimation, the crossover model for manual control, and the SEEV (salience, effort, expectancy, value) model for visual attention. We modify the SEEV attention executive to direct visual attention based, in part, on the uncertainty in the operator’s estimates of system states.
The model was validated using the simulated lunar landing experimental data, demonstrating an average difference in the percentage of attention of ≤3.6% for all simulator instruments. The model’s predictions of mental workload and situation awareness, measured by task performance and system state uncertainty, also mimicked the experimental data.
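A minimal sketch of a SEEV-style attention allocation, with the expectancy term replaced by state uncertainty as described above, is given below; the instruments, weights, and values are illustrative assumptions rather than the validated model.

```python
# Hedged sketch: normalized SEEV-style scores over areas of interest, where the
# expectancy term is driven by the uncertainty in the operator's state estimate.
def seev_scores(aois, w_salience=1.0, w_effort=1.0, w_expectancy=1.0, w_value=1.0):
    """Return predicted attention proportions over areas of interest."""
    raw = {name: (w_salience * a["salience"]
                  - w_effort * a["effort"]
                  + w_expectancy * a["uncertainty"]   # expectancy ~ state uncertainty
                  + w_value * a["value"])
           for name, a in aois.items()}
    floor = min(raw.values())
    shifted = {k: v - floor + 0.01 for k, v in raw.items()}   # keep scores positive
    total = sum(shifted.values())
    return {k: v / total for k, v in shifted.items()}

instruments = {
    "altitude":   {"salience": 0.6, "effort": 0.2, "uncertainty": 0.8, "value": 0.9},
    "attitude":   {"salience": 0.7, "effort": 0.1, "uncertainty": 0.3, "value": 0.8},
    "fuel":       {"salience": 0.2, "effort": 0.4, "uncertainty": 0.6, "value": 0.3},
    "horizontal": {"salience": 0.4, "effort": 0.3, "uncertainty": 0.5, "value": 0.5},
}
for name, p in seev_scores(instruments).items():
    print(f"{name:>10}: predicted attention share {p:.2f}")
```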
Our model supports the hypothesis that visual attention is influenced by the uncertainty in system state estimates.
Conceptualizing situation awareness around the metric of system state uncertainty is a valuable way for system designers to understand and predict how reallocations in the operator’s visual attention during control mode transitions can produce reallocations in situation awareness of certain states.
To assess whether identifying (or ignoring) learned alarm sounds interferes with performance on a task involving working memory.
A number of researchers have suggested that auditory alarms could interfere with working memory in complex task environments, and this could serve as a caution against their use. Changing auditory information has been shown to interfere with serial recall, even when the auditory information is to be ignored. However, previous researchers have not examined well-learned patterns, such as familiar alarms.
One group of participants learned a set of alarms (either a melody, a rhythmic pulse, or a spoken nonword phrase) and subsequently undertook a digits-forward task in three conditions (no alarms, identify the alarm, or ignore the alarm). A comparison group undertook the baseline and ignore conditions but had no prior exposure to the alarms.
All alarms interfered with serial recall when participants were asked to identify them; however, only the nonword phrase interfered with recall when ignored. Moreover, there was no difference between trained and untrained participants in terms of recall performance when ignoring the alarms, suggesting that previous training does not make alarms less ignorable.
Identifying any alarm sound may interfere with immediate working memory; however, spoken alarms may interfere even when ignored.
It is worth considering the importance of alarms in environments requiring high working memory performance and in particular avoiding spoken alarms in such environments.
We use signal detection theory to measure vulnerability to phishing attacks, including variation in performance across task conditions.
Phishing attacks are difficult to prevent with technology alone, as long as technology is operated by people. Those responsible for managing security risks must understand user decision making in order to create and evaluate potential solutions.
Using a scenario-based online task, we performed two experiments comparing performance on two tasks: detection, deciding whether an e-mail is phishing, and behavior, deciding what to do with an e-mail. In Experiment 1, we manipulated the order of the tasks and notification of the phishing base rate. In Experiment 2, we varied which task participants performed.
In both experiments, despite exhibiting cautious behavior, participants’ limited detection ability left them vulnerable to phishing attacks. Greater sensitivity was positively correlated with confidence. Greater willingness to treat e-mails as legitimate was negatively correlated with perceived consequences from their actions and positively correlated with confidence. These patterns were robust across experimental conditions.
Phishing-related decisions are sensitive to individuals’ detection ability, response bias, confidence, and perception of consequences. Performance differs when people evaluate messages or respond to them but not when their task varies in other ways.
Based on these results, potential interventions include providing users with feedback on their abilities and information about the consequences of phishing, perhaps targeting those with the worst performance. Signal detection methods offer system operators quantitative assessments of the impacts of interventions and their residual vulnerability.
The objective was to characterize multitask resource reallocation strategies when managing subtasks with various assigned values.
When solving a resource conflict in multitasking, Salvucci and Taatgen predict a globally rational strategy will be followed that favors the most urgent subtask and optimizes global performance. However, Katidioti and Taatgen identified a locally rational strategy that optimizes only a subcomponent of the whole task, leading to detrimental consequences on global performance. Moreover, the question remains open whether expertise would have an impact on the choice of the strategy.
We adopted a multitask environment used for pilot selection, with a change in emphasis on two of the four subtasks while all subtasks had to be maintained above a minimum performance level. A laboratory eye-tracking study contrasted 20 recently selected pilot students, considered experienced with this task, with 15 university students, considered novices.
When two subtasks were emphasized, novices focused their resources particularly on one high-value subtask and failed to prevent both low-value subtasks from falling below minimum performance. In contrast, experienced participants delayed the processing of one low-value subtask but managed to optimize global performance.
In a multitasking environment where some subtasks are emphasized, novices follow a locally rational strategy whereas experienced participants follow a globally rational strategy.
During complex training, trainees are only able to adjust their resource allocation strategy to subtask emphasis changes once they are familiar with the multitasking environment.
This study evaluates the effectiveness of a training program designed to improve cross-functional coordination in airline operations.
Teamwork across professional specializations is essential for safe and efficient airline operations, but aviation education primarily emphasizes positional knowledge and skill. Although crew resource management training is commonly used to provide some degree of teamwork training, it is generally focused on specific specializations, and little training is provided in coordination across specializations.
The current study describes and evaluates a multifaceted training program designed to enhance teamwork and team performance of cross-functional teams within a simulated airline flight operations center. The training included a variety of components: orientation training, position-specific declarative knowledge training, position-specific procedural knowledge training, a series of high-fidelity team simulations, and a series of after-action reviews.
Following training, participants demonstrated more effective teamwork, development of transactive memory, and more effective team performance.
Multifaceted team training that incorporates positional training and team interaction in complex realistic situations and followed by after-action reviews can facilitate teamwork and team performance.
Team training programs, such as the one described here, have potential to improve the training of aviation professionals. These techniques can be applied to other contexts where multidisciplinary teams and multiteam systems work to perform highly interdependent activities.
This study explored whether working memory and sustained attention influence cognitive lock-up, which is a delay in the response to consecutive automation failures.
Previous research has demonstrated that the information that automation provides about failures and the time pressure associated with a task influence cognitive lock-up. Previous research has also demonstrated considerable variability in cognitive lock-up between participants, suggesting that individual differences might influence cognitive lock-up. The present study tested whether working memory (including flexibility in executive functioning) and sustained attention might be crucial in this regard.
Eighty-five participants were asked to monitor automated aircraft functions. The experimental manipulation consisted of whether or not an initial automation failure was followed by a consecutive failure. Reaction times to the failures were recorded. Participants’ working-memory and sustained-attention abilities were assessed with standardized tests.
As expected, participants’ reactions to consecutive failures were slower than their reactions to initial failures. In addition, working-memory and sustained-attention abilities enhanced the speed with which participants reacted to failures, more so with regard to consecutive than to initial failures.
The findings highlight that operators with better working memory and sustained attention have small advantages when initial failures occur, but their advantages increase across consecutive failures.
The results stress the need to consider personnel selection strategies to mitigate cognitive lock-up in general and training procedures to enhance the performance of low-ability operators.
The aim of this study was to develop a scale for the "psychological cost" of making control responses in the nonstereotype direction.
Wickens, Keller, and Small suggested values for the psychological cost arising from having control/display relationships that were not in the common stereotype directions. We provide values of such costs specifically for these situations.
Working from data of Chan and Hoffmann for 168 combinations of display location, control type, and display movement direction, we define values for the cost and compare these with the suggested values of Wickens et al.’s Frame of Reference Transformation Tool (FORT) model.
We found marked differences between the values of the FORT model and the data of our experiments. The differences arise largely from the effects of the Worringham and Beringer visual field principle not being adequately considered in the previous research.
A better indication of the psychological cost for use of incorrect control/display stereotypes is given. It is noted that these costs are applicable only to the factor of stereotype strength and not other factors considered in the FORT model.
Effects of having controls and displays that are not arranged to operate with population expectancies can be readily determined from the data in this paper.
The goal for this study was to develop an English translation of the Attention-Related Driving Errors Scale (ARDES-US) and to determine its potential relationship with driver history and other demographic variables.
Individual differences in performance on vigilance and cognitive tasks are well documented, but less is known about susceptibility to attention-related errors while driving. The ARDES has been developed and administered in both Spanish and Chinese but to our knowledge has never been administered or examined in an English-speaking population.
Two hundred ninety-six English-speaking individuals completed a series of self-report measures, including the ARDES-US, Attention-Related Cognitive Errors Scale, Mindful Attention Awareness Scale, and Cognitive Failures Questionnaire.
A confirmatory factor analysis using maximum-likelihood estimation with robust standard errors revealed results largely consistent with previous versions of the ARDES, namely, the ARDES-Spain and ARDES-Argentina. Additionally, a number of new results emerged. Specifically, women, drivers who had received traffic tickets within the previous 2 years, and those with a lower level of education all had a greater propensity toward self-reported driver inattention as measured by the ARDES-US. Further analyses revealed that these findings were independent of age, years of driving experience, and driving frequency.
These results suggest that the ARDES-US is a valid and reliable measure of driver inattention with an English-speaking American sample.
Potential applications of the ARDES-US include identifying individuals who are at greater risk of attention-related errors while driving and suggesting individually tailored training and safety countermeasures.
The aim of this study was to examine the human–automation interaction issues and the interacting factors in the context of conflict detection and resolution advisory (CRA) systems.
The issues of imperfect automation in air traffic control (ATC) have been well documented in previous studies, particularly in conflict-alerting systems. The extent to which the prior findings can be applied to an integrated conflict detection and resolution system in future ATC remains unknown.
Twenty-four participants were evenly divided into two groups corresponding to a medium– and a high–traffic density condition. In each traffic density condition, participants performed simulated ATC tasks under four automation conditions: reliable, unreliable with a short time allowance to secondary conflict (TAS), unreliable with a long TAS, and manual. Dependent variables included conflict resolution performance, workload, situation awareness, and trust in and dependence on the CRA aid.
Introducing the CRA automation increased performance and reduced workload compared with manual performance. The CRA aid did not decrease situation awareness. The benefits of the CRA aid were evident even when it was imperfectly reliable and were apparent across traffic loads. In the unreliable blocks, trust in the CRA aid was degraded but dependence was not affected, and performance was not adversely influenced.
The use of CRA aid would benefit ATC operations across traffic densities.
The CRA aid offers benefits across traffic densities, regardless of its imperfection, as long as its reliability is set above the threshold of assistance, suggesting its applicability to future ATC.
Based on the line operations safety audit (LOSA), two studies were conducted to develop and deploy an equivalent tool for aircraft maintenance: the maintenance operations safety survey (MOSS).
Safety in aircraft maintenance is currently measured reactively, based on the number of audit findings, reportable events, incidents, or accidents. Proactive safety tools designed for monitoring routine operations, such as flight data monitoring and LOSA, have been developed predominantly for flight operations.
In Study 1 (development of MOSS), 12 test peer-to-peer observations were collected to investigate the practicalities of this approach. In Study 2 (deployment of MOSS), seven expert observers collected 56 peer-to-peer observations of line maintenance checks at four stations. Narrative data were coded and analyzed according to the threat and error management (TEM) framework.
In Study 1, a line check was identified as a suitable unit of observation. Communication and third-party data management were the key factors in gaining maintainer trust. Study 2 identified that on average, maintainers experienced 7.8 threats (operational complexities) and committed 2.5 errors per observation. The majority of threats and errors were inconsequential. Links between specific threats and errors leading to 36 undesired states were established.
This research demonstrates that observations of routine maintenance operations are feasible. TEM-based results highlight successful management strategies that maintainers employ on a day-to-day basis.
MOSS is a novel approach for safety data collection and analysis. It helps practitioners understand the nature of maintenance errors, promote an informed culture, and support safety management systems in the maintenance domain.
The current study investigated performance on a dual auditory task during a simulated night shift.
Night shifts and sleep deprivation negatively affect performance on vigilance-based tasks, but less is known about the effects on complex tasks. Because language processing is necessary for successful work performance, it is important to understand how it is affected by night work and sleep deprivation.
Sixty-two participants completed a simulated night shift resulting in 28 hr of total sleep deprivation. Performance on a vigilance task and a dual auditory language task was examined across four testing sessions.
The results indicate that working at night negatively impacts vigilance, auditory attention, and comprehension. The effects on the auditory task varied based on the content of the auditory material. When the material was interesting and easy, the participants performed better. Night work had a greater negative effect when the auditory material was less interesting and more difficult.
These findings support previous research showing that vigilance decreases during the night. The results suggest that auditory comprehension also suffers when individuals are required to work at night. Maintaining attention and controlling effort, especially on passages that are less interesting or more difficult, could improve performance during night shifts.
The results from the current study apply to many work environments where decision making is necessary in response to complex auditory information. Better predicting the effects of night work on language processing is important for developing improved means of coping with shiftwork.
The aim of these studies was to examine the extent to which uncertainty in contact location in submarine track management affected operator situation awareness (SA), workload, and performance and whether operator SA predicted unique variance in performance.
We extend prior research by manipulating uncertainty in contact location and by including a sample of expert track managers in a submarine combat system.
In Experiment 1, university students completed a track management task. In Experiment 2, expert submariners were embedded in a real submarine combat system. Uncertainty was manipulated, and SA was measured using the situation present assessment method (SPAM).
Increased uncertainty led to higher student workload and moderately impaired SA and performance, and SA predicted incremental variance in performance. Uncertainty had no effect on expert SA or the accuracy of the tactical picture compiled. On average, experts took 20 s to accept SA queries (compared with 2.18 s for students). The time taken for experts to accept SA queries, but not their subsequent response to SA queries, was positively associated with their tactical picture accuracy.
Uncertainty can negatively impact SA, workload, and performance. Some key findings from the laboratory were replicated using experts, but the fact that experts took on average 20 s to accept SA queries presents a challenge for using SPAM in submarine control rooms.
Contact location is uncertain due to the use of passive sonar and hostile deception. It is essential to measure track manager SA in order to inform work design and training.
Two experiments were conducted to determine whether detection of the onset of a lead car’s deceleration and judgments of its time to contact (TTC) were affected by the presence of vehicles in lanes adjacent to the lead car.
In a previous study, TTC judgments of an approaching object by a stationary observer were influenced by an adjacent task-irrelevant approaching object. The implication is that vehicles in lanes adjacent to a lead car could influence a driver’s ability to detect the lead car’s deceleration and to make judgments of its TTC.
Displays simulated car-following scenes in which two vehicles in adjacent lanes were either present or absent. Participants were instructed to respond as soon as the lead car decelerated (Experiment 1) or when they thought their car would hit the decelerating lead car (Experiment 2).
The presence of adjacent vehicles did not affect response time to detect deceleration of a lead car but did affect the signal detection theory measure of sensitivity d' and the number of missed deceleration events. Judgments of the lead car’s TTC were shorter when adjacent vehicles were present and decelerated early than when adjacent vehicles were absent.
The presence of vehicles in nearby lanes can affect a driver’s ability to detect a lead car’s deceleration and to make subsequent judgments of its TTC.
Results suggest that nearby traffic can affect a driver’s ability to accurately judge a lead car’s motion in situations that pose risk for rear-end collisions.
The aim of this study was to assess the contributions of Thomas Waters’s work in the field of health care ergonomics and beyond.
Waters’s research on safe patient handling, with a focus on reducing musculoskeletal disorders (MSDs) in health care workers, contributed to current studies and prevention strategies. He worked with several groups to share his research and assist in developing safe patient handling guidelines and curricula for nursing students and health care workers.
The citations of articles that were published by Waters in health care ergonomics were evaluated for quality and themes of conclusions. Quality was assessed using the Mixed Methods Appraisal Tool and centrality to original research rating. Themes were documented by the type of population the citing articles were investigating.
In total, 266 articles that referenced the top seven cited articles were evaluated. More than 95% of them were rated either medium or high quality. The important themes of these citing articles were as follows: (a) Safe patient handling is effective in reducing MSDs in health care workers. (b) Shift work has a negative impact on nurses. (c) There is no safe way to manually lift a patient. (d) Nursing curricula should include safe patient handling.
Waters's research has contributed significantly to health care ergonomics and beyond. His work, in combination with that of other pioneers in the field, has generated multiple initiatives, such as a standard safe patient-handling curriculum and safe patient-handling programs.
We describe health care simulation, designed primarily for training, and provide examples of how human factors experts can collaborate with health care professionals and simulationists—experts in the design and implementation of simulation—to use contemporary simulation to improve health care delivery.
The need—and the opportunity—to apply human factors expertise in efforts to achieve improved health outcomes has never been greater. Health care is a complex adaptive system, and simulation is an effective and flexible tool that can be used by human factors experts to better understand and improve individual, team, and system performance within health care.
Expert opinion is presented, based on a panel delivered during the 2014 Human Factors and Ergonomics Society Health Care Symposium.
Diverse simulators, physically or virtually representing humans or human organs, and simulation applications in education, research, and systems analysis that may be of use to human factors experts are presented. Examples of simulation designed to improve individual, team, and system performance are provided, as are applications in computational modeling, research, and lifelong learning.
The adoption or adaptation of current and future training and assessment simulation technologies and facilities provides opportunities for human factors research and engineering, with benefits for health care safety, quality, resilience, and efficiency.
Human factors experts, health care providers, and simulationists can use contemporary simulation equipment and techniques to study and improve health care delivery.
We assessed the perceived spaciousness of, and preference for, a destination space in relation to six attributes (size, lighting, window size, texture, wall mural, and amount of furniture) of that space and of the space experienced before it.
Previous studies have examined the effects of these attributes but not in the context of dynamic experience or preference.
We created 24 virtual reality walks between each possible pair of two levels of each attribute. For each destination space, 31 students (13 men, 18 women) rated spaciousness and 30 students (16 men, 14 women) rated preference. We conducted separate 2 × 2 repeated-measures ANOVAs across each condition for perceived spaciousness and preference.
Participants judged the space that was larger, was more brightly lit, with a larger window, or with less furniture as the more spacious. These attributes also increased preference. Consonant with adaptation-level theory, participants judged offices as higher in spaciousness and preference if preceded by a space that was smaller, was more dimly lit, or had smaller windows.
The findings suggest that perceived spaciousness varies with size, lighting, window size, and amount of furniture but that perception also depends on the size, lighting, and window size of the space experienced before.
Designers could use the findings to manipulate features to make a space appear larger or more desirable.
We aimed to (a) describe the development and application of an automated approach for processing in-vehicle speech data from a naturalistic driving study (NDS), (b) examine the influence of child passenger presence on driving performance, and (c) model this relationship using in-vehicle speech data.
Parent drivers frequently engage in child-related secondary behaviors, but the impact on driving performance is unknown. Applying automated speech-processing techniques to NDS audio data would facilitate the analysis of in-vehicle driver–child interactions and their influence on driving performance.
Speech activity detection and speaker diarization algorithms were applied to audio data from a Melbourne-based NDS involving 42 families. Multilevel models were developed to evaluate the effect of speech activity and the presence of child passengers on driving performance.
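A multilevel model of this general form can be sketched as follows; the variable and file names (velocity_sd, speech_activity, n_child_passengers, family_id, nds_trips.csv) are hypothetical placeholders, not the study's actual coding.

```python
# Illustrative multilevel model relating in-vehicle speech activity and
# child-passenger presence to driving-performance variability.
# Column and file names are hypothetical; trips are nested within families.
import pandas as pd
import statsmodels.formula.api as smf

trips = pd.read_csv("nds_trips.csv")  # hypothetical per-trip summary file

model = smf.mixedlm(
    "velocity_sd ~ speech_activity * n_child_passengers",
    data=trips,
    groups=trips["family_id"],  # random intercept for each family
)
print(model.fit().summary())
```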
Speech activity was significantly associated with velocity and steering angle variability. Child passenger presence alone was not associated with changes in driving performance. However, speech activity in the presence of two child passengers was associated with the most variability in driving performance.
The effects of in-vehicle speech on driving performance in the presence of child passengers appear to be heterogeneous, and multiple factors may need to be considered in evaluating their impact. This goal can potentially be achieved within large-scale NDS through the automated processing of observational data, including speech.
Speech-processing algorithms enable new perspectives on driving performance to be gained from existing NDS data, and variables that were once labor-intensive to process can be readily utilized in future research.
We investigated the nighttime conspicuity benefits of adding electroluminescent (EL) panels to pedestrian clothing that contains retroreflective elements.
Researchers have repeatedly documented that pedestrians are too often not sufficiently conspicuous to drivers at night and that retroreflective materials can enhance the conspicuity of pedestrians. However, because retroreflective elements in clothing are effective only when they are illuminated by the headlamps of an approaching driver, they are not useful for pedestrians who are positioned outside the beam pattern of an approaching vehicle’s headlamps. Electroluminescent materials—flexible luminous panels that can be attached to clothing—have the potential to be well suited for these conditions.
Using an open-road course at night, we compared the distances at which observers responded to pedestrians who were positioned at one of three lateral positions (relative to the vehicle’s path) wearing one of two high-visibility garments.
The garment that included both EL and retroreflective materials yielded longer response distances than the retroreflective-only garment. This effect was particularly strong when the test pedestrian was positioned farthest outside of the area illuminated by headlamps.
These findings suggest that EL materials can further enhance the conspicuity of pedestrians who are wearing retroreflective materials.
EL materials can be applied to garments. They may be especially valuable to enhance the conspicuity of roadway workers, emergency responders, and traffic control officers.
The aim of this study was to assess the effects of (a) auto-injector form factor on maximum applied force capability and (b) auto-injector design and instructions on force production and orientation.
Effective delivery of epinephrine through an auto-injector is the result of a multitude of design factors. At minimum, the design needs to allow the user to apply sufficient force for the needle to penetrate clothing and tissue.
Trainer devices for three commercially available epinephrine auto-injectors with different form factors (cylindrical, elliptical, prismatic) were tested in a laboratory-based repeated-measures experiment with 20 adults. Participants applied their maximum force onto a force plate positioned over their thigh and practiced an injection using the trainer device after viewing training videos. Participants also rated force confidence and preference.
The maximum force varied significantly across devices. The greatest force observed was 64 newtons with the elliptical device, and the lowest force was 61 newtons with the cylindrical device. Participants reported the highest force confidence when using the elliptical and cylindrical devices, ranking the elliptical as their preferred device.
Force capability results for the elliptical device suggest that it may be more successful in achieving the necessary force for drug delivery in a larger set of adult users.
Results suggest that the auto-injector with the elliptical form may enable more successful drug delivery among a larger set of users.
This study uses a dyadic approach to understand human-agent cooperation and system resilience.
Increasingly capable technology fundamentally changes human-machine relationships. Rather than reliance on or compliance with more or less reliable automation, we investigate interaction strategies with more or less cooperative agents.
A joint-task microworld scenario was developed to explore the effects of agent cooperation on participant cooperation and system resilience. To assess the effects of agent cooperation on participant cooperation, 36 people coordinated with a more or less cooperative agent by requesting resources and responding to requests for resources in a dynamic task environment. Another 36 people were recruited to assess the effects of a perturbation affecting their own hospital in the microworld.
Experiment 1 shows people reciprocated the cooperative behaviors of the agents; a low-cooperation agent led to less effective interactions and less resource sharing, whereas a high-cooperation agent led to more effective interactions and greater resource sharing. Experiment 2 shows that an initial fast-tempo perturbation undermined proactive cooperation—people tended to not request resources. However, the initial fast tempo had little effect on reactive cooperation—people tended to accept resource requests according to cooperation level.
This study complements the supervisory control perspective of human-automation interaction by considering interdependence and cooperation rather than the more common focus on reliability and reliance.
The cooperativeness of automated agents can influence the cooperativeness of human agents. Design and evaluation for resilience in teams involving increasingly autonomous agents should consider the cooperative behaviors of these agents.
Prior studies have demonstrated unique driver behavior outcomes when visual and cognitive distraction occurs simultaneously as compared to the occurrence of one form of distraction alone. This situation implies additional complexity for the design of robust distraction detection systems and vehicle automation for hazard mitigation.
This study evaluated the effectiveness of two distraction classification strategies: (a) a "two-stage" classifier, first detecting visual-manual distraction and then identifying dual or cognitive distraction states, and (b) a "direct-mapping" classifier developed to identify all distraction states at the same time.
Driving performance data were collected from 20 participants under different known states of distraction (none, visual-manual, cognitive, and combined). A support vector machine (SVM) served as the base algorithm for both classifiers; driving performance data and the level of driving control (tactical and operational) served as inputs and modifiers to the classification process.
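The two strategies can be sketched roughly as below; the feature matrix, label coding, and the way the second stage is split are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of the two SVM-based classification strategies.
# X is a matrix of driving-performance features; y holds the known
# distraction state: "none", "visual", "cognitive", or "combined".
import numpy as np
from sklearn.svm import SVC

def direct_mapping(X, y):
    """One multiclass SVM identifies all four distraction states at once."""
    return SVC(kernel="rbf").fit(X, y)

def two_stage(X, y):
    """Stage 1 detects visual-manual involvement; stage 2 then separates
    the remaining states within each branch of the first decision."""
    visual_involved = np.isin(y, ["visual", "combined"])
    stage1 = SVC(kernel="rbf").fit(X, visual_involved)
    stage2_visual = SVC(kernel="rbf").fit(X[visual_involved], y[visual_involved])
    stage2_other = SVC(kernel="rbf").fit(X[~visual_involved], y[~visual_involved])
    return stage1, stage2_visual, stage2_other
```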
The two-stage strategy was found to be sensitive for identifying states of visual-manual distraction; however, the strategy also produced a higher false alarm rate than direct-mapping. Consideration of driving control levels during classification also improved classification accuracy. Future work needs to account for strategic levels of vehicle control.
The aim of this study was to determine and verify the optimal location of the motion axis (MA) for the seat of a dynamic office chair.
A dynamic seat that supports pelvic motion may improve physical well-being and decrease the risk of sitting-associated disorders. However, office work requires an undisturbed view of the work task, which means a stable position of the upper trunk and head. Current dynamic office chairs do not fulfill this need. Consequently, a dynamic seat was adapted to the physiological kinematics of the human spine.
Three-dimensional motion tracking in free sitting helped determine the physiological MA of the spine in the frontal plane. Three dynamic seats with physiological, lower, and higher MA were compared in stable upper body posture (thorax inclination) and seat support of pelvic motion (dynamic fitting accuracy). Spinal kinematics during sitting and walking were compared.
The physiological MA was at the level of the 11th thoracic vertebra, causing minimal thorax inclination and high dynamic fitting accuracy. Spinal motion in active sitting and walking was similar.
The physiological MA of the seat allows considerable lateral flexion of the spine, similar to walking, while maintaining a stable upper body posture and high seat support of pelvic motion.
The physiological MA enables lateral flexion of the spine, similar to walking, without affecting stable upper body posture, thus allowing active sitting while focusing on work.
We investigated performance, workload, and stress in groups of paired observers who performed a vigilance task in a coactive (independent) manner.
Previous studies have demonstrated that groups of coactive observers detect more signals in a vigilance task than observers working alone. Therefore, the use of such groups might be effective in enhancing signal detection in operational situations. However, concern over appearing less competent than one's cohort might induce elevated levels of workload and stress in coactive group members and thereby undermine group performance benefits. Accordingly, we performed an initial experiment comparing workload and stress in observers who performed a vigilance task coactively with those of observers who performed the task alone.
Observers monitored a video display for collision flight paths in a simulated unmanned aerial vehicle control task. Self-reports of workload and stress were secured via the NASA-Task Load Index and the Dundee Stress State Questionnaire, respectively.
Groups of coactive observers detected significantly more signals than did single observers. Coacting observers did not differ significantly from those operating by themselves in terms of workload but did in regard to stress; posttask distress was significantly lower for coacting than for single observers.
Performing a visual vigilance task in a coactive manner with another observer does not elevate workload above that of observers working alone and serves to attenuate the stress associated with vigilance task performance.
The use of coacting observers could be an effective vehicle for enhancing performance efficiency in operational vigilance.
This study tests the reliability of a system (FINANS) for collecting and analyzing incident reports in the financial trading domain, guided by a human factors taxonomy used to describe error in that domain.
Research indicates the utility of applying human factors theory to understand error in finance, yet empirical research is lacking. We report on the development of the first system for capturing and analyzing human factors–related issues in operational trading incidents.
In the first study, 20 incidents are analyzed by an expert user group against a referent standard to establish the reliability of FINANS. In the second study, 750 incidents are analyzed using distribution, mean, pathway, and associative analysis to describe the data.
Kappa scores indicate that categories within FINANS can be reliably used to identify and extract data on human factors–related problems underlying trading incidents. Approximately 1% of trades (n = 750) led to an incident. Slip/lapse (61%), situation awareness (51%), and teamwork (40%) were found to be the most common problems underlying incidents. For the most serious incidents, problems in situation awareness and teamwork were most common.
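Agreement of this kind is conventionally quantified with Cohen's kappa; a minimal sketch with hypothetical incident codes is shown below.

```python
# Minimal sketch of inter-rater agreement (Cohen's kappa) between an
# expert coder and a referent standard; the incident codes are hypothetical.
from sklearn.metrics import cohen_kappa_score

referent = ["slip_lapse", "teamwork", "situation_awareness", "slip_lapse", "teamwork"]
expert   = ["slip_lapse", "teamwork", "slip_lapse",          "slip_lapse", "teamwork"]

print(cohen_kappa_score(referent, expert))
```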
We show that (a) experts in the trading domain can reliably and accurately code human factors in incidents, (b) 1% of trades incur error, and (c) poor teamwork skills and situation awareness underpin the most critical incidents.
This research provides data crucial for ameliorating risk within financial trading organizations, with implications for regulation and policy.
The aim of this study was to understand factors that influence the prediction of uncertain spatial trajectories (e.g., the future path of a hurricane or ship) and the role of human overconfidence in such prediction.
Research has indicated that human prediction of uncertain trajectories is difficult and may well be subject to overconfidence in the accuracy of forecasts, as is found in event prediction; this finding suggests that humans insufficiently appreciate the contribution of natural variance to their predictions.
In two experiments, our paradigm required participants to observe a starting point, a position at time T, and then make a prediction of the location of the trajectory at time NT. They experienced several trajectories from the same underlying model but perturbed by random variance in heading and speed.
In Experiment 1A, people predicted linear paths well and were better in heading predictions than in speed predictions. However, participants greatly underestimated the variance in predicted location, indicating overconfidence. In Experiment 1B, the effect was replicated with frequencies rather than probabilities used in variance estimates. In Experiment 2, people predicted nonlinear trajectories poorly, and overconfidence was again observed. Overconfidence was reduced on the more difficult predictions. In both main experiments, those better at predicting the mean were not better at predicting the variance.
Predicting the level of uncertainty in spatial trajectories is not well done and may involve qualitatively different abilities than prediction of the mean.
Improving real-world performance at prediction demands developing better understanding of variability, not just the average case. Biases in prediction of uncertainty may be addressed through debiasing training and/or visualization tools that could assist in more calibrated action planning.
To honor Tom Waters's work on emerging occupational health issues, we review the literature on physical and chemical exposures and their impact on functional outcomes.
Many occupations present the opportunity for exposure to multiple hazards, including both physical and chemical factors. However, little is known about how these different factors affect functional ability and injury. The goal of this review is to examine the relationships between these exposures, impairment of the neuromuscular and musculoskeletal systems, functional outcomes, and health problems, with a focus on acute injury.
Literature was identified using online databases, including PubMed, Ovid Medline, and Google Scholar. References from included articles were searched for additional relevant articles.
This review documented the limited existing literature that discussed cognitive impairment and functional disorders via neurotoxicity for physical exposures (heat and repetitive loading) and chemical exposures (pesticides, volatile organic compounds [VOCs], and heavy metals).
This review indicates that workers experience physical and chemical exposures that are associated with negative health effects, including functional impairment and injury. Innovation in exposure assessment, particularly in quantifying joint exposure to these different factors, is especially needed for developing risk assessment models and, ultimately, preventive measures.
Along with physical exposures, chemical exposures need to be considered, alone and in combination, in assessing functional ability and occupationally related injuries.
The objective of this study was to examine the potential benefits and impact on pilot behavior from the use of portable weather applications.
Seventy general aviation (GA) pilots participated in the study. Each pilot was randomly assigned to an experimental or a control group and flew a simulated single-engine GA aircraft, initially under visual meteorological conditions (VMC). The experimental group was equipped with a portable weather application during flight. We recorded measures for weather situation awareness (WSA), decision making, cognitive engagement, and distance from the aircraft to hazardous weather.
We found positive effects from the use of the portable weather application, with an increased WSA for the experimental group, which resulted in credibly larger route deviations and credibly greater distances to hazardous weather (≥30 dBZ cells) compared with the control group. Nevertheless, both groups flew less than 20 statute miles from hazardous weather cells, thus failing to follow current weather-avoidance guidelines. We also found a credibly higher cognitive engagement (prefrontal oxygenation levels) for the experimental group, possibly reflecting increased flight planning and decision making on the part of the pilots.
Overall, the study outcome supports our hypothesis that portable weather displays can be used without degrading pilot performance on safety-related flight tasks, actions, and decisions as measured within the constraints of the present study. However, it also shows that an increased WSA does not automatically translate to enhanced flight behavior.
The study outcome contributes to our knowledge of the effect of portable weather applications on pilot behavior and decision making.
The aim of this study was to investigate the effects of flooring type and resident weight on external hand forces required to push floor-based lifts in long-term care (LTC).
Novel compliant flooring is designed to reduce fall-related injuries among LTC residents but may increase forces required for staff to perform pushing tasks. A motorized lift may offset the effect of flooring on push forces.
Fourteen female LTC staff performed straight-line pushes with two floor-based lifts (conventional, motor driven) loaded with passengers of average and 90th-percentile resident weights over four flooring systems (concrete+vinyl, compliant+vinyl, concrete+carpet, compliant+carpet). Initial and sustained push forces were measured by a handlebar-mounted triaxial load cell and compared to participant-specific tolerance limits. Participants rated pushing difficulty.
Novel compliant flooring increased initial and sustained push forces and subjective ratings compared to concrete flooring. Compared to the conventional lift, the motor-driven lift substantially reduced initial and sustained push forces and perceived difficulty of pushing for all four floors and both resident weights. Participants exerted forces above published tolerance limits only when using the conventional lift on the carpet conditions (concrete+carpet, compliant+carpet). With the motor-driven lift only, resident weight did not affect push forces.
Novel compliant flooring increased linear push forces generated by LTC staff using floor-based lifts, but forces did not exceed tolerance limits when pushing over compliant+vinyl. The motor-driven lift substantially reduced push forces compared to the conventional lift.
Results may help to address risk of work-related musculoskeletal injury, especially in locations with novel compliant flooring.
The aim of this study was to evaluate the efficacy of the new variable lifting index (VLI) method, theoretically based on the Revised National Institute for Occupational Safety and Health (NIOSH) Lifting Equation (RNLE), in predicting the risk of acute low-back pain (LBP) in the past 12 months.
A new risk index, termed the VLI, has been developed for assessing variable manual lifting, but no epidemiological study has evaluated the relationship between the VLI and LBP.
A sample of 3,402 study participants from 16 companies in different industrial sectors was analyzed. Of the participants, 2,374 were in the risk exposure group involving manual materials handling (MMH), and 1,028 were in the control group without MMH. The VLI was calculated for each participant in the exposure group using a systematic approach. LBP information was collected by occupational physicians at the study sites. The risk of acute LBP was estimated by calculating the odds ratio (OR) between levels of risk exposure and the control group using logistic regression analysis. Both crude ORs and ORs adjusted for body mass index, gender, and age were analyzed.
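The analysis can be sketched as below; the file and column names (vli_study.csv, lbp, vli_level, bmi, gender, age) are hypothetical placeholders rather than the study's data structure.

```python
# Illustrative crude and adjusted odds-ratio estimation for acute LBP by
# VLI exposure level via logistic regression; names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vli_study.csv")  # hypothetical analysis file

crude = smf.logit("lbp ~ C(vli_level)", data=df).fit()
adjusted = smf.logit("lbp ~ C(vli_level) + bmi + C(gender) + age", data=df).fit()

# Odds ratios are the exponentiated regression coefficients.
print(np.exp(crude.params))
print(np.exp(adjusted.params))
```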
Both crude and adjusted ORs showed a dose-response relationship. As the levels of VLI increased, the risk of LBP increased. This risk relationship existed when VLI was greater than 1.
The VLI method can be used to assess the risk of acute LBP, although further studies are needed to confirm the outcome and to define better VLI categories.
The aim of this study was to introduce and evaluate two interventions, Ergo Bucket Carrier (EBC) and Easy Lift (EL), for youths (and adults) to handle water/feed buckets on farms.
The physical activities of both adult and youth farm workers contribute to the development of low-back disorders (LBDs). Many of the activities youths perform on farms are associated with increased LBD risk, particularly, the handling of water and feed buckets.
Seventeen adult and youth participants (10 males and 7 females) took part in this study. To assess the risk of LBDs, the participants were instrumented with a three-dimensional spinal electrogoniometer while lifting, carrying, and dumping water buckets using the traditional method and the two interventions.
For both the adult and youth groups, the results showed that the two interventions significantly decreased the magnitudes of LBD risk in many of the tasks evaluated. Overall, the use of the EBC resulted in a 41% reduction in the level of LBD risk for the carrying task and a reduction of 69% for the dumping task. Using the EL, on the other hand, was especially effective for lifting tasks (55% reduction in LBD risk). Results of the subjective response were consistent with the objective evaluations.
This study demonstrated the potential for ergonomic interventions in reducing LBD risk during the common farming task of bucket handling.
Potential application of this study includes the introduction of the EBC and EL in family farms to reduce the LBD risk among youth and adult farmers.
The objectives were to: (a) develop a continuous frequency multiplier (FM) for the revised NIOSH lifting equation (RNLE) as a function of lifting frequency and duration of a lifting task, and (b) describe the Cumulative Lifting Index (CULI), a methodology for estimating physical exposure to workers with job rotation.
The existing FM for the RNLE (FME) does not differentiate among task durations greater than 2 hr and less than 8 hr, which makes it difficult to quantify physical exposure for workers with job rotation and presents challenges to job designers.
Using the existing FMs for 1, 2, and 8 hr of task duration, we developed a continuous FM (FMP) that extends to 12 hr per day. We simulated 157,500 jobs, each consisting of two tasks, using different combinations of Frequency Independent Lifting Index, lifting frequency, and lifting duration. Biomechanical stresses were estimated using the CULI, the time-weighted average (TWA), and peak exposure.
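The contrast between the TWA and peak-exposure summaries for a two-task job can be illustrated as below; the lifting-index values and task durations are hypothetical, and the CULI itself is not implemented here.

```python
# Hypothetical two-task job illustrating why time-weighted-average (TWA)
# and peak summaries of lifting exposure can disagree. Values are
# illustrative only; the CULI aggregation is not implemented here.
tasks = [
    {"li": 2.4, "hours": 2.0},  # heavier lifting task
    {"li": 0.8, "hours": 6.0},  # lighter lifting task
]

total_hours = sum(t["hours"] for t in tasks)
twa = sum(t["li"] * t["hours"] for t in tasks) / total_hours
peak = max(t["li"] for t in tasks)

print(f"TWA exposure:  {twa:.2f}")   # 1.20 -- dilutes the heavy task
print(f"Peak exposure: {peak:.2f}")  # 2.40 -- ignores the lighter task
```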
The median difference between FME and FMP was within ±1% (range: 0%–15%). Compared to the CULI, the TWA underestimated the risk of low-back pain (LBP) for 18% to 30% of jobs, and peak exposure for an assumed 8-hr work shift overestimated the risk of LBP for 20% to 25% of jobs. Peak task exposure showed 90% agreement with the CULI but ignored one of the two tasks.
The CULI partially addressed the underestimation of physical exposure using the TWA approach and overestimation of exposure using the peak-exposure approach.
The proposed FM and CULI may provide more accurate physical exposure estimates, and therefore estimated risk of LBP, for workers with job rotation.
The objective of this article is to evaluate the impact of the revised National Institute for Occupational Safety and Health lifting equation (RNLE).
The RNLE has been used extensively as a risk assessment method for prevention of low back pain (LBP). However, the impact of the RNLE has not been documented.
A systematic review of the literature on the RNLE was conducted. The review consisted of three parts: characterization of the RNLE publications, assessment of the impact of the RNLE, and evaluation of the influences of the RNLE on ergonomic standards. The literature for assessing the impact was categorized into four research areas: methodology, laboratory, field, and risk assessment studies using the Lifting Index (LI) or Composite LI (CLI), both of which are the products of the RNLE.
The impact of the RNLE has been both widespread and influential. We found 24 studies that examined the criteria used to define lifting capacity used by the RNLE, 28 studies that compared risk assessment methods for identifying LBP, 23 studies that found the RNLE useful in identifying the risk of LBP with different work populations, and 13 studies on the relationship between LI/CLI and LBP outcomes. We also found evidence on the adoption of the RNLE as an ergonomic standard for use by various local, state, and international entities.
The review found 13 studies that link LI/CLI to adverse LBP outcomes. These studies showed a positive relationship between LI/CLI metrics and the severity of LBP outcomes.
This study investigated the effects of hospital bed features on the biomechanical stresses experienced by nurses when turning and laterally repositioning patients. Turn Assist, a common feature in ICU beds that helps to rotate patients, and side rail orientation were evaluated.
Manual patient handling is a risk factor for musculoskeletal injury, and turning patients is one of the most common patient handling activities. No known studies have evaluated bed attributes such as the Turn Assist feature and side rail orientation that may affect the stresses experienced by the nurse.
Nine female nurses laterally repositioned and turned a 63-kg and 123-kg subject on an ICU bed while motion capture, ground reaction forces, and hand force data were recorded. Loading of the spine and shoulder was modeled using 3D Static Strength Prediction Program (3DSSPP).
Spine compression and shear forces did not exceed recommended limits when turning or laterally repositioning. However, the mean pull force required to manually reposition even the 63-kg subject laterally was 340 newtons, more than 50% greater than limits established in psychophysical testing. Turn Assist considerably reduced spine loading and pull forces for both turning and lateral repositioning. Lowering the side rails reduced spinal compression by 11% when turning patients.
Laterally repositioning patients as part of turning may pose an injury risk to caregivers. Turn Assist reduces physical loading on nurses when turning and repositioning patients.
Caregivers should consider using Turn Assist and other aids such as mechanical lifts or sliding sheets especially when turning patients requires lateral repositioning.
We seek to develop a new approach for analyzing the physical demands of highly variable lifting tasks through an adaptation of the Revised NIOSH (National Institute for Occupational Safety and Health) Lifting Equation (RNLE) into a Variable Lifting Index (VLI).
There are many jobs that contain individual lifts that vary from lift to lift due to the task requirements. The NIOSH Lifting Equation is not suitable in its present form to analyze variable lifting tasks.
In extending the prior work on the VLI, two procedures are presented to allow users to analyze variable lifting tasks. One approach involves sampling the lifting tasks performed by a worker over a shift, calculating the Frequency Independent Lift Index (FILI) for each sampled lift, and aggregating the FILI values into six categories. The Composite Lift Index (CLI) equation is then applied to the lifting index (LI) category frequency data to calculate the VLI. The second approach employs a detailed, systematic collection of lifting task data from production and/or organizational sources. These data are organized into simplified task parameter categories and further aggregated into six FILI categories, to which the CLI equation is likewise applied to calculate the VLI.
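The first step of the sampling approach, aggregating sampled-lift FILI values into six categories, can be sketched as below; the category boundaries and FILI values are hypothetical, and the subsequent CLI aggregation over the category frequencies is not shown.

```python
# Sketch of binning sampled-lift FILI values into six categories and
# tabulating category frequencies (boundaries and values are hypothetical;
# the CLI equation applied to these frequencies is not shown).
import numpy as np

fili_samples = np.array([0.4, 0.9, 1.3, 2.1, 0.7, 1.8, 2.6, 0.5])
category_edges = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, np.inf]  # six illustrative bins

counts, _ = np.histogram(fili_samples, bins=category_edges)
for category, n in enumerate(counts, start=1):
    print(f"FILI category {category}: {n} sampled lifts")
```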
The two procedures will allow practitioners to systematically employ the VLI method to a variety of work situations where highly variable lifting tasks are performed.
The scientific basis for the VLI procedure is similar to that for the CLI originally presented by NIOSH; however, the VLI method remains to be validated.
The VLI method allows an analyst to assess highly variable manual lifting jobs in which the task characteristics vary from lift to lift during a shift.
We examined how providing artificially high or low statements about automation reliability affected expectations, perceptions, and use of automation over time.
One common method of introducing automation is providing explicit statements about the automation’s capabilities. Research is needed to understand how expectations from such introductions affect perceptions and use of automation.
Explicit-statement introductions were manipulated to set higher-than (90%), same-as (75%), or lower-than (60%) levels of expectations in a dual-task scenario with 75% reliable automation. Two experiments were conducted to assess expectations, perceptions, compliance, reliance, and task performance over (a) 2 days and (b) 4 days.
The baseline assessments showed initial expectations of automation reliability matched introduced levels of expectation. For the duration of each experiment, the lower-than groups’ perceptions were lower than the actual automation reliability. However, the higher-than groups’ perceptions were no different from actual automation reliability after Day 1 in either study. There were few differences between groups for automation use, which generally stayed the same or increased with experience using the system.
Introductory statements describing artificially low automation reliability have a long-lasting impact on perceptions about automation performance. Statements including incorrect automation reliability do not appear to affect use of automation.
Introductions should be designed according to desired outcomes for expectations, perceptions, and use of the automation. Low expectations have long-lasting effects.
A fully immersive, high-fidelity street-crossing simulator was used to examine the effects of texting on pedestrian street-crossing performance.
Research suggests that street-crossing performance is impaired when pedestrians engage in cell phone conversations. Less is known about the impact of texting on street-crossing performance.
Thirty-two young adults completed three distraction conditions in a simulated street-crossing task: no distraction, phone conversation, and texting. A hands-free headset and a mounted tablet were used to conduct the phone and texting conversations, respectively. Participants moved through the virtual environment via a manual treadmill, allowing them to select crossing gaps and change their gait.
During the phone conversation and texting conditions, participants had fewer successful crossings and took longer to initiate crossing. Furthermore, in the texting condition, a smaller percentage of time spent with the head oriented toward the tablet, fewer head orientations toward the tablet, and a greater percentage of total characters typed before initiating crossing predicted greater crossing success.
Our results suggest that (a) texting is as unsafe as phone conversations for street-crossing performance and (b) when subjects completed most of the texting task before initiating crossing, they were more likely to make it safely across the street.
Sending and receiving text messages negatively impact a range of real-world behaviors. These results may inform personal and policy decisions.
We review historical and more recent efforts in boredom research and related fields. A framework is presented that organizes the various facets of boredom, particularly in supervisory control settings, and research gaps and future potential areas for study are highlighted.
Given the ubiquity of boredom across a wide spectrum of work environments—exacerbated by increasingly automated systems that remove humans from direct, physical system interaction and possibly increasing tedium in the workplace—there is a need not only to better understand the multiple facets of boredom in work environments but to develop targeted mitigation strategies.
To better understand the relationships between the various influences and outcomes of boredom, a systems-based framework, called the Boredom Influence Diagram, is proposed that describes various elements of boredom and their interrelationships.
Boredom is closely related to vigilance, attention management, and task performance. This review highlights the need to develop more naturalistic experiments that reflect the characteristics of a boring work environment.
With the increase in automation, boredom in the workplace will likely become a more prevalent issue for motivation and retention. In addition, developing continuous measures of boredom based on physiological signals is critical.
Personnel selection and improvements in system and task design can potentially mitigate boredom. However, more work is needed to develop and evaluate other potential interventions.
Previously published statistical models of driving posture have been effective for vehicle design but have not taken into account the effects of age.
The present study developed new statistical models for predicting driving posture.
Driving postures of 90 U.S. drivers with a wide range of age and body size were measured in a laboratory mockup in nine package conditions. Posture-prediction models for female and male drivers were developed separately by employing a stepwise regression technique using age, body dimensions, vehicle package conditions, and two-way interactions, among other variables.
Driving posture was significantly associated with age, and the effects of other variables depended on age. A set of posture-prediction models is presented for women and men. The results are compared with a previously developed model.
The present study is the first study of driver posture to include a large cohort of older drivers and the first to report a significant effect of age.
The posture-prediction models can be used to position computational human models or crash-test dummies for vehicle design and assessment.
We evaluated the effect of work surface angle and input hardware on upper-limb posture when using a hybrid computer workstation.
Offices use sit-stand and/or tablet workstations to increase worker mobility. These workstations may have negative effects on upper-limb joints by increasing time spent in non-neutral postures, but a hybrid standing workstation may improve working postures.
Fourteen participants completed office tasks in four workstation configurations: a horizontal or sloped 15° working surface with computer or tablet hardware. Three-dimensional right upper-limb postures were recorded during three tasks: reading, form filling, and writing e-mails. Amplitude probability distribution functions determined the median and range of upper-limb postures.
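An APDF reduces a continuous posture signal to its percentiles; the sketch below uses a synthetic wrist-deviation trace rather than the recorded data.

```python
# Minimal APDF sketch: summarize a posture time series by its 10th, 50th
# (median), and 90th percentiles. The wrist-angle trace is synthetic.
import numpy as np

rng = np.random.default_rng(0)
wrist_ulnar_deviation_deg = rng.normal(loc=8.0, scale=4.0, size=3000)

p10, p50, p90 = np.percentile(wrist_ulnar_deviation_deg, [10, 50, 90])
print(f"APDF 10th: {p10:.1f} deg, median: {p50:.1f} deg, 90th: {p90:.1f} deg")
```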
The sloped-surface tablet workstation decreased wrist ulnar deviation by 5° compared to the horizontal-surface computer configuration during reading. When using computer input devices (keyboard and mouse), the shoulder, elbow, and wrist were closest to neutral joint postures when working on a horizontal work surface. The elbow was 23° and 15° more extended, and the wrist was 6° less ulnar deviated, when reading compared to typing forms or e-mails.
We recommend that the horizontal-surface computer configuration be used for typing and the sloped-surface tablet configuration be used for intermittent reading tasks in this hybrid workstation.
Offices with mobile employees could use this workstation for alternating their upper-extremity postures; however, other aspects of the device need further investigation.
I explored whether different cognitive abilities (information-processing ability, working-memory capacity) are needed for expertise development when different types of automation (information vs. decision automation) are employed.
It is well documented that expertise development and the employment of automation lead to improved performance. Here, it is argued that a learner’s ability to reason about an activity may be hindered by the employment of information automation. Additional feedback needs to be processed, thus increasing the load on working memory and decelerating expertise development. By contrast, the employment of decision automation may stimulate reasoning, increase the initial load on information-processing ability, and accelerate expertise development. Authors of past research have not investigated the interrelations between automation assistance, individual differences, and expertise development.
Sixty-one naive learners controlled simulated air traffic with two types of automation: information automation and decision automation. Their performance was captured across 16 trials. Well-established tests were used to assess information-processing ability and working-memory capacity.
As expected, learners’ performance benefited from expertise development and decision automation. Furthermore, individual differences moderated the effect of the type of automation on expertise development: The employment of only information automation increased the load on working memory during later expertise development. The employment of decision automation initially increased the need to process information.
These findings highlight the importance of considering individual differences and expertise development when investigating human–automation interaction.
The results are relevant for selecting automation configurations for expertise development.
In the present study, we examined the effect of working while seated, while standing, or while walking on measures of short-term memory, working memory, selective and sustained attention, and information-processing speed.
The advent of computer-based technology has revolutionized the adult workplace, such that average adult full-time employees spend the majority of their working day seated. Prolonged sitting is associated with increasing obesity and chronic health conditions in children and adults. One possible intervention to reduce the negative health impacts of the modern office environment involves modifying the workplace to increase incidental activity and exercise during the workday. Although modifications, such as sit-stand desks, have been shown to improve physiological function, there is mixed information regarding the impact of such office modification on individual cognitive performance and thereby the efficiency of the work environment.
In a fully counterbalanced randomized control trial, we assessed the cognitive performance of 45 undergraduate students for up to a 1-hr period in each condition.
The results indicate that there is no significant change in the measures used to assess cognitive performance associated with working while seated, while standing, or while walking at low intensity.
These results indicate that cognitive performance is not degraded with short-term use of alternate workstations.
We investigated cross-level effects, which are concurrent changes across neural and cognitive-behavioral levels of analysis as teams interact, between neurophysiology and team communication variables under variations in team training.
When people work together as a team, they develop neural, cognitive, and behavioral patterns that they would not develop individually. It is currently unknown whether these patterns are associated with each other in the form of cross-level effects.
Team-level neurophysiology and latent semantic analysis communication data were collected from submarine teams in a training simulation. We analyzed whether (a) both neural and communication variables change together in response to changes in training segments (briefing, scenario, or debriefing), (b) neural and communication variables mutually discriminate teams of different experience levels, and (c) peak cross-correlations between neural and communication variables identify how the levels are linked.
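A peak cross-correlation analysis of this kind can be sketched as below; the two signals are synthetic placeholders for the team-level neural and communication measures.

```python
# Sketch of locating the peak cross-correlation (and its lag) between two
# team-level time series; the signals here are synthetic placeholders.
import numpy as np
from scipy.signal import correlate, correlation_lags

rng = np.random.default_rng(1)
comm = rng.normal(size=200)                             # communication measure
neuro = np.roll(comm, 5) + 0.5 * rng.normal(size=200)   # neural measure, delayed copy

comm_z = (comm - comm.mean()) / comm.std()
neuro_z = (neuro - neuro.mean()) / neuro.std()

xcorr = correlate(neuro_z, comm_z, mode="full") / len(comm)
lags = correlation_lags(len(neuro_z), len(comm_z), mode="full")
print(f"Peak cross-correlation at lag {lags[np.argmax(xcorr)]} samples")
```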
Changes in training segment led to changes in both neural and communication variables, neural and communication variables mutually discriminated between teams of different experience levels, and peak cross-correlations indicated that changes in communication precede changes in neural patterns in more experienced teams.
Cross-level effects suggest that teamwork is not reducible to a fundamental level of analysis and that training effects are spread out across neural and cognitive-behavioral levels of analysis. Cross-level effects are important to consider for theories of team performance and practical aspects of team training.
Cross-level effects suggest that measurements could be taken at one level (e.g., neural) to assess team experience (or skill) on another level (e.g., cognitive-behavioral).
We studied the utility of occlusion distance as a function of task-relevant event density in realistic traffic scenarios with self-controlled speed.
The visual occlusion technique is an established method for assessing visual demands of driving. However, occlusion time is not a highly informative measure of environmental task-relevant event density in self-paced driving scenarios because it partials out the effects of changes in driving speed.
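The distinction can be made concrete with a simple kinematic identity (our illustration, not a formula from the study): the road distance covered during an occlusion is the product of the self-selected speed and the occlusion duration, so a driver who slows down covers less road per occlusion even when occlusion times stay constant.

```latex
% Illustrative relation, assuming roughly constant speed during the occlusion:
d_{\mathrm{occ}} = \bar{v}\, t_{\mathrm{occ}},
\qquad \text{e.g. } \bar{v} = 20\ \mathrm{m/s},\; t_{\mathrm{occ}} = 2\ \mathrm{s}
\;\Rightarrow\; d_{\mathrm{occ}} = 40\ \mathrm{m}.
```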
Self-determined occlusion times and distances of 97 drivers with varying backgrounds were analyzed in driving scenarios simulating real Finnish suburban and highway traffic environments with self-determined vehicle speed.
Occlusion distances varied systematically with the expected environmental demands of the manipulated driving scenarios whereas the distributions of occlusion times remained more static across the scenarios. Systematic individual differences in the preferred occlusion distances were observed. More experienced drivers achieved better lane-keeping accuracy than inexperienced drivers with similar occlusion distances; however, driving experience was unexpectedly not a major factor for the preferred occlusion distances.
Occlusion distance seems to be an informative measure for assessing task-relevant event density in realistic traffic scenarios with self-controlled speed. Occlusion time measures the visual demand of driving as the task-relevant event rate in time intervals, whereas occlusion distance measures the experienced task-relevant event density in distance intervals.
The findings can be utilized in context-aware distraction mitigation systems, human–automated vehicle interaction, road speed prediction and design, as well as in the testing of visual in-vehicle tasks for inappropriate in-vehicle glancing behaviors in any dynamic traffic scenario for which appropriate individual occlusion distances can be defined.
The aim of this study was to enable the head-up monitoring of two interrelated aircraft navigation instruments by developing a 3-D auditory display that encodes this navigation information within two spatially discrete sonifications.
Head-up monitoring of aircraft navigation information utilizing 3-D audio displays, particularly involving concurrently presented sonifications, requires additional research.
A flight simulator’s head-down waypoint bearing and course deviation instrument readouts were conveyed to participants via a 3-D auditory display. Both readouts were separately represented by a colocated pair of continuous sounds, one fixed and the other varying in pitch, which together encoded the instrument value’s deviation from the norm. Each sound pair’s position in the listening space indicated the left/right parameter of its instrument’s readout. Participants’ accuracy in navigating a predetermined flight plan was evaluated while performing a head-up task involving the detection of visual flares in the out-of-cockpit scene.
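One way such a deviation-to-pitch mapping can be sketched is shown below; the reference frequency, pitch range, and scaling are assumptions chosen for illustration, not the display's actual parameters.

```python
# Illustrative deviation-to-pitch mapping for a sonified instrument readout:
# a fixed reference tone is paired with a second tone whose pitch shifts with
# the deviation from the commanded value. All parameters are hypothetical.
def deviation_tone_hz(deviation: float, max_deviation: float,
                      reference_hz: float = 440.0,
                      max_semitones: float = 12.0) -> float:
    """Map a deviation in [-max_deviation, +max_deviation] to a pitch offset."""
    fraction = max(-1.0, min(1.0, deviation / max_deviation))
    return reference_hz * 2 ** (fraction * max_semitones / 12.0)

# A course deviation of +0.5 (of a maximum 2.0) raises the varying tone
# roughly three semitones above the fixed 440 Hz reference tone.
print(deviation_tone_hz(deviation=0.5, max_deviation=2.0))  # ~523 Hz
```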
The auditory display significantly improved aircraft heading and course deviation accuracy, head-up time, and flare detections. Head tracking did not improve performance by providing participants with the ability to orient potentially conflicting sounds, suggesting that the use of integrated localizing cues was successful.
A supplementary 3-D auditory display enabled effective head-up monitoring of interrelated navigation information normally attended to through a head-down display.
Pilots operating aircraft, such as helicopters and unmanned aerial vehicles, may benefit from a supplementary auditory display because they navigate in two dimensions while performing head-up, out-of-aircraft, visual tasks.
The objective of this research was to advance an improved model of Flight Crew task performance.
Existing task models present a "local" description of Flight Crew task performance.
Process mapping workshops, interviews, and observations were conducted with both pilots and flight operations personnel from five airlines, as part of the Human Integration into the Lifecycle of Aviation Systems (HILAS) project.
The functional logic of the process dictates Flight Crew task requirements and specific task workflows. The Flight Crew task involves managing different levels of operational and environmental complexity, associated with the particular flight context. In so doing, the Flight Crew act as a coordinating interface between different human agents involved in the Active Flight Operations process and other processes that interface with this process.
This article presents a new sociotechnical model of the Flight Crew task. The proposed model reflects a shift from a local explanation of Flight Crew task activity to a broader process-centric explanation. In so doing, it illuminates the complex role of procedures in commercial operations.
The task model suggests specific requirements for pilot task support tools, procedures design, performance evaluation and crew resource management (CRM) training. Also, this model might be used to assess future operational concepts and associated technology requirements. Lastly, this model provides the basis for the operational validation of both existing and future cockpit technologies.
The aim of this study was to understand how the prolonged use of cockpit automation is affecting pilots’ manual flying skills.
There is an ongoing concern about a potential deterioration of manual flying skills among pilots who assume a supervisory role while cockpit automation systems carry out tasks that were once performed by human pilots.
We asked 16 airline pilots to fly routine and nonroutine flight scenarios in a Boeing 747-400 simulator while we systematically varied the level of automation that they used, graded their performance, and probed them about what they were thinking about as they flew.
We found pilots’ instrument scanning and manual control skills to be mostly intact, even when pilots reported that they were infrequently practiced. However, when pilots were asked to manually perform the cognitive tasks needed for manual flight (e.g., tracking the aircraft’s position without the use of a map display, deciding which navigational steps come next, recognizing instrument system failures), we observed more frequent and significant problems. Furthermore, performance on these cognitive tasks was associated with measures of how often pilots engaged in task-unrelated thought when cockpit automation was used.
We found that while pilots’ instrument scanning and aircraft control skills are reasonably well retained when automation is used, the retention of cognitive skills needed for manual flying may depend on the degree to which pilots remain actively engaged in supervising the automation.
We investigated the effects of active stereoscopic simulation-based training and individual differences in video game experience on multiple indices of combat identification (CID) performance.
Fratricide is a major problem in combat operations involving military vehicles. In this research, we aimed to evaluate the effects of training on CID performance in order to reduce fratricide errors.
Individuals were trained on 12 combat vehicles in a simulation; the vehicles were presented via either a non-stereoscopic or an active stereoscopic display using NVIDIA's GeForce shutter-glass technology. Self-report was used to assess video game experience, yielding four between-subjects groups: high video game experience with stereoscopy, low video game experience with stereoscopy, high video game experience without stereoscopy, and low video game experience without stereoscopy. We then tested participants on their memory of each vehicle's alliance and name across multiple measures, including photographs and videos.
There was a main effect for both video game experience and stereoscopy across many of the dependent measures. Further, we found interactions between video game experience and stereoscopic training, such that those individuals with high video game experience in the non-stereoscopic group had the highest performance outcomes in the sample on multiple dependent measures.
This study suggests that individual differences in video game experience may be predictive of enhanced performance in CID tasks.
Selection based on video game experience in CID tasks may be a useful strategy for future military training. Future research should investigate the generalizability of these effects, such as identification through unmanned vehicle sensors.
The aim of this study was to investigate mismatch between students and classroom furniture dimensions and evaluate the improvement in implementing the European furniture standard.
In Portugal, school furniture does not follow any national ergonomic criteria, so it may not fit students' anthropometric measures.
A total of 893 students from the third cycle (7th through 9th grades) and secondary cycle (10th through 12th grades) participated in the study. Anthropometric measurements of the students were gathered during several physical education classes. The furniture dimensions were measured for two models of tables and seats. Several two-way match-criteria equations based on published studies were applied to the data.
The percentage of students who match the classroom furniture dimensions is low (24% and 44% for the table, and 4% and 9% for the seat, at the 7th and 12th grades, respectively). The table is too high for the third cycle, the seat is too high for both cycles, and the seat depth fits students well. No significant relationship was found between ergonomic mismatch and prevalence of pain.
For each cycle, at least two of the sizes indicated in the European standard should be available to students, considering the large variability in body dimensions within each cycle. The match criteria used classify a large percentage of students without pain as being in a mismatch situation.
Future measures applying to secondary schools should revise the decision of selecting a single size of classroom furniture and improve the implementation of the European standard. New criteria for ergonomic mismatch are needed that more closely model the responses about discomfort/pain.
We propose and test a method to reduce simulator sickness.
Prolonged work in driving simulators often leads to nausea and other symptoms summarized as simulator sickness. Visual/vestibular mismatches are a frequently addressed cause; we investigate another possibility, mismatch between actual distance to a screen and depicted distances in the simulator’s graphics.
Drivers negotiated a figure-8 course in a photorealistic simulator and reported discomfort and vection every 10 min for up to 40 min. A correction group wore optometric test frames with +1.75-diopter lenses and prisms that converged parallel lines of sight on a screen 56 cm from the driver's eyes, preserving the normal accommodative convergence–to–accommodation (AC/A) ratio. A control group wore neutral lenses in the same test frames. In additional experiments, head tilt was used to simulate the vestibular experience of curves.
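The lens power follows from the standard relation between viewing distance and accommodative demand (our arithmetic, not a calculation reported by the authors):

```latex
% Accommodative demand of a screen at 0.56 m (diopters = 1 / distance in meters):
D = \frac{1}{0.56\ \mathrm{m}} \approx 1.79\ \mathrm{D} \approx +1.75\ \mathrm{D}
```

Thus the +1.75-diopter lenses approximately neutralize the accommodation required to focus at 56 cm, consistent with the stated goal of preserving the normal AC/A ratio.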
The optical correction significantly reduced simulator sickness measured on a 10-point discomfort scale, where 1 is no problem and 10 is about to vomit. Vection ratings were similar for correction and control groups. Some drivers failed to complete the course because of high discomfort ratings, crashes, or other causes. Head tilt in the direction opposite each curve while wearing the correction did not affect discomfort, while tilt in the same direction as each curve made simulator sickness worse.
Optical corrections can significantly reduce simulator sickness, though they do not eliminate it. Head tilt while driving is not recommended.
Simple optical corrections in spectacle frames, easily purchased at any optical facility, should be used in screen-based driving simulators. Strength of the correction depends on distance from the driver to the screen.
The aim of this laboratory experiment was to demonstrate how a longitudinal, multilevel approach can be used to examine the dynamic relationship between subjective workload and performance over a given period of activity involving shifts in task demand.
Subjective workload and conditions of the performance environment are often examined via cross-sectional designs without distinguishing within- from between-person effects. Given the dynamic nature of performance phenomena, multilevel designs coupled with manipulations of task demand shifts are needed to better model the dynamic relationships between state and trait components of subjective workload and performance.
With a sample of 75 college students and a computer game representing a complex decision-making environment, increases and decreases in task demand were counterbalanced, and subjective workload and performance were measured concurrently at regular intervals within performance episodes. Data were analyzed using hierarchical linear modeling.
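For readers unfamiliar with this analytic strategy, a minimal sketch is given below; the data frame, column names, and person-mean centering choices are assumptions for illustration, not the authors' actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant x measurement interval (columns are hypothetical).
df = pd.read_csv("episodes.csv")

# Decompose workload into trait (between-person) and state (within-person) parts.
person_mean = df.groupby("participant")["workload"].transform("mean")
df["workload_trait"] = person_mean                     # between-person component
df["workload_state"] = df["workload"] - person_mean    # within-person component

# Two-level model: intervals nested within participants,
# with a random slope for the within-person workload effect.
model = smf.mixedlm("performance ~ workload_state + workload_trait + demand_shift",
                    data=df, groups=df["participant"],
                    re_formula="~workload_state")
print(model.fit().summary())
```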
Both between-person and especially within-person effects were dynamic. Nevertheless, at both levels of analysis, higher subjective workload reflected performance problems, especially further downstream from increases in task demand.
As a function of cognitive-energetic processes, shifts in task demand are associated with changes in how subjective workload is related to performance over a given period of activity. Multilevel, longitudinal approaches are useful for distinguishing and examining the dynamic relationships between state and trait components of subjective workload and performance.
The findings of this research help to improve the understanding of how a sequence of demands can exceed a performer’s capability to respond to further demands.
We investigated whether collision avoidance systems (CASs) should present individual crash alerts in a multiple-conflict scenario or present only one alert in response to the first conflict.
Secondary alerts may startle, confuse, or interfere with drivers’ execution of an emergency maneuver.
Fifty-one participants followed a pickup truck around a test track. Once the participant was visually distracted, a trailing sedan repositioned itself into the participant’s blind spot while a box was dropped from the truck. Participants received a forward collision warning (FCW) alert as the box landed. Twenty-six drivers swerved left in response to the box, encountering a lateral conflict with the adjacent sedan. Half of these 26 drivers received a lane-change merge (LCM) alert.
Drivers who received both the FCW and LCM alerts were significantly faster at steering away from the lateral crash threat than the drivers who received only the FCW alert (1.70 s vs. 2.76 s, respectively). Drivers liked receiving the LCM alert, rated it to be useful, found it easy to understand (despite being presented after the FCW alert), and did not find it to be startling.
Drivers who are familiar with CASs benefit from multiple alerts in a multiple-conflict scenario and feel it is appropriate for the system to generate them.
The results may inform the design of CASs for connected and automated vehicles.
We investigated whether different virtual keyboard key sizes affected typing force exposures, muscle activity, wrist posture, comfort, and typing productivity.
Virtual keyboard use is increasing and the physical exposures associated with virtual keyboard key sizes are not well documented.
Typing forces, forearm/shoulder muscle activity, wrist posture, subjective comfort, and typing productivity were measured from 21 subjects while they typed on four different virtual keyboards with square keys measuring 13, 16, 19, and 22 mm per side, each with 2-mm between-key spacing.
The results showed that virtual keyboard key size had little effect on typing force, forearm muscle activity, and ulnar/radial deviation. However, the virtual keyboard with the 13-mm keys had a 15% slower typing speed (p < .0001), slightly higher static (10th percentile) shoulder muscle activity (2% of maximum voluntary contraction, p = .01), slightly greater wrist extension in both hands (2° to 3°, p < .01), and the lowest subjective comfort and preference ratings (p < .1).
The study findings indicate that virtual keyboards with a key size less than 16 mm may be too small for touch typing given the slower typing speed, higher static shoulder muscle activity, greater wrist extension, and lowest subjective preferences.
These findings can inform the selection of virtual keyboard key sizes with respect to typing force exposures, muscle activity, comfort, and typing productivity.
The aim of this study was to determine if interruptions affect the quality of work.
Interruptions are commonplace at home and in the office. Previous research in this area has traditionally involved time and errors as the primary measures of disruption. Little is known about the effect interruptions have on quality of work.
Fifty-four students outlined and wrote three essays using a within-subjects design. During Condition 1, interruptions occurred while participants were outlining. During Condition 2, interruptions occurred while they were writing. No interruptions occurred in Condition 3.
Quality of work was significantly reduced in both interruption conditions when compared to the non-interruption condition. The number of words produced was significantly reduced when participants were interrupted while writing the essay but not when outlining the essay.
This research represents a crucial first step in understanding the effect interruptions have on quality of work. Our research suggests that interruptions negatively impact quality of work during a complex, creative writing task. Since interruptions are such a prevalent part of daily life, more research needs to be conducted to determine what other tasks are negatively impacted. Moreover, the underlying mechanism(s) causing these decrements needs to be identified. Finally, strategies and systems need to be designed and put in place to help counteract the decline in quality of work caused by interruptions.
We examined preferences for different forms of causal explanations for indeterminate situations.
Klein and Hoffman distinguished several forms of causal explanations for indeterminate, complex situations: single-cause explanations, lists of causes, and explanations that interrelate several causes. What governs our preferences for single-cause (simple) versus multiple-cause (complex) explanations?
In three experiments, we examined the effects of target audience, explanatory context, participant nationality, and explanation type. All participants were college students. Participants were given two scenarios, one regarding the U.S. economic collapse in 2007 to 2008 and the other about the sudden success of the U.S. military in Iraq in 2007. The participants were asked to assess various types of causal explanations for each of the scenarios, with reference to one or more purposes or audiences for the explanations.
Participants preferred simple explanations for presentation to less sophisticated audiences. Malaysian students of Chinese ethnicity preferred complex explanations more than did American students. The form of presentation made a difference: Participants preferred complex to simple explanations when given a chance to compare the two, but the preference for simple explanations increased when there was no chance for comparison, and the difference between Americans and Malaysians disappeared.
Preferences for explanation forms can vary with the context and with the audience, and they depend on the nature of the alternatives that are provided.
Guidance for decision-aiding technology and training systems that provide explanations needs to consider the form and depth of the accounts provided as well as the intended audience.
This article investigates whether different interventions aimed at promoting postural change could increase body movement throughout the shift and reduce musculoskeletal discomfort.
Many researchers have reported high levels of discomfort for workers that have relatively low-level demands but whose jobs are sedentary in nature. To date, few interventions have been found to be effective in reducing worker discomfort.
Thirty-seven call center operators were evaluated in four different workstation conditions: a conventional workstation, a sit-stand workstation, a conventional workstation with reminder software, and a sit-stand workstation with reminder software (a prompt reminding workers to take a break). The primary outcome variables consisted of productivity, measured by custom software; posture changes, measured by continuous video recording; and discomfort, measured by a simple survey. Each condition was evaluated over a 2-week period.
Significant reductions in short-term discomfort were reported in the shoulders, upper back, and lower back when utilizing reminder software, independent of workstation type. Although not significant, many productivity indices were found to increase by about 10%.
Posture-altering workstation interventions, specifically sit-stand tables or reminder software with traditional tables, were effective in introducing posture variability. Further, postural variability appears to be linked to decreased short-term discomfort at the end of the day without a negative impact on productivity.
An intervention that can simply induce the worker to move throughout the day, such as a sit-stand table or simple software reminder about making a large posture change, can be effective in reducing discomfort in the worker, while not adversely impacting productivity.
This article evaluates the effectiveness of two interventions: a self-leveling pallet carousel designed to position the loads vertically and horizontally at origin, and an adjustable cart designed to raise loads vertically at destination to reduce spine loads.
Low back disorders among workers in manual material handling industries are very prevalent and have been linked to manual palletizing operations. Evidence on the effectiveness of ergonomic interventions is limited, with no research investigating interventions with adjustable load location.
Thirteen males experienced in manual material handling participated in simulated order selecting tasks where spine loads were quantified for each intervention condition: carousel to traditional cart, pallet to traditional cart, pallet to adjustable cart, and carousel to adjustable cart.
The results showed that combining both devices resulted in reductions in spine compression (61%), anterior-posterior shear (72%), and lateral shear (63%) compared to traditional palletizing conditions. Individually, the carousel was responsible for the greatest reductions, but the lowest values were typically achieved by combining the adjustable cart and carousel.
The combination of the interventions (self-leveling carousel and adjustable cart) was most effective in reducing the spine loads when compared to the traditional pallet-cart condition. The individual interventions also reduced the loads compared to the traditional condition.
With de-palletizing/palletizing tasks being a major source of low back injuries, the combination of self-leveling carousel and adjustable cart has been found to be effective in reducing the peak spine loading as compared to traditional pallet on floor and nonadjustable flat cart conditions.
The objective was to study the performance of a manual tracking task with system flexibility and time delays in the input channel and to examine the effects of input shaping the human operator’s commands.
It has long been known that low-frequency, lightly damped vibration hinders performance of a manually controlled system. Recently, input shaping has been shown to improve the performance of such systems in a compensatory-display tracking task. It is unknown if similar improvements are seen with pursuit-display tasks, or how the improvement changes when time delays are added to the system.
A total of 18 novice participants performed a pursuit-view tracking experiment with a spring-centered joystick. Controlled elements included an integrator, an integrator with a lightly damped flexible mode, and an input-shaped integrator with a flexible mode. The input to these controlled elements was delayed between 0 and 1 s. Tracking performance was quantified by root mean square tracking error, and subjective difficulty was quantified by ratings on a Cooper–Harper scale.
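Input shaping convolves the operator's command with a short impulse sequence tuned to the flexible mode. The sketch below implements a standard zero-vibration (ZV) shaper as an illustration; the study's actual shaper design and parameters may differ.

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse zero-vibration (ZV) shaper for a mode with natural
    frequency wn (rad/s) and damping ratio zeta, sampled at dt seconds."""
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    wd = wn * np.sqrt(1.0 - zeta**2)       # damped natural frequency
    t2 = np.pi / wd                        # second impulse at half the damped period
    h = np.zeros(int(round(t2 / dt)) + 1)
    h[0] = 1.0 / (1.0 + K)                 # impulse amplitudes sum to 1,
    h[-1] = K / (1.0 + K)                  # so the steady-state gain is unchanged
    return h

# Shaping the operator's joystick command u_raw (sampled at dt):
# u_shaped = np.convolve(u_raw, zv_shaper(wn=2*np.pi*0.5, zeta=0.05, dt=0.01))[:len(u_raw)]
```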
Performance was best with the undelayed integrator. Both time delay and flexibility degraded performance. Input shaping improved control of the flexible element, with a diminishing benefit as the time delay increased. Tracking error and subjective rating were significantly related. Some operators used a pulsive control strategy.
Input shaping can improve the performance of a manually controlled system with flexibility, even when time delays are present.
This study is useful to designers of human-controlled systems, especially those with problematic flexibility and/or time delays.
The aim of the current study was to investigate potential benefits of likelihood alarm systems (LASs) over binary alarm systems (BASs) in a multitask environment.
Several problems are associated with the use of BASs because most of them generate high numbers of false alarms. Operators lose trust in the systems and either ignore alarms or cross-check all of them when other information is available. The first behavior harms safety, whereas the latter reduces productivity. LASs represent an alternative that is supposed to improve operators' attention allocation.
We investigated LASs and BASs in a dual-task paradigm with and without the possibility to cross-check alerts with raw data information. Participants’ trust in the system, their behavior, and their performance in the alert and the concurrent task were assessed.
Reported trust, compliance with alarms, and performance in the alert and the concurrent task were higher for the LAS than for the BAS. The cross-check option led to an increase in alert task performance for both systems and a decrease in concurrent task performance for the BAS, which did not occur in the LAS condition.
LASs improve participants’ attention allocation between two different tasks and therefore lead to an increase in alert task and concurrent task performance. The performance maximum is achieved when LAS is combined with a cross-check option for validating alerts with additional information.
The use of LASs instead of BASs in safety-related multitask environments has the potential to increase safety and productivity likewise.
We introduced a new visually controlled tracking task that can be assessed on a handheld device in shift workers to evaluate time-of-day dependent modulations in visuomotor performance.
Tracking tasks have been used to predict performance fluctuations depending on time of day, mainly under laboratory conditions. One challenge to extended use at the actual work site is the complex, fixed test setup consisting of a test unit, a monitor, and a manipulation object such as a joystick.
Participants followed an unpredictably moving target on the screen of a handheld device with an attachable stylus. A total of 11 shift workers (age range: 20–59, mean: 33.64, standard deviation: 10.56) were tested at 2-hr intervals during the morning, evening, and night shifts with the tracking task and indicated their fatigue levels on visual analogue scales. We evaluated tracking precision by calculating the mean spatial deviation from the target for each session.
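Mean spatial deviation here amounts to the average distance between stylus and target samples over a session; for example, with planar coordinates,

$$
\bar{e} = \frac{1}{N}\sum_{i=1}^{N}\sqrt{\left(x_i - x_i^{\mathrm{target}}\right)^2 + \left(y_i - y_i^{\mathrm{target}}\right)^2}.
$$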
Tracking precision was significantly influenced by the interaction between shift and session, suggesting a clear time-of-day effect of visuomotor performance under real-life conditions. Tracking performance declined during early-morning hours whereas fatigue ratings increased.
These findings suggest that our setup is suitable to detect time-of-day dependent performance changes in visually guided tracking.
Our task could be used to evaluate fluctuations in visuomotor coordination, a skill that is decisive in various production steps at the actual work site, in order to assess productivity.
A study was run to test which of five electroencephalographic (EEG) indices was most diagnostic of loss of vigilance at two levels of workload.
EEG indices of alertness include conventional spectral power measures as well as indices combining measures from multiple frequency bands, such as the Task Load Index (TLI) and the Engagement Index (EI). However, it is unclear which indices are optimal for early detection of loss of vigilance.
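As a rough sketch of how such indices are typically computed from spectral power (band limits, electrode sites, and the exact TLI/EI definitions vary across studies and are assumptions here):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    # Integrate the Welch PSD over the band [lo, hi) Hz.
    f, pxx = welch(x, fs=fs, nperseg=4 * fs)
    sel = (f >= lo) & (f < hi)
    return np.trapz(pxx[sel], f[sel])

def eeg_indices(frontal, parietal, fs=256):
    theta_f  = band_power(frontal,  fs, 4, 8)
    theta_p  = band_power(parietal, fs, 4, 8)
    alpha_lo = band_power(parietal, fs, 8, 11)    # lower-frequency alpha (~8-10.9 Hz)
    alpha_p  = band_power(parietal, fs, 8, 13)
    beta_p   = band_power(parietal, fs, 13, 30)
    tli = theta_f / alpha_p                       # Task Load Index: frontal theta / parietal alpha
    ei  = beta_p / (alpha_p + theta_p)            # Engagement Index: beta / (alpha + theta)
    return alpha_lo, tli, ei
```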
Ninety-two participants were assigned to one of two experimental conditions, cued (lower workload) and uncued (higher workload), and then performed a 40-min visual vigilance task. Performance on this task is believed to be limited by attentional resource availability. EEG was recorded continuously. Performance, subjective state, and workload were also assessed.
The task showed a vigilance decrement in performance; cuing improved performance and reduced subjective workload. Lower-frequency alpha (8 to 10.9 Hz) and TLI were most sensitive to the task parameters. The magnitude of temporal change was larger for lower-frequency alpha. Surprisingly, higher TLI was associated with superior performance. Frontal theta and EI were influenced by task workload only in the final period of work. Correlational data also suggested that the indices are distinct from one another.
Lower-frequency alpha appears to be the optimal index for monitoring vigilance on the task used here, but further work is needed to test how diagnosticity of EEG indices varies with task demands.
Lower-frequency alpha may be used to diagnose loss of operator alertness on tasks requiring vigilance.
We investigated whether intelligent advanced warnings of the end of green traffic signals help drivers negotiate the dilemma zone (DZ) at signalized intersections and sought to identify behavioral mechanisms for any warning-related benefits.
Prior research suggested that warnings of the end of green can increase slowing and stopping frequency in the DZ, but drivers may sometimes respond to warnings by speeding up.
In two simulator studies, we compared six types of roadway or in-vehicle warnings with a no-warning control condition. Using multilevel modeling, we tested mediation models of the behavioral mechanisms underlying the effects of warnings.
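In generic form (the symbols are illustrative, not the authors' notation), such a multilevel mediation model estimates

$$
M_{ij} = a\,W_{ij} + u_{0j} + e_{ij}, \qquad
Y_{ij} = c'\,W_{ij} + b\,M_{ij} + v_{0j} + \varepsilon_{ij},
$$

where, for approach i of driver j, W is the warning condition, M the anticipatory slowing (mediator), and Y the stopping outcome; the mediated (indirect) effect is the product ab and the total effect is c' + ab.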
In both studies, warnings led to more stopping at DZ intersections and milder decelerations when stopping compared with no warning. Drivers’ predominant response to warnings was anticipatory slowing on approaching the intersection, not speeding up. The increased stopping with warning was mediated by increased slowing. In Study 1, anticipatory slowing given warnings generalized to green-light intersections where no warning was given. In Study 2, we found that lane-specific warnings (e.g., LED lights embedded in each lane) sometimes led to fewer unsafe emergency stops than did non-lane-specific roadside warnings.
End-of-green warnings led to safer behavior in the DZ and on the early approach to intersections. The main mechanism for the benefits of warnings was drivers’ increased anticipatory slowing on approaching an intersection. Lane-specific warnings may have some benefits over roadside warnings.
Applications include performance models of how drivers use end-of-green warnings, control algorithms and warning displays for intelligent intersections, and statistical methodology in human factors research.
The aim of this study was to develop a computational account of the spontaneous task ordering that occurs within jobs as work unfolds ("on-the-fly task scheduling").
Air traffic control is an example of work in which operators have to schedule their tasks as a partially predictable work flow emerges. To date, little attention has been paid to such on-the-fly scheduling situations.
We present a series of discrete-event models fit to conflict resolution decision data collected from experienced controllers operating in a high-fidelity simulation.
Our simulations reveal air traffic controllers’ scheduling decisions as examples of the partial-order planning approach of Hayes-Roth and Hayes-Roth. The most successful model uses opportunistic first-come-first-served scheduling to select tasks from a queue. Tasks with short deadlines are executed immediately. Tasks with long deadlines are evaluated to assess whether they need to be executed immediately or deferred.
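A minimal sketch of this kind of opportunistic first-come-first-served policy is shown below; the threshold values and the evaluation function are assumptions for illustration, not the fitted model's parameters.

```python
from collections import deque

def needs_immediate_action(task, now, horizon=300.0):
    # Placeholder evaluation step (assumption): act now if the deadline falls
    # within a look-ahead horizon; otherwise the task can wait.
    return task["deadline"] - now <= horizon

def schedule_step(queue, now, short_deadline=60.0):
    """One scheduling decision over a first-come-first-served (FCFS) queue of
    tasks (dicts with 'name' and 'deadline', in seconds)."""
    if not queue:
        return None
    task = queue.popleft()                        # opportunistic FCFS selection
    if task["deadline"] - now <= short_deadline:
        return task                               # short deadline: execute immediately
    if needs_immediate_action(task, now):
        return task                               # long deadline, but evaluation says act now
    queue.append(task)                            # otherwise defer
    return None

# Example: queue = deque([{"name": "resolve conflict A", "deadline": 45.0}])
```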
On-the-fly task scheduling is computationally tractable despite its surface complexity and understandable as an example of both the partial-order planning strategy and the dynamic-value approach to prioritization.
In the present study, we tested to what extent highly automated convoy driving involving small spacing ("platooning") may affect time headway (THW) and standard deviation of lateral position (SDLP) during subsequent manual driving.
Although many previous studies have reported beneficial effects of automated driving, some research has also highlighted potential drawbacks, such as increased speed and reduced THW during the activation of semiautomated driving systems. Here, we focused instead on the question of whether switching from automated to manual driving may produce unwanted carryover effects on safety-relevant driving performance.
We utilized a pre–post simulator design to measure THW and SDLP after highly automated driving and compared the data with those for a control group (manual driving throughout).
Our data revealed that THW was reduced and SDLP increased after leaving the automation mode. A closer inspection of the data suggested that specifically the effect on THW is likely due to sensory and/or cognitive adaptation processes.
Behavioral adaptation effects need to be taken into account in future implementations of automated convoy systems.
Potential application areas of this research comprise automated freight traffic (truck convoys) and the design of driver assistance systems in general. Potential countermeasures against short-distance following as a behavioral adaptation should be considered.
This study examined the impact of stage of automation on the performance and perceived workload during simulated robotic arm control tasks in routine and off-nominal scenarios.
Automation varies with respect to the stage of information processing it supports and its assigned level of automation. Making appropriate choices in terms of stages and levels of automation is critical to ensure robust joint system performance. To date, this issue has been empirically studied in domains such as aviation and medicine but not extensively in the context of space operations.
A total of 36 participants played the role of a payload specialist and controlled a simulated robotic arm. Participants performed fly-to tasks with two types of automation (camera recommendation and trajectory control automation) of varying stage. Tasks were performed during routine scenarios and in scenarios in which either the trajectory control automation or a hazard avoidance automation failed.
Increasing the stage of automation progressively improved performance and lowered workload when the automation was reliable, but incurred severe performance costs when the system failed.
The results from this study support concerns about automation-induced complacency and automation bias when later stages of automation are introduced. The benefits of such automation are offset by the risk of catastrophic outcomes when system failures go unnoticed or become difficult to recover from.
A medium stage of automation seems preferable as it provides sufficient support during routine operations and helps avoid potentially catastrophic outcomes in circumstances when the automation fails.
The objective was to determine whether the scanpaths of air traffic controllers (ATCs) could be used to improve the performance of novices in a conflict detection task.
Studies in other domains show that novice performance can be improved by exposure to experts’ scanpaths. Whether this effect can be found for an aircraft conflict detection task is unknown.
Scanpaths of 25 professional ATCs ("experts") were recorded using a medium-fidelity air traffic control simulation with realistic scripted traffic that included aircraft pairs that would lose separation. A total of 20 novices were exposed to experts’ scanpaths ("treatment"), and their performance (for both loss of separation detection rates and false alarm rates) was compared to that of 20 novices given no treatment or instructions ("control") and 20 novices who were verbally instructed to attend to altitude ("instruction-only"). Interviews were held about the helpfulness of the exposure. The scanpaths were analyzed to find pattern differences among the three groups.
Chi-square tests showed significant differences for false alarm rates across the three groups (p = .001). Pairwise Mann–Whitney tests showed that the number of false alarms for the treatment group was significantly lower than that for the control group (p = .005), and trended lower than the instruction-only group (p = .08). Treatment group participants responded that experts’ scanpaths helped. Analysis of scanpaths showed an increased tendency of the scanpath treatment group to follow the experts’ scanpath.
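For readers who want to reproduce this style of analysis, the tests map directly onto standard library calls; the counts below are placeholders, not the study's data.

```python
from scipy.stats import chi2_contingency, mannwhitneyu

# Hypothetical 3 x 2 table: [false alarms, correct rejections] per group
table = [[12, 48],    # treatment (scanpath exposure)
         [30, 30],    # control
         [22, 38]]    # instruction-only
chi2, p, dof, expected = chi2_contingency(table)

# Pairwise comparison of per-participant false-alarm counts (hypothetical data)
treatment = [1, 0, 2, 1, 0]
control   = [3, 2, 4, 2, 3]
u, p_pair = mannwhitneyu(treatment, control, alternative="two-sided")
```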
The scanpath training intervention improved novice performance by reducing false alarms.
Implementing experts’ scanpaths into novices’ active learning process shows promise in enhancing training effectiveness and reducing training time.
In this simulator-based study, we aimed to quantify performance differences between joystick steering systems using first-order and second-order control, which are used in underground coal mining shuttle cars. In addition, we conducted an exploratory analysis of how users of the more difficult, second-order system changed their behavior over time.
Evidence from the visuomotor control literature suggests that higher-order control devices are not intuitive, which could pose a significant risk to underground mine personnel, equipment, and infrastructure.
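The distinction can be summarized by the order of integration between joystick deflection u(t) and the controlled output y(t) (a generic formulation, not the shuttle cars' exact dynamics):

$$
\text{first order: } \dot{y}(t) = k_1\,u(t), \qquad
\text{second order: } \ddot{y}(t) = k_2\,u(t).
$$

Under second-order control the operator sets motion in train and must later issue an opposing command to arrest it, which is part of what makes the mapping less intuitive.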
Thirty-six naive participants were randomly assigned to first- and second-order conditions and completed three experimental trials comprising sequences of 90° turns in a virtual underground mine environment, with velocity held constant at 9 km/h. Performance measures were lateral deviation, steering angle variability, high-frequency steering content, joystick activity, and cumulative time in collision with the virtual mine wall.
The second-order control group exhibited significantly poorer performance for all outcome measures. In addition, a series of correlation analyses revealed that changes in strategy were evident in the second-order group but not the first-order group.
Results were consistent with previous literature indicating poorer performance with higher-order control devices and caution against the adoption of the second-order joystick system for underground shuttle cars.
Low-cost, portable simulation platforms may provide an effective basis for operator training and recruitment.
In this study, we investigated the effects of mild motion sickness and sopite syndrome on multitasking cognitive performance.
Despite knowledge on general motion sickness, little is known about the effect of motion sickness and sopite syndrome on multitasking cognitive performance. Specifically, there is a gap in existing knowledge in the gray area of mild motion sickness.
Fifty-one healthy individuals performed a multitasking battery. Three independent groups of participants were exposed to two experimental sessions. Two groups received motion only in the first or the second session, whereas the control group did not receive motion. Measurements of motion sickness, sopite syndrome, alertness, and performance were collected during the experiment.
Only during the second session did motion sickness and sopite syndrome have a significant negative association with cognitive performance. Significant performance differences between symptomatic and asymptomatic participants in the second session were identified in composite (9.43%), memory (31.7%), and arithmetic (14.7%) task scores. The results suggest that performance retention between sessions was not affected by mild motion sickness.
Multitasking cognitive performance declined even when motion sickness and soporific symptoms were mild. The results also show an order effect. We postulate that the differential effect of session on the association between symptomatology and multitasking performance may be related to the attentional resources allocated to performing the multiple tasks. Results suggest an inverse relationship between motion sickness effects on performance and the cognitive effort focused on performing a task.
Even mild motion sickness has potential implications for multitasking operational performance.
In this study, we compared how users locate, by hand, physical objects and equivalent three-dimensional images of virtual objects in a cave automatic virtual environment (CAVE) to examine how human performance (accuracy, time, and approach) is affected by object size, location, and distance.
Virtual reality (VR) offers the promise to flexibly simulate arbitrary environments for studying human performance. Previously, VR researchers primarily considered differences between virtual and physical distance estimation rather than reaching for close-up objects.
Fourteen participants completed manual targeting tasks that involved reaching for corners on equivalent physical and virtual boxes of three different sizes. Predicted errors were calculated from a geometric model based on user interpupillary distance, eye location, distance from the eyes to the projector screen, and object.
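Although the study's full geometric model is not reproduced here, the basic relation such a model builds on is the similar-triangles expression for on-screen parallax:

$$
p = b\,\frac{D - d}{D},
$$

where b is the interpupillary distance, d the distance from the eyes to the projection screen, and D the intended distance of the virtual point; rendering with an assumed b or eye position that differs from the viewer's actual values shifts the perceived location and produces a predictable targeting error.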
Users were 1.64 times less accurate (p < .001) and spent 1.49 times more time (p = .01) targeting virtual versus physical box corners using the hands. Predicted virtual targeting errors were on average 1.53 times (p < .05) greater than the observed errors for farther virtual targets but not significantly different for close-up virtual targets.
Target size, location, and distance, in addition to binocular disparity, affected virtual object targeting inaccuracy. Observed virtual box inaccuracy was less than predicted for farther locations, suggesting possible influence of cues other than binocular vision.
Human physical interaction with virtual objects in a CAVE, for simulation, training, and prototyping involving reaching and manual handling, is more accurate than predicted when locating farther objects.
This study applies text mining to extract clusters of vehicle problems and associated trends from free-response data in the National Highway Traffic Safety Administration’s vehicle owner’s complaint database.
As the automotive industry adopts new technologies, it is important to systematically assess the effect of these changes on traffic safety. Driving simulators, naturalistic driving data, and crash databases all contribute to a better understanding of how drivers respond to changing vehicle technology, but other approaches, such as automated analysis of incident reports, are needed.
Free-response data from incidents representing two severity levels (fatal incidents and incidents involving injury) were analyzed using a text mining approach: latent semantic analysis (LSA). LSA and hierarchical clustering identified clusters of complaints for each severity level, which were compared and analyzed across time.
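A minimal sketch of this pipeline using common open-source tooling is shown below; the preprocessing choices, component count, and cluster count are illustrative assumptions, not the study's settings.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_complaints(texts, n_components=100, n_clusters=8):
    # LSA: TF-IDF term-document matrix reduced by truncated SVD
    X = TfidfVectorizer(stop_words="english", min_df=5).fit_transform(texts)
    Z = TruncatedSVD(n_components=n_components, random_state=0).fit_transform(X)
    # Hierarchical (Ward) clustering in the reduced LSA space
    tree = linkage(Z, method="ward")
    return fcluster(tree, t=n_clusters, criterion="maxclust")
```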
Cluster analysis identified eight clusters of fatal incidents and six clusters of incidents involving injury. Comparisons showed that although the airbag clusters across the two severity levels have the same most frequent terms, the circumstances around the incidents differ. The time trends show clear increases in complaints surrounding the Ford/Firestone tire recall and the Toyota unintended acceleration recall. Increases in complaints may be partially driven by these recall announcements and the associated media attention.
Text mining can reveal useful information from free-response databases that would otherwise be prohibitively time-consuming and difficult to summarize manually.
Text mining can extend human analysis capabilities for large free-response databases to support earlier detection of problems and more timely safety interventions.
The goal of this research was to assess the usability of a voting system designed for smartphones.
Smartphones offer remote participation in elections through the use of pervasive technology. Voting on these devices could, among other benefits, increase voter participation while allowing voters to use familiar technology. However, the usability of these systems has not been assessed.
A mobile voting system optimized for use on a smartphone was designed and tested against traditional voting platforms for usability.
There were no reliable differences between the smartphone-based system and other voting methods in efficiency and perceived usability. More important, though, smartphone owners committed fewer errors on the mobile voting system than on the traditional voting systems.
Even with the known limitations of small mobile platforms in both displays and controls, a carefully designed system can provide a usable voting method. Much of the concern about mobile voting is in the area of security; therefore, although these results are promising, security concerns and usability issues arising from mitigating them must be strongly considered.
The results of this experiment may help to inform current and future election and public policy officials about the benefits of allowing voters to vote with familiar hardware.
The objective was to investigate the interaction between the mode of performance outcome feedback and task difficulty on timing decisions (i.e., when to act).
Feedback is widely acknowledged to affect task performance. However, the extent to which feedback display mode and its impact on timing decisions is moderated by task difficulty remains largely unknown.
Participants repeatedly engaged in a zero-sum game involving silent duels with a computerized opponent and were given visual performance feedback after each engagement. They were sequentially tested on three different levels of task difficulty (low, intermediate, and high) in counterbalanced order. Half received relatively simple "inside view" binary outcome feedback, and the other half received complex "outside view" hit rate probability feedback. The key dependent variables were response time (i.e., time taken to make a decision) and survival outcome.
When task difficulty was low to moderate, participants were more likely to learn and perform better from hit rate probability feedback than binary outcome feedback. However, better performance with hit rate feedback exacted a higher cognitive cost manifested by higher decision response time.
The beneficial effect of hit rate probability feedback on timing decisions is partially moderated by task difficulty.
Performance feedback mode should be judiciously chosen in relation to task difficulty for optimal performance in tasks involving timing decisions.
The impact of a decision support tool designed to embed contextual mission factors was investigated. Contextual information may enable operators to infer the appropriateness of data underlying the automation’s algorithm.
Research has shown the costs of imperfect automation are more detrimental than perfectly reliable automation when operators are provided with decision support tools. Operators may trust and rely on the automation more appropriately if they understand the automation’s algorithm. The need to develop decision support tools that are understandable to the operator provides the rationale for the current experiment.
A total of 17 participants performed a simulated rapid retasking of intelligence, surveillance, and reconnaissance (ISR) assets task under manual control, decision automation, or contextual decision automation, crossed with two levels of task demand: low or high. Automation reliability was set at 80%, resulting in participants experiencing a mixture of reliable trials and automation failure trials. Dependent variables included ISR coverage and response time for replanning routes.
Reliable automation significantly improved ISR coverage when compared with manual performance. Although performance suffered under imperfect automation, contextual decision automation helped to reduce some of the decrements in performance.
Contextual information helps overcome the costs of imperfect decision automation.
Designers may mitigate some of the performance decrements experienced with imperfect automation by providing operators with interfaces that display contextual information, that is, the state of factors that affect the reliability of the automation’s recommendation.
This article reports new anthropometric information of U.S. firefighters for fire apparatus design applications (Study 1) and presents a data method to assist in firefighter anthropometric data usage for research-to-practice propositions (Study 2).
Up-to-date anthropometric information of the U.S. firefighter population is needed for updating ergonomic and safety specifications for fire apparatus.
A stratified sampling plan of three-age by three-race/ethnicity combinations was used to collect anthropometric data of 863 male and 88 female firefighters across the U.S. regions; 71 anthropometric dimensions were measured (Study 1). Differences among original, weighted, and normality transformed data from Study 1 were compared to allowable observer errors (Study 2).
On average, male firefighters were 9.8 kg heavier and female firefighters were 29 mm taller than their counterparts in the general U.S. population. They also had larger upper-body builds than the general U.S. population. The data in weighted, unweighted, and normality-transformed modes were compatible with one another, with a few exceptions.
The data obtained in this study provide the first available U.S. national firefighter anthropometric information for fire apparatus designs. The data represent the demographic characteristics of the current firefighter population and, except for a few dimensions, can be directly employed into fire apparatus design applications without major weighting or nonnormality concerns.
The up-to-date firefighter anthropometric data and data method will benefit the design of future fire apparatus and protective equipment, such as seats, body restraints, cabs, gloves, and bunker gear.
In the present research, we investigated the hypothesis that working memory mediates conversation-induced impairment of situation awareness (SA) while driving.
Although there is empirical evidence that conversation impairs driving performance, the cognitive mechanisms that mediate this relationship remain underspecified. Researchers have reported that a phonological working memory task decreased drivers’ SA for vehicles located behind them whereas a visuospatial working memory task impaired SA for vehicles ahead. Conversation, therefore, might impair SA for vehicles behind the driver by preferentially taxing the phonological loop.
A 20-questions task was used as a proxy for natural conversation. In Experiment 1, driving performance was measured across three within-subjects conversation conditions (i.e., no conversation, driver asks questions, driver answers questions) with the use of a driving simulator. In Experiment 2, participants drove in the same simulator while either conversing (20-questions task) or not. Participants estimated the positions of other vehicles after the screens were blanked at the end of each trial.
Speed monitoring and responses to visual probes were impaired by the 20-questions conversation task (Experiment 1). As predicted, conversation impaired SA for the location of other vehicles more for vehicles located behind the driver than for those in front (Experiment 2).
Conversation impairs drivers’ SA of vehicles behind them by taxing working memory’s phonological loop and impairs SA generally by taxing working memory’s central executive.
This work provides a theoretical framework that links driver SA to working memory and a mechanism for understanding why conversation impairs driving performance.
This study investigated whether a stressful military training program, the 9- to 10-week U.S. Army basic combat training (BCT) course, alters the cognitive performance and mood of healthy young adult females.
Structured training programs including adolescent boot camps, sports training camps, learning enrichment programs, and military basic training are accepted methods for improving academic and social functioning. However, limited research is available on the behavioral effects of structured training programs in regard to cognitive performance and mood.
Two separate, within-subject studies were conducted with different BCT classes; in total 212 female volunteers were assessed before and after BCT. In Study 1, Four-Choice Reaction Time, Match-to-Sample, and Grammatical Reasoning tests were administered. The Psychomotor Vigilance Test (PVT) was administered in Study 2. The Profile of Mood States (POMS) was administered in both studies.
In Study 1, reaction time to correct responses on all three of the performance tests improved from pre- to post-BCT. In Study 2, PVT reaction time significantly improved. All POMS subscales improved over time in the second study, whereas POMS subscales in the first study failed to meet criteria for statistically significant differences over time.
Cognition and mood substantially improved over military basic training. These changes may be a result of structured physical and mental training experienced during basic training or other factors not as yet identified.
Properly structured training may have extensive, beneficial effects on cognitive performance and mood; however, additional research is needed to determine what factors are responsible for such changes.
Individual meta-analyses were conducted for six training methods as part of a U.S. Army basic research project. The objective was to identify evidence-based guidelines for the effectiveness of each training method, under different moderating conditions, for cognitive skill transfer in adult learning. Results and implications for two of these training methods, learner control (LC) and exploratory learning (EL), are discussed. LC provides learners with active control over training variables. EL requires learners to discover relationships and interactions between variables.
There is mixed evidence on the effectiveness of both LC and EL learning methods on transfer relative to more guided training methods. Cognitive load theory (CLT) provides a basis for predicting that training strategies that manage intrinsic load of a task during training and minimize extraneous load will avail more resources that can be devoted to learning.
Meta-analyses were conducted using a Hedges’s g analysis of effect sizes. Control conditions with little to no learner freedom were contrasted with treatment conditions manipulating more learner freedom.
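For reference, Hedges's g is the pooled-SD standardized mean difference with a small-sample bias correction:

$$
s_p = \sqrt{\frac{(n_1-1)s_1^2 + (n_2-1)s_2^2}{n_1+n_2-2}}, \qquad
g = \frac{\bar{x}_1 - \bar{x}_2}{s_p}\left(1 - \frac{3}{4(n_1+n_2)-9}\right).
$$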
Overall, more LC was no different from training with limited or no learner control, and more EL was less effective than limited or no exploration; however, each can be effective under certain conditions. Both strategies have been more effective for cognitive skill learning than for knowledge recall tasks. LC exhibited more benefit for very near transfer, whereas EL's benefit was for far transfer.
Task type, transfer test, and transfer distance moderate the overall transfer cost of more learner freedom.
The findings are applicable to the development of instructional design guidelines for the use of LC and EL in adult skill training.
The aim of this study was to develop a predictive discomfort model in single-axis, 3-D, and 6-D combined-axis whole-body vibrations of seated occupants considering different postures.
Non-neutral postures in seated whole-body vibration play a significant role in the resulting level of perceived discomfort and potential long-term injury. The current international standards address contact points but not postures.
The proposed model computes discomfort on the basis of the static deviation of human joints from their neutral positions and how fast humans rotate their joints under vibration. Four seated postures were investigated. For practical application, the coefficients of the predictive discomfort model were mapped onto the Borg scale using psychophysical data from 12 volunteers in different vibration conditions (single-axis random fore-aft, lateral, and vertical, and two magnitudes of 3-D). The model was tested under two magnitudes of 6-D vibration.
Significant correlations (R² = .93) were found between the predictive discomfort model and the reported discomfort across different postures and vibrations. The ISO 2631-1 measure also correlated well with discomfort (R² = .89) but was not able to predict the effect of posture.
Human discomfort in seated whole-body vibration with different non-neutral postures can be closely predicted by a combination of static posture and the angular velocities of the joint.
The predictive discomfort model can assist ergonomists and human factors researchers in designing safer environments for seated operators under vibration. The model can be integrated with advanced computer biomechanical models to investigate the complex interaction between posture and vibration.
To test the display luminance hypothesis of the positive polarity advantage and gain insights for display design, the joint effects of display polarity and character size were assessed with a proofreading task.
Studies have shown that dark characters on light background (positive polarity) lead to better legibility than do light characters on dark background (negative polarity), presumably due to the typically higher display luminance of positive polarity presentations.
Participants performed a proofreading task with black text on white background or white text on black background. Texts were presented in four character sizes (8, 10, 12, and 14 pt; corresponding to 0.22°, 0.25°, 0.31°, and 0.34° of vertical visual angle).
A positive polarity advantage was observed in proofreading performance. Importantly, the positive polarity advantage linearly increased with decreasing character size.
The findings are in line with the assumption that the typically higher luminance of positive polarity displays leads to an improved perception of detail.
The implications seem important for the design of text on such displays as those of computers, automotive control and entertainment systems, and smartphones that are increasingly used for the consumption of text-based media and communication. The sizes of these displays are limited, and it is tempting to use small font sizes to convey as much information as possible. Especially with small font sizes, negative polarity displays should be avoided.
The purpose of this article is twofold: to provide a critical cross-domain evaluation of team cognition measurement options and to provide novice researchers with practical guidance when selecting a measurement method.
A vast selection of measurement approaches exist for measuring team cognition constructs including team mental models, transactive memory systems, team situation awareness, strategic consensus, and cognitive processes.
Empirical studies and theoretical articles were reviewed to identify all of the existing approaches for measuring team cognition. These approaches were evaluated based on theoretical perspective assumed, constructs studied, resources required, level of obtrusiveness, internal consistency reliability, and predictive validity.
The evaluations suggest that all existing methods are viable options from the point of view of reliability and validity, and that there are potential opportunities for cross-domain use. For example, methods traditionally used only to measure mental models may be useful for examining transactive memory and situation awareness. The selection of team cognition measures requires researchers to answer several key questions regarding the theoretical nature of team cognition and the practical feasibility of each method.
We provide novice researchers with guidance regarding how to begin the search for a team cognition measure and suggest several new ideas regarding future measurement research.
We provide (1) a broad overview and evaluation of existing team cognition measurement methods, (2) suggestions for new uses of those methods across research domains, and (3) critical guidance for novice researchers looking to measure team cognition.
The aim of this study was to design and evaluate an algorithm for detecting drowsiness-related lane departures by applying a random forest classifier to steering wheel angle data.
Although algorithms exist to detect and mitigate driver drowsiness, the high rate of false alarms and missed detection of drowsiness represent persistent challenges. Current algorithms use a variety of data sources, definitions of drowsiness, and machine learning approaches to detect drowsiness.
We develop a new approach for detecting drowsiness-related lane departures using steering wheel angle data that employ an ensemble definition of drowsiness and a random forest algorithm. Data collected from 72 participants driving the National Advanced Driving Simulator are used to train and evaluate the model. The model’s performance was assessed relative to a commonly used algorithm, percentage eye closure (PERCLOS).
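A minimal sketch of this modeling step is given below; the feature set, fold structure, and hyperparameters are assumptions for illustration, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

def evaluate(X, y, groups):
    """X: windowed steering-wheel-angle features (e.g., variance, reversal rate);
    y: drowsiness-related lane-departure labels; groups: driver IDs."""
    aucs = []
    for train, test in GroupKFold(n_splits=5).split(X, y, groups):
        clf = RandomForestClassifier(n_estimators=500, class_weight="balanced",
                                     random_state=0)
        clf.fit(X[train], y[train])
        aucs.append(roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
    return float(np.mean(aucs))   # cross-validated area under the ROC curve
```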
The random forest steering algorithm had a higher classification accuracy and area under the receiver operating characteristic curve than PERCLOS and had comparable positive predictive value. The algorithm succeeds at identifying the two key scenarios associated with the drowsiness detection task: instances when drivers depart their lane because they fail to modulate their steering behavior according to the demands of the simulated road, and instances when drivers correctly modulate their steering behavior according to the demands of the road.
The random forest steering algorithm is a promising approach to detect driver drowsiness. The algorithm’s ties to consequences of drowsy driving suggest that it can be easily paired with mitigation systems.
We tested the effectiveness of an illustrated divider ("the divider") for bedside emergency equipment drawers in an intensive care unit (ICU). In Study 1, we assessed whether the divider increases completeness and standardizes the locations of emergency equipment within the drawer. In Study 2, we investigated whether the divider decreases nurses’ restocking and retrieval times and decreases their workload.
Easy access to fully stocked emergency equipment is important during emergencies. However, inefficient equipment storage and cognitively demanding work settings might mean that drawers are incompletely stocked and access to items is slow.
A pre-post-post study investigated drawer completeness and item locations before and after the introduction of the divider to 30 ICU drawers. A subsequent experiment measured item restocking time, item retrieval time, and subjective workload for nurses.
At 2 weeks and 10 weeks after the divider was introduced, the completeness of the drawer increased significantly compared with before the divider was introduced. The divider decreased the variability of the locations of the 17 items in the drawer to 16% of its original value. Study 2 showed that restocking times but not retrieval times were significantly faster with the divider present. For both tasks, nurses rated their workload lower with the divider.
The divider improved the standardization and completeness of emergency equipment. In addition, restocking times and workload were decreased with the divider.
Redesigning storage for certain equipment using human factors design principles can help to speed and standardize restocking and ease access to equipment.
This paper proposes a new methodology that focuses on the effects of cold and harsh environments on the reliability of human performance.
As maritime operations move into Arctic and Antarctic environments, decision makers must be able to recognize how cold weather affects human performance and subsequently adjust management and operational tools and strategies.
In the present work, a revised version of the Human Error Assessment and Reduction Technique (HEART) methodology has been developed to assess the effects of cold on the likelihood of human error in offshore oil and gas facilities. This methodology has been applied to post-maintenance tasks of offshore oil and gas facility pumps to investigate how management, operational, and equipment issues must be considered in risk analysis and prediction of human error in cold environments.
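In the standard HEART formulation, which the revised method extends to cold environments, the human error probability is the nominal value for the generic task type scaled by each applicable error-producing condition (EPC) and its assessed proportion of affect (APOA):

$$
\mathrm{HEP} = \mathrm{HEP}_{\mathrm{nominal}} \times \prod_{i}\bigl[(\mathrm{EPC}_i - 1)\,\mathrm{APOA}_i + 1\bigr].
$$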
This paper provides a proof of concept indicating that the risk associated with operations in cold environments is greater than the risk associated with the same operations performed in temperate climates, and it develops guidelines for how this risk can be assessed. The results illustrate that in post-maintenance procedures of a pump, the risk value related to the effect of cold and harsh environments on operator cognitive performance is twice as high as the risk value when the same procedures are performed in normal conditions.
The present work demonstrates significant differences between human error probabilities (HEPs) and associated risks in normal conditions as opposed to cold and harsh environments. This study also highlights that the cognitive performance of the human operator is the most important factor affected by the cold and harsh conditions.
The methodology developed in this paper can be used for reevaluating the HEPs for particular scenarios that occur in harsh environments since these HEPs may not be comparable to similar scenarios in normal conditions.
We study the dependence or independence of reliance and compliance as two responses to alarms to understand the mechanisms behind these responses.
Alarms, alerts, and other binary cues affect user behavior in complex ways. The suggestion has been made that there are two different responses to alerts—compliance (the tendency to perform an action cued by the alert) and reliance (the tendency to refrain from actions as long as no alert is issued). The study tests the degree to which these two responses are indeed independent.
An experiment tested the effects of the positive and negative predictive values of the alerts (PPV and NPV) on measures of compliance and reliance based on cutoff settings, response times, and subjective confidence.
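For clarity, PPV and NPV are the conditional probabilities of a true event given an alert and of no event given no alert:

$$
\mathrm{PPV} = P(\text{event}\mid\text{alert}) = \frac{TP}{TP+FP}, \qquad
\mathrm{NPV} = P(\text{no event}\mid\text{no alert}) = \frac{TN}{TN+FN}.
$$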
For cutoff settings and response times, compliance was unaffected by the irrelevant NPV, whereas reliance depended on the irrelevant PPV. For subjective estimates, there were no significant effects of the irrelevant variables.
Results suggest that compliance is relatively stable and unaffected by irrelevant information (the NPV), whereas reliance is also affected by the PPV. The results support the notion that reliance and compliance are separate, but related, forms of trust.
False alarm rates, which affect PPV, determine both the response to alerts (compliance) and the tendency to limit precautions when no alert is issued (reliance).
Our objective was to explore the value of considering the number of tasks that use a piece of information when calculating the relevance that information has to an operator.
Whereas frequency and criticality of information are often identified as information attributes, the number of tasks that use the information is rarely considered.
We calculated the relevance of pieces of information in air traffic control using criticality and frequency, and compared it to a formula that also considered the number of tasks.
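The exact formulas are not reproduced here; as a purely hypothetical illustration, the comparison contrasts weightings of the form

$$
R_{\mathrm{baseline}}(i) = w_c\,C_i + w_f\,F_i
\quad\text{versus}\quad
R_{\mathrm{extended}}(i) = w_c\,C_i + w_f\,F_i + w_t\,T_i,
$$

where C_i is the criticality of information item i, F_i its frequency of use, T_i the number of tasks that use it, and the w terms are weights.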
Including the number of tasks resulted in an information ranking that better accounted for aircraft-relevant information and better supported the information needs of air traffic controllers, as determined by controller judgments.
The attribute of number of tasks is valuable in calculating the relevance of information.
Interface designers should consider the number of tasks that use a particular piece of information when determining the placement of information within a display.
Increasingly, people work in socially networked environments. With growing adoption of enterprise social network technologies, supporting effective social community is becoming an important factor in organizational success.
Relatively few human factors methods have been applied to social connection in communities. Although team methods provide a contribution, they do not suit design for communities. Wenger’s community of practice concept, combined with cognitive work analysis, provided one way of designing for community.
We used a cognitive work analysis approach modified with principles for supporting communities of practice to generate a new website design. Over several months, the community using the site was studied to examine their degree of social connectedness and communication levels.
Social network analysis and communications analysis, conducted at three different intervals, showed increases in connections between people and between people and organizations, as well as increased communication following the launch of the new design.
In this work, we suggest that human factors approaches can be effective in social environments when they are applied with social community principles in mind.
This work has implications for the development of new human factors methods as well as the design of interfaces for sociotechnical systems that have community building requirements.
The aim of this study was to test whether inattentional deafness to critical alarms would be observed in a simulated cockpit.
The inability of pilots to detect unexpected changes in their auditory environment (e.g., alarms) is a major safety problem in aeronautics. In aviation, the lack of response to alarms is usually not attributed to attentional limitations, but rather to pilots choosing to ignore such warnings due to decision biases, hearing issues, or conscious risk taking.
Twenty-eight general aviation pilots performed two landings in a flight simulator. In one scenario an auditory alert was triggered alone, whereas in the other the auditory alert occurred while the pilots dealt with a critical windshear.
In the windshear scenario, 11 pilots (39.3%) did not report or react appropriately to the alarm whereas all the pilots perceived the auditory warning in the no-windshear scenario. Also, of those pilots who were first exposed to the no-windshear scenario and detected the alarm, only three suffered from inattentional deafness in the subsequent windshear scenario.
These findings establish inattentional deafness as a cognitive phenomenon that is critical for air safety. Pre-exposure to a critical event triggering an auditory alarm can enhance alarm detection when a similar event is encountered subsequently.
Case-based learning is a solution to mitigate auditory alarm misperception.
The current study tested whether undersea divers are able to accurately judge their level of memory impairment from inert gas narcosis.
Inert gas narcosis causes a number of cognitive impairments, including a decrement in memory ability. Undersea divers may be unable to accurately judge their level of impairment, affecting safety and work performance.
In two underwater field experiments, performance decrements on tests of memory at 33 to 42 m were compared with self-ratings of impairment and resolution. The effect of depth (shallow [1-11 m] vs. deep [33-42 m]) was measured on free-recall (Experiment 1; n = 41) and cued-recall (Experiment 2; n = 39) performance, a visual-analogue self-assessment rating of narcotic impairment, and the accuracy of judgements-of-learning (JOLs).
Both free- and cued-recall were significantly reduced in deep, compared to shallow, conditions. This decrement was accompanied by an increase in self-assessed impairment. In contrast, resolution (based on JOLs) remained unaffected by depth. The dissociation of memory accuracy and resolution, coupled with a shift in a self-assessment of impairment, indicated that divers were able to accurately judge their decrease in memory performance at depth.
These findings suggest that impaired self-assessment and resolution may not actually be symptoms of narcosis in the depth range of 33 to 42 m underwater and that the divers in this study were better equipped to manage narcosis than prior literature suggested. The results are discussed in relation to implications for diver safety and work performance.
Shooter accuracy and stability were monitored while firing two bullpup and two conventional configuration rifles of the same caliber in order to determine if one style of weapon results in superior performance.
Considerable debate exists among police and military professionals regarding the differences between conventional configuration weapons, where the magazine and action are located ahead of the trigger, and bullpup configuration, where they are located behind the trigger (closer to the user). To date, no published research has attempted to evaluate this question from a physical ergonomics standpoint, and the knowledge that one style might improve stability or result in superior performance is of interest to countless military, law enforcement, and industry experts.
A live-fire evaluation of both weapon styles was performed using a total of 48 participants. Shooting accuracy and fluctuations in biomechanical stability (center of pressure) were monitored while subjects used the weapons to perform standard drills.
The bullpup weapon designs were found to provide a significant advantage in accuracy and shooter stability, while subjects showed considerable preference toward the conventional weapons.
Although many mechanical and maintenance issues must be considered before committing to a bullpup or conventional weapon system, it is clear in terms of basic human stability that the bullpup is the more advantageous configuration.
Results can be used by competitive shooting, military, law enforcement, and industry experts when outfitting personnel with a weapon system that leads to superior performance.
We examined whether a gene known to influence dopamine availability in the prefrontal cortex is associated with individual differences in learning a supervisory control task.
Methods are needed for selection and training of human operators who can effectively supervise multiple unmanned vehicles (UVs). Compared to the valine (Val) allele, the methionine (Met) allele of the COMT gene has been linked to superior executive function, but it is not known whether it is associated with training-related effects in multi-UV supervisory control performance.
Ninety-nine healthy adults were genotyped for the COMT Val158Met single nucleotide polymorphism (rs4680) and divided into Met/Met, Val/Met, and Val/Val groups. Participants supervised six UVs in an air defense mission requiring them to attack incoming enemy aircraft and protect a no-fly zone from intruders in conditions of low and high task load (numbers of enemy aircraft). Training effects were examined across four blocks of trials in each task load condition.
Compared to the Val/Met and Val/Val groups, Met/Met individuals exhibited a greater increase in enemy targets destroyed and greater reduction in enemy red zone incursions across training blocks.
Individuals with the COMT Met/Met genotype can acquire skill in executive function tasks, such as multi-UV supervisory control, to a higher level and/or faster than other genotype groups.
Potential applications of this research include the development of individualized training methods for operators of multi-UV systems and selecting personnel for complex supervisory control tasks.
This study investigated two cusp catastrophe models for cognitive workload and fatigue for a vigilance dual task, the role of emotional intelligence and frustration in the performance dynamics, and the dynamics for individuals and teams of two participants.
The effects of workload, fatigue, practice, and time on a specific task can be separated with the two models and an appropriate experimental design. Group dynamics add further complications to the understanding of workload and fatigue effects for teams.
In this experiment, 141 undergraduates responded to target stimuli that appeared on a simulated security camera display at three rates of speed while completing a jigsaw puzzle. Participants worked alone or in pairs and completed additional measurements prior to or after the main tasks.
The workload cusp verified the expected effects of speed and frustration on change in performance. The fatigue cusp showed that positive and negative changes in performance were greater if more work on the secondary task was completed and that participants who started with the fast vigilance condition demonstrated less fatigue.
The results supported the efficacy of the cusp models and suggested, furthermore, that training modules that varied speed of presentation could buffer the effects of fatigue.
The cusp models can be used to analyze virtually any cognitively demanding task set. The particular results generalize to vigilance tasks, although a wider range of conditions within vigilance tasks needs to be investigated further.
The aim of this study was to obtain a comprehensive analysis of the physical workload of clinical staff in long-term care facilities, before and after a safe resident handling program (SRHP).
Ergonomic exposures of health care workers include manual handling of patients and many non-neutral postures. A comprehensive assessment requires the integration of loads from these varied exposures into a single metric.
The Postures, Activities, Tools, and Handling observational protocol, customized for health care, was used for direct observations of ergonomic exposures in clinical jobs at 12 nursing homes before the SRHP and 3, 12, 24, and 36 months afterward. Average compressive forces on the spine were estimated for observed combinations of body postures and manual handling and then weighted by frequencies of observed time for the combination. These values were summed to obtain a biomechanical index for nursing assistants and nurses across observation periods.
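The index construction described above can be sketched as a time-weighted sum of estimated spinal compressive forces; the combination labels and values below are hypothetical.

```python
# Sketch of the physical workload index (PWI) logic: estimated spinal compression for each
# observed posture-handling combination, weighted by the fraction of observed time in that
# combination, then summed. All numbers are hypothetical.

observations = [
    # (combination, estimated compression in newtons, fraction of observed time)
    ("upright, no handling", 700.0, 0.55),
    ("flexed trunk, no handling", 1800.0, 0.25),
    ("flexed trunk, resident transfer", 3400.0, 0.20),
]

pwi = sum(force * time_fraction for _, force, time_fraction in observations)
print(f"PWI = {pwi:.0f} N (time-weighted spinal compression)")
```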
The physical workload index (PWI) was much higher for nursing assistants than for nurses and decreased more after 3 years (–24% versus –2.5%). Specifically during resident handling, the PWI for nursing assistants decreased by 41% of baseline value.
Spinal loading was higher for nursing assistants than for nurses in long-term care centers. Both job groups experienced reductions in physical loading from the SRHP, especially the nursing assistants and especially during resident handling.
The PWI facilitates a comprehensive investigation of physical loading from both manual handling and non-neutral postures. It can be used in any work setting to identify high-risk tasks and determine whether reductions in one exposure are offset by increases in another.
The present study used a neuroergonomic approach to examine the interaction of mental and physical fatigue by assessing prefrontal cortex activation during submaximal fatiguing handgrip exercises.
Mental fatigue is known to influence muscle function and motor performance, but its contribution to the development of voluntary physical fatigue is not well understood.
A total of 12 participants performed separate physical (control) and physical and mental fatigue (concurrent) conditions at 30% of their maximal handgrip strength until exhaustion. Functional near infrared spectroscopy was employed to measure prefrontal cortex activation, whereas electromyography and joint steadiness were used simultaneously to quantify muscular effort.
Compared to the control condition, blood oxygenation in the bilateral prefrontal cortex was significantly lower during submaximal fatiguing contractions associated with mental fatigue at exhaustion, despite comparable muscular responses.
The findings suggest that interference in the prefrontal cortex may influence motor output during tasks that require both physical and cognitive processing.
A neuroergonomic approach involving simultaneous monitoring of brain and body functions can provide critical information on fatigue development that may be overlooked during traditional fatigue assessments.
We aimed to determine if serum biochemical and MRI biomarkers differed between high volume (≥ 230 texts sent/day; n = 5) and low volume (≤ 25 texts sent/day; n = 5) texters. A secondary aim was to ascertain what correlations between the biochemical and imaging biomarkers could tell us about the pathophysiology of early onset tendinopathies.
Text messaging has become widespread, particularly among college-aged young adults. There is concern that high rates of texting may result in musculoskeletal disorders, including tendinopathies. Pathophysiology of tendinopathies is largely unknown.
Ten females with a mean age of 20 were recruited. We examined serum for 20 biomarkers of inflammation, tissue degeneration, and repair. We used conventional MRI and MRI mean intratendinous signal intensity (MISI) to assess thumb tendons. Correlations between MISI and serum biomarkers were also examined.
Three high volume texters had MRI tendinopathy findings as did one low volume texter. Increased serum TNF-R1 was found in high volume texters compared to low volume texters, as were nonsignificant increases in MISI in two thumb tendons. Serum TNF-R1 and TNF-α correlated with MISI in these tendons, as did IL1-R1.
These results suggest that early onset tendinopathy with concurrent inflammation may be occurring in prolific texters. Further studies with larger sample sizes are needed for confirmation.
High volume texting may be a risk factor for thumb tendinopathy in later years. Multidisciplinary research using biochemical and imaging biomarkers may be used to gain insight into pathophysiological processes in musculoskeletal disorders.
A laboratory study investigated how power hand tool and task-related factors affect threaded fastener torque accuracy and the associated handle reaction force.
We previously developed a biodynamic model to predict handle reaction forces. We hypothesized that torque accuracy was related to the same factors that affect operator capacity to react against impulsive tool forces, as predicted by the model.
The independent variables included tool (pistol grip on a vertical surface, right angle on a horizontal surface), fastener torque rate (hard, soft), horizontal distance (30 cm and 60 cm), and vertical distance (80 cm, 110 cm, and 140 cm). Ten participants (five male and five female) fastened 12 similar bolts for each experimental condition.
Average torque error (audited – target torque) was affected by fastener torque rate and operator position. Torque error decreased 33% for soft torque rates, whereas handle forces increased greatly (170%). Torque error also decreased by 7% to 14% for the far horizontal distance when vertical distance was at the middle or high level, whereas handle force decreased only slightly (3% to 5%).
The evidence suggests that although both tool and task factors affect fastening accuracy, they influence handle reaction forces differently. We conclude that these differences arise because each factor affects different parameters governing the dynamics of threaded fastener tool operation. Fastener torque rate affects the tool dynamics, whereas posture affects the spring-mass-damping biodynamic properties of the human operator.
The prediction of handle reaction force using an operator biodynamic model may be useful for codifying complex and unobvious relationships between tool and task factors for minimizing torque error while controlling handle force.
We describe a novel concept, situation awareness recovery (SAR), and we identify perceptual and cognitive processes that characterize SAR.
Situation awareness (SA) is typically described in terms of perceiving relevant elements of the environment, comprehending how those elements are integrated into a meaningful whole, and projecting that meaning into the future. Yet SA fluctuates during the time course of a task, making it important to understand the process by which SA is recovered after it is degraded.
We investigated SAR using different types of interruptions to degrade SA. In Experiment 1, participants watched short videos of an operator performing a supervisory control task, and then the participants were either interrupted or not interrupted, after which SA was assessed using a questionnaire. In Experiment 2, participants performed a supervisory control task in which they guided vehicles to their respective targets and either experienced an interruption, during which they performed a visual search task in a different panel, or were not interrupted.
The SAR processes we identified included shorter fixation durations, increased number of objects scanned, longer resumption lags, and a greater likelihood of refixating on objects that were previously looked at.
We interpret these findings in terms of the memory-for-goals model, which suggests that SAR consists of increased scanning in order to compensate for decay, and previously viewed cues act as associative primes that reactivate memory traces of goals and plans.
The objective was to test the accuracy of using remote methods (tele-ergonomics) to identify potential mismatches between workers and their computer workstations.
Remote access to ergonomic assessments and interventions using two-way interactive communications, tele-ergonomics, increases the ability to deliver computer ergonomic services. However, this mode of delivery must first be tested for accuracy.
In this single group study, the computer workstations of 30 participants who reported mild to moderate discomfort were remotely assessed using photographs taken by a research assistant and the self-report Computer Workstation Checklist (CWC) completed by the study participant. Mismatches identified remotely by an ergonomics expert were compared to results obtained from an onsite computer workstation visit completed by the same expert.
We accurately identified 92% of mismatches. The method was more sensitive (0.97) than specific (0.88), indicating that experts using the remote method were likely to overidentify mismatches.
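For clarity, the sketch below shows how sensitivity and specificity follow from counts of agreement between the remote and onsite assessments; the counts are hypothetical and merely chosen to reproduce 0.97 and 0.88.

```python
# Sensitivity and specificity from hypothetical remote-versus-onsite agreement counts.

def sensitivity_specificity(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # onsite mismatches that the remote method also flagged
    specificity = tn / (tn + fp)  # onsite non-mismatches that the remote method also cleared
    return sensitivity, specificity

print(sensitivity_specificity(tp=97, fn=3, tn=88, fp=12))  # -> (0.97, 0.88)
```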
These results suggest that an expert using the self-reported CWC combined with workstation photographs can accurately identify mismatches between workers and their computer workstations.
Remote assessment is a promising method to improve access to computer workstation ergonomic assessments.
The objective was to establish the nature of choice in cognitive multitasking.
Laboratory studies of multitasking suggest people are rational in their switch choices regarding multitasking, whereas observational studies suggest they are not. Threaded cognition theory predicts that switching is opportunistic and depends on availability of cognitive resources.
A total of 21 participants answered e-mails by looking up information (similar to customer service employees) while being interrupted by chat messages. They were free to choose when to switch to the chat message. We analyzed the switching behavior and the time they needed to complete the primary mail task.
When participants are faced with a delay in the e-mail task, they switch more often to the chat task at high-workload points. Choosing to switch to the secondary task instead of waiting makes them slower. It also makes them forget the information in the e-mail task half of the time, which slows them down even more.
When many cognitive resources are available, the probability of switching from one task to another is high. This does not necessarily lead to optimal switching behavior.
Potential applications of this research include minimizing delays in task design and preventing or discouraging switching at high-workload moments.
The aim of the present study is to investigate Chinese handwriting on mobile touch devices, considering the effects of three characteristics of the human finger (type, length, and width) and three characteristics of Chinese characters (direction of the first stroke, number of strokes, and structure).
Due to the popularity of touch devices in recent years, finger input for Chinese characters has attracted more attention from both industry and academia. However, previous studies have not systematically considered the effects of human finger and Chinese character characteristics on Chinese handwriting performance.
An experiment was conducted to examine the effects of the human finger and Chinese characters on Chinese handwriting performance (i.e., input time, accuracy, number of protruding strokes, mental workload, satisfaction, and physical fatigue).
The results indicated that all six factors had significant effects on Chinese handwriting performance, especially on input time, accuracy, and number of protruding strokes.
Finger type, finger length, finger width, direction of the first stroke, number of strokes, and character structure significantly influence Chinese handwriting performance. These factors should be given greater consideration in future research and in the practical design of Chinese handwriting systems.
We report two psychoacoustical experiments that assessed the relationship between auditory azimuthal localization performance in water and duration of prior exposure to the milieu.
The adaptability of spatial hearing abilities has been demonstrated in air for both active and passive exposures to altered localization cues. Adaptability occurred faster and was more complete for elevation perception than for azimuth perception. In water, spatial hearing is believed to rely solely on smaller-than-normal cues to azimuth: interaural time differences. This should produce a medial bias in localization judgments toward the center of the horizontal plane, unless the listeners have adapted to the environment.
Azimuthal localization performance was measured in seawater for eight azimuthal directions of a low-frequency (<500 Hz) auditory target. Seventeen participants performed a forced-choice task in Experiment 1. Twenty-eight other participants performed a pointing task in Experiment 2.
In both experiments we observed poor front/back discrimination but accurate left/right discrimination, regardless of prior exposure. A medial bias was found in azimuth perception, whose size decreased as the exposure duration of the participant increased.
These results resemble earlier findings showing that passive exposure to altered azimuth cues elicits adaptation of internal audio-spatial maps, that is, the behavioral plasticity of spatial hearing abilities.
Studies of the adaptability of the auditory system to altered spatial information may yield practical implications for scuba divers, hearing-impaired listeners with reduced sensitivity to spatial cues, and various normal-hearing users of virtual auditory displays.
We report on four experiments that investigated the critical tracking task’s (CTT) potential as a tool to measure distraction.
Assessment of the potential of new in-vehicle information systems to be distracting has become an important issue. An easy-to-use method, which might be a candidate to assess this distraction, is the CTT. The CTT requires an operator to stabilize a bar, which is displayed on a computer screen, such that it does not depart from a predefined target position. As the CTT reflects various basic aspects of the operational level of the driving task, we used it as a simple surrogate for driving to assess the CTT’s capabilities.
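A minimal simulation, assuming the unstable first-order element commonly used in critical tracking tasks, may help convey why the CTT demands continuous attention: the bar drifts away from the target unless it is corrected, and the greater the instability, the less spare capacity remains. The proportional controller and parameter values below are stand-ins, not the apparatus used in these experiments.

```python
# Toy simulation of an unstable first-order tracking element: the displayed bar's error
# grows on its own and must be counteracted by operator input. A proportional controller
# stands in for the human operator here.

def simulate_ctt(instability=2.0, gain=3.0, dt=0.02, steps=500):
    x = 0.01  # small initial offset of the bar from the target position
    for _ in range(steps):
        u = -gain * x                     # corrective input proportional to the error
        x += dt * (instability * x + u)   # unstable plant: error grows unless corrected
        if abs(x) > 1.0:                  # bar leaves the display: control is lost
            return False
    return True

print(simulate_ctt(instability=2.0))  # stabilized
print(simulate_ctt(instability=5.0))  # this gain cannot keep up: control lost
```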
We employed secondary tasks of varying demand, both artificial tasks and tasks representative of in-vehicle secondary tasks performed while driving, and asked participants to perform them in parallel with the CTT. CTT performance, secondary task performance, and subjective ratings of load were recorded and analyzed.
Overall, the CTT was able to differentiate between different levels of demand elicited by the secondary tasks. The results obtained corresponded with our a priori assumptions about the respective secondary tasks’ potential to distract.
It appears that the CTT can be used to assess in-vehicle information systems with regard to their potential to distract drivers. Additional experiments are necessary to further clarify the relationship between driving and CTT performance.
The CTT can provide a cost-effective solution as part of a battery of tests for early testing of new in-vehicle devices.
The objective was to evaluate the effects of vertical key spacing on a conventional computer keyboard on typing speed, percentage error, usability, forearm muscle activity, and wrist posture for both females with small fingers and males with large fingers.
Part 1 evaluated primarily horizontal key spacing and found that for male typists with large fingers, productivity and usability were similar for spacings of 17, 18, and 19 mm but were reduced for spacings of 16 mm. Few other key spacing studies are available, and the international standards that specify the spacing between keys on a keyboard have been mainly guided by design convention.
Experienced female typists (n = 26) with small fingers (middle finger length ≤ 7.71 cm or finger breadth of ≤ 1.93 cm) and male typists (n = 26) with large fingers (middle finger length ≥ 8.37 cm or finger breadth of ≥ 2.24 cm) typed on five keyboards that differed primarily in vertical key spacing (17 x 18, 17 x 17, 17 x 16, 17 x 15.5, and 18 x 16 mm) while typing speed, error, fatigue, preference, forearm muscle activity, and wrist posture were recorded.
Productivity and usability ratings were significantly worse for the keyboard with 15.5 mm vertical spacing compared to the other keyboards for both groups. There were few significant differences on usability ratings between the other keyboards. Reducing vertical key spacing, from 18 to 17 to 16 mm, had no significant effect on productivity or usability.
The findings support the design of keyboards with vertical key spacings of 16, 17, or 18 mm.
These findings may influence keyboard design and standards.
A high-fidelity street crossing simulator was used to test the hypothesis that experienced action video game players are less vulnerable than nongamers to dual task costs in complex tasks.
Previous research has shown that action video game players outperform nonplayers on many single task measures of perception and attention. It is unclear, however, whether action video game players outperform nonplayers in complex, divided attention tasks.
Experienced action video game players and nongamers completed a street crossing task in a high-fidelity simulator. Participants walked on a manual treadmill to cross the street. During some crossings, a cognitively demanding working memory task was added.
Dividing attention resulted in more collisions and increased decision making time. Of importance, these dual task costs were equivalent for the action video game players and the nongamers.
These results suggest that action video game players are equally susceptible to the costs of dividing attention in a complex task.
Perceptual and attentional benefits associated with action video game experience may not translate to performance benefits in complex, real-world tasks.
The objective was to examine the relationship between cockpit automation use and task-related and task-unrelated thought among airline pilots.
Studies find that cockpit automation can sometimes relieve pilots of tedious control tasks and afford them more time to think ahead. Paradoxically, automation has also been shown to lead to lesser awareness. These results prompt the question of what pilots think about while using automation.
A total of 18 airline pilots flew a Boeing 747-400 simulator while we recorded which of two levels of automation they used. As they worked, pilots were verbally probed about what they were thinking. Pilots were asked to categorize their thoughts as pertaining to (a) a specific task at hand, (b) higher-level flight-related thoughts (e.g., planning ahead), or (c) thoughts unrelated to the flight. Pilots’ performance was also measured.
Pilots reported a smaller percentage of task-at-hand thoughts (27% vs. 50%) and a greater percentage of higher-level flight-related thoughts (56% vs. 29%) when using the higher level of automation. However, when all was going according to plan, using either level of automation, pilots also reported a higher percentage of task-unrelated thoughts (21%) than they did when in the midst of an unsuccessful performance (7%). Task-unrelated thoughts peaked at 25% when pilots were not interacting with the automation.
Although cockpit automation may provide pilots with more time to think, it may encourage pilots to reinvest only some of this mental free time in thinking flight-related thoughts.
This research informs the design of human–automation systems that more meaningfully engage the human operator.
We investigated how automation-induced human performance consequences depended on the degree of automation (DOA).
Function allocation between human and automation can be represented in terms of the stages and levels taxonomy proposed by Parasuraman, Sheridan, and Wickens. Higher DOAs are achieved both by later stages and higher levels within stages.
A meta-analysis based on data of 18 experiments examines the mediating effects of DOA on routine system performance, performance when the automation fails, workload, and situation awareness (SA). The effects of DOA on these measures are summarized by level of statistical significance.
We found (a) a clear automation benefit for routine system performance with increasing DOA, (b) a similar but weaker pattern for workload when automation functioned properly, and (c) a negative impact of higher DOA on failure system performance and SA. Most interesting was the finding that negative consequences of automation seem to be most likely when DOA moved across a critical boundary, which was identified between automation supporting information analysis and automation supporting action selection.
Results support the proposed cost–benefit trade-off with regard to DOA. It seems that routine performance and workload on one hand, and the potential loss of SA and manual skills on the other hand, directly trade off and that appropriate function allocation can serve only one of the two aspects.
Findings contribute to the body of research on adequate function allocation by providing an overall picture through quantitatively combining data from a variety of studies across varying domains.
The present experiment evaluated whether training involving throwing transferred to metric distance estimation (i.e., describing in feet and inches the distance between oneself and targets).
In prior work, we found that metric estimation training negatively transferred to throwing. We explained our results in terms of cognitive intrusion. The present study tested that possibility by swapping our training and transfer tasks.
During pretesting, participants verbally estimated the metric distances between themselves and targets, or they threw a beanbag to targets. During training, participants donned goggles that distorted their vision. While wearing the goggles, they threw a beanbag to targets. Half received feedback. During posttesting, participants removed the distorting goggles and completed the same task that they performed during pretesting.
The results indicated that the distorting goggles degraded throwing at the beginning of training, visual feedback improved throwing during training, the effects of training with feedback persisted into the throwing posttest, and the effects of training with feedback did not transfer to the verbal metric estimation posttest.
Training involving throwing was effective, but did not transfer to verbal metric distance estimation. This supports our argument that the negative transfer observed in our previous study stemmed from cognitive intrusion.
The present experiment suggests that the creation of distance estimation training should begin with a careful analysis of the transfer task, and that distance estimation training programs should explicitly teach trainees that their training will not generalize to all distance estimation tasks.
The main purpose of this study was to investigate the effects and interactions of line length, line number, and line spacing on Chinese screen-based proofreading performance and amount of scrolling.
Proofreading is an important process, and much of it is now done on screen. The Chinese language is increasingly important, but very little work has been done on the factors that affect proofreading performance for Chinese passages.
Three display factors related to screen size, namely line length, line number, and line spacing, were selected to be investigated in an experiment to determine their effects on proofreading performance and amount of scrolling. Correlations between proofreading performance in time and accuracy and scrolling amount were also analyzed.
The results showed that line number and line spacing had significant main and interaction effects on both proofreading time and detection rate. Line length and line number influenced scrolling amount significantly, but there was no interaction effect for scrolling. Scrolling amount was negatively correlated with proofreading time and typo detection rate such that more scrolling movement was associated with faster proofreading, but lower detection rate. There was a trade-off between time and accuracy.
For balancing time and detection rate and improving performance for on-screen Chinese proofreading, the display setting of medium line length (36 characters per line) with four lines and 1.5 line spacing should be used.
The findings provide information and recommendations for display factors and the screen design that should prove useful for improving proofreading time and accuracy.
The objective was to evaluate a configural vital signs (CVS) display designed to support rapid detection and identification of physiological deterioration by graphically presenting patient vital signs data.
Current display technology in the intensive care unit (ICU) is not optimized for fast recognition and identification of physiological changes in patients. To support nurses more effectively, graphical or configural vital signs displays need to be developed and evaluated.
A CVS display was developed based on findings from studies of the cognitive work of ICU nurses during patient monitoring. A total of 42 ICU nurses interpreted data presented either in a traditional, numerical format (n = 21) or on the CVS display (n = 21). Response time and accuracy in clinical data interpretation (i.e., identification of patient status) were assessed across four scenarios.
Data interpretation speed and accuracy improved significantly in the CVS display condition; for example, in one scenario nurses required only half of the time for data interpretation and showed up to 1.9 times higher accuracy in identifying the patient state compared to the numerical display condition.
Providing patient information in a configural display with readily visible trends and data variability can improve the speed and accuracy of data interpretation by ICU nurses.
Although many studies, including this one, support the use of configural displays, the vast majority of ICU monitoring displays still present clinical data in numerical format. The introduction of configural displays in clinical monitoring has potential to improve patient safety.
Two studies were conducted to develop an understanding of factors that drive user expectations when navigating between discrete elements on a display via a limited degree-of-freedom cursor control device.
For the Orion Crew Exploration Vehicle spacecraft, a free-floating cursor with a graphical user interface (GUI) would require an unachievable level of accuracy due to expected acceleration and vibration conditions during dynamic phases of flight. Therefore, the Orion program proposed using a "caged" cursor to "jump" from one controllable element (node) on the GUI to another. However, nodes are not likely to be arranged on a rectilinear grid, and so movements between nodes are not obvious.
Proximity between nodes, direction of nodes relative to each other, and context features may all contribute to user cursor movement expectations. In an initial study, we examined user expectations based on the nodes themselves. In a second study, we examined the effect of context features on user expectations.
The studies established that perceptual grouping effects influence expectations to varying degrees. Based on these results, a simple rule set was developed to support users in building a straightforward mental model that closely matches their natural expectations for cursor movement.
The results will help designers of display formats take advantage of the natural context-driven cursor movement expectations of users to reduce navigation errors, increase usability, and decrease access time.
The rule set and guidelines tie theory to practice and can be applied in environments where vibration or acceleration is significant, including spacecraft, aircraft, and automobiles.
The objective of this study was to investigate if a verbal task can improve alertness and if performance changes are associated with changes in alertness as measured by EEG.
Previous research has shown that a secondary task can improve performance on a short, monotonous drive. The current work extends this by examining longer, fatiguing drives. The study also uses EEG to confirm that improved driving performance is concurrent with improved driver alertness.
A 90-min, monotonous simulator drive was used to place drivers in a fatigued state. Four secondary tasks were used: no verbal task, continuous verbal task, late verbal task, and a passive radio task.
When engaged in a secondary verbal task at the end of the drive, drivers showed improved lane-keeping performance and had improvements in neurophysiological measures of alertness.
A strategically timed concurrent task can improve performance even for fatiguing drives.
Secondary-task countermeasures may prove useful for enhancing driving performance across a range of driving conditions.
This work investigated the impact of uncertainty representation on performance in a complex authentic visualization task, submarine localization.
Because passive sonar does not provide unique course, speed, and range information on a contact, the submarine operates under significant uncertainty. There are many algorithms designed to address this problem, but all are subject to uncertainty. The extent of this solution uncertainty can be expressed in several ways, including a table of locations (course, speed, range) or a graphical area of uncertainty.
To test the hypothesis that a representation of uncertainty that more closely matches the experts’ preferred representation of the problem would better support performance, even for the nonexpert, performance data were collected using displays that were stripped of either the spatial or the tabular representation.
Performance was more accurate when uncertainty was displayed spatially. This effect was only significant for the nonexperts for whom the spatial displays supported almost expert-like performance. This effect appears to be due to reduced mental effort.
These results suggest that when the representation of uncertainty for this spatial task better matches the expert’s preferred representation of the problem, even a nonexpert can show expert-like performance.
These results could apply to any domain where performance requires working with highly uncertain information.
The objective was to review and integrate available research about the construct of state-level suspicion as it appears in social science literatures and apply the resulting findings to information technology (IT) contexts.
Although the human factors literature is replete with articles about trust (and distrust) in automation, there is little on the related, but distinct, construct of "suspicion" (in either automated or IT contexts). The construct of suspicion—its precise definition, theoretical correlates, and role in such applications—deserves further study.
Literatures that consider suspicion are reviewed and integrated. Literatures include communication, psychology, human factors, management, marketing, information technology, and brain/neurology. We first develop a generic model of state-level suspicion. Research propositions are then derived within IT contexts.
Fundamental components of suspicion include (a) uncertainty, (b) increased cognitive processing (e.g., generation of alternative explanations for perceived discrepancies), and (c) perceptions of (mal)intent. State suspicion is defined as the simultaneous occurrence of these three components. Our analysis also suggests that trust inhibits suspicion, whereas distrust can be a catalyst of state-level suspicion. Based on a three-stage model of state-level suspicion, associated research propositions and questions are developed. These propositions and questions are intended to help guide future work on the measurement of suspicion (self-report and neurological), as well as the role of the construct of suspicion in models of decision making and detection of deception.
The study of suspicion, including its correlates, antecedents, and consequences, is important. We hope that the social sciences will benefit from our integrated definition and model of state suspicion. The research propositions regarding suspicion in IT contexts should motivate substantial research in human factors and related fields.
The objective was to assess team performance within a networked supervisory control setting while manipulating automated decision aids and monitoring team communication and working memory ability.
Networked systems such as multi–unmanned air vehicle (UAV) supervision have complex properties that make prediction of human-system performance difficult. Automated decision aids can provide valuable information to operators, individual abilities can limit or facilitate team performance, and team communication patterns can alter how effectively individuals work together. We hypothesized that reliable automation, higher working memory capacity, and increased communication rates of task-relevant information would offset performance decrements attributed to high task load.
Two-person teams performed a simulated air defense task with two levels of task load and three levels of automated aid reliability. Teams communicated and received decision aid messages via chat window text messages.
Task Load x Automation effects were significant across all performance measures. Reliable automation limited the decline in team performance with increasing task load. Average team spatial working memory was a stronger predictor than other measures of team working memory. Frequency of team rapport and enemy location communications positively related to team performance, and word count was negatively related to team performance.
Reliable decision aiding mitigated team performance decline during increased task load during multi-UAV supervisory control. Team spatial working memory, communication of spatial information, and team rapport predicted team success.
An automated decision aid can improve team performance under high task load. Assessment of spatial working memory and the communication of task-relevant information can help in operator and team selection in supervisory control systems.
We studied associations between job-title-based measures of force and repetition and incident carpal tunnel syndrome (CTS).
Job exposure matrices (JEMs) are not commonly used in studies of work-related upper-extremity disorders.
We enrolled newly hired workers in a prospective cohort study. We assigned a Standard Occupational Classification (SOC) code to each job held and extracted physical work exposure variables from the Occupational Information Network (O*NET). CTS case definition required both characteristic symptoms and abnormal median nerve conduction.
Of 1,107 workers, 751 (67.8%) completed follow-up evaluations. A total of 31 respondents (4.4%) developed CTS during an average of 3.3 years of follow-up. Repetitive motion, static strength, and dynamic strength from the most recent job held were all significant predictors of CTS when included individually as physical exposures in models adjusting for age, gender, and BMI. Similar results were found using time-weighted exposure across all jobs held during the study. Repetitive motion, static strength, and dynamic strength were correlated, precluding meaningful analysis of their independent effects.
This study found strong relationships between workplace physical exposures assessed via a JEM and CTS, after adjusting for age, gender, and BMI. Though job-title-based exposures are likely to result in significant exposure misclassification, they can be useful for large population studies where more precise exposure data are not available.
JEMs can be used as a measure of workplace physical exposures for some studies of musculoskeletal disorders.
The effects of box shape—specifically width and height—on the perception of heaviness were evaluated during individual and team lifting.
Large objects are perceived to be as much as 50% lighter than smaller objects with the same mass. This size-weight illusion presents an obvious risk when lifting large and heavy boxes. Recent research has shown that shape influences this illusion. Specifically, increases in length and width do not produce identical decreases in perceived heaviness. However, this effect has been documented only in individual lifting, mostly with small objects.
Individuals and teams lifted large boxes and reported their perceptions of heaviness. The mass, height, and width of the boxes were varied independently to determine their unique effects on perceived heaviness.
For both types of lift, increasing width produced a greater mean illusory drop (expressed as a percentage decrease with 95% confidence intervals) in perceived heaviness (24 ± 7% during individual lifting and 41 ± 8% during team lifting) than increasing height (15 ± 7% during individual lifting and 18 ± 8% during team lifting).
Size and shape are important factors in perceiving the heaviness of boxes during both individual and team lifting.
To avoid misperceiving weight and risking injury, lifters should be careful when approaching larger (especially wider) boxes.
This study tested the effects of two fundamental forms of distraction, visual-manual and cognitive-audio, comparing their impact under both operational and tactical driving. Strategic control remains for future study.
Driving is a complex control task involving operational, tactical, and strategic control. Although operational control, such as lead-car following, has been studied, the influence of in-vehicle distractions on higher levels of control, including tactical and strategic, remains unclear.
Two secondary tasks were designed to independently represent visual-manual and cognitive-audio distractions, based on multiple resource theory. Drivers performed operational vehicle control maneuvers (lead-car following) or tactical control maneuvers (passing) along with the distraction tasks in a driving simulator. Response measures included driving performance and visual behavior.
Results revealed drivers’ ability to accommodate either visual or cognitive distractions in following tasks but not in passing. The simultaneous distraction condition led to the greatest decrement in performance.
Findings support the need to assess the impacts of in-vehicle distraction on different levels of driving control. Future study should investigate driver distraction under strategic control.
The principal objective of the present work was to examine the effects of mind state (mind-wandering vs. on-task) on driving performance in a high-fidelity driving simulator.
Mind-wandering is thought to interfere with goal-directed thought. It is likely, then, that when driving, mind-wandering might lead to impairments in critical aspects of driving performance. In two experiments, we assess the extent to which mind-wandering interferes with responsiveness to sudden events, mean velocity, and headway distance.
Using a car-following procedure in a high-fidelity driving simulator, participants were probed at random times to indicate whether they were on-task at that moment or mind-wandering. The dependent measures were analyzed based on the participant’s response to the probe.
Compared to when they were on-task, participants who were mind-wandering showed longer response times to sudden events, drove at a higher velocity, and maintained a shorter headway distance.
Collectively, these findings indicate that mind-wandering affects a broad range of driving responses and may therefore lead to higher crash risk.
The results suggest that situations that are likely associated with mind-wandering (e.g., route familiarity) can impair driving performance.
We propose a network perspective of team knowledge that offers both conceptual and methodological advantages, expanding explanatory value through representation and measurement of component structure and content.
Team knowledge has typically been conceptualized and measured with relatively simple aggregates, without fully accounting for differing knowledge configurations among team members. Teams with similar aggregate values of team knowledge may have very different team dynamics depending on how knowledge isolates, cliques, and densities are distributed across the team; which members are the most knowledgeable; who shares knowledge with whom; and how knowledge clusters are distributed.
We illustrate our proposed network approach through a sample of 57 teams, including how to compute, analyze, and visually represent team knowledge.
Team knowledge network structures (isolation, centrality) are associated with outcomes of, respectively, task coordination, strategy coordination, and the proportion of team knowledge cliques, all after controlling for shared team knowledge.
Network analysis helps to represent, measure, and understand the relationship of team knowledge to outcomes of interest to team researchers, members, and managers. Our approach complements existing team knowledge measures.
Researchers and managers can apply network concepts and measures to help understand where team knowledge is held within a team and how this relational structure may influence team coordination, cohesion, and performance.
The objective is to demonstrate how the Human View architecture can be used to define and evaluate the human interoperability capabilities of a net-centric system. Human interoperability strives to understand the types of system relationships that affect collaboration across networked environments.
The Human View was developed as an additional system architectural viewpoint to focus on the human component of a system by capturing data on human roles, tasks, constraints, interactions, and metrics. This framework can be used to collect and organize social system parameters to facilitate the way that humans interact across organizational boundaries.
By mapping the Human View elements to organizational relationships defined in the domain of network theory, a network model of the Human View can be developed. This representation can then be aligned with a Layers of Interoperability model for collaborative systems. The model extends traditional technical interoperability to include organizational aspects important for human interoperability. The resulting composite model can be used to evaluate the human interoperability capability of network-enabled systems.
An interagency response to a crisis situation is an example where increased levels of human interoperability can affect the effectiveness of the organizational interactions. The existing Human View products representing the interagency capabilities were evaluated using the network model to demonstrate how the social system variables can be identified and evaluated to improve the system design.
By understanding and incorporating human interoperability requirements, the resulting system design can more effectively support collaborative tasks across technological environments to facilitate timely responses to events.
The aim of this study was to develop a model capable of predicting variability in the mental workload experienced by frontline operators under routine and nonroutine conditions.
Excess workload is a risk that needs to be managed in safety-critical industries. Predictive models are needed to manage this risk effectively yet are difficult to develop. Much of the difficulty stems from the fact that workload prediction is a multilevel problem.
A multilevel workload model was developed in Study 1 with data collected from an en route air traffic management center. Dynamic density metrics were used to predict variability in workload within and between work units while controlling for variability among raters. The model was cross-validated in Studies 2 and 3 with the use of a high-fidelity simulator.
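The general model form implied here can be sketched as a mixed-effects regression of reported workload on dynamic-density-style predictors with a random intercept per rater. The data, predictor names, coefficients, and choice of statsmodels below are assumptions for illustration, not the study's specification.

```python
# Illustrative multilevel workload model on synthetic data: fixed effects for two
# dynamic-density-style predictors and a random intercept per rater.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for rater in range(10):
    rater_bias = rng.normal(0, 0.5)           # between-rater variability
    for _ in range(40):
        traffic = rng.uniform(5, 30)          # hypothetical aircraft count
        conflicts = rng.poisson(2)            # hypothetical conflict events
        workload = 1.0 + 0.15 * traffic + 0.4 * conflicts + rater_bias + rng.normal(0, 1)
        rows.append(dict(rater=rater, traffic=traffic, conflicts=conflicts, workload=workload))
df = pd.DataFrame(rows)

model = smf.mixedlm("workload ~ traffic + conflicts", data=df, groups=df["rater"]).fit()
print(model.summary())  # fixed effects recover the predictors; group variance captures raters
```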
Reported workload generally remained within the bounds of the 90% prediction interval in Studies 2 and 3. Workload crossed the upper bound of the prediction interval only under nonroutine conditions. Qualitative analyses suggest that nonroutine events caused workload to cross the upper bound of the prediction interval because the controllers could not manage their workload strategically.
The model performed well under both routine and nonroutine conditions and over different patterns of workload variation.
Workload prediction models can be used to support both strategic and tactical workload management. Strategic uses include the analysis of historical and projected workflows and the assessment of staffing needs. Tactical uses include the dynamic reallocation of resources to meet changes in demand.
The aim of the study was to investigate the influence of different driving scenarios (urban, rural, highway) on the timing required by drivers from a two-stage warning system, based on car-to-car communication.
Car-to-car communication systems are designed to inform drivers of potential hazards at an early stage, before they are visible to them. Here, questions arise as to how drivers acknowledge early warnings and when they should be informed (first stage) and warned (second stage). Hence, optimum timing for presenting the information was tested.
A psychophysical method was used to establish the optimum timing in three driving scenarios at different speed limits (urban: 50 km/h, rural: 100 km/h, highway: 130 km/h). A total of 24 participants (11 female, 13 male; M = 29.1 years, SD = 11.6 years) participated in the study.
The results showed that the optimum timing did not differ among the three scenarios. The first and second stages should ultimately be presented at different timings at each speed limit (first stage: 26.5 s, second stage: 12.1 s before a potential hazard).
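As a rough illustration only, the reported lead times can be converted into approximate distances to the potential hazard at each scenario's speed limit.

```python
# Converting the reported warning lead times into approximate distances at the
# three speed limits (illustrative arithmetic only).

lead_times = {"information (first stage)": 26.5, "warning (second stage)": 12.1}  # seconds
speed_limits_kmh = {"urban": 50, "rural": 100, "highway": 130}

for scenario, kmh in speed_limits_kmh.items():
    mps = kmh / 3.6
    for stage, t in lead_times.items():
        print(f"{scenario:8s} {stage}: about {mps * t:4.0f} m before the hazard")
```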
The results showed that well-selected timing for activating information and warning is crucial for the acceptance of these systems. Appropriate timing for presenting the information and warning can be derived for these systems.
The findings will be integrated in further development of assistance systems based on car-to-x technology within the Car2X-Safety project of the Niedersächsisches Forschungszentrum Fahrzeugtechnik in Germany. This study was also supported by Chalmers University of Technology in Sweden.
This research aimed to identify the most frequently occurring human factors contributing to maintenance-related failures within a petroleum industry organization. Commonality between failures will assist in understanding reliability in maintenance processes, thereby preventing accidents in high-hazard domains.
Methods exist for understanding the human factors contributing to accidents. Their application in a maintenance context mainly has been advanced in aviation and nuclear power. Maintenance in the petroleum industry provides a different context for investigating the role that human factors play in influencing outcomes. It is therefore worth investigating the contributing human factors to improve our understanding of both human factors in reliability and the factors specific to this domain.
Detailed analyses were conducted of maintenance-related failures (N = 38) in a petroleum company using structured interviews with maintenance technicians. The interview structure was based on the Human Factor Investigation Tool (HFIT), which in turn was based on Rasmussen’s model of human malfunction.
A mean of 9.5 factors per incident was identified across the cases investigated. The three most frequent human factors contributing to the maintenance failures were found to be assumption (79% of cases), design and maintenance (71%), and communication (66%).
HFIT proved to be a useful instrument for identifying the pattern of human factors that recurred most frequently in maintenance-related failures.
The high frequency of failures attributed to assumptions and communication demonstrated the importance of problem-solving abilities and organizational communication in a domain where maintenance personnel have a high degree of autonomy and a wide geographical distribution.
In the present study, we explored the state versus trait aspects of measures of task and team workload in a disaster simulation.
There is often a need to assess workload in both individual and collaborative settings. Researchers in this field often use the NASA Task Load Index (NASA-TLX) as a global measure of workload by aggregating its component items. This practice, however, may overlook the distinction between traits and states.
Fifteen dyadic teams (11 inexperienced, 4 experienced) completed five sessions of a tsunami disaster simulator. After every session, individuals completed a modified version of the NASA-TLX that included team workload measures. We then examined the workload items by using a between-subjects and within-subjects perspective.
Between-subjects and within-subjects correlations among the items indicated the workload items are more independent within subjects (as states) than between subjects (as traits). Correlations between the workload items and simulation performance were also different at the trait and state levels.
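The analytic distinction can be illustrated with a minimal sketch (hypothetical column names, not the authors' code): trait-level associations are computed from person means, and state-level associations from person-centered scores.

    # Minimal sketch (hypothetical column names): separating trait-level (between-subjects)
    # and state-level (within-subjects) correlations between two NASA-TLX items
    # measured repeatedly across sessions.
    import pandas as pd

    df = pd.read_csv("tlx_sessions.csv")   # one row per participant x session
    items = ["mental_demand", "effort"]

    # Between-subjects (trait): correlate person means across participants.
    person_means = df.groupby("participant")[items].mean()
    r_between = person_means["mental_demand"].corr(person_means["effort"])

    # Within-subjects (state): correlate person-centered (session minus person mean) scores.
    centered = df[items] - df.groupby("participant")[items].transform("mean")
    r_within = centered["mental_demand"].corr(centered["effort"])

    print(f"between-subjects r = {r_between:.2f}, within-subjects r = {r_within:.2f}")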
Workload may behave differently at trait (between-subjects) and state (within-subjects) levels.
Researchers interested in workload measurement as a state should take a within-subjects perspective in their analyses.
We investigated skill development and workload of pilots driving teleoperated unmanned ground vehicles (UGVs) through different apertures and viewpoints using the cornering law.
Due to technological and cost constraints, humans are still needed for tasks involving UGVs. Operators of teleoperated UGVs are likely to have less situation awareness, making the vehicles more prone to getting stuck or damaged when negotiating apertures. To our knowledge, the operation of physical UGVs through corners has not been examined. Therefore, a better understanding of cornering teleoperated UGVs is imperative.
In Experiment 1, 20 novice participants repeatedly teleoperated a physical UGV using a third-person overhead view through apertures that varied in width. In Experiment 2, 18 additional novice participants completed a similar task but used a first-person view.
Participants' performance improved (i.e., faster cornering times and fewer collisions) over sessions. The cornering law successfully modeled the effect of different aperture widths on participant performance for both viewing perspectives.
In this study, we successfully modeled human performance of teleoperated UGVs using the cornering law. Analogous to Fitts' law and the steering law, we were able to model and predict cornering performance based on a derived index of cornering difficulty.
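The modeling step amounts to a Fitts'-style linear fit of cornering time against an index of cornering difficulty. The sketch below is illustrative only: the exact definition of the index follows the authors' derivation, and the numbers are hypothetical.

    # Minimal sketch: fitting a Fitts'-style linear model, cornering time = a + b * ID,
    # where ID is a derived index of cornering difficulty (its exact definition follows
    # the authors' derivation and is simply assumed here as given values).
    import numpy as np

    id_values = np.array([1.2, 1.8, 2.5, 3.1])   # hypothetical difficulty indices
    times_s = np.array([3.4, 4.6, 6.1, 7.3])     # hypothetical mean cornering times

    b, a = np.polyfit(id_values, times_s, 1)     # slope, intercept
    predicted = a + b * id_values
    r2 = 1 - np.sum((times_s - predicted) ** 2) / np.sum((times_s - times_s.mean()) ** 2)
    print(f"time = {a:.2f} + {b:.2f} * ID  (R^2 = {r2:.2f})")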
The cornering law could be used to aid in the development of prototype user interfaces and also to examine the effects of different teleoperation views (first person vs. third person).
The authors examine the pattern of direction errors made during the manipulation of a physical simulation of an underground coal mine bolting machine. The aims were to assess the directional control-response compatibility relationships associated with the device and to compare these results with data obtained from a virtual simulation of a generic device.
Directional errors during the manual control of underground coal roof bolting equipment are associated with serious injuries. Directional control-response relationships have previously been examined using a virtual simulation of a generic device; however, the applicability of these results to a specific physical device may be questioned.
Forty-eight participants randomly assigned to different directional control-response relationships manipulated horizontal or vertical control levers to move a simulated bolter arm in three directions (elevation, slew, and sump) as well as to cause a light to become illuminated and raise or lower a stabilizing jack. Directional errors were recorded during the completion of 240 trials by each participant.
Directional error rates increased when the control and response moved in opposite directions or when the directions of the control and response were perpendicular. The pattern of direction error rates was consistent with results obtained from a generic device in a virtual environment.
Error rates are increased by incompatible directional control-response relationships.
Ensuring that the design of equipment controls maintains compatible directional control-response relationships has potential to reduce the errors made in high-risk situations, such as underground coal mining.
In this study, we aimed to examine the effect of shared leadership within and across teams in multiteam systems (MTS) on team goal attainment and MTS success.
Due to different and sometimes competing goals in MTS, leadership is required within and across teams. Shared leadership, the effectiveness of which has been proven in single teams, may be an effective strategy to cope with these challenges.
We observed leadership in 84 cockpit and cabin crews that collaborated in the form of six-member MTS aircrews (N = 504) during standardized simulations of an in-flight emergency. Leadership was coded by three trained observers using a structured observation system. Team goal attainment was assessed by two subject matter experts using a checklist-based rating tool. MTS goal attainment was measured objectively on the basis of the outcome of the simulated flights.
In successful MTS aircrews, formal leaders and team members displayed significantly more leadership behaviors, shared leadership by pursers and flight attendants predicted team goal attainment, and pursers’ shared leadership across team boundaries predicted cross-team goal attainment. In cockpit crews, leadership was not shared and captains’ vertical leadership predicted team goal attainment regardless of MTS success.
The results indicate that in general, shared leadership positively relates to team goal attainment and MTS success, whereby boundary spanners’ dual leadership role is key.
Leadership training in MTS should address shared rather than merely vertical forms of leadership, and component teams in MTS should be trained together with emphasis on boundary spanners’ dual leadership role. Furthermore, team members should be empowered to engage in leadership processes when required.
The aim of this study was to explore human factors aspects of reality-based "force-on-force" (FoF) handgun practice through a within-subjects field experiment that assessed subjective stress measurements, biomarker regulation, performance outcomes, and behavioral adaptations.
FoF handgun practice is a recent training asset for armed officers whereby dynamic opponents may act, react, and even retaliate with specially designed marker ammunition. Predesigned scenarios enable trainees to practice in a simulated real-life environment.
A sample of experienced military personnel (N = 20) ran a handgun workshop in two conditions: FoF practice and traditional cardboard-target practice. Intra-individual assessments included anticipated distress, subjective stress, salivary alpha-amylase (sAA), shooting accuracy, and directly observable training seriousness.
Compared with the standard cardboard practice condition, FoF exposure caused significant increases in anticipatory distress, subjective stress, and sAA secretion. Furthermore, participants’ first encounter with FoF practice (vs. cardboard practice) substantially degraded their shooting performance and had a significant positive impact on the earnestness with which they approached their mission during the workshop.
FoF practice is an effective training tool for armed officers because it simulates a realistic work environment by increasing task-specific stress such that it affects important outcomes of professional performance and leads to desirable behavioral changes during training.
Potential applications of this research include the introduction of biomarker assessments in human factors research and the design, based on reality-based practice, of effective training procedures for high-reliability professionals.
A new protocol was evaluated for identification of stiffness, mass, and damping parameters employing a linear model for human hand-arm dynamics relevant to right-angle torque tool use.
Powered torque tools are widely used to tighten fasteners in manufacturing industries. While these tools increase accuracy and efficiency of tightening processes, operators are repetitively exposed to impulsive forces, posing risk of upper extremity musculoskeletal injury.
A novel testing apparatus was developed that closely mimics biomechanical exposure in torque tool operation. Forty experienced torque tool operators were tested with the apparatus to determine model parameters and validate the protocol for physical capacity assessment.
A second-order hand-arm model with parameters extracted in the time domain met the model accuracy criterion of 5% for time-to-peak displacement error in 93% of trials (vs. 75% for the frequency domain). Average time-to-peak handle displacement and relative peak handle force errors were 0.69 ms and 0.21%, respectively. Model parameters were significantly affected by gender and working posture.
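The identification task can be illustrated with a minimal sketch (not the study's procedure or data): a second-order mass-damper-spring model, m*x'' + c*x' + k*x = F(t), is fit in the time domain by least squares against a displacement trace; the force pulse and "measured" response below are synthetic stand-ins.

    # Minimal sketch: time-domain identification of a second-order hand-arm model
    # m*x'' + c*x' + k*x = F(t). The force pulse and "measured" displacement are
    # synthetic placeholders for apparatus measurements.
    import numpy as np
    from scipy.integrate import odeint
    from scipy.optimize import least_squares

    t = np.linspace(0, 0.2, 400)                  # 200-ms analysis window

    def pulse(ti):                                # hypothetical 20-ms, 60-N force pulse
        return 60.0 if ti < 0.02 else 0.0

    def simulate(params):
        m, c, k = params
        def deriv(state, ti):
            x, v = state
            return [v, (pulse(ti) - c * v - k * x) / m]
        return odeint(deriv, [0.0, 0.0], t)[:, 0]

    x_measured = simulate([1.5, 40.0, 6000.0])    # synthetic stand-in for measured displacement

    fit = least_squares(lambda p: simulate(p) - x_measured,
                        x0=[1.0, 20.0, 3000.0], bounds=(1e-3, np.inf))
    m_hat, c_hat, k_hat = fit.x
    print(f"mass = {m_hat:.2f} kg, damping = {c_hat:.1f} Ns/m, stiffness = {k_hat:.0f} N/m")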
The protocol and numerical calculation procedures provide an alternative method for assessing mechanical parameters relevant to right-angle torque tool use. The protocol more closely resembles actual tool use, and the calculation procedures demonstrate better parameter-extraction performance with time-domain system identification methods than with frequency-domain methods.
Potential future applications include parameter identification for in situ torque tool operation and equipment development for human hand-arm dynamics simulation under impulsive forces that could be used for assessing torque tools based on factors relevant to operator health (handle dynamics and hand-arm reaction force).
The aim of this study was to compare the effectiveness of a new index of perceived mental workload, the Multiple Resource Questionnaire (MRQ), with the standard measure of workload used in the study of vigilance, the NASA Task Load Index (NASA-TLX).
The NASA-TLX has been used extensively to demonstrate that vigilance tasks impose a high level of workload on observers. However, this instrument does not specify the information-processing resources needed for task performance. The MRQ offers a tool to measure the workload associated with vigilance assignments in which such resources can be identified.
Two experiments were performed in which factors known to influence task demand were varied. Included were the detection of stimulus presence or absence, detecting critical signals by means of successive-type (absolute judgment) and simultaneous-type (comparative judgment) discriminations, and operating under multitask vs. single-task conditions.
The MRQ paralleled the NASA-TLX in showing that vigilance tasks generally induce high levels of workload and that workload scores are greater in detecting stimulus absence than presence and in making successive as compared to simultaneous-type discriminations. Additionally, the MRQ was more effective than the NASA-TLX in reflecting higher workload in the context of multitask than in single-task conditions. The resource profiles obtained with MRQ fit well with the nature of the vigilance tasks employed, testifying to the scale’s content validity.
The MRQ may be a meaningful addition to the NASA-TLX for measuring the workload of vigilance assignments.
By uncovering knowledge representation associated with different tasks, the MRQ may aid in designing operational vigilance displays.
To determine differences in muscle activity amplitudes and variation of amplitudes when using different information and communication technologies (ICT).
Office workers use different ICT to perform tasks. Upper body musculoskeletal complaints are frequently reported by this occupational group. Increased muscle activity and insufficient variation are potential risk factors for musculoskeletal complaints.
Muscle activity of the right and left upper trapezius and the right wrist extensor muscle bundle (extensor carpi radialis longus and brevis) of 24 office workers (performing their usual tasks requiring different ICT at work and away from work) was measured continuously over 10 to 12 hours. Muscle activity variation was quantified using two indices: the amplitude probability distribution function and exposure variation analysis.
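As an illustration of the first index (not the study's code; the trace is synthetic), the amplitude probability distribution function is conventionally summarized by its 10th, 50th, and 90th percentile levels of the normalized EMG amplitude.

    # Minimal sketch: amplitude probability distribution function (APDF) summary of a
    # normalized EMG amplitude trace, using the conventional 10th ("static"),
    # 50th ("median"), and 90th ("peak") percentile levels. The trace is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    emg_percent_mvc = np.abs(rng.normal(8.0, 4.0, size=36_000))  # e.g., ~10 h sampled at 1 Hz

    p10, p50, p90 = np.percentile(emg_percent_mvc, [10, 50, 90])
    print(f"APDF: static = {p10:.1f} %MVC, median = {p50:.1f} %MVC, peak = {p90:.1f} %MVC")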
There was a trend for electronics-based New ICT tasks to involve less electromyography (EMG) variation than paper-based Old ICT tasks. Performing Combined ICT tasks (i.e., using paper- and electronics-based ICT simultaneously) resulted in the highest muscle activity levels and least variation; however, these Combined ICT tasks were rarely performed. Tasks involving no ICT (Non-ICT) had the greatest muscle activity variation.
Office workers in this study used various ICT during tasks at work and away from work. The high EMG amplitudes and low variation observed when using Combined ICT may present the greatest risk for musculoskeletal complaints, and use of Combined ICT by workers should be kept low in office work. Breaking up combined, New, and Old ICT tasks, for example, by interspersing highly variable Non-ICT tasks into office workers’ daily tasks, could increase overall muscle activity variation and reduce risk for musculoskeletal complaints.
A pair of simulated driving experiments studied the effects of cognitive load on drivers’ lane-keeping performance.
Cognitive load while driving often reduces the variability of lane position. However, there is no agreement as to whether this effect should be interpreted as a performance loss, consistent with other effects of distraction on driving, or as an anomalous performance gain.
Participants in a high-fidelity driving simulator performed a lane-keeping task in lateral wind, with instructions to keep a steady lane position. Under high load conditions, participants performed a concurrent working memory task with auditory stimuli. Cross-spectral analysis measured the relationship between wind force and steering inputs.
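The cross-spectral step can be sketched as follows (illustrative only; the signals and sampling rate are synthetic placeholders): the cross-spectral density, coherence, and gain between the wind disturbance and the steering signal quantify how tightly steering corrections track the crosswind.

    # Minimal sketch: cross-spectral density and coherence between a crosswind force
    # signal and steering wheel angle (both synthetic placeholders).
    import numpy as np
    from scipy import signal

    fs = 60.0                                      # assumed simulator sampling rate
    t = np.arange(0, 300, 1 / fs)
    rng = np.random.default_rng(1)
    wind = rng.normal(size=t.size)
    steering = 0.6 * np.convolve(wind, np.ones(30) / 30, mode="same") \
               + 0.2 * rng.normal(size=t.size)     # smoothed, noisy response to wind

    f, pxy = signal.csd(wind, steering, fs=fs, nperseg=1024)
    f, coh = signal.coherence(wind, steering, fs=fs, nperseg=1024)
    gain = np.abs(pxy) / signal.welch(wind, fs=fs, nperseg=1024)[1]
    # Higher coherence and gain at low frequencies indicate tighter coupling between
    # the wind disturbance and steering corrections.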
Cognitive load reduced the variability of lane position and increased the coupling between steering wheel position and crosswind strength.
Although cognitive load disrupts driver performance in a variety of ways, it produces a performance gain in lane keeping. This effect appears to reflect drivers’ efforts to protect lateral control against the risk of distraction, at the apparent neglect of other elements of driving performance.
Results may inform educational efforts to help drivers understand the risks of distraction and the inadequacies of compensatory driving strategies.
We aimed to investigate how ordered mappings (e.g., left-to-right and right-to-left order representations) would interfere with each other.
Mental representations of numbers and letters are linked with spatial representation and can be changed intentionally.
The experiment consisted of three sessions. In the digit-alone session, two digits randomly selected from [1], [2], and [3] were shown. If the two digits were the same, participants pressed the button corresponding to the digit, and if the digits differed, they pressed the remaining button. The response buttons were ordered [1][2][3] from the left. In the letter-alone session, three different button configurations were prepared: sequential [A][B][C], reversed [C][B][A], or partially reversed [B][A][C]. The same-versus-different rules were basically identical to those in the digit task. In the mixed session, trials of the digit task and those of the letter task were randomly mixed.
We found that two ordinal representations did not interfere with each other when they shared the same direction of order ([1][2][3] vs. [A][B][C]), two ordinal mappings interfered with each other when they had different directions of order ([1][2][3] vs. [C][B][A]), and an ordinal mapping ([1][2][3]) was affected by a nonordinal mapping ([B][A][C]), but the nonordinal mapping was less affected by the ordinal mapping.
The mapping between ordinal information and space can be modulated by top-down processes, and it is prone to interference depending on the nature of another coexisting mapping.
Our findings may be used in designing response assignments for input devices for multiple functions.
The aim of this study was to investigate the effects of font size, interline spacing, and a technology called ReadingMate on the letter-counting task performance of users running on a treadmill.
Few researchers have investigated how runners read text while running on a treadmill. Our previous studies showed that ReadingMate had positive effects on the reading-while-running experience; however, the effect of other text conditions (i.e., font size and interline spacing) and the interplay between ReadingMate and such text conditions on the letter-counting task performance are not clearly understood.
Fifteen participants were recruited for the experiment. There were three main factors: display type (normal vs. ReadingMate), font size (8, 12, 16, and 20 point), and interline spacing (1.0x, 1.5x, 2.0x, and 2.5x). The researchers employed a letter-counting task. Performance was measured in terms of task completion time, success rate in counting the target letter f, and number of give-ups.
Overall, the letter-counting task performance while running on a treadmill improved as font size and interline spacing increased, as expected. ReadingMate was more effective than normal display particularly when text was displayed in a small font size and with dense interline spacing.
When text must be displayed in a small font size and with dense interline spacing, ReadingMate can be used to improve the users’ task performance.
Practical applications of ReadingMate include improving the text-reading experience in shaky environments, such as in aviation, construction, and transportation.
The objective of this study was to quantify shoulder muscle fatigue during repetitive exertions similar to motions found in automobile assembly tasks.
Shoulder musculoskeletal disorders (MSDs) are a common and costly problem in automotive manufacturing.
Ten subjects participated in the study. There were three independent variables: shoulder angle, frequency, and force. There were two types of dependent measures: percentage change in near-infrared spectroscopy (NIRS) measures and change in electromyography (EMG) median frequency. The anterior deltoid and trapezius muscles were measured for both NIRS and EMG. Also, EMG was collected on the middle deltoid and biceps muscles.
The results showed that oxygenated hemoglobin decreased significantly due to the main effects (shoulder angle, frequency, and force). The percentage change in oxygenated hemoglobin had a significant interaction attributable to force and repetition for the anterior deltoid muscle, indicating that as repetition increased, the magnitude of the differences between the forces increased. The interaction of repetition and shoulder angle was also significant for the percentage change in oxygenated hemoglobin. The median frequency decreased significantly for the main effects; however, no interactions were statistically significant.
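The EMG median frequency referred to above can be computed from a Welch power spectrum, as in the following minimal sketch (the signal is synthetic band-limited noise standing in for raw EMG; a downward shift across repetitions is the fatigue indicator).

    # Minimal sketch: EMG median frequency from a Welch power spectrum.
    import numpy as np
    from scipy import signal

    fs = 1000.0
    rng = np.random.default_rng(2)
    emg = signal.lfilter(*signal.butter(4, [20, 250], btype="band", fs=fs),
                         rng.normal(size=int(5 * fs)))    # 5-s analysis window

    f, pxx = signal.welch(emg, fs=fs, nperseg=1024)
    cum_power = np.cumsum(pxx)
    median_freq = f[np.searchsorted(cum_power, cum_power[-1] / 2)]
    print(f"median frequency = {median_freq:.1f} Hz")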
There was significant shoulder muscle fatigue as a function of shoulder angle, task frequency, and force level. Furthermore, percentage change in oxygenated hemoglobin had two statistically significant interactions, enhancing our understanding of these risk factors.
Ergonomists should examine interactions of force and repetition as well as shoulder angle and repetition when evaluating the risk of shoulder MSDs.
We explore whether the visual presentation of relative position vectors (RPVs) improves conflict detection in conditions representing some aspects of future airspace concepts.
To help air traffic controllers manage increasing traffic, new tools and systems can automate more cognitively demanding processes, such as conflict detection. However, some studies reveal adverse effects of such tools, such as reduced situation awareness and increased workload. New displays are needed that help air traffic controllers handle increasing traffic loads.
A new display tool based on the display of RPVs, the Multi-Conflict Display (MCD), is evaluated in a series of simulated conflict detection tasks. The conflict detection performance of air traffic controllers with the MCD plus a conventional plan-view radar display is compared with their performance with a conventional plan-view radar display alone.
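To illustrate the kind of quantity an RPV display makes visible (this is not the MCD implementation, and the values are hypothetical), the relative position and velocity of an aircraft pair determine the time to and distance at the closest point of approach.

    # Minimal sketch (not the MCD implementation): closest point of approach (CPA)
    # for an aircraft pair from its relative position and relative velocity vectors.
    import numpy as np

    rel_pos_nm = np.array([12.0, -5.0])      # hypothetical relative position (nm)
    rel_vel_kt = np.array([-420.0, 180.0])   # hypothetical relative velocity (kt)

    t_cpa_h = max(0.0, -np.dot(rel_pos_nm, rel_vel_kt) / np.dot(rel_vel_kt, rel_vel_kt))
    d_cpa_nm = np.linalg.norm(rel_pos_nm + rel_vel_kt * t_cpa_h)
    print(f"CPA in {t_cpa_h * 60:.1f} min at {d_cpa_nm:.1f} nm separation")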
Performance with the MCD plus radar was better than with radar alone in complex scenarios requiring controllers to find all actual or potential conflicts, especially when the number of aircraft on the screen was large. However, performance with radar alone was better for static scenarios in which conflicts for a target aircraft, or target pair of aircraft, were the focus.
Complementing the conventional plan-view display with an RPV display may help controllers detect conflicts more accurately with extremely high aircraft counts.
We provide an initial proof of concept that RPVs may be useful for supporting conflict detection in situations that are partially representative of conditions in which controllers will be working in the future.
The aim of this study was to evaluate the efficacy of a 9-day accommodation protocol on reducing perceived discomfort while sitting on a stability ball (SB); trunk muscle activity levels and lumbar spinal postures were also considered.
Previous studies have compared SB sitting with office chair sitting with few observed differences in muscle activity or posture; however, greater discomfort during SB sitting has been reported. These findings may indicate an accommodation period is necessary to acclimate to SB sitting.
For this study, 6 males and 6 females completed two separate, 2-hr sitting sessions on an SB. Half the participants completed a 9-day accommodation period between the visits, whereas the other half did not use an SB during the time. On both occasions, self-reported perceived discomfort ratings were collected along with erector spinae and abdominal muscle activity and lumbar spinal postures.
Discomfort ratings were reduced in female participants following the accommodation; no effects on muscle activation or lumbar spine postures were observed.
Accommodation training may reduce perceived low-back discomfort in females. Trunk muscle activity and lumbar spine postures during seated office work on an SB did not differ between groups; however, greater statistical power would be required to conclusively address these variables.
Regarding whether to use an SB in place of a standard office chair, this study indicates that females electing to use an SB can decrease discomfort by following an accommodation protocol; no evidence was found to indicate that SB chair use will improve trunk strength or posture, even following an accommodation period.
The aim of this study was to evaluate whether communicating automation uncertainty improves the driver–automation interaction.
A false understanding of a system as infallible may provoke automation misuse and can lead to severe consequences in the case of automation failure. Presenting automation uncertainty may prevent this misunderstanding and, as previous studies have shown, may have numerous benefits. Few studies, however, have clearly shown the potential of communicating uncertainty information in driving. The current study fills this gap.
We conducted a driving simulator experiment, varying the presented uncertainty information between participants (no uncertainty information vs. uncertainty information) and the automation reliability (high vs. low) within participants. Participants interacted with a highly automated driving system while engaging in secondary tasks and were required to cooperate with the automation to drive safely.
Quantile regressions and multilevel modeling showed that the presentation of uncertainty information increases the time to collision in the case of automation failure. Furthermore, the data indicated improved situation awareness and better knowledge of fallibility for the experimental group. Consequently, the automation with the uncertainty symbol received higher trust ratings and increased acceptance.
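The quantile-regression step can be sketched as follows (hypothetical file and column names; the uncertainty condition is assumed to be coded 0/1): lower quantiles of time to collision capture the most safety-critical takeovers.

    # Minimal sketch (hypothetical column names): quantile regression of time to
    # collision during automation failures on the uncertainty-display condition
    # (assumed coded 0 = no uncertainty information, 1 = uncertainty information).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("takeover_events.csv")   # one row per automation-failure event

    for q in (0.25, 0.50, 0.75):
        fit = smf.quantreg("time_to_collision ~ uncertainty_display", df).fit(q=q)
        print(q, fit.params["uncertainty_display"])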
The presentation of automation uncertainty through a symbol improves overall driver–automation cooperation.
Most automated systems in driving could benefit from displaying reliability information. Such a display might improve the acceptance of fallible systems and further enhance driver–automation cooperation.
We evaluated alternative scrolling methods on non–touch screen computer operating systems by comparing human performance in different scrolling conditions.
Scrolling directions are inconsistent across current operating systems. Few researchers have investigated how scrolling method influences users' performance. The response–effect (R-E) compatibility principle can be used as a theoretical guide.
Experiments 1 and 2 involved two successive tasks (scrolling and target content judgment) to simulate how people scroll to acquire and use off-screen information. Performance in R-E compatible and incompatible conditions was compared. Experiment 3 involved a location judgment task to test the influence of target location. Experiments 4 and 5 included a scrolling effect following the location judgment task to test the sufficient role of the scrolling effect.
Overall, responses were facilitated when the response direction was compatible with the forthcoming display-content movement direction (an R-E compatibility effect), regardless of whether the scrolling effect was task relevant or task irrelevant. A spatial stimulus–response (S-R) compatibility effect attributable to target location was also found. When the scrolling effect was present, there were both R-E and S-R components; the R-E effect was the larger of the two.
Scrolling in the direction of content movement yielded the best performance, and the scrolling effect was the main source of the R-E compatibility effect.
These findings suggest that (a) the R-E compatibility principle may be used as a general design guideline for scrolling and (b) a consistent scrolling method should be available on various operating systems.
The sensitivity of pinch movement discrimination between the thumb and index finger was assessed with and without elastic resistance.
Researchers have examined the effect of elastic resistance on control of single upper-limb movements; however, no one has explored how elastic resistance affects proprioceptive acuity when using two digits simultaneously in a coordinated movement.
For this study, 16 right-handed, healthy young adults undertook an active finger pinch movement discrimination test for the right and left hands, with and without elastic resistance. We manipulated pinch movement distance by varying the size of the object that created the physical stop to end the pinch action.
Adding elastic resistance from a spring to the thumb–index finger pinch task did not affect accuracy of pinch discrimination measured as either the just noticeable difference, F(1, 15) = 1.78, p = .20, or area under the curve, F(1, 15) = 0.07, p = .80.
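The just noticeable difference reported here is typically derived from a psychometric function fit to discrimination responses; the following minimal sketch (synthetic data, one common JND convention) illustrates the idea.

    # Minimal sketch: estimating a just noticeable difference (JND) by fitting a
    # cumulative Gaussian psychometric function to "judged larger" proportions at
    # several comparison pinch distances (synthetic data).
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    comparison_mm = np.array([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5])  # re. standard
    p_larger = np.array([0.05, 0.15, 0.35, 0.50, 0.70, 0.85, 0.95])

    def psychometric(x, mu, sigma):
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(psychometric, comparison_mm, p_larger, p0=[0.0, 1.0])
    jnd = sigma * norm.ppf(0.75)   # one common convention: distance from PSE to 75% point
    print(f"PSE = {mu:.2f} mm, JND = {jnd:.2f} mm")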
Having elastic resistance to generate lever return in pincers, tweezers, and surgical equipment or in virtual instruments is unlikely to affect pinch movement discrimination.
Elastic resistance did not affect finger pinch discrimination in the present study, suggesting that return tension on equipment lever arms has a practical but not perceptual function. An active finger pinch movement discrimination task, with or without elastic resistance, could be used for hand proprioceptive training and as a screening tool to identify those with aptitude or decrements in fine finger movement control.
The performance of human operators acting within closed-loop control systems is investigated in a classic tracking task. The dependence of the control error (tracking error) on the parameters display gain, k_display, and input signal frequency bandwidth, f_g, which alter task difficulty and presumably the control delay, is studied with the aim of functionally specifying it via a model.
The human operator as an element of a cascaded human–machine control system (e.g., car driving or piloting an airplane) codetermines the overall system performance. Control performance of humans in continuous tracking has been described in earlier studies.
Using a handheld joystick, 10 participants tracked continuous random input signals. The parameters f_g and k_display were altered between experiments.
Increased task difficulty promoted lengthened control delay and, consequently, increased control error. Tracking performance degraded profoundly with target deflection components above 1 Hz, confirming earlier reports.
The control error is composed of a delay-induced component, a demand-based component, and a novel component: a human tracking limit. Accordingly, a new model is suggested that allows the observed control error to be split into these three components.
To achieve optimal performance in control systems that include a human operator (e.g., vehicles, remote controlled rovers, crane control), (a) tasks should be kept as simple as possible to achieve shortest control delays, and (b) task components requiring higher-frequency (>1 Hz) tracking actions should be avoided or automated by technical systems.
The aim of the study was to compare the decision times for left–right decisions for a dual-coded advisory turn indicator and a typical spatial-only turn indicator in a GPS navigational map display.
Track-up maps are useful for turn decision making but do not facilitate configural knowledge acquisition of an area. North-up maps present a stable orientation for this type of learning, but typical implementations of north-up map displays lead to misaligned and confusing turn information. We compared a typical spatial-only indicator with a dual-coded spatial-plus-verbal indicator, systematically manipulating vehicle heading and measuring reaction time. The new display, the Dual-Coded Advisory Turn Indicator for Maps (DATIM), was based on an assumption of the advantages of concurrent verbal and spatial processing of advisory turn indicators in map displays.
The experimental design was a 2 x 2 x 24 mixed design with indicator type as a between-subjects factor and turn direction (left, right) and 24 heading angles (15° intervals) as repeated-measures factors. Participants made turn decisions while viewing static displays of intersections at variably rotated headings.
Reaction time for the DATIM display was consistently faster than for the typical spatial-only indicator at all heading angles, but especially at heading angles beyond ±45° (520-ms difference at 180°).
The DATIM display produced faster turn decisions at all heading angles.
DATIM displays could allow north-up maps to be used for turn-by-turn decision making in GPS navigational systems. Drivers could have the advantages of both the stable orientation to facilitate planning and the easy turn-by-turn guidance. Limitations are discussed.
The objective of this work was to understand the relationship between eye movements and cognitive workload in maintaining lane position while driving.
Recent findings in driving research have found that, paradoxically, increases in cognitive workload decrease lateral position variability. If people drive where they look and drivers look more centrally with increased cognitive workload, then one could explain the decreases in lateral position variability as a result of changes in lateral eye movements. In contrast, it is also possible that cognitive workload brings about these patterns regardless of changes in eye movements.
We conducted three experiments involving a fixed-base driving simulator to independently manipulate eye movements and cognitive workload.
Results indicated that eye movements played a modest role in lateral position variability, whereas cognitive workload played a much more substantial role.
Increases in cognitive workload decrease lane position variability independently from eye movements. These findings are discussed in terms of hierarchical control theory.
These findings could potentially be used to identify periods of high cognitive workload during driving.
In this work, we expand on the theory of adaptive aiding by measuring the effectiveness of coadaptive aiding, wherein we explicitly allow for both system and user to adapt to each other.
Adaptive aiding driven by psychophysiological monitoring has been demonstrated to be a highly effective means of controlling task allocation and system functioning. Psychophysiological monitoring is uniquely well suited for coadaptation, as malleable brain activity may be used as a continuous input to the adaptive system.
To establish the efficacy of the coadaptive system, physiological activation of adaptation was directly compared with manual activation or no activation of the same automation and cuing systems. We used interface adaptations and automation that are plausible for real-world operations, presented in the context of a multi–remotely piloted aircraft control simulation. Each participant completed 3 days of testing during 1 week. Performance was assessed via proportion of targets successfully engaged.
In the first 2 days of testing, there were no significant differences in performance between the conditions. However, in the third session, physiological adaptation produced the highest performance.
By extending the data collection across multiple days, we offered enough time and repeated experience for user adaptation as well as online system adaptation, hence demonstrating coadaptive aiding.
The results of this work may be employed to implement more effective adaptive workstations in a variety of work domains.
An experiment was conducted to investigate the impacts of length and variability of system response time (SRT) on user behavior and user experience (UX) in sequential computing tasks.
Length is widely considered to be the most important aspect of SRTs in human–computer interaction. Research on temporal attention shows that humans adjust to temporal structures and that performance substantially improves with temporal predictability.
Participants performed a sequential task with simulated office software. Duration and variability, that is, the number of different SRTs, were manipulated; lower variability came at the expense of higher average durations. User response times, task execution times, and failure rates were measured to assess user performance. UX was measured with a questionnaire.
A reduction in variability improved user performance significantly. Whereas task load and failure rates remained constant, responses were significantly faster. Although a reduction in variability came along with, on average, increased SRTs, no difference in UX was found.
Considering SRT variability when designing software can yield considerable performance benefits for the users. Although reduced variability comes at the expense of overall longer SRTs, the interface is not subjectively evaluated to be less satisfactory or demanding. Time design should aim not only at reducing average SRT length but also at finding the optimum balance of length and variability.
Our findings can easily be applied in any user interface for sequential tasks. User performance can be improved without loss of satisfaction by selectively prolonging particular SRTs to reduce variability.
The author examines the relationship between energetic arousal (EA) and the processing of sentences containing natural-language quantifiers.
Previous studies and theories have shown that energy may differentially affect various cognitive functions. Recent investigations devoted to quantifiers strongly support the theory that various types of quantifiers involve different cognitive functions in the sentence–picture verification task.
In the present study, 201 students were presented with a sentence–picture verification task consisting of simple propositions containing a quantifier that referred to the color of a car on display. Color pictures of cars accompanied the propositions. In addition, the level of participants’ EA was measured before and after the verification task.
It was found that EA and performance on proportional quantifiers (e.g., "More than half of the cars are red") are in an inverted U-shaped relationship.
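An inverted-U relationship of this kind is commonly tested by adding a quadratic arousal term to a regression model; the following minimal sketch (synthetic placeholder data, not the study's analysis) shows the signature pattern of a negative, significant quadratic coefficient.

    # Minimal sketch: testing an inverted-U relationship by including a quadratic
    # energetic-arousal term; a negative, significant quadratic coefficient is the
    # signature of the inverted U. Data are synthetic placeholders.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    ea = rng.uniform(1, 7, size=201)                     # energetic arousal scores
    accuracy = 0.5 + 0.22 * ea - 0.025 * ea**2 + rng.normal(0, 0.05, size=201)
    df = pd.DataFrame({"ea": ea, "ea_sq": ea**2, "accuracy": accuracy})

    fit = smf.ols("accuracy ~ ea + ea_sq", df).fit()
    print(fit.params["ea_sq"], fit.pvalues["ea_sq"])     # negative and significant => inverted U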
This result may be explained by the fact that proportional sentences engage working memory to a high degree, and previous models of EA–cognition associations have been based on the assumption that tasks that require parallel attentional and memory processes are best performed when energy is moderate.
The research described in the present article has several applications, as it shows the optimal human conditions for verbal comprehension. For instance, it may be important in workplace design to control the level of arousal experienced by office staff when work is mostly related to the processing of complex texts. Energy level may be influenced by many factors, such as noise, time of day, or thermal conditions.