This study aimed to examine the effects of accumulating nursing work on maximal and rapid strength characteristics in female nurses and compare these effects in day versus night shift workers.
Nurses exhibit among the highest nonfatal injury rates of all occupations, which may be a consequence of long, cumulative work shift schedules. Fatigue may accumulate across multiple shifts and lead to performance impairments, which in turn may be linked to injury risks.
Thirty-seven nurses and aides performed isometric strength-based performance testing of three muscle groups, including the knee extensors, knee flexors, and wrist flexors (hand grip), as well as countermovement jumps, at baseline and following exposure to three 12-hour work shifts in a four-day period. Variables included peak torque (PT) and rate of torque development (RTD) from isometric strength testing and jump height and power output.
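As a hedged illustration of how peak torque and RTD are typically derived from an isometric torque–time curve (the onset threshold, sampling rate, and 0–200 ms analysis window below are assumptions for illustration, not details reported in the study), a minimal computation might look like this:

```python
import numpy as np

def strength_metrics(torque, fs=1000, window_ms=200):
    """Peak torque (Nm) and rate of torque development (Nm/s) over the first
    `window_ms` after contraction onset. Sampling rate `fs` and the 5% onset
    threshold are illustrative assumptions."""
    onset = np.argmax(torque > 0.05 * torque.max())      # first sample above threshold
    n = int(window_ms / 1000 * fs)                        # samples in the RTD window
    window = torque[onset:onset + n]
    peak_torque = torque.max()
    rtd = (window[-1] - window[0]) / (len(window) / fs)   # delta torque / delta time
    return peak_torque, rtd
```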
The rigorous work period resulted in significant decreases (–7.2% to –19.2%) in a large majority (8 of 9) of the isometric strength-based measurements. No differences were noted between day and night shift workers except for RTD at 200 ms, for which night shift workers showed greater work-induced decreases than day shift workers. No changes were observed for jump height or power output.
A compressed nursing work schedule resulted in decreases in strength-based performance abilities, indicative of performance fatigue.
Compressed work schedules involving long shifts lead to functional declines in nurses' performance capacities that may pose risks to both nurse safety and patient quality of care. Fatigue management plans are needed to monitor and regulate increased levels of fatigue.
To investigate how people’s sequential adjustments to their position are impacted by the source of the information.
There is an extensive body of research on how the order in which new information is received affects people’s final views and decisions as well as research on how they adjust their views in light of new information.
Seventy college-aged students, 60% of whom were women, each completed one of eight randomly distributed booklets corresponding to the between-subjects treatment conditions formed by crossing two levels of information source with four levels of presentation order. Based on the information provided, participants estimated the probability of an attack, the dependent measure.
Confirming information from an expert intelligence officer significantly increased the attack probability from the initial position more than confirming information from a longtime friend. Conversely, disconfirming information from a longtime friend decreased the attack probability significantly more than the same information from an intelligence officer.
It was confirmed that confirming and disconfirming evidence were weighted differently depending on the information source, either an expert or a close friend. The difference appears to be due to the existence of two kinds of trust: cognition-based trust placed in an expert and affect-based trust placed in a close friend.
Purveyors of information need to understand that it is not only the content of a message that counts but that other forces are at work such as the order in which information is received and characteristics of the information source.
Four studies were conducted to assess bicyclist conspicuity enhancement at night by the application of reflective tape (ECE/ONU 104) to the bicycle rear frame and to pedal cranks.
Previous studies have tested the benefits of reflective markings applied to bicyclist clothing. Reflective jackets, however, need to be available and worn, whereas reflective markings on the bicycle enhance conspicuity without any active behavior by the bicyclist.
In the first study, reflective tape was applied to the rear frame. Detection distance was compared in four conditions: control, rear red reflector, high visibility jacket, and reflective tape. In the second study, the same conditions were studied with night street lighting on and off. In the third study, detection and recognition distances were evaluated in rainy conditions. In the fourth study, visibility was assessed with the reflective tape applied to pedal cranks.
In the first study, the application of reflective markings resulted in a detection distance of 168.28 m. In the second study, the detection distance with reflective markings was 229.74 m with public street lighting on and 256.41 m with public street lighting off. In rainy conditions, detection distance using the reflective markings was 146.47 m. Reflective tape applied to pedal cranks resulted in a detection distance of 168.60 m.
Reflective tape applied to the rear bicycle frame can considerably increase bicyclist conspicuity and safety at night.
Reflective tape is highly recommended to complement front and rear lights when riding a bicycle at night.
The objective for this study was to investigate the effects of prior familiarization with takeover requests (TORs) during conditional automated driving on drivers’ initial takeover performance and automation trust.
System-initiated TORs are one of the biggest concerns for conditional automated driving and have been studied extensively in the past. Most, but not all, of these studies have included training sessions to familiarize participants with TORs. This makes them hard to compare and might obscure first-failure-like effects on takeover performance and automation trust formation.
A driving simulator study compared drivers’ takeover performance in two takeover situations across four prior familiarization groups (no familiarization, description, experience, description and experience) and automation trust before and after experiencing the system.
As hypothesized, prior familiarization with TORs had a more positive effect on takeover performance in the first than in a subsequent takeover situation. In all groups, automation trust increased after participants experienced the system. Participants who were given no prior familiarization with TORs reported the highest automation trust both before and after experiencing the system.
The current results extend earlier findings suggesting that prior familiarization with TORs during conditional automated driving will be most relevant for takeover performance in the first takeover situation and that it lowers drivers’ automation trust.
Potential applications of this research include different approaches to familiarize users with automated driving systems, better integration of earlier findings, and sophistication of experimental designs.
The aim of this study was to determine whether a sequence of earcons can effectively convey the status of multiple processes, such as the status of multiple patients in a clinical setting.
Clinicians often monitor multiple patients. An auditory display that intermittently conveys the status of multiple patients may help.
Nonclinician participants listened to sequences of 500-ms earcons that each represented the heart rate (HR) and oxygen saturation (SpO2) levels of a different simulated patient. In each sequence, one, two, or three patients had an abnormal level of HR and/or SpO2. In Experiment 1, participants reported which of nine patients in a sequence were abnormal. In Experiment 2, participants identified the vital signs of one, two, or three abnormal patients in sequences of one, five, or nine patients, where the interstimulus interval (ISI) between earcons was 150 ms. Experiment 3 used the five-sequence condition of Experiment 2, but the ISI was either 150 ms or 800 ms.
Participants reported which patient(s) were abnormal with a median accuracy of 95%. Identification accuracy for vital signs decreased as the number of abnormal patients increased from one to three, p < .001, but accuracy was unaffected by the number of patients in a sequence. Overall, identification accuracy was significantly higher with an ISI of 800 ms (89%) compared with an ISI of 150 ms (83%), p < .001.
A multiple-patient display can be created by cycling through earcons that represent individual patients.
The principles underlying the multiple-patient display can be extended to other vital signs, designs, and domains.
The goal of the present study was to examine the effects of domain-relevant expertise on running memory and the ability to process handoffs of information. In addition, the role of active or passive processing was examined.
Currently, there is little research that addresses how individuals with different levels of expertise process information in running memory when the information is needed to perform a real-world task.
Three groups of participants differing in their level of clinical expertise (novice, intermediate, and expert) performed an abstract running memory span task and two tasks resembling real-world activities, a clinical handoff task and an air traffic control (ATC) handoff task. For all tasks, list length and the amount of information to be recalled were manipulated.
Regarding processing strategy, all participants used passive processing for the running memory span and ATC tasks. The novices also used passive processing for the clinical task. The experts, however, appeared to use more active processing, and the intermediates fell in between.
Overall, the results indicated that individuals with clinical expertise and a developed mental model rely more on active processing of incoming information for the clinical task, whereas individuals with little or no clinical knowledge rely on passive processing.
The results have implications for how training should be developed to help less experienced personnel identify what information should, and should not, be included in a handoff.
A computational process model could explain how the dynamic interaction of human cognitive mechanisms produces each of multiple error types.
With increasing capability and complexity of technological systems, the potential severity of consequences of human error is magnified. Interruption greatly increases people’s error rates, as does the presence of other information to maintain in an active state.
The model was implemented as a software-instantiated Monte Carlo simulation. It drew on theoretical constructs such as associative spreading activation for prospective memory, explicit rehearsal strategies as deliberate cognitive operations to aid retrospective memory, and decay.
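As a minimal, illustrative Monte Carlo sketch (not the authors' model), the following shows how noisy, decaying step activations combined with an interruption penalty can produce omission errors whose rate can then be estimated by simulation; all parameter values are assumptions:

```python
import random

def run_trial(n_steps=6, base_activation=1.0, decay=0.05,
              noise_sd=0.3, interruption_step=None, threshold=0.6):
    """One simulated task run: each step is retrieved only if its noisy,
    decayed activation exceeds a threshold; otherwise an omission is logged."""
    errors = []
    activations = [base_activation] * n_steps
    for step in range(n_steps):
        if step == interruption_step:                    # interruption adds extra decay
            activations = [a - decay * 5 for a in activations]
        activations = [a - decay for a in activations]   # time-based decay on every step
        sampled = activations[step] + random.gauss(0, noise_sd)
        if sampled < threshold:                          # step not retrieved -> omission
            errors.append(("omission", step))
    return errors

def error_rate(interruption_step, trials=10_000):
    """Proportion of trials containing at least one error."""
    return sum(bool(run_trial(interruption_step=interruption_step))
               for _ in range(trials)) / trials

print(error_rate(None), error_rate(3))   # baseline vs. interrupted error rates
```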
The model replicated the 30% effect of interruptions on postcompletion error in Ratwani and Trafton’s Stock Trader task, the 45% interaction effect on postcompletion error of working memory capacity and working memory load from Byrne and Bovair’s Phaser Task, as well as the 5% perseveration and 3% omission effects of interruption from the UNRAVEL Task.
Error classes including perseveration, omission, and postcompletion error fall naturally out of the theory.
The model explains post-interruption error in terms of task state representation and priming for recall of subsequent steps. Its performance suggests that task environments providing more cues to current task state will mitigate error caused by interruption. For example, interfaces could provide labeled progress indicators or facilities for operators to quickly write notes about their task states when interrupted.
The present paper presents findings from two studies addressing the effects of employees' intention to take rest breaks on rest-break frequency and on changes in well-being during a workday.
Rest breaks are effective in avoiding an accumulation of fatigue during work. However, little is known about individual differences in rest-break behavior.
In Study 1, the association between rest-break intention and the daily number of rest breaks recorded over four consecutive workdays was determined with a generalized linear model in a sample of employees (n = 111, 59% women). In Study 2, professional geriatric nurses (n = 95, all women) who worked two consecutive 12-hour day shifts recorded their well-being (fatigue, distress, effort motivation) at the beginning and end of their shifts. The effect of rest-break intention on the change in well-being was determined by multilevel modeling.
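A hedged sketch of the Study 1 analysis follows; the abstract states only that a generalized linear model was used, so the Poisson family for count data and the tiny person-day dataset below are illustrative assumptions:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical person-day diary data (values are illustrative, not the study's)
df = pd.DataFrame({
    "person":    [1, 1, 2, 2, 3, 3],
    "intention": [4.0, 4.0, 2.5, 2.5, 3.0, 3.0],  # rest-break intention score
    "n_breaks":  [3, 4, 1, 2, 2, 3],              # rest breaks recorded that day
})

# Poisson GLM: daily break count as a function of rest-break intention
fit = smf.glm("n_breaks ~ intention", data=df,
              family=sm.families.Poisson()).fit()
print(fit.summary())
```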
Rest-break intention was positively associated with the frequency of rest breaks (Study 1) and reduced the increase of fatigue and distress over the workday (Study 2).
The results indicate that individual differences account for the number of breaks an employee takes and, as a consequence, for variations in work-related fatigue and distress.
Strengthening rest-break intentions may help to increase rest-break behavior to avoid the buildup of fatigue and distress over a workday.
The goals of this study were to assess the risk-identification component of mental models using standard elicitation methods and to examine how university campus alerts related to these mental models.
People fail to follow protective action recommendations in emergency warnings. Past research has yet to examine cognitive processes that influence emergency decision-making.
Study 1 examined 2 years of emergency alerts distributed by a large southeastern university. In Study 2, participants listed emergencies in a thought-listing task. Study 3 measured participants’ time to decide if a situation was an emergency.
The university distributed the most alerts about an armed person, theft, and fire. In Study 2, participants most frequently listed fire, car accident, heart attack, and theft. In Study 3, participants most quickly decided that a bomb, murder, fire, tornado, and rape were emergencies. They were slowest to decide that a suspicious package and identity theft were emergencies.
Recent interaction with warnings was only somewhat related to participants’ mental models of emergencies. Risk identification precedes decision-making and applying protective actions. Examining these characteristics of people’s mental representations of emergencies is fundamental to further understand why some emergency warnings go ignored.
People must believe a situation is serious enough to categorize it as an emergency before they will take the protective action recommendations in an emergency warning. Research must continue to examine the problem of people ignoring warning communications, as important cognitive factors had not been explored until the present research.
The overall purpose was to understand the effects of handoff protocols using meta-analytic approaches.
Standardized protocols have been required by the Joint Commission, but meta-analytic integration of handoff protocol research has not been conducted.
The primary outcomes investigated were handoff information passed during transitions of care, patient outcomes, provider outcomes, and organizational outcomes. Sources included Medline, SAGE, Embase, PsycINFO, and PubMed, searched from the earliest date available through March 30, 2015. Initially, 4,556 articles were identified, with 4,520 removed during screening. This process left a final set of 36 articles, all of which used pre-/postintervention designs implemented in live clinical/hospital settings. We also conducted a moderation analysis based on the number of items contained in each protocol to determine whether protocol length led to systematic changes in effect sizes of the outcome variables.
Meta-analyses were conducted on 34,527 pre- and 30,072 postintervention data points. Results indicate positive effects on all four outcomes: handoff information (g = .71, 95% confidence interval [CI] [.63, .79]), patient outcomes (g = .53, 95% CI [.41, .65]), provider outcomes (g = .51, 95% CI [.41, .60]), and organizational outcomes (g = .29, 95% CI [.23, .35]). We found protocols to be effective, but there is significant publication bias and heterogeneity in the literature. Due to publication bias, we further searched the gray literature through greylit.org and found another 347 articles, although none were relevant to this research. Our moderation analysis demonstrates that for handoff information, protocols using 12 or more items led to a significantly higher proportion of information passed compared with protocols using 11 or fewer items. Further, there were numerous negative outcomes found throughout this meta-analysis, with trends demonstrating that protocols can increase the time for handover and the rate of errors of omission.
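To make the pooling of Hedges' g values concrete, the sketch below implements a standard DerSimonian-Laird random-effects model with a 95% confidence interval; the per-study effect sizes and variances in the example call are invented for illustration and are not the meta-analysis data:

```python
import numpy as np

def random_effects_pool(g, var):
    """DerSimonian-Laird random-effects pooling of per-study Hedges' g values
    with sampling variances `var`; returns pooled g, 95% CI, and tau-squared."""
    g, var = np.asarray(g, float), np.asarray(var, float)
    w = 1.0 / var                                    # fixed-effect weights
    g_fe = np.sum(w * g) / np.sum(w)
    q = np.sum(w * (g - g_fe) ** 2)                  # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(g) - 1)) / c)          # between-study variance
    w_re = 1.0 / (var + tau2)                        # random-effects weights
    g_re = np.sum(w_re * g) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return g_re, (g_re - 1.96 * se, g_re + 1.96 * se), tau2

# Illustrative (not the study's) per-study effects
print(random_effects_pool([0.6, 0.8, 0.7], [0.02, 0.03, 0.025]))
```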
These results demonstrate that handoff protocols tend to improve results on multiple levels, including handoff information passed and patient, provider, and organizational outcomes. These findings come with the caveat that publication bias exists in the literature on handoffs. Instances where protocols can lead to negative outcomes are also discussed.
Significant effects were found for protocols across provider types, regardless of expertise or area of clinical focus. It also appears that more thorough protocols lead to more information being passed, especially when those protocols consist of 12 or more items. However, publication bias is an apparent feature of this literature base. Recommendations to reduce it include changing the way articles are screened and published.
To analyze the effect of mental fatigue on a cognitive task and to determine an appropriate start time for rest breaks in work environments.
Mental fatigue has been recognized as one of the most important factors influencing individual performance. Subjective and physiological measures are popular methods for analyzing fatigue, but they are restricted to physical experiments. Computational cognitive models are useful for predicting operator performance and can be used for analyzing fatigue in the design phase, particularly in industrial operations and inspections where cognitive tasks are frequent and the effects of mental fatigue are crucial.
A cyclic mental task is modeled in the ACT-R architecture, and the effect of mental fatigue on response time and error rate is studied. The task involves visual inspections in a production line or control workstation, where an operator has to check products' conformity to specifications. First, simulated and experimental results are compared using correlation coefficients and paired t tests. After validation of the model, the effects are studied using human and simulated results obtained from 50-minute tests.
During the last 20 minutes of the tests, response time increased by 20%, and during the last 12.5 minutes, the error rate increased by 7% on average.
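A minimal sketch of the general idea (not the authors' ACT-R implementation) is shown below: a time-on-task fatigue factor scales both cycle response time and error probability across a 50-minute inspection task; the baseline values, onset time, and growth slope are assumptions:

```python
import random

BASE_RT = 1.2      # seconds per inspection cycle (assumed)
BASE_ERR = 0.05    # baseline error probability (assumed)

def fatigue_factor(t_min, onset=30.0, slope=0.01):
    """Linear fatigue growth after `onset` minutes; purely illustrative."""
    return 1.0 + max(0.0, t_min - onset) * slope

def simulate(duration_min=50, cycle_s=10):
    """Return (time in minutes, response time, error flag) per inspection cycle."""
    records, t = [], 0.0
    while t < duration_min * 60:
        f = fatigue_factor(t / 60)
        rt = BASE_RT * f
        error = random.random() < BASE_ERR * f
        records.append((t / 60, rt, error))
        t += cycle_s
    return records

recs = simulate()
early = [rt for t, rt, _ in recs if t < 30]
late = [rt for t, rt, _ in recs if t >= 30]
print((sum(late) / len(late)) / (sum(early) / len(early)))  # RT ratio, late vs. early
```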
The proper start time for the rest period can be identified by setting a limit on the error rate or response time.
The proposed model can be applied early in production planning to decrease the negative effects of mental fatigue by predicting the operator performance. It can also be used for determining the rest breaks in the design phase without an operator in the loop.
We investigated the effects of automatic target detection (ATD) on the detection and identification performance of soldiers.
Prior studies have shown that highlighting targets can aid their detection. We provided soldiers with ATD that was more likely to detect one target identity than another, potentially acting as an implicit identification aid.
Twenty-eight soldiers detected and identified simulated human targets in an immersive virtual environment with and without ATD. Task difficulty was manipulated by varying scene illumination (day, night). The ATD identification bias was also manipulated (hostile bias, no bias, and friendly bias). We used signal detection measures to treat the identification results.
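For readers unfamiliar with the signal detection measures referred to here, the sketch below computes sensitivity (d′) and the decision criterion (c) from identification counts using a simple log-linear correction; the counts in the example call are hypothetical, and the study's exact measures (e.g., β) may differ:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from identification counts.
    A log-linear correction avoids infinite z-scores at 0% or 100% rates."""
    hr = (hits + 0.5) / (hits + misses + 1)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(hr) - z(far)
    criterion = -0.5 * (z(hr) + z(far))
    return d_prime, criterion

print(sdt_measures(hits=40, misses=10, false_alarms=8, correct_rejections=42))
```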
ATD presence improved detection performance, especially under high task difficulty (night illumination). Identification sensitivity was greater for cued than uncued targets. The identification decision criterion for cued targets varied with the ATD identification bias but showed a "sluggish beta" effect.
ATD helps soldiers detect and identify targets. The effects of biased ATD on identification should be considered with respect to the operational context.
Less-than-perfectly-reliable ATD is a useful detection aid for dismounted soldiers. Disclosure of known ATD identification bias to the operator may aid the identification process.
To propose a driver attention theory based on the notion of driving as a satisficing and partially self-paced task and, within this framework, present a definition for driver inattention.
Many definitions of driver inattention and distraction have been proposed, but they are difficult to operationalize, and they are either unreasonably strict and inflexible or suffer from hindsight bias.
Existing definitions of driver distraction are reviewed and their shortcomings identified. We then present the minimum required attention (MiRA) theory to overcome these shortcomings. Suggestions on how to operationalize MiRA are also presented.
MiRA describes which role the attention of the driver plays in the shared "situation awareness of the traffic system." A driver is considered attentive when sampling sufficient information to meet the demands of the system, namely, that he or she fulfills the preconditions to be able to form and maintain a good enough mental representation of the situation. A driver should only be considered inattentive when information sampling is not sufficient, regardless of whether the driver is concurrently executing an additional task or not.
The MiRA theory builds on well-established driver attention theories. It goes beyond available driver distraction definitions by first defining what a driver needs to be attentive to, being free from hindsight bias, and allowing the driver to adapt to the current demands of the traffic situation through satisficing and self-pacing. MiRA has the potential to provide the stepping stone for unbiased and operationalizable inattention detection and classification.
The purpose was to determine if Soldier rucksack load, marching distance, and average heart rate (HR) during shooting affect the probability of hitting the target.
Infantry Soldiers routinely carry heavy rucksack loads and are expected to engage enemy targets should a threat arise.
Twelve male Soldiers performed two 11.8 km marches in forested terrain at 4.3 km/hour on separate days (randomized, counterbalanced design). The Rifleman load consisted of protective armor (26.1 kg); the Rucksack load included the Rifleman load plus a weighted rucksack (48.5 kg). Soldiers performed a live-fire shooting task (48 targets) prior to the march, in the middle of the march, and at the end of the march. HR was collected during the shooting task. Data were assessed with multilevel logistic regression controlling for the multiple observations on each subject and shooting target distance. Predicted probabilities for hitting the target were calculated.
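As a hedged sketch of this kind of analysis, the code below fits a shot-level logistic regression with subject-clustered standard errors as a simplified stand-in for the study's multilevel model and computes predicted hit probabilities; the synthetic data and coefficients are assumptions, not the study's records:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400  # hypothetical shot-level observations

# Synthetic data standing in for shot-level records (all values assumed)
df = pd.DataFrame({
    "subject":  rng.integers(1, 13, n),             # 12 Soldiers
    "load_kg":  rng.choice([26.1, 48.5], n),        # Rifleman vs. Rucksack load
    "hr_bpm":   rng.normal(130, 15, n),             # heart rate during shooting
    "march_km": rng.choice([0.0, 5.9, 11.8], n),    # pre-, mid-, post-march
})
logit_p = 1.5 - 0.02 * (df.hr_bpm - 130) - 0.01 * df.load_kg
df["hit"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Logistic regression with a three-way interaction and subject-clustered errors
fit = smf.logit("hit ~ load_kg * hr_bpm * march_km", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["subject"]}, disp=False)

# Predicted probability of a hit at a fixed heart rate of 150 bpm
print(fit.predict(df.assign(hr_bpm=150)).mean())
```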
There was a three-way interaction among rucksack load, average HR, and march (p = .02). Graphical assessment of predicted probabilities indicated that, regardless of load, marching increased shooting performance. Increases in shooting HR after marching resulted in a lower probability of hitting the target, and rucksack load had inconsistent effects on marksmanship.
Early evidence suggests that rucksack load and marching may not uniformly decrease marksmanship but that an inverted-U phenomenon may govern changes in marksmanship.
The effects of load and marching on marksmanship are not linear; the abilities of Soldiers should be continuously monitored to understand their capabilities in a given scenario.
The objective of the present research was to understand drivers’ interaction patterns with hybrid electric vehicles’ (HEV) eco-features (electric propulsion, regenerative braking, neutral mode) and their relationship to fuel efficiency and driver characteristics (technical system knowledge, eco-driving motivation).
Eco-driving (driving behaviors performed to achieve higher fuel efficiency) has the potential to reduce CO2 emissions caused by road vehicles. Eco-driving in HEVs is particularly challenging due to the systems’ dynamic energy flows. As a result, drivers are likely to show diverse eco-driving behaviors, depending on factors like knowledge and motivation. The eco-features represent an interface for the control of the systems’ energy flows.
A sample of 121 HEV drivers who had routinely logged their fuel consumption prior to the study completed an online questionnaire.
Drivers’ interaction patterns with the eco-features were related to fuel efficiency. A common factor was identified in an exploratory factor analysis, characterizing the intensity of actively dealing with electric energy, which was also related to fuel efficiency. Driver characteristics were not related to this factor, yet they were significant predictors of fuel efficiency.
From the perspective of user–energy interaction, the relationship of the aggregated factor to fuel efficiency emphasizes the central role of drivers’ perception of and interaction with energy conversions in determining HEV eco-driving success.
To arrive at an in-depth understanding of drivers’ eco-driving behaviors that can guide interface design, authors of future research should be concerned with the psychological processes that underlie drivers’ interaction patterns with eco-features.
The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors.
Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players’ attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes.
The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented.
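As a hedged illustration of the exploratory factor analysis step (the item responses below are synthetic, and only the nine-factor structure is taken from the abstract; the rotation choice and item count are assumptions), a minimal run might look like this:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical item-response matrix: rows = respondents, columns = questionnaire items
rng = np.random.default_rng(1)
responses = rng.integers(1, 8, size=(629, 55)).astype(float)  # 7-point items (assumed)

# Exploratory factor analysis with nine factors and varimax rotation
efa = FactorAnalysis(n_components=9, rotation="varimax").fit(responses)
loadings = efa.components_.T        # item-by-factor loading matrix
print(loadings.shape)               # (55, 9)
```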
A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity.
The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience.
The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users.
We examine how transitions in task demand are manifested in mental workload and performance in a dual-task setting.
Hysteresis has been defined as the ongoing influence of demand levels prior to a demand transition. Authors of previous studies predominantly examined hysteretic effects in terms of performance. However, little is known about the temporal development of hysteresis in mental workload.
A simulated driving task was combined with an auditory memory task. Participants were instructed to prioritize driving or to prioritize both tasks equally. Three experimental conditions with low, high, and low task demands were constructed by manipulating the frequency of lane changing. Multiple measures of subjective mental workload were taken during experimental conditions.
Contrary to our prediction, no hysteretic effects were found after the high-to-low demand transition. However, a hysteretic effect in mental workload was found within the high-demand condition, which diminished toward the end of that condition. Priority instructions were not reflected in performance.
Online assessment of both performance and mental workload demonstrates the transient nature of hysteretic effects. An explanation for the observed hysteretic effect in mental workload is offered in terms of effort regulation.
An informed arrival at the scene is important in safety operations, but peaks in mental workload should be avoided to prevent buildup of fatigue. Therefore, communication technologies should incorporate the historical profile of task demand.