Previous studies support a link between moral disgust and impurity, whereas anger is linked to harm. We challenged these strict correspondences by showing that disgust is activated in response to information about moral character, even for harm violations. By contrast, anger is activated in response to information about actions, including their moral wrongness and consequences. Study 1 examined disgust and anger in response to an action that suggests bad moral character (animal cruelty) versus an action that is seen as inherently more wrong (domestic abuse). Animal cruelty was associated with more disgust than domestic abuse was, whereas domestic abuse was associated with more anger. Studies 2 and 3 manipulated character by varying the agent’s desire to cause harm and also varied the action’s harmful consequences. Desire to harm predicted only disgust (controlling for anger), whereas consequences were more closely related to anger (controlling for disgust). Taken together, these results indicate that disgust arises in response to evidence of bad moral character, not just to impurity.
The most salient dimension of men’s sexual orientation is gender: attraction to males versus females. A second dimension is sexual maturity: attraction to children versus adults. A less appreciated dimension is location: attraction to other individuals versus the sexual fantasy of being one of those individuals. Men sexually aroused by the idea or fantasy of being the kinds of individuals to whom they are sexually attracted have an erotic-target identity inversion (ETII). We conducted an online survey to investigate the prevalence and phenomenology of ETIIs among 475 men sexually attracted to children. Autopedophilia, or sexual arousal by the idea of being a child, was common. Furthermore, autopedophilic men tended to be sexually aroused by imagining themselves as the kinds of children (with respect to gender and age) to whom they are sexually attracted. Results support the concept of ETIIs and exemplify the simultaneous importance of three dimensions of male sexual orientation.
Understanding how human populations naturally respond to and cope with risk is important for fields ranging from psychology to public health. We used geophysical and individual-level mobile-phone data (mobile apps, telecommunications, and Web usage) of 157,358 victims of the 2013 Ya’an earthquake to diagnose the effects of the disaster and investigate how experiencing real risk (at different levels of intensity) changes behavior. Rather than limiting human activity, higher earthquake intensity resulted in graded increases in usage of communications apps (e.g., social networking, messaging), functional apps (e.g., informational tools), and hedonic apps (e.g., music, videos, games). Combining mobile data with a field survey (N = 2,000) completed 1 week after the earthquake, we used an instrumental-variable approach to show that only increases in hedonic behavior reduced perceived risk. Thus, hedonic behavior could potentially serve as a population-scale coping and recovery strategy that is often missing in risk management and policy considerations.
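The instrumental-variable step in this abstract can be made concrete with a minimal two-stage least squares (2SLS) sketch in Python. All names and numbers below are hypothetical stand-ins (shaking intensity as the instrument, hedonic app usage as the endogenous regressor, perceived risk as the outcome); this illustrates the estimator's logic, not the authors' analysis.

```python
# Minimal 2SLS sketch of the abstract's instrumental-variable logic.
# Simulated data and invented variable names; not the authors' code.
import numpy as np

rng = np.random.default_rng(0)
n = 2000                                  # field-survey sample size
intensity = rng.uniform(1, 9, n)          # instrument: earthquake intensity
confound = rng.normal(size=n)             # unobserved confounder
hedonic_use = 0.5 * intensity + confound + rng.normal(size=n)
perceived_risk = -0.8 * hedonic_use + confound + rng.normal(size=n)

def ols(y, x):
    """Intercept and slope from ordinary least squares."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project the endogenous regressor onto the instrument.
a0, a1 = ols(hedonic_use, intensity)
hedonic_hat = a0 + a1 * intensity

# Stage 2: regress the outcome on the stage-1 fitted values.
_, iv_est = ols(perceived_risk, hedonic_hat)
_, naive = ols(perceived_risk, hedonic_use)
print(f"2SLS estimate: {iv_est:.2f} (true effect -0.80)")
print(f"Naive OLS:     {naive:.2f} (biased by the confounder)")
```

Because the confounder moves both app usage and perceived risk, naive OLS is biased; the instrument recovers the causal effect, which is the point of the IV design described above.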
The efficiency of averaging properties of sets without encoding redundant details is analogous to gestalt proposals that perception is parsimoniously organized as a function of recurrent order in the world. This similarity suggests that grouping and averaging are part of a broader set of strategies allowing the visual system to circumvent capacity limitations. To examine how gestalt grouping affects the manner in which information is averaged and remembered, I compared the error in observers’ adjustments of remembered sizes of individual circles in two different mean-size sets defined by similarity, proximity, connectedness, or a common region. Overall, errors were more similar within the same gestalt-defined groups than between different gestalt-defined groups, such that the remembered sizes of individual circles were biased toward the mean size of their respective gestalt-defined groups. These results imply that gestalt grouping facilitates perceptual averaging to minimize the error with which individual items are encoded, thereby optimizing the efficiency of visual short-term memory.
In two experiments, we explored the effects of noticing and remembering change in the misinformation paradigm. People watched slide shows, read narratives containing misinformation about the events depicted in the slide shows, and took a recognition test on which they reported whether any details had changed between the slides and the narratives. As expected, we found a strong misinformation effect overall. In some cases, however, misinformation led to improved recognition, which is the opposite of the usual finding. Critically, misinformation led to improved recognition of the original event when subjects detected and remembered a change between the original event and the postevent information. Our research agrees with other findings from retroactive-interference paradigms and can be interpreted within the recursive-remindings framework, according to which detecting and remembering change can enhance retention. We conclude that the misinformation effect occurs mostly for witnessed details that are not particularly memorable. In the case of more memorable details, providing misinformation can actually facilitate later recollection of the original events.
Scholars have argued that opposition to welfare is, in part, driven by stereotypes of African Americans. This argument assumes that when individuals think about welfare, they spontaneously think about Black recipients. We investigated people’s mental representations of welfare recipients. In Studies 1 and 2, we used a perceptual task to visually estimate participants’ mental representations of welfare recipients. Compared with the average non-welfare-recipient image, the average welfare-recipient image was perceived (by a separate sample) as more African American and more representative of stereotypes associated with welfare recipients and African Americans. In Study 3, participants were asked to determine whether they supported giving welfare benefits to the people pictured in the average welfare-recipient and non-welfare-recipient images generated in Study 2. Participants were less supportive of giving welfare benefits to the person shown in the welfare-recipient image than to the person shown in the non-welfare-recipient image. The results suggest that mental images of welfare recipients may bias attitudes toward welfare policies.
The general view in psychological science is that natural categories obey a coherent, family-resemblance principle. In this investigation, we documented an example of an important exception to this principle: Results of a multidimensional-scaling study of igneous, metamorphic, and sedimentary rocks (Experiment 1) suggested that the structure of these categories is disorganized and dispersed. This finding motivated us to explore what might be the optimal procedures for teaching dispersed categories, a goal that is likely critical to science education in general. Subjects in Experiment 2 learned to classify pictures of rocks into compact or dispersed high-level categories. One group learned the categories through focused high-level training, whereas a second group was required to simultaneously learn classifications at a subtype level. Although high-level training led to enhanced performance when the categories were compact, subtype training was better when the categories were dispersed. We provide an interpretation of the results in terms of an exemplar-memory model of category learning.
Observers experience affordance-specific biases in visual processing for objects within the hands’ grasping space, but the mechanism that tunes visual cognition to facilitate action remains unknown. I investigated the hypothesis that altered vision near the hands is a result of experience-driven plasticity. Participants performed motion-detection and form-perception tasks—while their hands were either near the display, in atypical grasping postures, or positioned in their laps—both before and after learning novel grasp affordances. Participants showed enhanced temporal sensitivity for stimuli viewed near the backs of the hands after training to execute a power grasp using the backs of their hands (Experiment 1), but showed enhanced spatial sensitivity for stimuli viewed near the tips of their little fingers after training to use their little fingers to execute a precision grasp (Experiment 2). These results show that visual biases near the hands are plastic, facilitating processing of information relevant to learned grasp affordances.
On many occasions, people spontaneously or deliberately take the perspective of a person facing them rather than their own perspective. How is this done? Using a spatial perspective task in which participants were asked to identify objects at specific locations, we found that self-perspective judgments were faster for objects presented to the right, rather than the left, and for objects presented closer to the participants’ own bodies. Strikingly, taking the opposing perspective of another person led to a reversal (i.e., remapping) of these effects, with reference to the other person’s position (Experiment 1). A remapping of spatial relations was also observed when an empty chair replaced the other person (Experiment 2), but not when access to the other viewpoint was blocked (Experiment 3). Thus, when the spatial scene allows a physically feasible but opposing point of view, people respond as if their own bodies were in that place. Imagination can thus overcome perception.
Vision in the fovea, the center of the visual field, is much more accurate and detailed than vision in the periphery. Yet this poor peripheral resolution is at odds with the rich phenomenology of peripheral vision. Here, we investigated a visual illusion that shows that detailed peripheral visual experience is partially based on a reconstruction of reality. Participants fixated on the center of a visual display in which central stimuli differed from peripheral stimuli. Over time, participants perceived that the peripheral stimuli changed to match the central stimuli, so that the display seemed uniform. We showed that a wide range of visual features, including shape, orientation, motion, luminance, pattern, and identity, are susceptible to this uniformity illusion. We argue that the uniformity illusion is the result of a reconstruction of sparse visual information (from the periphery) based on more readily available detailed visual information (from the fovea), which gives rise to a rich, but illusory, experience of peripheral vision.
A recent study has linked individual differences in face recognition to rs237887, a single-nucleotide polymorphism (SNP) of the oxytocin receptor gene (OXTR; Skuse et al., 2014). In that study, participants were assessed using the Warrington Recognition Memory Test for Faces, but performance on Warrington’s test has been shown not to rely purely on face recognition processes. We administered the widely used Cambridge Face Memory Test—a purer test of face recognition—to 370 participants. Performance was not significantly associated with rs237887, with 16 other SNPs of OXTR that we genotyped, or with a further 75 imputed SNPs. We also administered three other tests of face processing (the Mooney Face Test, the Glasgow Face Matching Test, and the Composite Face Test), but performance was never significantly associated with rs237887 or with any of the other genotyped or imputed SNPs, after corrections for multiple testing. In addition, we found no associations between OXTR and Autism-Spectrum Quotient scores.
Because linguistic communication is inherently noisy and uncertain, adult language comprehenders integrate bottom-up cues from speech perception with top-down expectations about what speakers are likely to say. Further, in line with the predictions of ideal-observer models, past results have shown that adult comprehenders flexibly adapt how much they rely on these two kinds of cues in proportion to their changing reliability. Do children also show evidence of flexible, expectation-based language comprehension? We presented preschoolers with ambiguous utterances that could be interpreted in two different ways, depending on whether the children privileged perceptual input or top-down expectations. Across three experiments, we manipulated the reliability of both their perceptual input and their expectations about the speaker’s intended meaning. As predicted by noisy-channel models of speech processing, results showed that 4- and 5-year-old—but perhaps not younger—children flexibly adjusted their interpretations as cues changed in reliability.
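For readers unfamiliar with noisy-channel models, the account sketched in this abstract reduces to a one-line Bayesian combination. The formulation below is the textbook version, not necessarily the exact model used in the paper:

```latex
% Noisy-channel comprehension (textbook form): the comprehender infers
% the intended meaning s_i from the perceived utterance u by weighing
% perceptual evidence against expectations about the speaker.
\[
P(s_i \mid u) \;\propto\;
\underbrace{P(u \mid s_i)}_{\text{bottom-up perceptual evidence}}
\times
\underbrace{P(s_i)}_{\text{top-down expectation}}
\]
% Degrading either term's reliability should shift interpretations
% toward the other term, which is the flexibility the experiments test.
```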
It is well established that emotion and cognition interact in humans, but such an interaction has not been extensively studied in nonhuman primates. We investigated whether emotional value can affect nonhuman primates’ processing of stimuli that are only mentally represented, not visually available. In a short-term memory task, baboons memorized the location of two target squares of the same color, which were presented with a distractor of a different color. Through prior long-term conditioning, one of the two colors had acquired a negative valence. Subjects were slower and less accurate on the memory task when the targets were negative than when they were neutral. In contrast, subjects were faster and more accurate when the distractors were negative than when they were neutral. Some of these effects were modulated by individual differences in emotional disposition. Overall, the results reveal a pattern of cognitive avoidance of negative stimuli, and show that emotional value alters cognitive processing in baboons even when the stimuli are not physically present. This suggests that emotional influences on cognition are deeply rooted in evolutionary continuity.
In the current study, we investigated windows for enhanced learning of cognitive skills during adolescence. Six hundred thirty-three participants (11–33 years old) were divided into four age groups, and each participant was randomly allocated to one of three training groups. Each training group completed up to 20 days of online training in numerosity discrimination (i.e., discriminating small from large numbers of objects), relational reasoning (i.e., detecting abstract relationships between groups of items), or face perception (i.e., identifying differences in faces). Training yielded some improvement in performance on the numerosity-discrimination task, but only in older adolescents or adults. In contrast, training in relational reasoning improved performance on that task in all age groups, but training benefits were greater for people in late adolescence and adulthood than for people earlier in adolescence. Training did not increase performance on the face-perception task for any age group. Our findings suggest that for certain cognitive skills, training during late adolescence and adulthood yields greater improvement than training earlier in adolescence, which highlights the relevance of this late developmental stage for education.
Humans can communicate even with few existing conventions in common (e.g., when they lack a shared language). We explored what makes this phenomenon possible with a nonlinguistic experimental task requiring participants to coordinate toward a common goal. We observed participants creating new communicative conventions using the most minimal possible signals. These conventions, furthermore, changed on a trial-by-trial basis in response to shared environmental and task constraints. Strikingly, as a result, signals of the same form successfully conveyed contradictory messages from trial to trial. Such behavior is evidence for the involvement of what we term joint inference, in which social interactants spontaneously infer the most sensible communicative convention in light of the common ground between them. Joint inference may help to elucidate how communicative conventions emerge instantaneously and how they are modified and reshaped into the elaborate systems of conventions involved in human communication, including natural languages.
Past research has suggested a fundamental principle of price precision: The more precise an opening price, the more it anchors counteroffers. The present research challenges this principle by demonstrating a too-much-precision effect. Five experiments (involving 1,320 experts and amateurs in real-estate, jewelry, car, and human-resources negotiations) showed that increasing the precision of an opening offer had positive linear effects for amateurs but inverted-U-shaped effects for experts. Anchor precision backfired because experts saw too much precision as reflecting a lack of competence. This negative effect held unless first movers gave rationales that boosted experts’ perception of their competence. Statistical mediation and experimental moderation established the critical role of competence attributions. This research disentangles competing theoretical accounts (attribution of competence vs. scale granularity) and qualifies two putative truisms: that anchors affect experts and amateurs equally, and that more precise prices are linearly more potent anchors. The results refine current theoretical understanding of anchoring and have significant implications for everyday life.
It is a fundamental human need to secure and sustain a sense of social belonging. Previous research has shown that individuals who are lonely are more likely than people who are not lonely to attribute humanlike traits (e.g., free will) to nonhuman agents (e.g., an alarm clock that makes people get up by moving away from the sleeper), presumably in an attempt to fulfill unmet needs for belongingness. We directly replicated the association between loneliness and anthropomorphism in a larger sample (N = 178); furthermore, we showed that reminding people of a close, supportive relationship reduces their tendency to anthropomorphize. This finding provides support for the idea that the need for belonging has causal effects on anthropomorphism. Last, we showed that attachment anxiety—characterized by intense desire for and preoccupation with closeness, fear of abandonment, and hypervigilance to social cues—was a stronger predictor of anthropomorphism than loneliness was. This finding helps clarify the mechanisms underlying anthropomorphism and supports the idea that anthropomorphism is a motivated process reflecting the active search for potential sources of connection.
According to Bayesian models, perception and cognition depend on the optimal combination of noisy incoming evidence with prior knowledge of the world. Individual differences in perception should therefore be jointly determined by a person’s sensitivity to incoming evidence and his or her prior expectations. It has been proposed that individuals with autism have flatter prior distributions than do nonautistic individuals, which suggests that prior variance is linked to the degree of autistic traits in the general population. We tested this idea by studying how perceived speed changes during pursuit eye movement and at low contrast. We found that individual differences in these two motion phenomena were predicted by differences in thresholds and autistic traits when combined in a quantitative Bayesian model. Our findings therefore support the flatter-prior hypothesis and suggest that individual differences in prior expectations are more systematic than previously thought. In order to be revealed, however, individual differences in sensitivity must also be taken into account.
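The quantitative Bayesian model invoked here can be illustrated with the standard Gaussian cue-combination result; the paper's actual model of speed perception may differ in detail, so treat this as a sketch of the logic only:

```latex
% Gaussian simplification: the perceived speed \hat{v} is the
% precision-weighted average of the sensory measurement m and the
% prior mean \mu_p (for motion, typically a slow-speed prior).
\[
\hat{v} \;=\;
\frac{m/\sigma_m^2 + \mu_p/\sigma_p^2}{1/\sigma_m^2 + 1/\sigma_p^2}
\]
% Low contrast inflates the sensory noise \sigma_m^2 and pulls \hat{v}
% toward \mu_p; a flatter prior (larger \sigma_p^2, as proposed for
% autism) weakens that pull. Jointly estimating \sigma_m^2 and
% \sigma_p^2 is how prior variance can be separated from sensitivity.
```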
Research has shown that people who feel powerful are more likely to act than those who feel powerless, whereas people who feel ambivalent are less likely to act than those whose reactions are univalent (entirely positive or entirely negative). But what happens when powerful people also are ambivalent? On the basis of the self-validation theory of judgment, we hypothesized that power and ambivalence would interact to predict individuals’ action. Because power can validate individuals’ reactions, we reasoned that feeling powerful strengthens whatever reactions people have during a decision. It can strengthen univalent reactions and increase action orientation, as shown in past research. Among people who hold an ambivalent judgment, however, those who feel powerful would be less action oriented than those who feel powerless. Two experiments provide evidence for this hypothesized interactive effect of power and ambivalence on individuals’ action tendencies during both positive decisions (promoting an employee; Experiment 1) and negative decisions (firing an employee; Experiment 2). In summary, when individuals’ reactions are ambivalent, power increases the likelihood of inaction.
We explore how preferences for attributes are constructed when people choose between multiattribute options. As found in prior research, we observed that while people make decisions, their preferences for the attributes in question shift to support the emerging choice, thus enabling confident decisions. The novelty of the studies reported here is that participants repeated the same task 6 to 8 weeks later. We found that between tasks, preferences returned to near their original levels, only to shift again to support the second choice, regardless of which choice participants made. Similar patterns were observed in a free-choice task (Study 1) and when the favorableness of options was manipulated (Study 2). It follows that preferences behave in an elastic manner: In the absence of situational pressures, they rest at baseline levels, but during the process of reaching a decision, they morph to support the chosen options. This elasticity appears to facilitate confident decision making in the face of decisional conflict.
Instilling values in children is among the cornerstones of every society. There is wide agreement that beyond academic teaching, schools play an important role in shaping schoolchildren’s character, imparting values such as curiosity, achievement, benevolence, and citizenship. Despite the importance of this topic, we know very little about whether and how schools affect children’s values. In this large-scale longitudinal study, we examined school principals’ roles in the development of children’s values. We hypothesized that relationships exist between principals’ values and changes in children’s values through the mediating effect of the school climate. To test our predictions, we collected data from 252 school principals, 3,658 teachers, and 49,401 schoolchildren. A multilevel structural-equation-modeling analysis yielded overall support for our hypotheses. These findings contribute to understanding the development of children’s values and the far-reaching impact of leaders’ values. They also demonstrate effects of schools on children beyond those on academic achievement.
When perceiving rich sensory information, some people may integrate its various aspects, whereas other people may selectively focus on its most salient aspects. We propose that neural gain modulates the trade-off between breadth and selectivity, such that high gain focuses perception on those aspects of the information that have the strongest, most immediate influence, whereas low gain allows broader integration of different aspects. We illustrate our hypothesis using a neural-network model of ambiguous-letter perception. We then report an experiment demonstrating that, as predicted by the model, pupil-diameter indices of higher gain are associated with letter perception that is more selectively focused on the letter’s shape or, if primed, its semantic content. Finally, we report a recognition-memory experiment showing that the relationship between gain and selective processing also applies when the influence of different stimulus features is voluntarily modulated by task demands.
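One common way to formalize the gain account in this abstract (an assumption made here for illustration, not a formula quoted from the paper) is a softmax competition whose gain parameter controls selectivity:

```latex
% Softmax with gain g over competing units with inputs x_j:
\[
p_i \;=\; \frac{e^{\,g x_i}}{\sum_j e^{\,g x_j}}
\]
% Large g approaches winner-take-all processing (selective focus on
% the strongest feature); small g flattens the distribution,
% permitting broader integration across features.
```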
The ability to regulate emotions is central to well-being, but healthy emotion regulation may not merely be about using the "right" strategies. According to the strategy-situation-fit hypothesis, emotion-regulation strategies are conducive to well-being only when used in appropriate contexts. This study is the first to test the strategy-situation-fit hypothesis using ecological momentary assessment of cognitive reappraisal—a putatively adaptive strategy. We expected people who used reappraisal more in uncontrollable situations and less in controllable situations to have greater well-being than people with the opposite pattern of reappraisal use. Healthy participants (n = 74) completed measures of well-being in the lab and used a smartphone app to report their use of reappraisal and perceived controllability of their environment 10 times a day for 1 week. Results supported the strategy-situation-fit hypothesis. Participants with relatively high well-being used reappraisal more in situations they perceived as lower in controllability and less in situations they perceived as higher in controllability. In contrast, we found little evidence for an association between greater well-being and greater mean use of reappraisal across situations.
Positive affect (e.g., attentiveness) and negative affect (e.g., upset) fluctuate over time. We examined genetic influences on interindividual differences in the day-to-day variability of affect (i.e., ups and downs) and in average affect over the duration of a month. Once a day, 17-year-old twins in the United Kingdom (N = 447) rated their positive and negative affect online. The mean and standard deviation of each individual’s daily ratings across the month were used as the measures of that individual’s average affect and variability of affect. Analyses revealed that the average of negative affect was significantly heritable (.53), but the average of positive affect was not; instead, the latter showed significant shared environmental influences (.42). Fluctuations across the month were significantly heritable for both negative affect (.54) and positive affect (.34). The findings support the two-factor theory of affect, which posits that positive affect is more situational and negative affect is more dispositional.
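For readers outside behavior genetics, estimates like the .53 and .42 above come from comparing identical (MZ) and fraternal (DZ) twin resemblance. The classical Falconer decomposition below conveys the logic; the study itself presumably used full biometric model fitting rather than these point formulas:

```latex
% Falconer's formulas, with r_MZ and r_DZ the within-pair correlations:
\[
h^2 = 2\,(r_{MZ} - r_{DZ}), \qquad
c^2 = 2\,r_{DZ} - r_{MZ}, \qquad
e^2 = 1 - r_{MZ}
\]
% h^2 = additive genetic variance (heritability), c^2 = shared
% environment, e^2 = nonshared environment plus measurement error.
```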
The importance of executive functioning for later life outcomes, along with its potential to be positively affected by intervention programs, motivates the need to find early markers of executive functioning. In this study, 18-month-olds performed three executive-function tasks—involving simple inhibition, working memory, and more complex inhibition—and a motion-capture task assessing prospective motor control during reaching. We demonstrated that prospective motor control, as measured by the peak velocity of the first movement unit, is related to infants’ performance on simple-inhibition and working-memory tasks. The current study provides evidence that motor control and executive functioning are intertwined early in life, which suggests an embodied perspective on executive-functioning development. We argue that executive functions and prospective motor control develop from a common source and a single motive: to control action. This is the first demonstration that low-level movement planning is related to higher-order executive control early in life.
Brief verbal descriptions of people’s bodies (e.g., "curvy," "long-legged") can elicit vivid mental images. The ease with which these mental images are created belies the complexity of three-dimensional body shapes. We explored the relationship between body shapes and body descriptions and showed that a small number of words can be used to generate categorically accurate representations of three-dimensional bodies. The dimensions of body-shape variation that emerged in a language-based similarity space were related to major dimensions of variation computed directly from three-dimensional laser scans of 2,094 bodies. This relationship allowed us to generate three-dimensional models of people in the shape space using only their coordinates on analogous dimensions in the language-based description space. Human descriptions of photographed bodies and their corresponding models matched closely. The natural mapping between the spaces illustrates the role of language as a concise code for body shape that captures perceptually salient global and local body features.
We theorize that people’s social class affects their appraisals of others’ motivational relevance—the degree to which others are seen as potentially rewarding, threatening, or otherwise worth attending to. Supporting this account, three studies indicate that social classes differ in the amount of attention their members direct toward other human beings. In Study 1, wearable technology was used to film the visual fields of pedestrians on city streets; higher-class participants looked less at other people than did lower-class participants. In Studies 2a and 2b, participants’ eye movements were tracked while they viewed street scenes; higher class was associated with reduced attention to people in the images. In Study 3, a change-detection procedure assessed the degree to which human faces spontaneously attract visual attention; faces proved less effective at drawing the attention of high-class than low-class participants, which implies that class affects spontaneous relevance appraisals. The measurement and conceptualization of social class are discussed.
Many individuals with normal visual acuity are unable to discriminate the direction of 3-D motion in a portion of their visual field, a deficit previously referred to as a stereomotion scotoma. The origin of this visual deficit has remained unclear. We hypothesized that the impairment is due to a failure in the processing of one of the two binocular cues to motion in depth: changes in binocular disparity over time or interocular velocity differences. We isolated the contributions of these two cues and found that sensitivity to interocular velocity differences, but not changes in binocular disparity, varied systematically with observers’ ability to judge motion direction. We therefore conclude that the inability to interpret motion in depth is due to a failure in the neural mechanisms that combine velocity signals from the two eyes. Given these results, we argue that the deficit should be considered a prevalent but previously unrecognized agnosia specific to the perception of visual motion.
Sometimes it is easy to do the right thing. But often, people act morally only after overcoming competing immoral desires. How does learning about someone’s inner moral conflict influence children’s and adults’ moral judgments about that person? Across four studies, we discovered a striking developmental difference: When the outcome is held constant, 3- to 8-year-old children judge someone who does the right thing without experiencing immoral desires to be morally superior to someone who does the right thing through overcoming conflicting desires—but adults have the opposite intuition. This developmental difference also occurs for judgments of immoral actors: Three- to 5-year-olds again prefer the person who is not conflicted, whereas older children and adults judge that someone who struggles with the decision is morally superior. Our findings suggest that children may begin with the view that inner moral conflict is inherently negative, but, with development, come to value the exercise of willpower and self-control.
The demands of social life often require categorically judging whether someone’s continuously varying facial movements express "calm" or "fear," or whether one’s fluctuating internal states mean one feels "good" or "bad." In two studies, we asked whether this kind of categorical, "black and white," thinking can shape the perception and neural representation of emotion. Using psychometric and neuroimaging methods, we found that (a) across participants, judging emotions using a categorical, "black and white" scale relative to judging emotions using a continuous, "shades of gray," scale shifted subjective emotion perception thresholds; (b) these shifts corresponded with activity in brain regions previously associated with affective responding (i.e., the amygdala and ventral anterior insula); and (c) connectivity of these regions with the medial prefrontal cortex correlated with the magnitude of categorization-related shifts. These findings suggest that categorical thinking about emotions may actively shape the perception and neural representation of the emotions in question.
In four experiments, we tested the community-of-knowledge hypothesis, that people fail to distinguish their own knowledge from other people’s knowledge. In all the experiments, despite the absence of any actual explanatory information, people rated their own understanding of novel natural phenomena as higher when they were told that scientists understood the phenomena than when they were told that scientists did not yet understand them. In Experiment 2, we found that this occurs only when people have ostensible access to the scientists’ explanations; the effect does not occur when the explanations exist but are held in secret. In Experiment 3, we further ruled out two classes of alternative explanations (one appealing to task demands and the other proposing that judgments were mediated by inferences about a phenomenon’s understandability). In Experiment 4, we ruled out the possibility that the effect could be attributed to a pragmatic inference.
Puberty prepares mammals to sexually reproduce during adolescence. It is also hypothesized to invoke a social metamorphosis that prepares adolescents to take on adult social roles. We provide the first evidence to support this hypothesis in humans and show that pubertal development retunes the face-processing system from a caregiver bias to a peer bias. Prior to puberty, children exhibit enhanced recognition for adult female faces. With puberty, superior recognition emerges for peer faces that match one’s pubertal status. As puberty progresses, so does the peer recognition bias. Adolescents become better at recognizing faces with a pubertal status similar to their own. These findings reconceptualize the adolescent "dip" in face recognition by showing that it is a recalibration of the face-processing system away from caregivers toward peers. Thus, in addition to preparing the physical body for sexual reproduction, puberty shapes the perceptual system for processing the social world in new ways.
Sex differences in favor of males have been documented in measures of spatial perspective taking. In this research, we examined whether social factors (i.e., stereotype threat and the inclusion of human figures in tasks) account for these differences. In Experiment 1, we evaluated performance when perspective-taking tests were framed as measuring either spatial or social (empathetic) perspective-taking abilities. In the spatial condition, tasks were framed as measures of spatial ability on which males have an advantage. In the social condition, modified tasks contained human figures and were framed as measures of empathy on which females have an advantage. Results showed a sex difference in favor of males in the spatial condition but not the social condition. Experiments 2 and 3 indicated that both stereotype threat and including human figures contributed to these effects. Results suggest that females may underperform on spatial tests in part because of negative performance expectations and the character of the spatial tests rather than because of an actual lack of ability.
Previous research has demonstrated that emotional information processing can be modulated by what is being held in working memory (WM). Here, we showed that such content-based WM effects can occur even when the emotional information is suppressed from conscious awareness. Using the delayed-match-to-sample paradigm in conjunction with continuous flash suppression, we found that suppressed threatening (fearful and angry) faces emerged from suppression faster when they matched the emotional valence of WM contents than when they did not. This effect cannot be explained by perceptual priming, as it disappeared when the faces were only passively viewed and not held in WM. Crucially, this effect was specific to threatening faces; it did not occur for happy or neutral faces. Together, our findings suggest that WM can modulate nonconscious emotion processing, which highlights the functional association between nonconsciously triggered emotional processes and conscious emotion representation.
Human social life depends heavily on social norms that prescribe and proscribe specific actions. Typically, young children learn social norms from adult instruction. In the work reported here, we showed that this is not the whole story: Three-year-old children are promiscuous normativists. In other words, they spontaneously inferred the presence of social norms even when an adult had done nothing to indicate such a norm in either language or behavior. And children of this age even went so far as to enforce these self-inferred norms when third parties "broke" them. These results suggest that children do not just passively acquire social norms from adult behavior and instruction; rather, they have a natural and proactive tendency to go from "is" to "ought." That is, children go from observed actions to prescribed actions and do not perceive them simply as guidelines for their own behavior but rather as objective normative rules applying to everyone equally.
Does the warmth of children’s family environments predict the quality of their intimate relationships at the other end of the life span? Using data collected prospectively on 81 men from adolescence through the eighth and ninth decades of life, this study tested the hypotheses that warmer relationships with parents in childhood predict greater security of attachment to intimate partners in late life, and that this link is mediated in part by the degree to which individuals in midlife rely on emotion-regulatory styles that facilitate or inhibit close relationship connections. Findings supported this mediational model, showing a positive link between more nurturing family environments in childhood and greater security of attachment to spouses more than 60 years later. This link was partially mediated by reliance on more engaging and less distorting styles of emotion regulation in midlife. The findings underscore the far-reaching influence of childhood environment on well-being in adulthood.
Studies on crowding out document that incentives sometimes backfire—decreasing motivation in prosocial tasks. In the present research, we demonstrated an additional channel through which incentives can be harmful. Incentivized advocates for a cause are perceived as less sincere than nonincentivized advocates and are ultimately less effective in persuading other people to donate. Further, the negative effects of incentives hold only when the incentives imply a selfish motive; advocates who are offered a matching incentive (i.e., who are told that the donations they successfully solicit will be matched), which is not incompatible with altruism, perform just as well as those who are not incentivized. Thus, incentives may affect prosocial outcomes in ways not previously investigated: by crowding out individuals’ sincerity of expression and thus their ability to gain support for a cause.
When engaging in joint activities, humans tend to sacrifice some of their own sensorimotor comfort and efficiency to facilitate a partner’s performance. In the two experiments reported here, we investigated whether ownership—a socioculturally based nonphysical feature ascribed to objects—influenced facilitatory motor behavior in joint action. Participants passed mugs that differed in ownership status across a table to a partner. We found that participants oriented handles less toward their partners when passing their own mugs than when passing mugs owned by their partners (Experiment 1) and mugs owned by the experimenter (Experiment 2). These findings indicate that individuals plan and execute actions that assist their partners but do so to a smaller degree if it is the individuals’ own property that the partners intend to manipulate. We discuss these findings in terms of underlying variables associated with ownership and conclude that a self-other distinction can be found in the human sensorimotor system.
Research on sustainability behaviors has been based on the assumption that increasing personal concerns about the environment will increase proenvironmental action. We tested whether this assumption is more applicable to individualistic cultures than to collectivistic cultures. In Study 1, we compared 47 countries (N = 57,268) and found that they varied considerably in the degree to which environmental concern predicted support for proenvironmental action. National-level individualism explained the between-nation variability above and beyond the effects of other cultural values and independently of person-level individualism. In Study 2, we compared individualistic and collectivistic nations (United States vs. Japan; N = 251) and found culture-specific predictors of proenvironmental behavior. Environmental concern predicted environmentally friendly consumer choice among European Americans but not Japanese. For Japanese participants, perceived norms about environmental behavior predicted proenvironmental decision making. Facilitating sustainability across nations requires an understanding of how culture determines which psychological factors drive human action.
Do people appear more attractive or less attractive depending on the company they keep? A divisive-normalization account—in which representation of stimulus intensity is normalized (divided) by concurrent stimulus intensities—predicts that choice preferences among options increase with the range of option values. In the first experiment reported here, I manipulated the range of attractiveness of the faces presented on each trial by varying the attractiveness of an undesirable distractor face that was presented simultaneously with two attractive targets, and participants were asked to choose the most attractive face. I used normalization models to predict the context dependence of preferences regarding facial attractiveness. The more unattractive the distractor, the more one of the targets was preferred over the other target, which suggests that divisive normalization (a potential canonical computation in the brain) influences social evaluations. I obtained the same result when I manipulated faces’ averageness and participants chose the most average face. This finding suggests that divisive normalization is not restricted to value-based decisions (e.g., judgments of attractiveness). This new application of normalization, a classic theory, to social evaluation opens possibilities for predicting social decisions in naturalistic contexts such as advertising and dating.
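Because the divisive-normalization account is stated only verbally above, a toy computation helps show why a weaker distractor should sharpen the preference between the two targets. All numbers and parameters below (the raw values, sigma, the softmax slope beta) are invented for illustration; this is not the paper's fitted model.

```python
# Toy sketch of the divisive-normalization prediction. Each face's
# represented value is its raw value divided by the summed values on
# the trial, so a weaker distractor shrinks the denominator and
# stretches apart the two targets' represented values.
import numpy as np

def normalized(values, sigma=1.0):
    """Divisive normalization: v_i / (sigma + sum_j v_j)."""
    values = np.asarray(values, dtype=float)
    return values / (sigma + values.sum())

def p_choose_better(values, beta=20.0):
    """Softmax choice between the two targets after normalization."""
    u = normalized(values)
    return 1.0 / (1.0 + np.exp(-beta * (u[0] - u[1])))

targets = [8.0, 7.5]                 # two attractive faces
for distractor in (6.0, 4.0, 2.0):   # increasingly unattractive
    p = p_choose_better(targets + [distractor])
    print(f"distractor value {distractor}: P(choose better target) = {p:.2f}")
# The printed probability rises as the distractor's value falls: the
# range effect on preferences that the abstract reports.
```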
Perceptions of racial bias have been linked to poorer circulatory health among Blacks compared with Whites. However, little is known about whether Whites’ actual racial bias contributes to this racial disparity in health. We compiled racial-bias data from 1,391,632 Whites and examined whether racial bias in a given county predicted Black-White disparities in circulatory-disease risk (access to health care, diagnosis of a circulatory disease; Study 1) and circulatory-disease-related death rate (Study 2) in the same county. Results revealed that in counties where Whites reported greater racial bias, Blacks (but not Whites) reported decreased access to health care (Study 1). Furthermore, in counties where Whites reported greater racial bias, both Blacks and Whites showed increased death rates due to circulatory diseases, but this relationship was stronger for Blacks than for Whites (Study 2). These results indicate that racial disparities in risk of circulatory disease and in circulatory-disease-related death rate are more pronounced in communities where Whites harbor more explicit racial bias.
Plural societies require individuals to forecast how others—both in-group and out-group members—will respond to gains and setbacks. Typically, correcting affective forecasts to include more relevant information improves their accuracy by reducing their extremity. In contrast, we found that providing affective forecasters with social-category information about their targets made their forecasts more extreme and therefore less accurate. In both political and sports contexts, forecasters across five experiments exhibited greater impact bias for both in-group and out-group members (e.g., a Democrat or Republican) than for unspecified targets when predicting experiencers’ responses to positive and negative events. Inducing time pressure reduced the extremity of forecasts for group-labeled but not unspecified targets, which suggests that the increased impact bias was due to overcorrection for social-category information, not different intuitive predictions for identified targets. Finally, overcorrection was better accounted for by stereotypes than by spontaneous retrieval of extreme group exemplars.
Despite considerable interest in the role of spatial intelligence in science, technology, engineering, and mathematics (STEM) achievement, little is known about the ontogenetic origins of individual differences in spatial aptitude or their relation to later accomplishments in STEM disciplines. The current study provides evidence that spatial processes present in infancy predict interindividual variation in both spatial and mathematical competence later in development. Using a longitudinal design, we found that children’s performance on a brief visuospatial change-detection task administered between 6 and 13 months of age was related to their spatial aptitude (i.e., mental-transformation skill) and mastery of symbolic-math concepts at 4 years of age, even when we controlled for general cognitive abilities and spatial memory. These results suggest that nascent spatial processes present in the first year of life not only act as precursors to later spatial intelligence but also predict math achievement during childhood.
Both repeated practice and sleep improve long-term retention of information. The assumed common mechanism underlying these effects is memory reactivation, either on-line and effortful or off-line and effortless. In the study reported here, we investigated whether sleep-dependent memory consolidation could help to save practice time during relearning. During two sessions occurring 12 hr apart, 40 participants practiced foreign vocabulary until they reached a perfect level of performance. Half of them learned in the morning and relearned in the evening of a single day. The other half learned in the evening of one day, slept, and then relearned in the morning of the next day. Their retention was assessed 1 week later and 6 months later. We found that interleaving sleep between learning sessions not only reduced the amount of practice needed by half but also ensured much better long-term retention. Sleeping after learning is definitely a good strategy, but sleeping between two learning sessions is a better strategy.
Personal identity is an important determinant of behavior, yet how people mentally represent their self-concepts and their concepts of other people is not well understood. In the current studies, we examined the age-old question of what makes people who they are. We propose a novel approach to identity that suggests that the answer lies in people’s beliefs about how the features of identity (e.g., memories, moral qualities, personality traits) are causally related to each other. We examined the impact of the causal centrality of a feature, a key determinant of the extent to which a feature defines a concept, on judgments of identity continuity. We found support for this approach in three experiments using both measured and manipulated causal centrality. For judgments both of one’s self and of others, we found that some features are perceived to be more causally central than others and that changes in such causally central features are believed to be more disruptive to identity.
Children and adults respond negatively to inequity. Traditional accounts of inequity aversion suggest that as children mature into adults, they become less likely to endorse all forms of inequity. We challenge the idea that children have a unified concern with inequity that simply becomes stronger with age. Instead, we argue that the developmental trajectory of inequity aversion depends on whether the inequity is seen as fair or unfair. In three studies (N = 501), 7- to 8-year-olds were more likely than 4- to 6-year-olds to create inequity that disadvantaged themselves—a fair type of inequity. In findings consistent with our theory, 7- to 8-year-olds were not more likely than 4- to 6-year-olds to endorse advantageous inequity (Study 1) or inequity created by third parties (Studies 2 and 3)—unfair types of inequity. We discuss how these results expand on recent accounts of children’s developing concerns with generosity and partiality.
Attention switching is a crucial ability required in everyday life, from toddlerhood to adulthood. In adults, shifting attention from one word (e.g., dog) to another (e.g., sea) results in backward semantic inhibition, that is, the inhibition of the initial word (dog). In this study, we used the preferential-looking paradigm to examine whether attention switching is accompanied by backward semantic inhibition in toddlers. We found that 24-month-olds can indeed refocus their attention to a new item by selectively inhibiting attention to the old item. The consequence of backward inhibition is that subsequent attention to a word semantically related to the old item is impaired. These findings have important implications for understanding the underlying mechanism of backward semantic inhibition and the development of lexical-semantic inhibition in early childhood.
When searching a crowd, people can detect a target face only by direct fixation and attention. Once the target is found, it is consciously experienced and remembered, but what is the perceptual fate of the fixated nontarget faces? Whereas introspection suggests that one may remember nontargets, previous studies have proposed that almost no memory should be retained. Using a gaze-contingent paradigm, we asked subjects to visually search for a target face within a crowded natural scene and then tested their memory for nontarget faces, as well as their confidence in those memories. Subjects remembered up to seven fixated, nontarget faces with more than 70% accuracy. Memory accuracy was correlated with trial-by-trial confidence ratings, which implies that the memory was consciously maintained and accessed. When the search scene was inverted, no more than three nontarget faces were remembered. These findings imply that incidental memory for faces, such as those recalled by eyewitnesses, is more reliable than is usually assumed.
Childhood adversity is associated with poor health outcomes in adulthood; the hypothalamic-pituitary-adrenal (HPA) axis has been proposed as a crucial biological intermediary of these long-term effects. Here, we tested whether childhood adversity was associated with diurnal cortisol parameters and whether this link was partially explained by self-esteem. In both adults and youths, childhood adversity was associated with lower levels of cortisol at awakening, and this association was partially driven by low self-esteem. Further, we found a significant indirect pathway through which greater adversity during childhood was linked to a flatter cortisol slope via self-esteem. Finally, youths who had a caregiver with high self-esteem experienced a steeper decline in cortisol throughout the day compared with youths whose caregiver reported low self-esteem. We conclude that self-esteem is a plausible psychological mechanism through which childhood adversity may get embedded in the activity of the HPA axis across the life span.
Linguistic communication builds on prelinguistic communicative gestures, but the ontogenetic origins and complexities of these prelinguistic gestures are not well known. The current study tested whether 8-month-olds, who do not yet point communicatively, use instrumental actions for communicative purposes. In two experiments, infants reached for objects when another person was present and when no one else was present; the distance to the objects was varied. When alone, the infants reached for objects within their action boundaries and refrained from reaching for objects out of their action boundaries; thus, they knew about their individual action efficiency. However, when a parent (Experiment 1) or a less familiar person (Experiment 2) sat next to them, the infants selectively increased their reaching for out-of-reach objects. The findings reveal that before they communicate explicitly through pointing gestures, infants use instrumental actions with the apparent expectation that a partner will adopt and complete their goals.
Current neurocognitive research suggests that the efficiency of visual word recognition rests on abstract memory representations of written letters and words stored in the visual word form area (VWFA) in the left ventral occipitotemporal cortex. These representations are assumed to be invariant to visual characteristics such as font and case. In the present functional MRI study, we tested this assumption by presenting written words and varying the case format of the initial letter of German nouns (which are always capitalized) as well as German adjectives and adverbs (both usually in lowercase). As evident from a Word Type × Case Format interaction, activation in the VWFA was greater for words presented in unfamiliar case formats than for words presented in familiar case formats. Our results suggest that neural representations of written words in the VWFA are not fully abstract and still contain information about the visual format in which words are most frequently perceived.
Eyewitness-identification studies have focused on the idea that unfair lineups (i.e., ones in which the police suspect stands out) make witnesses more willing to identify the police suspect. We examined whether unfair lineups also influence subjects’ ability to distinguish between innocent and guilty suspects and their ability to judge the accuracy of their identification. In a single experiment (N = 8,925), we compared three fair-lineup techniques used by the police with unfair lineups in which we did nothing to prevent distinctive suspects from standing out. Compared with the fair lineups, doing nothing not only increased subjects’ willingness to identify the suspect but also markedly impaired subjects’ ability to distinguish between innocent and guilty suspects. Accuracy was also reduced at every level of confidence. These results advance theory on witnesses’ identification performance and have important practical implications for how police should construct lineups when suspects have distinctive features.
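The phrase "ability to distinguish between innocent and guilty suspects" is standardly quantified with signal-detection measures in this literature; the generic formulation below is offered for orientation (the paper's own analysis may rely on full ROC curves rather than a single index):

```latex
% Discriminability as a signal-detection index:
\[
d' = \Phi^{-1}(\mathrm{HR}) - \Phi^{-1}(\mathrm{FAR})
\]
% HR = rate of suspect identifications from lineups containing the
% guilty suspect; FAR = that rate from lineups containing an innocent
% suspect; \Phi^{-1} is the inverse standard-normal CDF. The abstract's
% finding is that fair lineups yield higher discriminability than
% "doing nothing" about distinctive suspects.
```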
To advance cognitive theory, researchers must be able to parse the performance of a task into its significant mental stages. In this article, we describe a new method that uses functional MRI brain activation to identify when participants are engaged in different cognitive stages on individual trials. The method combines multivoxel pattern analysis to identify cognitive stages and hidden semi-Markov models to identify their durations. This method, applied to a problem-solving task, identified four distinct stages: encoding, planning, solving, and responding. We examined whether these stages corresponded to their ascribed functions by testing whether they are affected by appropriate factors. Planning-stage duration increased as the method for solving the problem became less obvious, whereas solving-stage duration increased as the number of calculations to produce the answer increased. Responding-stage duration increased with the difficulty of the motor actions required to produce the answer.
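The pairing of pattern classification with hidden semi-Markov models can be illustrated with a toy decoder. The sketch below is a didactic stand-in for the authors' pipeline: it fixes the four ordered stages in advance, uses Poisson duration models, and substitutes fabricated per-scan stage log-likelihoods for real multivoxel classifier output.

```python
# Toy semi-Markov decoding: dynamic programming places the boundaries
# of four ordered stages on a single trial, trading off per-scan
# stage evidence against per-stage duration models. Didactic only.
import numpy as np
from scipy.stats import poisson

STAGES = ["encode", "plan", "solve", "respond"]

def best_durations(log_emit, mean_dur, max_dur=40):
    """log_emit[t, k]: log-likelihood that scan t belongs to stage k.
    Returns the maximum-probability duration of each ordered stage."""
    T, K = log_emit.shape
    cum = np.vstack([np.zeros(K), np.cumsum(log_emit, axis=0)])
    dur_lp = np.array([[poisson.logpmf(d, mean_dur[k])
                        for d in range(max_dur + 1)] for k in range(K)])
    dp = np.full((K + 1, T + 1), -np.inf)   # dp[k, t]: best score for
    back = np.zeros((K + 1, T + 1), int)    # stages 0..k-1 covering [0, t)
    dp[0, 0] = 0.0
    for k in range(1, K + 1):
        for t in range(k, T + 1):
            for d in range(1, min(max_dur, t) + 1):
                score = (dp[k - 1, t - d] + dur_lp[k - 1, d]
                         + cum[t, k - 1] - cum[t - d, k - 1])
                if score > dp[k, t]:
                    dp[k, t], back[k, t] = score, d
    durs, t = [], T
    for k in range(K, 0, -1):               # backtrack stage durations
        durs.append(back[k, t])
        t -= back[k, t]
    return dict(zip(STAGES, durs[::-1]))

# Fake trial: 30 scans with true stage durations 5, 10, 10, 5.
rng = np.random.default_rng(1)
true = np.repeat(np.arange(4), [5, 10, 10, 5])
log_emit = np.full((30, 4), np.log(0.1))
log_emit[np.arange(30), true] = np.log(0.7)  # classifier favors true stage
log_emit += 0.1 * rng.normal(size=log_emit.shape)
print(best_durations(log_emit, mean_dur=[5, 10, 10, 5]))
# Should recover approximately the true durations for each stage.
```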
Early-life adversity is a potent risk factor for mental-health disorders in exposed individuals, and effects of adversity are exhibited across generations. Such adversities are also associated with poor gastrointestinal outcomes. In addition, emerging evidence suggests that microbiota-gut-brain interactions may mediate the effects of early-life stress on psychological dysfunction. In the present study, we administered an early-life stressor (i.e., maternal separation) to infant male rats, and we investigated the effects of this stressor on conditioned aversive reactions in the rats’ subsequent infant male offspring. We demonstrated, for the first time, longer-lasting aversive associations and greater relapse after extinction in the offspring (F1 generation) of rats exposed to maternal separation (F0 generation), compared with the offspring of rats not exposed to maternal separation. These generational effects were reversed by probiotic supplementation, which was effective as both an active treatment when administered to infant F1 rats and as a prophylactic when administered to F0 fathers before conception (i.e., in fathers’ infancy). These findings have high clinical relevance in the identification of early-emerging putative risk phenotypes across generations and of potential therapies to ameliorate such generational effects.
Does cooperating require the inhibition of selfish urges? Or does "rational" self-interest constrain cooperative impulses? I investigated the role of intuition and deliberation in cooperation by meta-analyzing 67 studies in which cognitive-processing manipulations were applied to economic cooperation games (total N = 17,647; no indication of publication bias using Egger’s test, Begg’s test, or p-curve). My meta-analysis was guided by the social heuristics hypothesis, which proposes that intuition favors behavior that typically maximizes payoffs, whereas deliberation favors behavior that maximizes one’s payoff in the current situation. Therefore, this theory predicts that deliberation will undermine pure cooperation (i.e., cooperation in settings where there are few future consequences for one’s actions, such that cooperating is not in one’s self-interest) but not strategic cooperation (i.e., cooperation in settings where cooperating can maximize one’s payoff). As predicted, the meta-analysis revealed 17.3% more pure cooperation when intuition was promoted over deliberation, but no significant difference in strategic cooperation between more intuitive and more deliberative conditions.
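Egger's test, one of the publication-bias checks named above, is simple enough to sketch: regress each study's standardized effect (effect divided by its standard error) on its precision (one over the standard error); an intercept reliably different from zero signals funnel-plot asymmetry. The effect sizes below are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
se = rng.uniform(0.05, 0.4, size=67)   # per-study standard errors
effects = rng.normal(0.2, se)          # unbiased simulated effects

snd = effects / se                     # standardized effects
precision = 1.0 / se
res = stats.linregress(precision, snd)
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(se) - 2)
# With unbiased data, the intercept should hover near zero.
print(f"Egger intercept = {res.intercept:.3f}, p = {p:.3f}")
```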
Two episodes of attentional selection cannot occur very close in time. This is the traditional account of the attentional blink, whereby observers fail to report the second of two temporally proximal targets. Recent analyses have challenged this simple account, suggesting that attentional selection during the attentional blink is not only (a) suppressed, but also (b) temporally advanced then delayed, and (c) temporally diffused. Here, we reanalyzed six data sets using mixture modeling of report errors, and revealed much simpler dynamics. Exposing a problem inherent in previous analyses, we found evidence of a second attentional episode only when the second target (T2) follows the first (T1) by more than 100 to 250 ms. When a second episode occurs, suppression and delay diminish steadily as lag increases, whereas temporal precision remains stable. At shorter lags, both targets are reported from a single episode, which explains why T2 can escape the attentional blink when it immediately follows T1 (Lag-1 sparing).
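The mixture modeling referred to here treats each report error (in item positions) as coming either from a selected attentional episode, a Gaussian component whose weight, mean, and spread correspond to efficacy, latency, and precision, or from uniform guessing. A toy version, with invented data and illustrative parameter names, can be fit in a few lines:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

positions = np.arange(-10, 11)   # possible report errors, in item positions

def neg_log_lik(params, errors):
    efficacy, latency, sd = params
    p = (efficacy * norm.pdf(errors, latency, sd)      # selected episodes
         + (1 - efficacy) / len(positions))            # uniform guessing
    return -np.log(p).sum()

rng = np.random.default_rng(2)
errors = np.where(rng.random(500) < 0.7,
                  np.round(rng.normal(1.0, 1.5, 500)),  # true latency ~1 item
                  rng.integers(-10, 11, 500))           # random guesses
fit = minimize(neg_log_lik, x0=[0.5, 0.0, 2.0], args=(errors,),
               bounds=[(0.01, 0.99), (-5, 5), (0.1, 5)])
print(dict(zip(["efficacy", "latency", "precision (SD)"], fit.x)))
```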
The idea behind ego depletion is that willpower draws on a limited mental resource, so that engaging in an act of self-control impairs self-control in subsequent tasks. To present ego depletion as more than a convenient metaphor, some researchers have proposed that glucose is the limited resource that becomes depleted with self-control. However, there have been theoretical challenges to the proposed glucose mechanism, and the experiments that have tested it have found mixed results. We used a new meta-analytic tool, p-curve analysis, to examine the reliability of the evidence from these experiments. We found that the effect sizes reported in this literature are possibly influenced by publication or reporting bias and that, even within studies yielding significant results, the evidential value of this research is weak. In light of these results, and pending further evidence, researchers and policymakers should refrain from drawing any conclusions about the role of glucose in self-control.
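The logic of p-curve analysis can be sketched compactly: under the null, significant p values are uniform on (0, .05), so dividing them by .05 yields values uniform on (0, 1); an excess of very small values (right skew) indicates evidential value, testable with a Stouffer-style combination. The p values below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

p_values = np.array([0.003, 0.011, 0.024, 0.041, 0.048, 0.019, 0.002])
sig = p_values[p_values < 0.05]
pp = sig / 0.05                      # uniform on (0, 1) under the null
z = norm.ppf(pp)                     # probit transform; small p -> negative z
stouffer_z = z.sum() / np.sqrt(len(z))
p_right_skew = norm.cdf(stouffer_z)  # small value => evidential value
print(f"Stouffer Z = {stouffer_z:.2f}, p (right skew) = {p_right_skew:.3f}")
```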
Can playing action video games improve visuomotor control? If so, can these games be used in training people to perform daily visuomotor-control tasks, such as driving? We found that action gamers have better lane-keeping and visuomotor-control skills than do non–action gamers. We then trained non–action gamers with action or nonaction video games. After they played a driving or first-person-shooter video game for 5 or 10 hr, their visuomotor control improved significantly. In contrast, non–action gamers showed no such improvement after they played a nonaction video game. Our model-driven analysis revealed that although different action video games have different effects on the sensorimotor system underlying visuomotor control, action gaming in general improves the responsiveness of the sensorimotor system to input error signals. The findings support a causal link between action gaming (for as little as 5 hr) and enhancement in visuomotor control, and suggest that action video games can be beneficial training tools for driving.
People tend to judge what is typical as also good and appropriate—as what ought to be. What accounts for the prevalence of these judgments, given that their validity is at best uncertain? We hypothesized that the tendency to reason from "is" to "ought" is due in part to a systematic bias in people’s (nonmoral) explanations, whereby regularities (e.g., giving roses on Valentine’s Day) are explained predominantly via inherent or intrinsic facts (e.g., roses are beautiful). In turn, these inherence-biased explanations lead to value-laden downstream conclusions (e.g., it is good to give roses). Consistent with this proposal, results from five studies (N = 629 children and adults) suggested that, from an early age, the bias toward inherence in explanations fosters inferences that imbue observed reality with value. Given that explanations fundamentally determine how people understand the world, the bias toward inherence in these judgments is likely to exert substantial influence over sociomoral understanding.
Metacognition is the ability to think about thinking. Although monitoring and controlling one’s knowledge is a key feature of human cognition, its evolutionary origins are debated. In the current study, we examined whether rhesus monkeys (Macaca mulatta; N = 120) could make metacognitive inferences in a one-shot decision. Each monkey experienced one of four conditions, observing a human appearing to hide a food reward in an apparatus consisting of either one or two tubes. The monkeys tended to search the correct location when they observed this baiting event, but engaged in information seeking—by peering into a center location where they could check both potential hiding spots—if their view had been occluded and information seeking was possible. The monkeys only occasionally approached the center when information seeking was not possible. These results show that monkeys spontaneously use information about their own knowledge states to solve naturalistic foraging problems, and thus provide the first evidence that nonhumans exhibit information-seeking responses in situations with which they have no prior experience.
Several models of judgment propose that people struggle with absolute judgments and instead represent options on the basis of their relative standing. This leads to a conundrum when people make judgments from memory: They may encode an option’s ordinal rank relative to the surrounding options but later observe a different distribution of options. Do people update their representations when making judgments from memory, or do they maintain their representations based on the initial encoding? In three studies, we found that people making memory-based judgments rely on a stimulus’s relative standing in the distribution at the time of encoding rather than attending to absolute quality or updating the stimulus’s ordinal ranking in light of the distribution at the time of the later judgment.
The phenomenon of increased desire for, and use of, appearance-enhancing items during times of economic recession has been termed the lipstick effect. The motivation underlying this effect has been attributed to women’s desires to enhance their attractiveness to financially stable partners (Hill, Rodeheffer, Griskevicius, Durante, & White, 2012). In the present research, we found evidence for our proposal that during times of economic recession, the heightened economic concern experienced by women translates into increased desire to use appearance-enhancing items to both attract romantic partners and create a favorable impression of themselves in the workplace, as both strategies can help women become secure financially. We also found that women with high economic concern elect to improve their professional appearance more frequently than their romantic attractiveness, which suggests that their motivation to obtain resources through a job dominates their motivation to obtain resources through a partner.
Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers’ experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies.
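The power arithmetic at issue is easy to reproduce. As a sketch (using statsmodels, with a typical cell size and Cohen's d = 0.2 standing in for the survey's reported quantities), one can compute both the power of a given design and the per-cell n needed for .80 power:

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a two-group design with 25 subjects per cell and a small effect.
power = analysis.solve_power(effect_size=0.2, nobs1=25, alpha=0.05)
print(f"power with n = 25 per cell, d = 0.2: {power:.2f}")  # ~.10

# Per-cell n needed to reach .80 power for the same small effect.
n_needed = analysis.solve_power(effect_size=0.2, power=0.80, alpha=0.05)
print(f"n per cell for .80 power: {n_needed:.0f}")          # roughly 390+
```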
Although fear-conditioning research has demonstrated that certain survival-threatening stimuli, namely prepared fear stimuli, are readily associated with fearful events, little research has explored whether a parallel category exists for safety stimuli. We examined whether social-support figures, who have typically benefited survival, can serve as prepared safety stimuli, a category that has not been explored previously. Across three experiments, we uncovered three key findings. First, social-support figures were less readily associated with fear than were strangers or neutral stimuli (in a retardation-of-acquisition test). Second, social-support stimuli inhibited conditional fear responses to other cues (in a summation test), and this inhibition continued even after the support stimulus was removed. Finally, these effects were not simply due to familiarity or reward because both familiar and rewarding stimuli were readily associated with fear, whereas social-support stimuli were not. These findings suggest that social-support figures are one category of prepared safety stimuli that may have long-lasting effects on fear-learning processes.
This research integrated implicit theories of personality and the biopsychosocial model of challenge and threat, hypothesizing that adolescents would be more likely to conclude that they can meet the demands of an evaluative social situation when they were taught that people have the potential to change their socially relevant traits. In Study 1 (N = 60), high school students were assigned to an incremental-theory-of-personality or a control condition and then given a social-stress task. Relative to control participants, incremental-theory participants exhibited improved stress appraisals, more adaptive neuroendocrine and cardiovascular responses, and better performance outcomes. In Study 2 (N = 205), we used a daily-diary intervention to test high school students’ stress reactivity outside the laboratory. Threat appraisals (Days 5–9 after intervention) and neuroendocrine responses (Days 8 and 9 after intervention only) were unrelated to the intensity of daily stressors when adolescents received the incremental-theory intervention. Students who received the intervention also had better grades over freshman year than those who did not. These findings offer new avenues for improving theories of adolescent stress and coping.
Divorce is a stressor associated with long-term health risk, though the mechanisms of this effect are poorly understood. Cardiovascular reactivity is one biological pathway implicated as a predictor of poor long-term health after divorce. A sample of recently separated and divorced adults (N = 138) was assessed over an average of 7.5 months to explore whether individual differences in heart rate variability—assessed by respiratory sinus arrhythmia—operate in combination with subjective reports of separation-related distress to predict prospective changes in cardiovascular reactivity, as indexed by blood pressure reactivity. Participants with low resting respiratory sinus arrhythmia at baseline showed no association between divorce-related distress and later blood pressure reactivity, whereas participants with high respiratory sinus arrhythmia showed a positive association. In addition, within-person variation in respiratory sinus arrhythmia and between-persons variation in separation-related distress interacted to predict blood pressure reactivity at each laboratory visit. Individual differences in heart rate variability and subjective distress operate together to predict cardiovascular reactivity and may explain some of the long-term health risk associated with divorce.
To investigate whether dogs could recognize contingent reactivity as a marker of agents’ interaction, we performed an experiment in which dogs were presented with third-party contingent events. In the perfect-contingency condition, dogs were shown an unfamiliar self-propelled agent (SPA) that performed actions corresponding to audio clips of verbal commands played by a computer. In the high-but-imperfect-contingency condition, the SPA responded to the verbal commands on only two thirds of the trials; in the low-contingency condition, the SPA responded to the commands on only one third of the trials. In the test phase, the SPA approached one of two tennis balls, and then the dog was allowed to choose one of the balls. The proportion of trials on which a dog chose the object indicated by the SPA increased with the degree of contingency: Dogs chose the target object significantly above chance level only in the perfect-contingency condition. This finding suggests that dogs may use the degree of temporal contingency observed in third-party interactions as a cue to identify agents.
A previous genome-wide association study (GWAS) of more than 100,000 individuals identified molecular-genetic predictors of educational attainment. We undertook in-depth life-course investigation of the polygenic score derived from this GWAS using the four-decade Dunedin Study (N = 918). There were five main findings. First, polygenic scores predicted adult economic outcomes even after accounting for educational attainments. Second, genes and environments were correlated: Children with higher polygenic scores were born into better-off homes. Third, children’s polygenic scores predicted their adult outcomes even when analyses accounted for their social-class origins; social-mobility analysis showed that children with higher polygenic scores were more upwardly mobile than children with lower scores. Fourth, polygenic scores predicted behavior across the life course, from early acquisition of speech and reading skills through geographic mobility and mate choice and on to financial planning for retirement. Fifth, polygenic-score associations were mediated by psychological characteristics, including intelligence, self-control, and interpersonal skill. Effect sizes were small. Factors connecting DNA sequence with life outcomes may provide targets for interventions to promote population-wide positive development.
We investigated a unique way in which adolescent peer influence occurs on social media. We developed a novel functional MRI (fMRI) paradigm to simulate Instagram, a popular social photo-sharing tool, and measured adolescents’ behavioral and neural responses to likes, a quantifiable form of social endorsement and potential source of peer influence. Adolescents underwent fMRI while viewing photos ostensibly submitted to Instagram. They were more likely to like photos depicted with many likes than photos with few likes; this finding showed the influence of virtual peer endorsement and held for both neutral photos and photos of risky behaviors (e.g., drinking, smoking). Viewing photos with many (compared with few) likes was associated with greater activity in neural regions implicated in reward processing, social cognition, imitation, and attention. Furthermore, when adolescents viewed risky photos (as opposed to neutral photos), activation in the cognitive-control network decreased. These findings highlight possible mechanisms underlying peer influence during adolescence.
A recent study showed that scenes with an object-background relationship that is semantically incongruent break interocular suppression faster than scenes with a semantically congruent relationship. These results implied that semantic relations between the objects and the background of a scene could be extracted in the absence of visual awareness of the stimulus. In the current study, we assessed the replicability of this finding and tried to rule out an alternative explanation dependent on low-level differences between the stimuli. Furthermore, we used a Bayesian analysis to quantify the evidence in favor of the presence or absence of a scene-congruency effect. Across three experiments, we found no convincing evidence for a scene-congruency effect or a modulation of scene congruency by scene inversion. These findings question the generalizability of previous observations and cast doubt on whether genuine semantic processing of object-background relationships in scenes can manifest during interocular suppression.
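The abstract does not specify the form of the Bayesian analysis, but one standard way to quantify evidence for the absence of an effect is a default JZS Bayes factor computed from a t statistic (Rouder et al., 2009). A self-contained sketch, offered only as an example of the general approach:

```python
import numpy as np
from scipy import integrate

def jzs_bf01(t, n, r=0.707):
    """BF01 (evidence for the null) for a one-sample/paired t test."""
    nu = n - 1
    null_lik = (1 + t**2 / nu) ** (-(nu + 1) / 2)
    def integrand(g):  # marginal likelihood under the JZS alternative
        k = 1 + n * g * r**2
        return (k ** -0.5
                * (1 + t**2 / (k * nu)) ** (-(nu + 1) / 2)
                * (2 * np.pi) ** -0.5 * g ** -1.5 * np.exp(-1 / (2 * g)))
    alt_lik, _ = integrate.quad(integrand, 0, np.inf)
    return null_lik / alt_lik

# A small t from a reasonably sized sample favors the null (BF01 > 1).
print(f"BF01 = {jzs_bf01(t=0.8, n=40):.2f}")
```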
Long-term collaborative relationships require that any jointly produced resources be shared in mutually satisfactory ways. Prototypically, this sharing involves partners dividing up simultaneously available resources, but sometimes the collaboration makes a resource available to only one individual, and any sharing of resources must take place across repeated instances over time. Here, we show that beginning at 5 years of age, human children stabilize cooperation in such cases by taking turns across instances of obtaining a resource. In contrast, chimpanzees do not take turns in this way, and so their collaboration tends to disintegrate over time. Alternating turns in obtaining a collaboratively produced resource does not necessarily require a prosocial concern for the other, but rather requires only a strategic judgment that partners need incentives to continue collaborating. These results suggest that human beings are adapted for thinking strategically in ways that sustain long-term cooperative relationships and that are absent in their nearest primate relatives.
The educational, occupational, and creative accomplishments of the profoundly gifted participants (IQs >= 160) in the Study of Mathematically Precocious Youth (SMPY) are astounding, but are they representative of equally able 12-year-olds? Duke University’s Talent Identification Program (TIP) identified 259 young adolescents who were equally gifted. By age 40, their life accomplishments also were extraordinary: Thirty-seven percent had earned doctorates, 7.5% had achieved academic tenure (4.3% at research-intensive universities), and 9% held patents; many were high-level leaders in major organizations. As was the case for the SMPY sample before them, differential ability strengths predicted their contrasting eventual developmental trajectories—even though essentially all participants possessed both mathematical and verbal reasoning abilities far superior to those of typical Ph.D. recipients. Individuals, even profoundly gifted ones, primarily do what they are best at. Differences in ability patterns, like differences in interests, guide development along different paths, but ability level, coupled with commitment, determines whether, and the extent to which, noteworthy accomplishments are reached if opportunity presents itself.
In response to the Ebola scare in 2014, many people evinced strong fear and xenophobia. The present study, informed by the pathogen-prevalence hypothesis, tested the influence of individualism and collectivism on xenophobic response to the threat of Ebola. A nationally representative sample of 1,000 Americans completed a survey, indicating their perceptions of their vulnerability to Ebola, ability to protect themselves from Ebola (protection efficacy), and xenophobic tendencies. Overall, the more vulnerable people felt, the more they exhibited xenophobic responses, but this relationship was moderated by individualism and collectivism. The increase in xenophobia associated with increased vulnerability was especially pronounced among people with high individualism scores and those with low collectivism scores. These relationships were mediated by protection efficacy. State-level collectivism had the same moderating effect on the association between perceived vulnerability and xenophobia that individual-level value orientation did. Collectivism—and the set of practices and rituals associated with collectivistic cultures—may serve as psychological protection against the threat of disease.
People often fail to follow through on good intentions. While limited self-control is frequently the culprit, another cause is simply forgetting to enact intentions when opportunities arise. We introduce a novel, potent approach to facilitating follow-through: the reminders-through-association approach. This approach involves associating intentions (e.g., to mail a letter on your desk tomorrow) with distinctive cues that will capture attention when you have opportunities to act on those intentions (e.g., Valentine’s Day flowers that arrived late yesterday, which are sitting on your desk). We showed that cue-based reminders are more potent when the cues they employ are distinctive relative to (a) other regularly encountered stimuli and (b) other stimuli encountered concurrently. Further, they can be more effective than written or electronic reminder messages, and they are undervalued and underused. The reminders-through-association approach, developed by integrating and expanding on past research on self-control, reminders, and prospective memory, can be a powerful tool for policymakers and individuals.
Some effects are statistically significant. Other effects do not reach the threshold of statistical significance and are sometimes described as "marginally significant" or as "approaching significance." Although the concept of marginal significance is widely deployed in academic psychology, there has been very little systematic examination of psychologists’ attitudes toward these effects. Here, we report an observational study in which we investigated psychologists’ attitudes concerning marginal significance by examining their language in over 1,500 articles published in top-tier cognitive, developmental, and social psychology journals. We observed a large change over the course of four decades in psychologists’ tendency to describe a p value as marginally significant, and overall rates of use appear to differ across subfields. We discuss possible explanations for these findings, as well as their implications for psychological research.
Pupillary contagion—responding to pupil size observed in other people with changes in one’s own pupil—has been found in adults and suggests that arousal and other internal states could be transferred across individuals using a subtle physiological cue. Examining this phenomenon developmentally gives insight into its origins and underlying mechanisms, such as whether it is an automatic adaptation already present in infancy. In the current study, 6- and 9-month-olds viewed schematic depictions of eyes with smaller and larger pupils—pairs of concentric circles with smaller and larger black centers—while their own pupil sizes were recorded. Control stimuli were comparable squares. For both age groups, infants’ pupil size was greater when they viewed large-center circles than when they viewed small-center circles, and no differences were found for large-center compared with small-center squares. The findings suggest that infants are sensitive and responsive to subtle cues to other people’s internal states, a mechanism that would be beneficial for early social development.
For decades, there has been controversy about whether forgetting is caused by decay over time or by interference from irrelevant information. We suggest that forgetting occurs because of decay or interference, depending on the memory representation. Recollection-based memories, supported by the hippocampus, are represented in orthogonal patterns and are therefore relatively resistant to interference from one another. Decay should be a major source of their forgetting. By contrast, familiarity-based memories, supported by extrahippocampal structures, are not represented in orthogonal patterns and are therefore sensitive to interference. In a study in which we manipulated the postencoding task-interference level and the length of the delay between study and testing, we provide direct evidence in support of our representation theory of forgetting. Recollection and familiarity were measured using the remember/know procedure. We show that the causes of forgetting depend on the nature of the underlying memory representation, which places the century-old puzzle of forgetting in a coherent framework.
When participants search for a target letter while reading for comprehension, they miss more instances if the target letter is embedded in frequent function words than in less frequent content words. This phenomenon, called the missing-letter effect, has been considered a window on the cognitive mechanisms involved in the visual processing of written language. In the present study, one group of participants read two texts for comprehension while searching for a target letter, and another group listened to a narration of the same two texts while listening for the target letter’s corresponding phoneme. The ubiquitous missing-letter effect was replicated and extended to a missing-phoneme effect. Item-based correlations between the reading and listening tasks were high, which led us to conclude that both tasks involve cognitive processes that reading and listening have in common and that both processes are rooted in psycholinguistically driven allocation of attention.
Social reticence is expressed as shy, anxiously avoidant behavior in early childhood. With development, overt signs of social reticence may diminish but could still manifest themselves in neural responses to peers. We obtained measures of social reticence across 2 to 7 years of age. At age 11, preadolescents previously characterized as high (n = 30) or low (n = 23) in social reticence completed a novel functional-MRI-based peer-interaction task that quantifies neural responses to the anticipation and receipt of distinct forms of social evaluation. High (but not low) social reticence in early childhood predicted greater activity in dorsal anterior cingulate cortex and left and right insula, brain regions implicated in processing salience and distress, when participants anticipated unpredictable compared with predictable feedback. High social reticence was also associated with negative functional connectivity between insula and ventromedial prefrontal cortex, a region commonly implicated in affect regulation. Finally, among participants with high social reticence, negative evaluation was associated with increased amygdala activity, but only during feedback from unpredictable peers.
This article introduces a generative model of category representation that uses computer vision methods to extract category-consistent features (CCFs) directly from images of category exemplars. The model was trained on 4,800 images of common objects, and CCFs were obtained for 68 categories spanning subordinate, basic, and superordinate levels in a category hierarchy. When participants searched for these same categories, targets cued at the subordinate level were preferentially fixated, but fixated targets were verified faster when they followed a basic-level cue. The subordinate-level advantage in guidance is explained by the number of target-category CCFs, a measure of category specificity that decreases with movement up the category hierarchy. The basic-level advantage in verification is explained by multiplying the number of CCFs by sibling distance, a measure of category distinctiveness. With this model, the visual representations of real-world object categories, each learned from the vast numbers of image exemplars accumulated throughout everyday experience, can finally be studied.
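The two indices can be made concrete with a toy calculation: guidance tracks category specificity (the number of CCFs), whereas verification tracks specificity multiplied by sibling distance (distinctiveness). The numbers below are invented simply to show how subordinate categories can win on guidance while basic-level categories win on verification.

```python
levels = {                       # (n_ccfs, sibling_distance) -- invented
    "subordinate (police car)": (30, 0.2),
    "basic (car)":              (20, 0.6),
    "superordinate (vehicle)":  (8,  0.4),
}
for name, (n_ccfs, sib_dist) in levels.items():
    guidance = n_ccfs                   # specificity
    verification = n_ccfs * sib_dist    # specificity x distinctiveness
    print(f"{name:27s} guidance~{guidance:3d} verification~{verification:5.1f}")
```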
When people cannot get what they want, they often satisfy their desire by consuming a substitute. Substitutes can originate from within the taxonomic category of the desired stimulus (i.e., within-category substitutes) or from a different taxonomic category that serves the same basic goal (i.e., cross-category substitutes). Both a store-brand chocolate (within-category substitute) and a granola bar (cross-category substitute), for example, can serve as substitutes for gourmet chocolate. Here, we found that people believe that within-category substitutes, which are more similar to desired stimuli, will more effectively satisfy their cravings than will cross-category substitutes (Experiments 1, 2a, and 2b). However, because within-category substitutes are more similar than cross-category substitutes to desired stimuli, they are more likely to evoke an unanticipated negative contrast effect. As a result, unless substitutes are equivalent in quality to the desired stimulus, cross-category substitutes more effectively satisfy cravings for the desired stimulus (Experiments 3 and 4).
Do people know when, or whether, they have made a conscious choice? Here, we explore the possibility that choices can seem to occur before they are actually made. In two studies, participants were asked to quickly choose from a set of options before a randomly selected option was made salient. Even when they believed that they had made their decision prior to this event, participants were significantly more likely than chance to report choosing the salient option when this option was made salient soon after the perceived time of choice. Thus, without participants’ awareness, a seemingly later event influenced choices that were experienced as occurring at an earlier time. These findings suggest that, like certain low-level perceptual experiences, the experience of choice is susceptible to "postdictive" influence and that people may systematically overestimate the role that consciousness plays in their chosen behavior.
Associative activation is commonly assumed to rely on associative strength, such that if A is strongly associated with B, B is activated whenever A is activated. We challenged this assumption by examining whether the activation of associations is state dependent. In three experiments, subjects performed a free-association task while the level of a simultaneous cognitive load was manipulated in various ways. In all three experiments, subjects in the low-load conditions provided significantly more diverse and original associations compared with subjects in the high-load conditions, who exhibited high consensus. In an additional experiment, we found increased semantic priming of immediate associations under high load and of remote associations under low load. Taken together, these findings imply that activation of associations is an exploratory process by default, but is narrowed to exploiting the more immediate associations under conditions of high load. We propose a potential mechanism for processing associations in exploration and in exploitation modes, and suggest clinical implications.
The perception of shape, it has been argued, also often entails the perception of time. A cookie missing a bite, for example, is seen as a whole cookie that was subsequently bitten. It has never been clear, however, whether such observations truly reflect visual processing. To explore this possibility, we tested whether the perception of history in static shapes could actually induce illusory motion perception. Observers watched a square change to a truncated form, with a "piece" of it missing, and they reported whether this change was sudden or gradual. When the contours of the missing piece suggested a type of historical "intrusion" (as when one pokes a finger into a lump of clay), observers actually saw that intrusion occur: The change appeared to be gradual even when it was actually sudden, in a type of transformational apparent motion. This provides striking phenomenological evidence that vision involves reconstructing causal history from static shapes.
Children from different socioeconomic backgrounds have differing abilities to delay gratification, and impoverished children have the greatest difficulties in doing so. In the present study, we examined the role of vagal tone in predicting the ability to delay gratification in both resource-rich and resource-poor environments. We derived hypotheses from evolutionary models of children’s conditional adaptation to proximal rearing contexts. In Study 1, we tested whether elevated vagal tone was associated with shorter delay of gratification in impoverished children. In Study 2, we compared the relative role of vagal tone across two groups of children, one that had experienced greater impoverishment and one that was relatively middle-class. Results indicated that in resource-rich environments, higher vagal tone was associated with longer delay of gratification. In contrast, high vagal tone in children living in resource-poor environments was associated with reduced delay of gratification. We interpret the results with an eye to evolutionary-developmental models of the function of children’s stress-response system and adaptive behavior across varying contexts of economic risk.
Children’s intelligence mind-sets (i.e., their beliefs about whether intelligence is fixed or malleable) robustly influence their motivation and learning. Yet, surprisingly, research has not linked parents’ intelligence mind-sets to their children’s. We tested the hypothesis that a different belief of parents—their failure mind-sets—may be more visible to children and therefore more prominent in shaping their beliefs. In Study 1, we found that parents can view failure as debilitating or enhancing, and that these failure mind-sets predict parenting practices and, in turn, children’s intelligence mind-sets. Study 2 probed more deeply into how parents display failure mind-sets. In Study 3a, we found that children can indeed accurately perceive their parents’ failure mind-sets but not their parents’ intelligence mind-sets. Study 3b showed that children’s perceptions of their parents’ failure mind-sets also predicted their own intelligence mind-sets. Finally, Study 4 showed a causal effect of parents’ failure mind-sets on their responses to their children’s hypothetical failure. Overall, parents who see failure as debilitating focus on their children’s performance and ability rather than on their children’s learning, and their children, in turn, tend to believe that intelligence is fixed rather than malleable.
We used functional MRI (fMRI) to assess neural representations of physics concepts (momentum, energy, etc.) in juniors, seniors, and graduate students majoring in physics or engineering. Our goal was to identify the underlying neural dimensions of these representations. Using factor analysis to reduce the number of dimensions of activation, we obtained four physics-related factors that were mapped to sets of voxels. The four factors were interpretable as causal motion visualization, periodicity, algebraic form, and energy flow. The individual concepts were identifiable from their fMRI signatures with a mean rank accuracy of .75 using a machine-learning (multivoxel) classifier. Furthermore, there was commonality in participants’ neural representation of physics; a classifier trained on data from all but one participant identified the concepts in the left-out participant (mean accuracy = .71 across all nine participant samples). The findings indicate that abstract scientific concepts acquired in an educational setting evoke activation patterns that are identifiable and common, indicating that science education builds abstract knowledge using inherent, repurposed brain systems.
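The cross-participant test described above follows a leave-one-subject-out scheme: train a classifier on every participant but one, then decode the left-out participant's patterns. A sketch on simulated "activation patterns" (with rank accuracy simplified to plain accuracy), using scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n_subj, n_concepts, n_vox = 9, 4, 50
prototypes = rng.normal(0, 1, (n_concepts, n_vox))    # shared concept patterns
X = np.vstack([prototypes + rng.normal(0, 1.0, prototypes.shape)
               for _ in range(n_subj)])               # noisy per-subject copies
y = np.tile(np.arange(n_concepts), n_subj)
subj = np.repeat(np.arange(n_subj), n_concepts)

scores = []
for s in range(n_subj):                               # leave one subject out
    clf = LogisticRegression(max_iter=1000).fit(X[subj != s], y[subj != s])
    scores.append(clf.score(X[subj == s], y[subj == s]))
print(f"mean cross-participant accuracy = {np.mean(scores):.2f}")
```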
A strong predisposition to engage in sexual intercourse likely evolved in humans because sex is crucial to reproduction. Given that meeting interpersonal preferences tends to promote positive relationship evaluations, sex within a relationship should be positively associated with relationship satisfaction. Nevertheless, prior research has been inconclusive in demonstrating such a link, with longitudinal and experimental studies showing no association between sexual frequency and relationship satisfaction. Crucially, though, all prior research has utilized explicit reports of satisfaction, which reflect deliberative processes that may override the more automatic implications of phylogenetically older evolved preferences. Accordingly, capturing the implications of sexual frequency for relationship evaluations may require implicit measurements that bypass deliberative reasoning. Consistent with this idea, one cross-sectional and one 3-year study of newlywed couples revealed a positive association between sexual frequency and automatic partner evaluations but not explicit satisfaction. These findings highlight the importance of automatic measurements to understanding interpersonal relationships.
Theoretical models distinguish two decision-making strategies that have been formalized in reinforcement-learning theory. A model-based strategy leverages a cognitive model of potential actions and their consequences to make goal-directed choices, whereas a model-free strategy evaluates actions based solely on their reward history. Research in adults has begun to elucidate the psychological mechanisms and neural substrates underlying these learning processes and factors that influence their relative recruitment. However, the developmental trajectory of these evaluative strategies has not been well characterized. In this study, children, adolescents, and adults performed a sequential reinforcement-learning task that enabled estimation of model-based and model-free contributions to choice. Whereas a model-free strategy was apparent in choice behavior across all age groups, a model-based strategy was absent in children, became evident in adolescents, and strengthened in adults. These results suggest that recruitment of model-based valuation systems represents a critical cognitive component underlying the gradual maturation of goal-directed behavior.
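The standard way to estimate the two contributions in tasks of this kind is a hybrid model in which a weight w mixes model-based values (planned through a transition model) with model-free values (learned from reward history); fitting w per participant indexes reliance on each strategy. A minimal sketch with illustrative parameters:

```python
import numpy as np

alpha, beta, w = 0.3, 4.0, 0.5    # learning rate, inverse temperature, MB weight
q_mf = np.zeros(2)                # model-free values of first-stage actions
q_stage2 = np.array([0.6, 0.4])   # learned values of the two second stages
T = np.array([[0.7, 0.3],         # transition model: action -> second stage
              [0.3, 0.7]])

q_mb = T @ q_stage2                        # model-based: plan via the model
q_hybrid = w * q_mb + (1 - w) * q_mf       # weighted mixture of strategies
p_choose = np.exp(beta * q_hybrid)
p_choose /= p_choose.sum()                 # softmax choice rule

choice = int(np.random.default_rng(3).random() < p_choose[1])
reward = 1.0                               # pretend outcome of this trial
q_mf[choice] += alpha * (reward - q_mf[choice])  # model-free TD update
print(p_choose, q_mf)
```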
Intuitively, how you feel about potential outcomes will determine your decisions. Indeed, an implicit assumption in one of the most influential theories in psychology, prospect theory, is that feelings govern choice. Surprisingly, however, very little is known about the rules by which feelings are transformed into decisions. Here, we specified a computational model that used feelings to predict choices. We found that this model predicted choice better than existing value-based models, showing a unique contribution of feelings to decisions, over and above value. Similar to the value function in prospect theory, our feeling function showed diminished sensitivity to outcomes as value increased. However, loss aversion in choice was explained by an asymmetry in how feelings about losses and gains were weighted when making a decision, not by an asymmetry in the feelings themselves. The results provide new insights into how feelings are utilized to reach a decision.
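The contrast with prospect theory can be sketched directly: in prospect theory, loss aversion lives inside the value function, whereas in the feeling-based account both gains and losses produce symmetric (if compressed) feelings, and the asymmetry enters as a decision weight on feelings about losses. All functional forms and parameters below are illustrative.

```python
import numpy as np

def value_pt(x, alpha=0.88, lam=2.25):
    """Prospect-theory value: loss aversion built into the function."""
    return x**alpha if x >= 0 else -lam * (-x)**alpha

def feeling(x, alpha=0.88):
    """Feelings: diminishing sensitivity, but no built-in asymmetry."""
    return np.sign(x) * abs(x)**alpha

def p_accept(gain, loss, w_loss=2.0, temp=1.0):
    """Choice weights feelings about losses more heavily than gains."""
    net = feeling(gain) + w_loss * feeling(-loss)
    return 1 / (1 + np.exp(-temp * net))

print(value_pt(10), value_pt(-10))   # asymmetric values
print(feeling(10), feeling(-10))     # symmetric feelings
print(p_accept(gain=10, loss=10))    # asymmetry re-emerges in choice
```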
How do people get attention to operate at peak efficiency in high-pressure situations? We tested the hypothesis that the general mechanism that allows this is the maintenance of multiple target representations in working and long-term memory. We recorded subjects’ event-related potentials (ERPs) indexing the working memory and long-term memory representations used to control attention while performing visual search. We found that subjects used both types of memories to control attention when they performed the visual search task with a large reward at stake, or when they were cued to respond as fast as possible. However, under normal circumstances, one type of target memory was sufficient for slower task performance. The use of multiple types of memory representations appears to provide converging top-down control of attention, allowing people to step on the attentional accelerator in a variety of high-pressure situations.
When faced with risky decisions, people typically choose to diversify their choices by allocating resources across a variety of options and thus avoid putting "all their eggs in one basket." The current research revealed that this tendency is reversed when people face an important cue to mating-related risk: skew in the operational sex ratio, or the ratio of men to women in the local environment. Counter to the typical strategy of choice diversification, findings from four studies demonstrated that the presence of romantically unfavorable sex ratios (those featuring more same-sex than opposite-sex individuals) led heterosexual people to diversify financial resources less and instead concentrate investment in high-risk/high-return options when making lottery, stock-pool, retirement-account, and research-funding decisions. These studies shed light on a key process by which people manage risks to mating success implied by unfavorable interpersonal environments. These choice patterns have important implications for mating behavior as well as other everyday forms of decision making.
Studies on first impressions from facial appearance have rapidly proliferated in the past decade. Almost all of these studies have relied on a single face image per target individual, and differences in impressions have been interpreted as originating in stable physiognomic differences between individuals. Here we show that images of the same individual can lead to different impressions, with within-individual image variance comparable to or exceeding between-individuals variance for a variety of social judgments (Experiment 1). We further show that preferences for images shift as a function of the context (e.g., selecting an image for online dating vs. a political campaign; Experiment 2), that preferences are predictably biased by the selection of the images (e.g., an image fitting a political campaign vs. a randomly selected image; Experiment 3), and that these biases are evident after extremely brief (40-ms) presentation of the images (Experiment 4). We discuss the implications of these findings for studies on the accuracy of first impressions.
Taking notes on laptops rather than in longhand is increasingly common. Many researchers have suggested that laptop note taking is less effective than longhand note taking for learning. Prior studies have primarily focused on students’ capacity for multitasking and distraction when using laptops. The present research suggests that even when laptops are used solely to take notes, they may still be impairing learning because their use results in shallower processing. In three studies, we found that students who took notes on laptops performed worse on conceptual questions than students who took notes longhand. We show that whereas taking more notes can be beneficial, laptop note takers’ tendency to transcribe lectures verbatim rather than processing information and reframing it in their own words is detrimental to learning.
Prior research suggests that cultural groups vary on an overarching dimension of independent versus interdependent social orientation, with European Americans being more independent, or less interdependent, than Asians. Drawing on recent evidence suggesting that the dopamine D4 receptor gene (DRD4) plays a role in modulating cultural learning, we predicted that carriers of DRD4 polymorphisms linked to increased dopamine signaling (7- or 2-repeat alleles) would show higher levels of culturally dominant social orientations, compared with noncarriers. European Americans and Asian-born Asians (total N = 398) reported their social orientation on multiple scales. They were also genotyped for DRD4. As in earlier work, European Americans were more independent, and Asian-born Asians more interdependent. This cultural difference was significantly more pronounced for carriers of the 7- or 2-repeat alleles than for noncarriers. Indeed, no cultural difference was apparent among the noncarriers. Implications for potential coevolution of genes and culture are discussed.
The perception of speech is notably malleable in adults, yet alterations in perception seem to have little impact on speech production. However, we hypothesized that speech perceptual training might immediately influence speech motor learning. To test this, we paired a speech perceptual-training task with a speech motor-learning task. Subjects performed a series of perceptual tests designed to measure and then manipulate the perceptual distinction between the words head and had. Subjects then produced head with the sound of the vowel altered in real time so that they heard themselves through headphones producing a word that sounded more like had. In support of our hypothesis, the amount of motor learning in response to the voice alterations depended on the perceptual boundary acquired through perceptual training. The studies show that plasticity in adults’ speech perception can have immediate consequences for speech production in the context of speech learning.
The human mind tends to excessively discount the value of delayed rewards relative to immediate ones, and it is thought that "hot" affective processes drive desires for short-term gratification. Supporting this view, recent findings demonstrate that sadness exacerbates financial impatience even when the sadness is unrelated to the economic decision at hand. Such findings might reinforce the view that emotions must always be suppressed to combat impatience. But if emotions serve adaptive functions, then certain emotions might be capable of reducing excessive impatience for delayed rewards. We found evidence supporting this alternative view. Specifically, we found that (a) the emotion gratitude reduces impatience even when real money is at stake, and (b) the effects of gratitude are differentiable from those of the more general positive state of happiness. These findings challenge the view that individuals must tamp down affective responses through effortful self-regulation to reach more patient and adaptive economic decisions.
We investigated how literacy modifies one of the mechanisms of the visual system that is essential for efficient reading: flexible position coding. To do so, we focused on the abilities of literates and illiterates to compare two-dimensional strings of letters (Experiment 1) and symbols (Experiment 2) in which the positions of characters had been manipulated. Results from two perceptual matching experiments revealed that literates were sensitive to alterations in characters’ within-string position and identity, whereas illiterates were almost blind to these changes. We concluded that letter-position coding is a mechanism that emerges during literacy acquisition and that the recognition of sequences of objects is highly modulated by reading skills. These data offer new insights about the manner in which reading acquisition shapes the visual system by making it highly sensitive to the internal structure of sequences of characters.
Forgiveness is considered to play a key role in the maintenance of social relationships, the avoidance of unnecessary conflict, and the ability to move forward with one’s life. But why is it that some people find it easier to forgive and forget than others? In the current study, we explored the supposed relationship between forgiveness and forgetting. In an initial session, 30 participants imagined that they were the victim in a series of hypothetical incidents and indicated whether or not they would forgive the transgressor. Following a standard think/no-think procedure, in which participants were trained to think or not to think about some of these incidents, more forgetting was observed for incidents that had been forgiven following no-think instructions compared with either think or baseline instructions. In contrast, no such forgetting effects emerged for incidents that had not previously been forgiven. These findings have implications for goal-directed forgetting and the relationship between forgiveness and memory.
When people are faced with opinions different from their own, they often revise their own opinions to match those held by other people. This is known as the social-conformity effect. Although the immediate impact of social influence on people’s decision making is well established, it is unclear whether this reflects a transient capitulation to public opinion or a more enduring change in privately held views. In an experiment using a facial-attractiveness rating task, we asked participants to rate each face; after providing their rating, they were informed of the rating given by a peer group. They then rerated the same faces after 1, 3, or 7 days or 3 months. Results show that individuals’ initial judgments were altered by the differing opinions of other people for no more than 3 days. Our findings suggest that because the social-conformity effect lasts several days, it reflects a short-term change in privately held views rather than transient public compliance.
In the study reported here, data from implicit and behavioral choice measures did not support sexual economics theory’s (SET’s) central tenet that women view female sexuality as a commodity. Instead, men endorsed sexual exchange more than women did, which supports the idea that SET is a vestige of patriarchy. Further, men’s sexual advice, more than women’s, enforced the sexual double standard (i.e., men encouraged men more than women to have casual sex)—a gender difference that was mediated by hostile sexism, but also by men’s greater implicit investment in sexual economics. That is, men were more likely to suppress female sexuality because they resisted female empowerment and automatically associated sex with money more than women did. It appears that women are not invested in sexual economics, but rather, men are invested in patriarchy, even when it means raising the price of sexual relations.
A large body of evidence supports the importance of focused attention for encoding and task performance. Yet young children with immature regulation of focused attention are often placed in elementary-school classrooms containing many displays that are not relevant to ongoing instruction. We investigated whether such displays can affect children’s ability to maintain focused attention during instruction and to learn the lesson content. We placed kindergarten children in a laboratory classroom for six introductory science lessons, and we experimentally manipulated the visual environment in the classroom. Children were more distracted by the visual environment, spent more time off task, and demonstrated smaller learning gains when the walls were highly decorated than when the decorations were removed.
For an organism to perceive coherent and unified objects, its visual system must bind color and shape features into integrated color-shape representations in memory. However, the origins of this ability have not yet been established. To examine whether newborns can build an integrated representation of the first object they see, I raised newly hatched chicks (Gallus gallus) in controlled-rearing chambers that contained a single virtual object. This object rotated continuously, revealing a different color and shape combination on each of its two faces. Chicks were able to build an integrated representation of this object. For example, they reliably distinguished an object defined by a purple circle and yellow triangle from an object defined by a purple triangle and yellow circle. This result shows that newborns can begin binding color and shape features into integrated representations at the onset of their experience with visual objects.
We analyzed the microstructure of child-adult interaction during naturalistic, daylong, automatically labeled audio recordings (13,836 hr total) of children (8- to 48-month-olds) with and without autism. We found that an adult was more likely to respond when the child’s vocalization was speech related rather than not speech related. In turn, a child’s vocalization was more likely to be speech related if the child’s previous speech-related vocalization had received an immediate adult response rather than no response. Taken together, these results are consistent with the idea that there is a social feedback loop between child and caregiver that promotes speech development. Although this feedback loop applies in both typical development and autism, children with autism produced proportionally fewer speech-related vocalizations, and the responses they received were less contingent on whether their vocalizations were speech related. We argue that such differences will diminish the strength of the social feedback loop and have cascading effects on speech development over time. Differences related to socioeconomic status are also reported.
Hormonal fluctuation across the menstrual cycle explains temporal variation in women’s judgment of the attractiveness of members of the opposite sex. Use of hormonal contraceptives could therefore influence both initial partner choice and, if contraceptive use subsequently changes, intrapair dynamics. Associations between hormonal contraceptive use and relationship satisfaction may thus be best understood by considering whether current use is congruent with use when relationships formed, rather than by considering current use alone. In the study reported here, we tested this congruency hypothesis in a survey of 365 couples. Controlling for potential confounds (including relationship duration, age, parenthood, and income), we found that congruency in current and previous hormonal contraceptive use, but not current use alone, predicted women’s sexual satisfaction with their partners. Congruency was not associated with women’s nonsexual satisfaction or with the satisfaction of their male partners. Our results provide empirical support for the congruency hypothesis and suggest that women’s sexual satisfaction is influenced by changes in partner preference associated with change in hormonal contraceptive use.
The parental caregiving motivational system leads people to behave selflessly. However, given that the purpose of this motivation is the protection of close kin, it might also lead to aggression toward distant, threatening others. In the present studies, we wished to investigate the effects of behaviorally activating the caregiving motivational system on out-group bias. On the basis of previous work in behavioral ecology, we predicted that activation of the caregiving system would enhance bias against out-groups whenever their members posed a salient threat. This prediction was confirmed in three studies (total N = 866) across different populations, manipulations, and measures. We discuss the possible importance of continued research into the behavioral consequences of caregiving salience.
The distinction between access consciousness and phenomenal consciousness is a subject of intensive debate. According to one view, visual experience overflows the capacity of the attentional and working memory system: We see more than we can report. According to the opposed view, this perceived richness is an illusion—we are aware only of information that we can subsequently report. This debate remains unresolved because of the inevitable reliance on report, which is limited in capacity. To bypass this limitation, this study utilized color diversity—a unique summary statistic—which is sensitive to detailed visual information. Participants were shown a Sperling-like array of colored letters, one row of which was precued. After reporting a letter from the cued row, participants estimated the color diversity of the noncued rows. Results showed that people could estimate the color diversity of the noncued array without a cost to letter report, which suggests that color diversity is registered automatically, outside focal attention, and without consuming additional working memory resources.
How do people sustain resources for the benefit of individuals and communities and avoid the tragedy of the commons, in which shared resources become exhausted? In the present study, we examined the role of serotonin activity and social norms in the management of depletable resources. Healthy adults, alongside social partners, completed a multiplayer resource-dilemma game in which they repeatedly harvested from a partially replenishable monetary resource. Dietary tryptophan depletion, leading to reduced serotonin activity, was associated with aggressive harvesting strategies and disrupted use of the social norms given by distributions of other players’ harvests. Tryptophan-depleted participants more frequently exhausted the resource completely and also accumulated fewer rewards than participants who were not tryptophan depleted. Our findings show that rank-based social comparisons are crucial to the management of depletable resources, and that serotonin mediates responses to social norms.
Being objectively close to or far from a place changes how people perceive the location of that place in a subjective, psychological sense. In the six studies reported here, we investigated whether people’s spatial orientation (defined as moving toward or away from a place) produces similar effects, specifically influencing psychological closeness in each of its forms (i.e., spatial, temporal, probabilistic, and social distance). Orientation influenced subjective spatial distance at various levels of objective distance (Study 1), regardless of the direction people were facing (Study 2). In addition, when spatially oriented toward, rather than away from, a particular place, participants felt that events there had occurred more recently (Studies 3a and 3b) and that events there would be more likely to occur (Study 4). Finally, participants felt more similar to people who were spatially oriented toward them than to people who were spatially oriented away from them (Study 5). Our investigation broadens the study of psychological distance from static spatial locations to dynamically moving points in space.
Individual differences in fixation duration are considered a reliable measure of attentional control in adults. However, the degree to which individual differences in fixation duration in infancy (0–12 months) relate to temperament and behavior in childhood is largely unknown. In the present study, data were examined from 120 infants (mean age = 7.69 months, SD = 1.90) who previously participated in an eye-tracking study. At follow-up, parents completed age-appropriate questionnaires about their child’s temperament and behavior (mean age of children = 41.59 months, SD = 9.83). Mean fixation duration in infancy was positively associated with effortful control (β = 0.20, R2 = .02, p = .04) and negatively with surgency (β = –0.37, R2 = .07, p = .003) and hyperactivity-inattention (β = –0.35, R2 = .06, p = .005) in childhood. These findings suggest that individual differences in mean fixation duration in infancy are linked to attentional and behavioral control in childhood.
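To make the scale of these associations concrete, the following minimal Python sketch fits the kind of standardized regression summarized above, using simulated data. The variable names and the data-generating step are hypothetical; only the sample size and the target coefficient for effortful control echo the abstract.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120  # sample size reported in the abstract

# Hypothetical simulated data: infant mean fixation duration and a
# childhood outcome such as effortful control.
fixation = rng.standard_normal(n)
effortful_control = 0.20 * fixation + rng.standard_normal(n)

# Standardizing both variables makes the slope a standardized beta,
# matching the way the abstract reports its coefficients.
def z(v):
    return (v - v.mean()) / v.std(ddof=1)

model = sm.OLS(z(effortful_control), sm.add_constant(z(fixation))).fit()
print(model.params[1])   # standardized beta
print(model.rsquared)    # R-squared
print(model.pvalues[1])  # p-value for the slope
```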
Having a purpose in life has been cited consistently as an indicator of healthy aging for several reasons, including its potential for reducing mortality risk. In the current study, we sought to extend previous findings by examining whether purpose in life promotes longevity across the adult years, using data from the longitudinal Midlife in the United States (MIDUS) sample. Proportional-hazards models demonstrated that purposeful individuals lived longer than their counterparts did during the 14 years after the baseline assessment, even when controlling for other markers of psychological and affective well-being. Moreover, these longevity benefits did not appear to be conditional on the participants’ age, how long they lived during the follow-up period, or whether they had retired from the workforce. In other words, having a purpose in life appears to widely buffer against mortality risk across the adult years.
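The proportional-hazards analysis named above can be sketched with the third-party lifelines package; everything below is simulated. The 14-year follow-up window and the direction of the purpose effect follow the abstract, while the sample size, covariates, and data-generating model are invented for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter  # survival-analysis package

rng = np.random.default_rng(1)
n = 6000  # invented sample size

purpose = rng.normal(0, 1, n)        # purpose-in-life score (z-scored)
age = rng.uniform(25, 75, n)         # one of the control variables

# Toy survival model: higher purpose lowers the mortality hazard.
hazard = np.exp(-0.2 * purpose + 0.03 * (age - 50))
time = rng.exponential(14 / hazard)  # years until death (toy model)

df = pd.DataFrame({
    "duration": np.minimum(time, 14),  # censor at 14-year follow-up
    "event": (time < 14).astype(int),  # 1 = died during follow-up
    "purpose": purpose,
    "age": age,
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()  # hazard ratio below 1 for purpose means longer life
```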
In this article, we describe a phenomenon we discovered while conducting experiments on walking and reaching. We asked university students to pick up either of two buckets, one to the left of an alley and one to the right, and to carry the selected bucket to the alley’s end. In most trials, one of the buckets was closer to the end point. We emphasized choosing the easier task, expecting participants to prefer the bucket that would be carried a shorter distance. Contrary to our expectation, participants chose the bucket that was closer to the start position, carrying it farther than the other bucket. On the basis of results from nine experiments and participants’ reports, we concluded that this seemingly irrational choice reflected a tendency to pre-crastinate, a term we introduce to refer to the hastening of subgoal completion, even at the expense of extra physical effort. Other tasks also reveal this preference, which we ascribe to the desire to reduce working memory loads.
Despite widespread interest in narcissism, relatively little is known about the conditions that encourage or dampen it. Drawing on research showing that macroenvironmental conditions in emerging adulthood can leave a lasting imprint on attitudes and behaviors, I argue that people who enter adulthood during recessions are less likely to be narcissistic later in life than those who come of age in more prosperous times. Using large samples of American adults, Studies 1 and 2 showed that people who entered adulthood during worse economic times endorsed fewer narcissistic items as older adults. Study 3 extended these findings to a behavioral manifestation of narcissism: the relative pay of CEOs. CEOs who came of age in worse economic times paid themselves less relative to other top executives in their firms. These findings suggest that macroenvironmental experiences at a critical life stage can have lasting implications for how unique, special, and deserving people believe themselves to be.
How efficiently do people integrate the disconnected image fragments that fall on their eyes when they view partly occluded objects? In the present study, I used a psychophysical summation-at-threshold technique to address this question by measuring discrimination performance with both isolated and combined features of physically fragmented but perceptually complete objects. If visual completion promotes superior integration efficiency, performance with a visually completed object should exceed what would be expected from performance with the individual object parts shown in isolation. Contrary to this prediction, results showed that discrimination performance with both static and moving versions of physically fragmented but perceptually complete objects was significantly worse than would be expected from performance with their constituent parts. These results present a challenge for future theories of visual completion.
This study examined whether, as mothers’ depressive symptoms increase, their expressions of negative emotion to children increasingly reflect aversion sensitivity and motivation to minimize ongoing stress or discomfort. In multiple interactions over 2 years, negative affect expressed by 319 mothers and their children was observed across variations in mothers’ depressive symptoms, the aversiveness of children’s immediate behavior, and observed differences in children’s general negative reactivity. As expected, depressive symptoms predicted reduced maternal negative reactivity when child behavior was low in aversiveness, particularly with children who were high in negative reactivity. Depressive symptoms predicted high negative reactivity and steep increases in negative reactivity as the aversiveness of child behavior increased, particularly when high and continued aversiveness from the child was expected (i.e., children were high in negative reactivity). The findings are consistent with the proposal that deficits in parenting competence as depressive symptoms increase reflect aversion sensitivity and motivation to avoid conflict and suppress children’s aversive behavior.
The ability to control desires, whether for food, sex, or drugs, enables people to function successfully within society. Yet, in tempting situations, strong impulses often result in self-control failure. Although many triggers of self-control failure have been identified, the question remains as to why some individuals are more likely than others to give in to temptation. In this study, we combined functional neuroimaging and experience sampling to determine if there are brain markers that predict whether people act on their food desires in daily life. We examined food-cue-related activity in the nucleus accumbens (NAcc), as well as activity associated with response inhibition in the inferior frontal gyrus (IFG). Greater NAcc activity was associated with greater likelihood of self-control failures, whereas IFG activity supported successful resistance to temptations. These findings demonstrate an important role for the neural mechanisms underlying desire and self-control in people’s real-world experiences of temptations.
Speech is usually assumed to start with a clearly defined preverbal message, which provides a benchmark for self-monitoring and a robust sense of agency for one’s utterances. However, an alternative hypothesis states that speakers often have no detailed preview of what they are about to say, and that they instead use auditory feedback to infer the meaning of their words. In the experiment reported here, participants performed a Stroop color-naming task while we covertly manipulated their auditory feedback in real time so that they said one thing but heard themselves saying something else. Under ideal timing conditions, two thirds of these semantic exchanges went undetected by the participants, and in 85% of all nondetected exchanges, the inserted words were experienced as self-produced. These findings indicate that the sense of agency for speech has a strong inferential component, and that auditory feedback of one’s own voice acts as a pathway for semantic monitoring, potentially overriding other feedback loops.
To engage in goal-directed behavior, cognitive agents have to control the processing of task-relevant features in their environments. Although cognitive control is critical for performance in unpredictable task environments, it is currently unknown how it affects performance in highly structured and predictable environments. In the present study, we showed that, counterintuitively, top-down control can interfere with the otherwise automatic integration of statistical information in a predictable task environment, rendering behavior less efficient than it would have been without the attempt to control the flow of information. In other words, less can sometimes be more (in terms of cognitive control), especially if the environment provides sufficient information for the cognitive system to behave on autopilot based on automatic processes alone.
Early life stressors are associated with elevated inflammation, a key physiological risk factor for disease. However, the mechanisms by which early stress leads to inflammation remain largely unknown. Using a longitudinal data set, we examined smoking, alcohol consumption, and body mass index (BMI) as health-behavior pathways by which early adversity might lead to inflammation during young adulthood. Contemporaneously measured early adversity predicted increased BMI and smoking but not alcohol consumption, and these effects were partially accounted for by chronic stress in young adulthood. Higher BMI in turn predicted higher levels of soluble tumor necrosis factor receptor type II (sTNF-RII) and C-reactive protein (CRP), and smoking predicted elevated sTNF-RII. These findings establish that early adversity contributes to inflammation in part through ongoing stress and maladaptive health behavior. Given that maladaptive health behaviors portend inflammation in young adulthood, they serve as promising targets for interventions designed to prevent the negative consequences of early adversity.
This study examined whether national income can have effects on happiness, or subjective well-being (SWB), over and above those of personal income. To assess the incremental effects of national income on SWB, we conducted cross-sectional multilevel analysis on data from 838,151 individuals in 158 nations. Although greater personal income was consistently related to higher SWB, we found that national income was a boon to life satisfaction but a bane to daily feelings of well-being; individuals in richer nations experienced more worry and anger on average. We also found moderating effects: The income-SWB relationship was stronger at higher levels of national income. This result might be explained by cultural norms, as money is valued more in richer nations. The SWB of more residentially mobile individuals was less affected by national income. Overall, our results suggest that the wealth of the nation one resides in has consequences for one’s happiness.
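Cross-sectional multilevel analysis of this kind is typically implemented with a random intercept for each nation, so that personal income (an individual-level predictor) and national income (a nation-level predictor) enter the same model. Here is a heavily scaled-down Python sketch with simulated data; all sizes and coefficients are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_nations, per_nation = 50, 200  # toy version of the 158-nation design

nation = np.repeat(np.arange(n_nations), per_nation)
national_income = np.repeat(rng.normal(0, 1, n_nations), per_nation)
personal_income = rng.normal(0, 1, n_nations * per_nation)
nation_effect = np.repeat(rng.normal(0, 0.5, n_nations), per_nation)

# Toy outcome: both income levels raise life satisfaction here.
swb = (0.3 * personal_income + 0.2 * national_income
       + nation_effect + rng.normal(0, 1, n_nations * per_nation))

df = pd.DataFrame({"swb": swb, "personal_income": personal_income,
                   "national_income": national_income, "nation": nation})

# A random intercept per nation separates within-nation effects of
# personal income from the between-nation effect of national income.
fit = smf.mixedlm("swb ~ personal_income + national_income",
                  data=df, groups=df["nation"]).fit()
print(fit.summary())
```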
A recent wave of studies—more than 100 conducted over the last decade—has shown that exerting effort at controlling impulses or behavioral tendencies leaves a person depleted and less able to engage in subsequent rounds of regulation. Regulatory depletion is thought to play an important role in everyday problems (e.g., excessive spending, overeating) as well as psychiatric conditions, but its neurophysiological basis is poorly understood. Using a placebo-controlled, double-blind design, we demonstrated that the psychostimulant methylphenidate (commonly known as Ritalin), a catecholamine reuptake blocker that increases dopamine and norepinephrine at the synaptic cleft, fully blocks effort-induced depletion of regulatory control. Spectral analysis of trial-by-trial reaction times revealed that the effects of methylphenidate on regulatory depletion were specific to the slow-4 frequency band. This band is associated with the operation of resting-state brain networks that produce mind wandering, raising the possibility of connections between our results and recent brain-network-based models of control over attention.
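For readers unfamiliar with band-limited spectral analysis, the sketch below shows one way such an analysis of trial-by-trial reaction times could be set up in Python. The assumption that trials occur at a fixed pace of one every 2 s, the white-noise series, and all parameter values are invented; the slow-4 range (roughly 0.027 to 0.073 Hz) is the band named in the abstract.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(3)

# Hypothetical trial-by-trial reaction-time series (ms). Treating trials
# as evenly spaced at one per 2 s gives a sampling rate of 0.5 Hz.
fs = 0.5
rt = 500 + 50 * rng.standard_normal(1024)

# Welch power spectral density of the mean-centered series.
freqs, power = welch(rt - rt.mean(), fs=fs, nperseg=256)

# Integrate power over the slow-4 band (roughly 0.027-0.073 Hz).
band = (freqs >= 0.027) & (freqs <= 0.073)
slow4_power = power[band].sum() * (freqs[1] - freqs[0])
print(f"slow-4 band power: {slow4_power:.1f} ms^2")
```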
A burgeoning literature has established that exposure to atrocities committed by in-group members triggers moral-disengagement strategies. There is little research, however, on how such moral disengagement affects the degree to which conversations shape people’s memories of the atrocities and subsequent justifications for those atrocities. We built on the finding that a speaker’s selective recounting of past events can result in retrieval-induced forgetting of related, unretrieved memories for both the speaker and the listener. In the present study, we investigated whether, for American participants, listening to the selective remembering of atrocities committed by American soldiers (in-group condition) or Afghan soldiers (out-group condition) resulted in the retrieval-induced forgetting of unmentioned justifications. Consistent with a motivated-recall account, results showed that the way people’s memories are shaped by selective discussions of atrocities depends on group-membership status.
Recent Web apps have spurred excitement around the prospect of achieving speed reading by eliminating eye movements (i.e., with rapid serial visual presentation, or RSVP, in which words are presented briefly one at a time and sequentially). Our experiment using a novel trailing-mask paradigm contradicts these claims. Subjects read normally or while the display of text was manipulated such that each word was masked once the reader’s eyes moved past it. This manipulation created a scenario similar to RSVP: The reader could read each word only once; regressions (i.e., rereadings of words), which are a natural part of the reading process, were functionally eliminated. Crucially, the inability to regress affected comprehension negatively. Furthermore, this effect was not confined to ambiguous sentences. These data suggest that regressions contribute to the ability to understand what one has read and call into question the viability of speed-reading apps that eliminate eye movements (e.g., those that use RSVP).
People often talk about musical pitch using spatial metaphors. In English, for instance, pitches can be "high" or "low" (i.e., height-pitch association), whereas in other languages, pitches are described as "thin" or "thick" (i.e., thickness-pitch association). According to results from psychophysical studies, metaphors in language can shape people’s nonlinguistic space-pitch representations. But does language establish mappings between space and pitch in the first place, or does it only modify preexisting associations? To find out, we tested 4-month-old Dutch infants’ sensitivity to height-pitch and thickness-pitch mappings using a preferential-looking paradigm. The infants looked significantly longer at cross-modally congruent stimuli for both space-pitch mappings, which indicates that infants are sensitive to these associations before language acquisition. The early presence of space-pitch mappings means that these associations do not originate from language. Instead, language builds on preexisting mappings, changing them gradually via competitive associative learning. Space-pitch mappings that are language-specific in adults develop from mappings that may be universal in infants.
Most daily routines are determined by habits. However, the experienced ease and automaticity of habit formation and execution come at a cost when habits that are no longer appropriate must be overcome. So far, proactive and reactive control strategies that prevent inappropriate habit execution either by preparation or "on the fly" have been identified. Here, we present evidence for a third, retroactive control strategy. In two experiments using the list method of directed forgetting, the accessibility of newly learned and practiced stimulus-response rules was significantly reduced when participants were cued to forget the rules rather than to remember them. The results thus show that directed forgetting, so far observed and investigated only for episodic memory traces, can also be applied to habits. The findings further emphasize the adaptive value of forgetting and can be taken as evidence of a retroactive strategy of habit control.
Previous research has revealed a moderate and positive correlation between procrastination and impulsivity. However, little is known about why these two constructs are related. In the present study, we used behavior-genetics methodology to test three predictions derived from an evolutionary account that postulates that procrastination arose as a by-product of impulsivity: (a) Procrastination is heritable, (b) the two traits share considerable genetic variation, and (c) goal-management ability is an important component of this shared variation. These predictions were confirmed. First, both procrastination and impulsivity were moderately heritable (46% and 49%, respectively). Second, although the two traits were separable at the phenotypic level (r = .65), they were not separable at the genetic level (rgenetic = 1.0). Finally, variation in goal-management ability accounted for much of this shared genetic variation. These results suggest that procrastination and impulsivity are linked primarily through genetic influences on the ability to use high-priority goals to effectively regulate actions.
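The study's heritability estimates come from formal biometric model fitting, but the underlying logic can be illustrated with Falconer's classic formula, which infers heritability from the gap between identical- and fraternal-twin correlations. The twin correlations below are invented, chosen only so the arithmetic lands near the reported 46%.

```python
# Hypothetical twin correlations for procrastination; the study itself
# fit formal behavior-genetic models rather than this shortcut.
r_mz = 0.46  # identical (monozygotic) twin correlation
r_dz = 0.23  # fraternal (dizygotic) twin correlation

h2 = 2 * (r_mz - r_dz)  # Falconer's formula: heritability
c2 = r_mz - h2          # shared-environment component
e2 = 1 - r_mz           # nonshared environment plus measurement error

print(f"h^2 = {h2:.2f}, c^2 = {c2:.2f}, e^2 = {e2:.2f}")
```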
The U.S. Census Bureau projects that racial minority groups will make up a majority of the U.S. national population in 2042, effectively creating a so-called majority-minority nation. In four experiments, we explored how salience of such racial demographic shifts affects White Americans’ political-party leanings and expressed political ideology. Study 1 revealed that making California’s majority-minority shift salient led politically unaffiliated White Americans to lean more toward the Republican Party and express greater political conservatism. Studies 2, 3a, and 3b revealed that making the changing national racial demographics salient led White Americans (regardless of political affiliation) to endorse conservative policy positions more strongly. Moreover, the results implicate group-status threat as the mechanism underlying these effects. Taken together, this work suggests that the increasing diversity of the nation may engender a widening partisan divide.
Although the semantic relationships among words have long been acknowledged as a crucial component of adult lexical knowledge, the ontogeny of lexical networks remains largely unstudied. To determine whether learners encode relationships among novel words, we trained 2-year-olds on four novel words that referred to four novel objects, which were grouped into two visually similar pairs. Participants then listened to repetitions of word pairs (in the absence of visual referents) that referred to objects that were either similar or dissimilar to each other. Toddlers listened significantly longer to word pairs referring to similar objects, which suggests that their representations of the novel words included knowledge about the similarity of the referents. A second experiment confirmed that toddlers can learn all four distinct words from the training regime, which suggests that the results from Experiment 1 reflected the successful encoding of referents. Together, these results show that toddlers encode the similarities among referents from their earliest exposures to new words.
People who feel embarrassed are often motivated to avoid social contact—that is, to hide their face. At the same time, they may be motivated to restore the positive image that has been tarnished by the embarrassing event (or, in other words, to restore the face lost in the event). Individuals can symbolically employ these coping strategies by choosing commercial products that literally either hide their face (e.g., sunglasses) or repair it (e.g., restorative cosmetics). However, the two coping strategies have different consequences. Although symbolically repairing one’s face eliminates aversive feelings of embarrassment and restores one’s willingness to engage in social activities, symbolically hiding one’s face has little impact.
In the ongoing debate about the efficacy of visual working memory for more than three items, a consensus has emerged that memory precision declines as memory load increases from one to three. Many studies have reported that memory precision seems to be worse for two items than for one. We argue that memory for two items appears less precise than that for one only because two items present observers with a correspondence challenge that does not arise when only one item is stored—the need to relate observations to their corresponding memory representations. In three experiments, we prevented correspondence errors in two-item trials by varying sample items along task-irrelevant but integral (as opposed to separable) dimensions. (Initial experiments with a classic sorting paradigm identified integral feature relationships.) In three memory experiments, our manipulation produced equally precise representations of two items and of one item.
The Google Books Ngram Viewer allows researchers to quantify culture across centuries by searching millions of books. This tool was used to test theory-based predictions about implications of an urbanizing population for the psychology of culture. Adaptation to rural environments prioritizes social obligation and duty, giving to other people, social belonging, religion in everyday life, authority relations, and physical activity. Adaptation to urban environments requires more individualistic and materialistic values; such adaptation prioritizes choice, personal possessions, and child-centered socialization in order to foster the development of psychological mindedness and the unique self. The Google Ngram Viewer generated relative frequencies of words indexing these values from the years 1800 to 2000 in American English books. As urban populations increased and rural populations declined, word frequencies moved in the predicted directions. Books published in the United Kingdom replicated this pattern. The analysis established long-term relationships between ecological change and cultural change, as predicted by the theory of social change and human development (Greenfield, 2009).
In this research, we examined the impact of physiological arousal on negotiation outcomes. Conventional wisdom and the prescriptive literature suggest that arousal should be minimized given its negative effect on negotiations, whereas prior research on misattribution of arousal suggests that arousal might polarize outcomes, either negatively or positively. In two experiments, we manipulated arousal and measured its effect on subjective and objective negotiation outcomes. Our results support the polarization effect. When participants had negative prior attitudes toward negotiation, arousal had a detrimental effect on outcomes, whereas when participants had positive prior attitudes toward negotiation, arousal had a beneficial effect on outcomes. These effects occurred because of the construal of arousal as negative or positive affect, respectively. Our findings have important implications not only for negotiation but also for research on misattribution of arousal, which has previously focused on the target of evaluation; the current research instead highlights the critical role of the perceiver.
Using archival and experimental data, we showed that vicarious defeats experienced by fans when their favorite football team loses lead them to consume less healthy food. On the Mondays following a Sunday National Football League (NFL) game, saturated-fat and food-calorie intake increase significantly in cities with losing teams, decrease in cities with winning teams, and remain at their usual levels in comparable cities without an NFL team or with an NFL team that did not play. These effects are greater in cities with the most committed fans, when the opponents are more evenly matched, and when the defeats are narrow. We found similar results when measuring the actual or intended food consumption of French soccer fans who had previously been asked to write about or watch highlights from victories or defeats of soccer teams. However, these unhealthy consequences of vicarious defeats disappear when supporters spontaneously self-affirm or are given the opportunity to do so.
In the present research, we examined the hypothesis that cues of social connectedness to a member of another social group can spark interest in the group’s culture, and that such interest, when freely enacted, contributes to reductions in intergroup prejudice. In two pilot studies and Experiment 1, we found that extant and desired cross-group friendships and cues of social connectedness to an out-group member predicted increased interest in the target group’s culture. In Experiments 2 and 3, we manipulated cues of social connectedness between non–Latino American participants and a Latino American (i.e., Mexican American) peer and whether participants freely worked with this peer on a Mexican cultural task. This experience reduced the participants’ implicit bias against Latinos, an effect that was mediated by increased cultural engagement, and, 6 months later in an unrelated context, improved intergroup outcomes (e.g., interest in interacting with Mexican Americans; Experiment 4). The Discussion section addresses the inter- and intragroup benefits of policies that encourage people to express and share diverse cultural interests in mainstream settings.
Distinguishing between living (animate) and nonliving (inanimate) things is essential for survival and successful reproduction. Animacy is widely recognized as a foundational dimension, appearing early in development, but its role in remembering is currently unknown. We report two studies suggesting that animacy is a critical mnemonic dimension and is one of the most important item dimensions ultimately controlling retention. Both studies show that animate words are more likely to be recalled than inanimate words, even after the stimulus classes have been equated along other mnemonically relevant dimensions (e.g., imageability and meaningfulness). Mnemonic "tunings" for animacy are easily predicted a priori by a functional-evolutionary analysis.
Humans and nonhuman animals share an approximate number system (ANS) that permits estimation and rough calculation of quantities without symbols. Recent studies show a correlation between the acuity of the ANS and performance in symbolic math throughout development and into adulthood, which suggests that the ANS may serve as a cognitive foundation for the uniquely human capacity for symbolic math. Such a proposition leads to the untested prediction that training aimed at improving ANS performance will transfer to improvement in symbolic-math ability. In the two experiments reported here, we showed that ANS training on approximate addition and subtraction of arrays of dots selectively improved symbolic addition and subtraction. This finding strongly supports the hypothesis that complex math skills are fundamentally linked to rudimentary preverbal quantitative abilities and provides the first direct evidence that the ANS and symbolic math may be causally related. It also raises the possibility that interventions aimed at the ANS could benefit children and adults who struggle with math.
Warnings that a promoted product can have adverse side effects (e.g., smoking cigarettes can cause cancer) should dampen the product’s allure. We predicted that with temporal distance (e.g., when an ad relates to future consumption or was viewed some time earlier), this common type of warning can have a worrisome alternative consequence: It can ironically boost the product’s appeal. Building on construal-level theory, we argue that this is because temporal distance evokes high-level construal, which deemphasizes side effects and emphasizes message trustworthiness. In four studies, we demonstrated this phenomenon. For example, participants could buy cigarettes or artificial sweeteners after viewing an ad promoting the product. Immediately afterward, the quantity that participants bought predictably decreased if the ad they saw included a warning about adverse side effects. With temporal distance (product to be delivered 3 months later, or 2 weeks after the ad was viewed), however, participants who had seen an ad noting the benefits of the product but warning of risky side effects bought more than those who had seen an ad noting only benefits.
Order and disorder are prevalent in both nature and culture, which suggests that each environment confers advantages for different outcomes. Three experiments tested the novel hypotheses that orderly environments lead people toward tradition and convention, whereas disorderly environments encourage breaking with tradition and convention—and that both settings can alter preferences, choice, and behavior. Experiment 1 showed that relative to participants in a disorderly room, participants in an orderly room chose healthier snacks and donated more money. Experiment 2 showed that participants in a disorderly room were more creative than participants in an orderly room. Experiment 3 showed a predicted crossover effect: Participants in an orderly room preferred an option labeled as classic, but those in a disorderly room preferred an option labeled as new. Whereas prior research on physical settings has shown that orderly settings encourage better behavior than disorderly ones, the current research tells a nuanced story of how different environments suit different outcomes.
Millions of people witnessed early, repeated television coverage of the September 11 (9/11), 2001, terrorist attacks and were subsequently exposed to graphic media images of the Iraq War. In the present study, we examined psychological- and physical-health impacts of exposure to these collective traumas. A U.S. national sample (N = 2,189) completed Web-based surveys 1 to 3 weeks after 9/11; a subsample (n = 1,322) also completed surveys at the initiation of the Iraq War. These surveys measured media exposure and acute stress responses. Posttraumatic stress symptoms related to 9/11 and physician-diagnosed health ailments were assessed annually for 3 years. Early 9/11- and Iraq War–related television exposure and frequency of exposure to war images predicted increased posttraumatic stress symptoms 2 to 3 years after 9/11. Exposure to 4 or more hr daily of early 9/11-related television and cumulative acute stress predicted increased incidence of health ailments 2 to 3 years later. These findings suggest that exposure to graphic media images may result in physical and psychological effects previously assumed to require direct trauma exposure.
Localization of tactile stimuli to the hand and digits is fundamental to somatosensory perception. However, little is known about the development or genetic bases of this ability in humans. We examined tactile localization in normally developing children, adolescents, and adults and in people with Williams syndrome (WS), a genetic disorder resulting in a wide range of severe visual-spatial deficits. Normally developing 4-year-olds made large stimulus-localization errors, sometimes across digits, but nevertheless their errors revealed a structured internal representation of the hand. In normally developing individuals, errors became exponentially smaller over age, reaching the adult level by adolescence. In contrast, people with WS showed large localization errors regardless of age and a significant proportion of cross-digit errors, a profile similar to that of normally developing 4-year-olds. Thus, tactile localization reflects internal organization of the hand even early in normal development, undergoes substantial development in normal children, and is susceptible to developmental, but not organizational, impairment under genetic deficit.
Physical activity enhances cognitive performance, yet individual variability in its effectiveness limits its widespread therapeutic application. Genetic differences might be one source of this variation. For example, carriers of the methionine-specifying (Met) allele of the brain-derived neurotrophic factor (BDNF) Val66Met polymorphism have reduced secretion of BDNF and poorer memory, yet physical activity increases BDNF levels. To determine whether the BDNF polymorphism moderated an association of physical activity with cognitive functioning among 1,032 midlife volunteers (mean age = 44.59 years), we evaluated participants’ performance on a battery of tests assessing memory, learning, and executive processes, and assessed their physical activity with the Paffenbarger Physical Activity Questionnaire. BDNF genotype interacted robustly with physical activity to affect working memory, but not other areas of cognitive functioning. In particular, greater levels of physical activity offset a deleterious effect of the Met allele on working memory performance. These findings suggest that physical activity can modulate domain-specific genetic (BDNF) effects on cognition.
Identifying the processes by which people remember to execute an intention at an appropriate moment (prospective memory) remains a fundamental theoretical challenge. According to one account, top-down attentional control is required to maintain activation of the intention, initiate intention retrieval, or support monitoring. A diverging account suggests that bottom-up, spontaneous retrieval can be triggered by cues that have been associated with the intention and that sustained attentional processes are not required. We used a specialized experimental design and functional MRI methods to selectively marshal and identify each process. Results revealed a clear dissociation. One prospective-memory task recruited sustained activity in attentional-control areas, such as the anterior prefrontal cortex; the other engaged purely transient activity in parietal and ventral brain regions associated with attentional capture, target detection, and episodic retrieval. These patterns provide critical evidence that there are two neural routes to prospective memory, with each route emerging under different circumstances.
In this study, we sought to elucidate both stable and changing factors in the longitudinal structure of neuroticism using a behavioral genetic twin design. We tested whether this structure is best accounted for by a trait-state, a trait-only, or a state-only model. In line with classic views on personality, our results revealed substantial trait and state components. The contributions of genetic and environmental influences on the trait component were nearly equal, whereas environmental influences on the state component were much stronger than genetic influences. Although the overall findings were similar for older and younger twins, genetic influences on the trait component were stronger than environmental influences in younger twins, whereas the opposite was found for older twins. The current findings help to elucidate how the complex interplay between genetic and environmental factors contributes to both stability and change in neuroticism.
The solicitation of charitable donations costs billions of dollars annually. Here, we introduce a virtually costless method for boosting charitable donations to a group of needy persons: merely asking donors to indicate a hypothetical amount for helping one of the needy persons before asking donors to decide how much to donate for all of the needy persons. We demonstrated, in both real fund-raisers and scenario-based research, that this simple unit-asking method greatly increases donations for the group of needy persons. Different from phenomena such as the foot-in-the-door and identifiable-victim effects, the unit-asking effect arises because donors are initially scope insensitive and subsequently scope consistent. The method applies to both traditional paper-based fund-raisers and increasingly popular Web-based fund-raisers and has implications for domains other than fund-raisers, such as auctions and budget proposals. Our research suggests that a subtle manipulation based on psychological science can generate a substantial effect in real life.
Humans can perceive depth when viewing with one eye, and even when viewing a two-dimensional picture of a three-dimensional scene. However, viewing a real scene with both eyes produces a more compelling three-dimensional experience of immersive space and tangible solid objects. A widely held belief is that this qualitative visual phenomenon (stereopsis) is a by-product of binocular vision. In the research reported here, we empirically established, for the first time, the qualitative characteristics associated with stereopsis and showed that they can occur for static two-dimensional pictures without binocular vision. Critically, we show that stereopsis is a measurable qualitative attribute and that its induction while viewing pictures is not consistent with standard explanations based on depth-cue conflict or the perception of greater depth magnitude. These results challenge the conventional understanding of the underlying cause, variation, and functional role of stereopsis.
People and societies seek to combat harmful events. However, because resources are limited, every wrong righted leaves another wrong unchecked. Responses must therefore be calibrated to the magnitude of the harm. One underappreciated factor that affects this calibration may be people’s oversensitivity to intent. Across a series of studies, people saw intended harms as worse than unintended harms, even though the two harms were identical. This harm-magnification effect occurred for both subjective and monetary estimates of harm, and it remained when participants were given incentives to be accurate. The effect was fully mediated by blame motivation. People may therefore focus on intentional harms to the neglect of unintentional (but equally damaging) harms.
Medical noncompliance is a major public-health problem. One potential source of this noncompliance is patient inertia. It has been hypothesized that one cause of patient inertia might be the status quo bias—which is the tendency to select the default choice among a set of options. To test this hypothesis, we created a laboratory analogue of the decision context that frequently occurs in situations involving patient inertia, and we examined whether participants would stay with a default option even when it was clearly inferior to other available options. Specifically, in Studies 1 and 2, participants were given the option to reduce their anxiety while waiting for an electric shock. When doing nothing was the status quo option, participants frequently did not select the option that would reduce their anxiety. In Study 3, we demonstrated a simple way to overcome status quo bias in a context relevant to patient inertia.
Researchers have shown that people often miss the occurrence of an unexpected yet salient event if they are engaged in a different task, a phenomenon known as inattentional blindness. However, demonstrations of inattentional blindness have typically involved naive observers engaged in an unfamiliar task. What about expert searchers who have spent years honing their ability to detect small abnormalities in specific types of images? We asked 24 radiologists to perform a familiar lung-nodule detection task. A gorilla, 48 times the size of the average nodule, was inserted in the last case that was presented. Eighty-three percent of the radiologists did not see the gorilla. Eye tracking revealed that the majority of those who missed the gorilla looked directly at its location. Thus, even expert searchers, operating in their domain of expertise, are vulnerable to inattentional blindness.
The ability to maintain the serial order of events is recognized as a major function of working memory. Although general models of working memory postulate a close link between working memory and attention, such a link has so far not been proposed specifically for serial-order working memory. The present study provided the first empirical demonstration of a direct link between serial order in verbal working memory and spatial selective attention. We show that the retrieval of later items of a sequence stored in working memory—compared with that of earlier items—produces covert attentional shifts toward the right. This observation suggests the conceptually surprising notion that serial-order working memory, even for nonspatially defined verbal items, draws on spatial attention.
In a number of domains, humans adopt a strategy of systematically reducing and minimizing a codified system of movement. One particularly interesting case is "marking" in dance, wherein the dancer performs an attenuated version of the choreography during rehearsal. This is ostensibly to save the dancer’s physical energy, but a number of considerations suggest that it may serve a cognitive function as well. In this study, we tested this embodied-cognitive-load hypothesis by manipulating whether dancers rehearsed by marking or by dancing "full out" and found that performance was superior in the dancers who had marked. This finding indicates that marking confers cognitive benefits during the rehearsal process, and it raises questions regarding the cognitive functions of other movement-reduction systems, such as whispering, gesturing, and subvocalizing. In addition, it has implications for a variety of topics in cognitive science, including embodied cognition and the nascent fields of dance and music cognition.
Four experiments tested the novel hypothesis that ritualistic behavior potentiates and enhances ensuing consumption—an effect found for chocolates, lemonade, and even carrots. Experiment 1 showed that participants who engaged in ritualized behavior, compared with those who did not, evaluated chocolate as more flavorful, valuable, and deserving of behavioral savoring. Experiment 2 demonstrated that random gestures do not boost consumption as much as ritualistic gestures do. It further showed that a delay between a ritual and the opportunity to consume heightens enjoyment, which attests to the idea that ritual behavior stimulates goal-directed action (to consume). Experiment 3 found that performing a ritual oneself enhances consumption more than watching someone else perform the same ritual, suggesting that personal involvement is crucial for the benefits of rituals to emerge. Finally, Experiment 4 provided direct evidence of the underlying process: Rituals enhance the enjoyment of consumption because of the greater involvement in the experience that they prompt.
Is the suppression of negative emotions ever associated with beneficial outcomes in relationships? The study reported here drew on research and theory on emotion regulation, self-construal, and sacrifice to test the hypothesis that individual differences in interdependent self-construal moderate the association between negative-emotion suppression and the personal and interpersonal outcomes of sacrifice. In a 14-day daily-experience study of people in romantic relationships, people with higher levels of interdependence experienced boosts in personal well-being and relationship quality if they suppressed their negative emotions during sacrifice, whereas those who construed the self in less interdependent terms experienced lower well-being and relationship quality if they suppressed their negative emotions during sacrifice. Feelings of authenticity for the sacrifice mediated these associations. These findings identify a critical condition under which the suppression of negative emotions may be personally and interpersonally beneficial.
In the late 1970s, 563 intellectually talented 13-year-olds (identified by the SAT as in the top 0.5% of ability) were assessed on spatial ability. More than 30 years later, the present study evaluated whether spatial ability provided incremental validity (beyond the SAT’s mathematical and verbal reasoning subtests) for differentially predicting which of these individuals had patents and three classes of refereed publications. A two-step discriminant-function analysis revealed that the SAT subtests jointly accounted for 10.8% of the variance among these outcomes (p < .01); when spatial ability was added, an additional 7.6% was accounted for—a statistically significant increase (p < .01). The findings indicate that spatial ability has a unique role in the development of creativity, beyond the roles played by the abilities traditionally measured in educational selection, counseling, and industrial-organizational psychology. Spatial ability plays a key and unique role in structuring many important psychological phenomena and should be examined more broadly across the applied and basic psychological sciences.
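Incremental validity of this kind is usually checked by comparing a baseline model with an expanded one. The simulated Python sketch below uses ordinary least squares with a single continuous outcome, which simplifies the abstract's two-step discriminant-function analysis over patent and publication classes; all data and coefficients are invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 563  # sample size from the abstract

sat_math = rng.standard_normal(n)
sat_verbal = rng.standard_normal(n)
spatial = 0.4 * sat_math + rng.standard_normal(n)  # correlated predictor

# Toy creative-outcome index with a unique contribution of spatial ability.
outcome = (0.25 * sat_math + 0.15 * sat_verbal + 0.25 * spatial
           + rng.standard_normal(n))

step1 = sm.OLS(outcome, sm.add_constant(
    np.column_stack([sat_math, sat_verbal]))).fit()
step2 = sm.OLS(outcome, sm.add_constant(
    np.column_stack([sat_math, sat_verbal, spatial]))).fit()

print(f"step 1 R^2: {step1.rsquared:.3f}")
print(f"incremental R^2 for spatial ability: "
      f"{step2.rsquared - step1.rsquared:.3f}")
```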
Inner speech is one of the most common, but least investigated, mental activities humans perform. It is an internal copy of one’s external voice and so is similar to a well-established component of motor control: corollary discharge. Corollary discharge is a prediction of the sound of one’s voice generated by the motor system. This prediction is normally used to filter self-caused sounds from perception, which segregates them from externally caused sounds and prevents the sensory confusion that would otherwise result. The similarity between inner speech and corollary discharge motivates the theory, tested here, that corollary discharge provides the sensory content of inner speech. The results reported here show that inner speech attenuates the impact of external sounds. This attenuation was measured using a context effect (an influence of contextual speech sounds on the perception of subsequent speech sounds), which weakens in the presence of speech imagery that matches the context sound. Results from a control experiment demonstrated this weakening in external speech as well. Such sensory attenuation is a hallmark of corollary discharge.
Although females of many species closely related to humans signal their fertile window in an observable manner, often involving red or pink coloration, no such display has been found for humans. Building on evidence that men are sexually attracted to women wearing or surrounded by red, we tested whether women show a behavioral tendency toward wearing reddish clothing when at peak fertility. Across two samples (N = 124), women at high conception risk were more than 3 times more likely to wear a red or pink shirt than were women at low conception risk, and 77% of women who wore red or pink were found to be at high, rather than low, risk. Conception risk had no effect on the prevalence of any other shirt color. Our results thus suggest that red and pink adornment in women is reliably associated with fertility and that female ovulation, long assumed to be hidden, is associated with a salient visual cue.
In sentence processing, semantic and syntactic violations elicit differential brain responses observable in event-related potentials: An N400 signals semantic violations, whereas a P600 marks inconsistent syntactic structure. Does the brain register similar distinctions in scene perception? To address this question, we presented participants with semantic inconsistencies, in which an object was incongruent with a scene’s meaning, and syntactic inconsistencies, in which an object violated structural rules. We found a clear dissociation between semantic and syntactic processing: Semantic inconsistencies produced negative deflections in the N300-N400 time window, whereas mild syntactic inconsistencies elicited a late positivity resembling the P600 found for syntactic inconsistencies in sentence processing. Extreme syntactic violations, such as a hovering beer bottle defying gravity, were associated with earlier perceptual processing difficulties reflected in the N300 response, but failed to produce a P600 effect. We therefore conclude that different neural populations are active during semantic and syntactic processing of scenes, and that syntactically impossible object placements are processed in a categorically different manner than are syntactically resolvable object misplacements.
Despite the importance of learning about one’s health, people sometimes opt to remain ignorant. In three studies, we investigated whether prompting people to contemplate their reasons for seeking or avoiding information would reduce avoidance of personal health information. In Study 1, people were more likely to opt to learn their risk for type 2 diabetes if they had completed a motives questionnaire prior to making their decision than if they had not. In Study 2, people were more likely to opt to learn their risk for cardiovascular disease if they had first listed and rated reasons for seeking or avoiding the information than if they had not. Study 3 replicated Study 2 but also showed that contemplating reasons for avoiding versus seeking reduced avoidance of personal-risk information only when the risk condition was treatable.
People often attribute poor performance to having bad days. Given that cognitive aging leads to lower average levels of performance and more moment-to-moment variability, one might expect that older adults should show greater day-to-day variability and be more likely to experience bad days than younger adults. However, both researchers and ordinary people typically sample only one performance per day for a given activity. Hence, the empirical basis for concluding that cognitive performance does substantially vary from day to day is inadequate. On the basis of data from 101 younger and 103 older adults who completed nine cognitive tasks in 100 daily sessions, we show that the contributions of systematic day-to-day variability to overall observed variability are reliable but small. Thus, the impression of good versus bad days is largely due to performance fluctuations at faster timescales. Despite having lower average levels of performance, older adults showed more consistent levels of performance across days.
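The central quantity here, the share of overall variability attributable to systematic day-to-day differences, can be estimated with a variance-components model. Below is a toy Python sketch with a random intercept per day; the 100 sessions echo the abstract, but the trial counts, variances, and variable names are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_days, per_day = 100, 20  # 100 daily sessions, invented trials per day

day = np.repeat(np.arange(n_days), per_day)
day_effect = np.repeat(rng.normal(0, 0.1, n_days), per_day)  # small
trial_noise = rng.normal(0, 1.0, n_days * per_day)           # large
score = 5 + day_effect + trial_noise

df = pd.DataFrame({"score": score, "day": day})
fit = smf.mixedlm("score ~ 1", data=df, groups=df["day"]).fit()

day_var = float(fit.cov_re.iloc[0, 0])  # between-day variance
resid_var = fit.scale                   # faster, within-day variance
print(f"between-day share of variance: {day_var / (day_var + resid_var):.2%}")
```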
Research on emotion suppression has shown a rebound effect, in which expression of the targeted emotion increases following a suppression attempt. In prior investigations, participants have been explicitly instructed to suppress their responses, which has drawn the act of suppression into metaconsciousness. Yet emerging research emphasizes the importance of nonconscious approaches to emotion regulation. This study is the first in which a craving rebound effect was evaluated without simultaneously raising awareness about suppression. We aimed to link spontaneously occurring attempts to suppress cigarette craving to increased smoking motivation assessed immediately thereafter. Smokers (n = 66) received a robust cued smoking-craving manipulation while their facial responses were videotaped and coded using the Facial Action Coding System. Following smoking-cue exposure, participants completed a behavioral choice task previously found to index smoking motivation. Participants evincing suppression-related facial expressions during cue exposure subsequently valued smoking more than did those not displaying these expressions, which suggests that internally generated suppression can exert powerful rebound effects.
Altruism is thought to be a major contributor to the development of large-scale human societies. However, much of the evidence supporting this belief comes from individuals living in peaceful and often affluent environments. It is entirely unknown whether humans act altruistically when facing adversity. Adversity is arguably a common human experience (as manifested in, e.g., personal tragedies, political upheavals, and natural disasters). In the research reported here, we found that experiencing a natural disaster affected children’s altruistic giving. Immediately after witnessing devastations caused by a major earthquake, 9-year-olds became more altruistic. In addition, the more empathic they were, the more they gave. In contrast, experiencing a major earthquake caused 6-year-olds to be more selfish. Three years after the earthquake, children’s altruistic tendencies returned to pre-earthquake levels, which suggests that changes in children’s altruistic giving are an acute response to the immediate aftermath of a major natural disaster. These findings suggest that environmental insults and empathy play crucial roles in human altruism.
Simple decisions require the processing and evaluation of perceptual and cognitive information, the formation of a decision, and often the execution of a motor response. This process involves the accumulation of evidence over time until a particular choice reaches a decision threshold. Using a random-dot-motion stimulus, we showed that simply delaying responses after the stimulus offset can almost double accuracy, even in the absence of new incoming visual information. However, under conditions in which the otherwise blank interval was filled with a sensory mask or concurrent working memory load was high, performance gains were lost. Further, memory and perception showed equivalent rates of evidence accumulation, suggesting a high-capacity memory store. We propose an account of continued evidence accumulation by sequential sampling from a simultaneously decaying memory trace. Memories typically decay with time, hence immediate inquiry trumps later recall from memory. However, the results we report here show the inverse: Inspecting a memory trumps viewing the actual object.
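The proposed mechanism, continued sequential sampling from a decaying memory trace, can be made concrete with a toy accumulator simulation. Every parameter below (drift, decay rate, noise, threshold, step counts) is invented; the sketch demonstrates only the qualitative prediction that a blank delay after stimulus offset can raise accuracy.

```python
import numpy as np

rng = np.random.default_rng(5)

def trial(delay_steps, drift=0.01, decay=0.995, noise=0.1, thresh=1.0):
    """One toy trial: accumulate noisy evidence while the stimulus is on,
    then keep sampling from an exponentially decaying memory trace."""
    evidence, trace = 0.0, drift
    for _ in range(50):                # stimulus on screen
        evidence += trace + noise * rng.standard_normal()
        if abs(evidence) >= thresh:
            return evidence > 0        # responded before offset
    for _ in range(delay_steps):       # blank interval after offset
        trace *= decay                 # the memory trace decays
        evidence += trace + noise * rng.standard_normal()
        if abs(evidence) >= thresh:
            return evidence > 0
    return evidence > 0                # forced choice at the deadline

for delay in (0, 200):
    accuracy = np.mean([trial(delay) for _ in range(5000)])
    print(f"delay of {delay:3d} steps: accuracy = {accuracy:.3f}")
```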
People often make multiple choices at the same time, choosing a snack and drink or a cell phone and case, only to learn that some of their choices are unavailable. Do they take the available item (or items) or something else entirely? Culture-as-situated-cognition theory predicts that this choice is determined by one’s accessible cultural mind-set. An accessible collectivist (vs. individualist) mind-set should heighten sensitivity to an emergent relationship among items chosen together so that having some is not acceptable if not all can be obtained. Indeed, we found that Latinos (but not Anglos) refuse chosen items if not all can be obtained (Study 1a). Further, making a collectivist mind-set accessible reproduces this between-groups difference (Study 1b), increases people’s willingness to pay to complete sets (Study 1b), and shifts choice to previously undesired items if no set-completing option is provided (Studies 2–4). Finally, we found that increased sensitivity to an emergent relationship among chosen items mediates these effects (Studies 3 and 4).
Although the perceived compatibility between one’s gender and science, technology, engineering, and mathematics (STEM) identities (gender-STEM compatibility) has been linked to women’s success in STEM fields, no work to date has examined how the stability of identity over time contributes to subjective and objective STEM success. In the present study, 146 undergraduate female STEM majors rated their gender-STEM compatibility weekly during their freshman spring semester. STEM women higher in gender rejection sensitivity, or gender RS, a social-cognitive measure assessing the tendency to perceive social-identity threat, experienced larger fluctuations in gender-STEM compatibility across their second semester of college. Fluctuations in compatibility predicted impaired outcomes the following school year, including lower STEM engagement and lower academic performance in STEM (but not non-STEM) classes, and significantly mediated the relationship between gender RS and STEM engagement and achievement in the 2nd year of college. The week-to-week changes in gender-STEM compatibility occurred in response to negative academic (but not social) experiences.
Accurate assessment of competitive ability is a critical component of contest behavior in animals, and it could be just as important in human competition, particularly in human ancestral populations. Here, we tested the role that facial perception plays in this assessment by investigating the association between both perceived aggressiveness and perceived fighting ability in fighters’ faces and their actual fighting success. Perceived aggressiveness was positively associated with the proportion of fights won, after we controlled for the effect of weight, which also independently predicted perceived aggression. In contrast, perception of fighting ability was confounded by weight, and an association between perceived fighting ability and actual fighting success was restricted to heavyweight fighters. Shape regressions revealed that aggressive-looking faces are generally wider and have a broader chin, more prominent eyebrows, and a larger nose than less aggressive-looking faces. Our results indicate that perception of aggressiveness and fighting ability might cue different aspects of success in male-male physical confrontation.
Genes account for increasing proportions of variation in cognitive ability across development, but the mechanisms underlying these increases remain unclear. We conducted a meta-analysis of longitudinal behavioral genetic studies spanning infancy to adolescence. We identified relevant data from 16 articles with 11 unique samples containing a total of 11,500 twin and sibling pairs who were all reared together and measured at least twice between the ages of 6 months and 18 years. Longitudinal behavioral genetic models were used to estimate the extent to which early genetic influences on cognition were amplified over time and the extent to which innovative genetic influences arose with time. Results indicated that innovative genetic influences predominate in early childhood but quickly diminish, and that amplified influences account for the increasing heritability observed after age 8 years.