MetaTOC: stay on top of your field, easily

Language and Speech

Impact factor: 0.822 · 5-Year impact factor: 1.216 · Print ISSN: 0023-8309 · Publisher: SAGE Publications

Subjects: Experimental Psychology, Linguistics

Most recent papers:

  • Processing Relationships Between Language-Being-Spoken and Other Speech Dimensions in Monolingual and Bilingual Listeners.
    Vaughn, C. R., Bradlow, A. R.
    Language and Speech. October 14, 2016

    While indexical information is implicated in many levels of language processing, little is known about the internal structure of the system of indexical dimensions, particularly in bilinguals. A series of three experiments using the speeded classification paradigm investigated the relationship between various indexical and non-linguistic dimensions of speech in processing. Specifically, we compared the processing relationship between a lesser-studied indexical dimension relevant to bilinguals, which language is being spoken (in these experiments, either Mandarin Chinese or English), and three other dimensions: talker identity (Experiment 1), talker gender (Experiment 2), and amplitude of speech (Experiment 3). Results demonstrate that language-being-spoken is integrated in processing with each of the other dimensions tested, and that these processing dependencies appear to be independent of listeners’ bilingual status or experience with the languages tested. Moreover, the data reveal asymmetries in processing interference, suggesting a processing hierarchy for indexical, non-linguistic speech features.

    October 14, 2016   doi: 10.1177/0023830916669536   open full text
  • Prosodic Variation and Segmental Reduction and Their Roles in Cuing Turn Transition in Swedish.
    Zellers, M.
    Language and Speech. October 06, 2016

    Prosody has often been identified alongside syntax as a cue to turn hold or turn transition in conversational interaction. However, evidence for which prosodic cues are most relevant, and how strong those cues are, has been somewhat scattered. The current study addresses prosodic cues to turn transition in Swedish. A perception study looking closely at turn changes and holds in cases where the syntax does not lead inevitably to a particular outcome shows that Swedish listeners are sensitive to duration variations, even in the very short space of the final unstressed syllable of a turn, and that they may use pitch cues to a lesser extent. An investigation of production data indicates that duration, and to some extent segmental reduction, demonstrate consistent variation in relation to the types of turn boundaries they accompany, while fundamental frequency and glottalization do not. Taken together, these data suggest that duration may be the primary cue to turn transition in Swedish conversation, rather than fundamental frequency, as some other studies have suggested.

    October 06, 2016   doi: 10.1177/0023830916658680   open full text
  • Phrase Lengths and the Perceived Informativeness of Prosodic Cues in Turkish.
    Dinctopal Deniz, N., Fodor, J. D.
    Language and Speech. September 26, 2016

    It is known from previous studies that in many cases (though not all) the prosodic properties of a spoken utterance reflect aspects of its syntactic structure, and also that in many cases (though not all) listeners can benefit from these prosodic cues. A novel contribution to this literature is the Rational Speaker Hypothesis (RSH), proposed by Clifton, Carlson and Frazier. The RSH maintains that listeners are sensitive to possible reasons for why a speaker might introduce a prosodic break: "listeners treat a prosodic boundary as more informative about the syntax when it flanks short constituents than when it flanks longer constituents", because in the latter case the speaker might have been motivated solely by consideration of optimal phrase lengths. This would effectively reduce the cue value of an appropriately placed prosodic boundary. We present additional evidence for the RSH from Turkish, a language typologically different from English. In addition, our study shows for the first time that the RSH also applies to a prosodic break which conflicts with the syntactic structure, reducing its perceived cue strength if it might have been motivated by length considerations. In this case, the RSH effect is beneficial. Finally, the Turkish data show that prosody-based explanations for parsing preferences such as the RSH do not take the place of traditional syntax-sensitive parsing strategies; rather, the two sources of guidance co-exist, and both are used when available.

    September 26, 2016   doi: 10.1177/0023830916665653   open full text
  • Voicing Assimilation in Czech and Slovak Speakers of English: Interactions of Segmental Context, Language and Strength of Foreign Accent.
    Skarnitzl, R., Sturm, P.
    Language and Speech. July 08, 2016

    This study focuses on voicing assimilation across word boundaries in the speech of second language (L2) users. We compare native speakers of British English to speakers of two West Slavic languages, Czech and Slovak, which, despite their many similarities, differ with respect to voicing assimilation rules. Word-final voicing was analysed in 30 speakers, using the static value of voicing percentage and the voicing profile method. The results of linear mixed-effects modelling suggest an effect of first language (L1) transfer in all L2 English speaker groups, with the tendency to assimilate being correlated with the strength of foreign accent. Importantly, the two language groups differed in assimilation strategies before sonorant consonants, as a clear effect of L1-based phonetic influence.

    July 08, 2016   doi: 10.1177/0023830916654509   open full text
  • Imitation of Non-Speech Oral Gestures by 8-Month-Old Infants.
    Diepstra, H., Trehub, S. E., Eriks-Brophy, A., van Lieshout, P. H.
    Language and Speech. June 21, 2016

    This study investigates the oral gestures of 8-month-old infants in response to audiovisual presentation of lip and tongue smacks. Infants exhibited more lip gestures than tongue gestures following adult lip smacks and more tongue gestures than lip gestures following adult tongue smacks. The findings, which are consistent with predictions from Articulatory Phonology, imply that 8-month-old infants are capable of producing goal-directed oral gestures by matching the articulatory organ of an adult model.

    June 21, 2016   doi: 10.1177/0023830916647080   open full text
  • Communicative Success in Spatial Dialogue: The Impact of Functional Features and Dialogue Strategies.
    Tenbrink, T., Andonova, E., Schole, G., Coventry, K. R.
    Language and Speech. June 08, 2016

    This paper addresses the impact of dialogue strategies and functional features of spatial arrangements on communicative success. To examine the sharing of cognition between two minds in order to achieve a joint goal, we collected a corpus of 24 extended German-language dialogues in a referential communication task that involved furnishing a dolls’ house. Results show how successful communication, as evidenced by correct placement of furniture items, is affected by: (a) functionality of the furniture arrangement; (b) previous task experience; and (c) dialogue features such as description length and orientation information. To enhance research in this area, our ‘Dolldialogue’ corpus (www.dolldialogue.space) is now available as a free resource.

    June 08, 2016   doi: 10.1177/0023830916651097   open full text
  • The Role of Fundamental Frequency and Temporal Envelope in Processing Sentences with Temporary Syntactic Ambiguities.
    Sharpe, V., Fogerty, D., den Ouden, D.-B.
    Language and Speech. June 08, 2016

    Previous experiments have demonstrated the impact of speech prosody on syntactic processing. The present study was designed to examine how listeners use specific acoustic properties of prosody for grammatical interpretation. We investigated the independent contributions of two acoustic properties associated with the pitch and rhythmic properties of speech: the fundamental frequency and temporal envelope, respectively. The effect of degrading these prosodic components was examined by testing listeners’ ability to parse early-closure garden-path sentences. A second aim was to investigate how effects of prosody interact with semantic effects of sentence plausibility. Using a task that required both a comprehension and a production response, we were able to determine that degradation of the speech envelope more consistently affects syntactic processing than degradation of the fundamental frequency. These effects are exacerbated in sentences with plausible misinterpretations, showing that prosodic degradation interacts with contextual cues to sentence interpretation.

    June 08, 2016   doi: 10.1177/0023830916652649   open full text
  • Postfocal Downstep in German.
    Kügler, F., Féry, C.
    Language and Speech. June 01, 2016

    This article is a follow-up study of Féry and Kügler (2008. Pitch accent scaling on given, new and focused constituents in German. Journal of Phonetics, 36, 680–703). It reports on an experiment on the F0 height of potential pitch accents in the postfocal region of German sentences, thereby addressing an aspect of the influence of information structure on sentence intonation that was left open in the previous article. The results of the experiment showed that, when several constituents are located in this position, they are often in a downstep relation, but are rarely upstepped. In 37% of the cases, the pitch accents are only realized dynamically and there is no down- or upstepping. We interpret these results as evidence that postfocal constituents are phrased independently. The data examined speak against a model of postfocal intonation in which postfocal phrasing is eliminated and all accents are reduced to zero. Instead, the pitch accents are often present, although reduced. Moreover, the facts support the existence of prosodic phrasing of the postfocal constituents; the postfocal position implies an extremely compressed register, but no dephrasing or systematic complete deaccentuation of all pitch accents. We propose adopting a model of German intonation in which prosodic phrasing is determined by syntactic structure and cannot be changed by information structure. The role of information structure in prosody is limited to changes in the register relationship of the different parts of the sentence. Prefocally, there is little or no register compression because of givenness. Postfocally, register compression is the rule. A model of intonation must take this asymmetry into account.

    June 01, 2016   doi: 10.1177/0023830916647204   open full text
  • Effects of Word Frequency and Transitional Probability on Word Reading Durations of Younger and Older Speakers.
    Moers, C., Meyer, A., Janse, E.
    Language and Speech. May 30, 2016

    High-frequency units are usually processed faster than low-frequency units in language comprehension and language production. Frequency effects have been shown for words as well as word combinations. Word co-occurrence effects can be operationalized in terms of transitional probability (TP). TPs reflect how probable a word is, conditioned by its right or left neighbouring word. This corpus study investigates whether three age groups of Dutch speakers (younger children, 8–12 years; adolescents, 12–18 years; and older adults, 62–95 years) show frequency and TP context effects on spoken word durations in reading aloud, and whether the age groups differ in the size of these effects. Results show consistent effects of TP on word durations for all age groups. Thus, TP seems to influence the processing of words in context, beyond the well-established effect of word frequency, across the entire age range. However, the study also indicates that age groups differ in the size of TP effects, with older adults having smaller TP effects than adolescent readers. Our results show that probabilistic reduction effects in reading aloud may at least partly stem from contextual facilitation that leads to faster reading times in skilled readers, as well as in young language learners.

    May 30, 2016   doi: 10.1177/0023830916649215   open full text
  • Prediction of Agreement and Phonetic Overlap Shape Sublexical Identification.
    Martin, A. E., Monahan, P. J., Samuel, A. G.
    Language and Speech. May 30, 2016

    The mapping between the physical speech signal and our internal representations is rarely straightforward. When faced with uncertainty, higher-order information is used to parse the signal and because of this, the lexicon and some aspects of sentential context have been shown to modulate the identification of ambiguous phonetic segments. Here, using a phoneme identification task (i.e., participants judged whether they heard [o] or [a] at the end of an adjective in a noun–adjective sequence), we asked whether grammatical gender cues influence phonetic identification and if this influence is shaped by the phonetic properties of the agreeing elements. In three experiments, we show that phrase-level gender agreement in Spanish affects the identification of ambiguous adjective-final vowels. Moreover, this effect is strongest when the phonetic characteristics of the element triggering agreement and the phonetic form of the agreeing element are identical. Our data are consistent with models wherein listeners generate specific predictions based on the interplay of underlying morphosyntactic knowledge and surface phonetic cues.

    May 30, 2016   doi: 10.1177/0023830916650714   open full text
  • F2 Slope as a Perceptual Cue for the Front-Back Contrast in Standard Southern British English.
    Chladkova, K., Hamann, S., Williams, D., Hellmuth, S.
    Language and Speech. May 30, 2016

    Acoustic studies of several languages indicate that second-formant (F2) slopes in high vowels have opposing directions (independent of consonantal context): front [i]-like vowels are produced with a rising F2 slope, whereas back [u]-like vowels are produced with a falling F2 slope. The present study first reports acoustic measurements that confirm this pattern for Standard Southern British English (SSBE), where /u/ has shifted from the back to the front area of the vowel space and is now realized with higher midpoint F2 values than several decades ago. Subsequently, we test whether the direction of F2 slope also serves as a reliable cue to the /i/-/u/ contrast in perception. The findings show that F2 slope direction is used as a cue (additional to midpoint formant values) to distinguish /i/ from /u/ by both young and older Standard Southern British English listeners: an otherwise ambiguous token is identified as /i/ if it has a rising F2 slope and as /u/ if it has a falling F2 slope. Furthermore, our results indicate that listeners generalize their reliance on F2 slope to other contrasts, namely //-// and /æ/-//, even though F2 slope is not employed to differentiate these vowels in production. This suggests that in Standard Southern British English, a rising F2 is perceptually associated with an abstract feature such as [+front], whereas a falling F2 is associated with an abstract feature such as [-front].

    May 30, 2016   doi: 10.1177/0023830916650991   open full text
  • The Role of Predictability in Intonational Variability.
    Turnbull, R.
    Language and Speech. May 25, 2016

    Predictability is known to affect many properties of speech production. In particular, it has been observed that highly predictable elements (words, syllables) are produced with less phonetic prominence (shorter duration, less peripheral vowels) than less predictable elements. This tendency has been proposed to be a general property of language. This paper examines whether predictability is correlated with fundamental frequency (F0) production, through analysis of experimental corpora of American English. Predictability was variously defined as discourse mention, utterance probability, and semantic focus. The results revealed consistent effects of utterance probability and semantic focus on F0, in the expected direction: less predictable words were produced with a higher F0 than more predictable words. However, no effect of discourse mention was observed. These results provide further empirical support for the generalization that phonetic prominence is inversely related to linguistic predictability. In addition, the divergent results for different predictability measures suggest that the parameterization of predictability within a particular experimental design can have a significant impact on the interpretation of results, and that it cannot be assumed that two measures necessarily reflect the same cognitive reality.

    May 25, 2016   doi: 10.1177/0023830916647079   open full text
  • Relative Salience of Speech Rhythm and Speech Rate on Perceived Foreign Accent in a Second Language.
    Polyanskaya, L., Ordin, M., Busa, M. G.
    Language and Speech. May 25, 2016

    We investigated the independent contributions of speech rate and speech rhythm to perceived foreign accent. To address this issue we used a resynthesis technique that neutralizes segmental and tonal idiosyncrasies between identical sentences produced by French learners of English at different proficiency levels while maintaining the idiosyncrasies pertaining to prosodic timing patterns. We created stimuli that (1) preserved the idiosyncrasies in speech rhythm while controlling for the differences in speech rate between the utterances; (2) preserved the idiosyncrasies in speech rate while controlling for the differences in speech rhythm between the utterances; and (3) preserved the idiosyncrasies in both speech rate and speech rhythm. All the stimuli were created in intoned (with imposed intonational contour) and flat (with monotonized, constant F0) conditions. The original and the resynthesized sentences were rated by native speakers of English for degree of foreign accent. We found that both speech rate and speech rhythm influence the degree of perceived foreign accent, but the effect of speech rhythm is larger than that of speech rate. We also found that intonation enhances the perception of fine differences in rhythmic patterns but reduces the perceptual salience of fine differences in speech rate.

    May 25, 2016   doi: 10.1177/0023830916648720   open full text
  • Thai Rate-Varied Vowel Length Perception and the Impact of Musical Experience.
    Cooper, A., Wang, Y., Ashley, R.
    Language and Speech. May 19, 2016

    Musical experience has been demonstrated to play a significant role in the perception of non-native speech contrasts. The present study examined whether or not musical experience facilitated the normalization of speaking rate in the perception of non-native phonemic vowel length contrasts. Native English musicians and non-musicians (as well as native Thai control listeners) completed identification and AX (same–different) discrimination tasks with Thai vowels contrasting in phonemic length at three speaking rates. Results revealed facilitative effects of musical experience in the perception of Thai vowel length categories. Specifically, the English musicians patterned similarly to the native Thai listeners, demonstrating higher accuracy at identifying and discriminating between-category vowel length distinctions than at discriminating within-category durational differences due to speaking rate variations. The English musicians also outperformed non-musicians at between-category vowel length discriminations across speaking rates, indicating musicians’ superiority in perceiving categorical phonemic length differences. These results suggest that musicians’ attunement to rhythmic and temporal information in music transferred to facilitating their ability to normalize contextual quantitative variations (due to speaking rate) and perceive non-native temporal phonemic contrasts.

    May 19, 2016   doi: 10.1177/0023830916642489   open full text
  • Effects of Lexical Competition and Dialect Exposure on Phonological Priming.
    Clopper, C. G., Walker, A.
    Language and Speech. May 13, 2016

    A cross-modal lexical decision task was used to explore the effects of lexical competition and dialect exposure on phonological form priming. Relative to unrelated auditory primes, matching real word primes facilitated lexical decision for visual real word targets, whereas competing minimal pair primes inhibited lexical decision. These effects were robust across two English vowel pairs (mid–front and low–front) and for two listener groups (mono-dialectal and multi-dialectal). However, both the most robust facilitation and the most robust inhibition were observed for the mid–front vowel words with few phonological competitors for the mono-dialectal listener group. The mid–front vowel targets were acoustically more distinct than the low–front vowel targets, suggesting that acoustic–phonetic similarity leads to stronger lexical competition and less robust facilitation and inhibition. The multi-dialectal listeners had more prior exposure to multiple different dialects than the mono-dialectal group, suggesting that long-term exposure to linguistic variability contributes to a more flexible processing strategy in which lexical competition extends over a longer period of time, leading to less robust facilitation and inhibition.

    May 13, 2016   doi: 10.1177/0023830916643737   open full text
  • Perception of Nonnative-accented Sentences by 5- to 8-Year-olds and Adults: The Role of Phonological Processing Skills.
    Bent, T., Atagi, E.
    Language and Speech. May 05, 2016

    To acquire language and successfully communicate in multicultural and multilingual societies, children must learn to understand speakers with various accents and dialects. This study investigated adults’ and 5- to 8-year-old children’s perception of native- and nonnative-accented English sentences in noise. Participants’ phonological memory and phonological awareness were assessed to investigate factors associated with individual differences in word recognition. Although both adults and children performed less accurately with nonnative talkers than native talkers, children showed greater performance decrements. Further, phonological memory was more closely tied to perception of native talkers whereas phonological awareness was more closely related to perception of nonnative talkers. These results suggest that the ability to recognize words produced in unfamiliar accents continues to develop beyond the early school-age years. Additionally, the linguistic skills most related to word recognition in adverse listening conditions may differ depending on the source of the challenge (i.e., noise, talker, or a combination).

    May 05, 2016   doi: 10.1177/0023830916645374   open full text
  • The Effects of Language Experience and Speech Context on the Phonetic Accommodation of English-accented Spanish Voicing.
    Llanos, F., Francis, A. L.
    Language and Speech. March 21, 2016

    Native speakers of Spanish with different amounts of experience with English classified stop-consonant voicing (/b/ versus /p/) across different speech accents: English-accented Spanish, native Spanish, and native English. While listeners with little experience with English classified target voicing with an English- or Spanish-like voice onset time (VOT) boundary, predicted by contextual VOT, listeners familiar with English relied on an English-like VOT boundary in an English-accented Spanish context even in the absence of clear contextual cues to English VOT. This indicates that Spanish listeners accommodated English-accented Spanish voicing differently depending on their degree of familiarization with the English norm.

    March 21, 2016   doi: 10.1177/0023830915623579   open full text
  • Discrimination and Identification of a Third Formant Frequency Cue to Place of Articulation by Young Children and Adults.
    Richardson, K., Sussman, J. E.
    Language and Speech. March 21, 2016

    Typically-developing children, 4 to 6 years of age, and adults participated in discrimination and identification speech perception tasks using a synthetic consonant–vowel continuum ranging from /da/ to /ga/. The seven-step synthetic /da/–/ga/ continuum was created by adjusting the first 40 ms of the third formant frequency transition. For the discrimination task, listeners participated in a Change/No–Change paradigm with four different stimuli compared to the endpoint-1 /da/ token. For the identification task, listeners labeled each token along the /da/–/ga/ continuum as either "DA" or "GA." Results of the discrimination experiment showed that sensitivity to the third-formant transition cue improved for the adult listeners as the stimulus contrast increased, whereas the performance of the children remained poor across all stimulus comparisons. Results of the identification experiment support previous hypotheses of age-related differences in phonetic categorization. Results have implications for normative data on identification and discrimination tasks. These norms provide a metric against which children with auditory-based speech sound disorders can be compared. Furthermore, the results provide some insight into the developmental nature of categorical and non-categorical speech perception.

    March 21, 2016   doi: 10.1177/0023830915625680   open full text
  • Subjective Lexical Characteristics: Comparing Ratings of Members of the Target Population and Doctors for Words Stemming from a Medical Context.
    Robert, C., Cousson-Gelie, F., Faurous, W., Mathey, S.
    Language and Speech. March 21, 2016

    The present study investigated the subjective lexical characteristics of words stemming from a medical context by comparing estimations of the target population (age range = 46–89) and of doctors. A total of 58 members of the target population and 22 oncologists completed measures of subjective frequency and emotional valence for words previously collected in interviews of announcement of cancer diagnosis. The members of the target population also completed tests of word definitions, without and within context. As expected, most of the words were rated as less familiar, more negative, and as generating more intense emotions by the target population than by the doctors. Moreover, only a few words were correctly defined by the target population. Adding a context helped the participants to define most of the words correctly. Importantly, we identified words that were rated familiar by the patients although they did not know their exact meaning. Overall, these results highlight the importance of taking into account the subjective lexical characteristics of words used in specific contexts.

    March 21, 2016   doi: 10.1177/0023830916636650   open full text
  • Focus in Corrective Exchanges: Effects of Pitch Accent and Syntactic Form.
    Clifton, C., Frazier, L.
    Language and Speech. February 15, 2016

    A dialog consisting of an utterance by one speaker and another speaker’s correction of its content seems intuitively to be made more acceptable when the new information is pitch accented or otherwise focused, and when the utterance and correction have the same syntactic form. Three acceptability judgment studies, one written and two auditory, investigated the interaction of focus (manipulated by sentence position and, in Experiments 2 and 3, pitch accent) and syntactic parallelism. Experiment 1 indicated that syntactic parallelism interacted with position of the new (contrastive) term: nonparallel forms were relatively acceptable when the new term appeared in object position, a position that commonly contains new information (a ‘default focus’ position). Experiments 2 and 3 indicated that the presence of a pitch accent and placement in a default focus position had additive effects on acceptability. Surprisingly, spoken dialogs in which the new term appeared in object position were acceptable even when given information carried the most prominent pitch accent. The present studies, and earlier work, suggest that corrected information can be focused either by prosody or position even in spoken English, a language often thought to express focus through pitch accent, not syntactic position.

    February 15, 2016   doi: 10.1177/0023830915623578   open full text
  • Informativeness, Timing and Tempo in Lexical Self-Repair.
    Plug, L.
    Language and Speech. December 15, 2015

    This paper presents a study of the temporal organization of lexical repair in spontaneous Dutch speech. It assesses the extent to which offset-to-repair duration and repair tempo can be predicted on the basis of offset timing, reparandum tempo and measures of the informativeness of the crucial lexical items in the repair. Specifically, we address the expectations that repairs that are initiated relatively early are produced relatively fast throughout, and that relatively highly informative repairs are produced relatively slowly. For informativeness, we implement measures based on repair semantics, lexical frequency counts and cloze probabilities. Our results highlight differences between factual and linguistic error repairs, which have not been consistently distinguished in previous studies, and provide some evidence to support the notion that repairs that are initiated relatively early are produced relatively fast. They confirm that lexical frequency counts are rough measures of contextual predictability at best, and reveal very few significant effects of our informativeness measures on the temporal organization of lexical self-repair. Moreover, although we can confirm that most repairs have a repair portion that is fast relative to its reparandum, this cannot be attributed to the relative informativeness of the two portions. Our findings inform the current debate on the division of labour between inner and overt speech monitoring, and suggest that, although the influence of informativeness on speech production is extensive, it is not ubiquitous.

    December 15, 2015   doi: 10.1177/0023830915618427   open full text
  • Learning the Marshallese Phonological System: The Role of Cross-language Similarity on the Perception and Production of Secondary Articulations.
    Sturman, H. W., Baker-Smemoe, W., Carreno, S., Miller, B. B.
    Language and Speech. December 10, 2015

    The current study examines the influence of cross-language similarity on native English speakers’ perception and production of Marshallese consonant contrasts. Marshallese provides a unique opportunity to study this influence because all Marshallese consonants have a secondary articulation. Results of discrimination and production tasks indicate that learners more easily acquire sounds if they are perceptually less similar to native language phonemes. In addition, the degree of cross-language similarity seemed to affect perception and production and may also interact with the effect of orthography.

    December 10, 2015   doi: 10.1177/0023830915614603   open full text
  • On the Tail of the Scottish Vowel Length Rule in Glasgow.
    Rathcke, T. V., Stuart-Smith, J. H.
    Language and Speech. December 01, 2015

    One of the most famous sound features of Scottish English is the short/long timing alternation of /i u ai/ vowels, which depends on the morpho-phonemic environment, and is known as the Scottish Vowel Length Rule (SVLR). These alternations make the status of vowel quantity in Scottish English (quasi-)phonemic but are also susceptible to change, particularly in situations of intense sustained dialect contact with Anglo-English. Does the SVLR change in Glasgow, where dialect contact at the community level is comparatively low? The present study sets out to tackle this question, and tests two hypotheses involving (1) external influences due to dialect contact and (2) internal, prosodically induced factors of sound change. Durational analyses of /i u a/ were conducted on a corpus of spontaneous Glaswegian speech from the 1970s and 2000s; four speaker groups were compared, two of middle-aged men, and two of adolescent boys. Our hypothesis that the development of the SVLR over time may be internally constrained and interact with prosody was largely confirmed. We observed weakening effects in its implementation which were localised in phrase-medial unaccented positions in all speaker groups, and in phrase-final positions in the speakers born after the Second World War. But unlike some other varieties of Scottish or Northern English, which show weakening of the Rule under prolonged contact with Anglo-English, dialect contact seems to be having less impact on the durational patterns in Glaswegian vernacular, probably because of the overall reduced potential for regular, everyday contact in the West of Scotland.

    December 01, 2015   doi: 10.1177/0023830915611428   open full text
  • Seeking an Anchorage. Stability and Variability in Tonal Alignment of Rising Prenuclear Pitch Accents in Cypriot Greek.
    Themistocleous, C.
    Language and Speech. December 01, 2015

    Although tonal alignment constitutes a quintessential property of pitch accents, its exact characteristics remain unclear. This study, by exploring the timing of the Cypriot Greek L*+H prenuclear pitch accent, examines the predictions of three hypotheses about tonal alignment: the invariance hypothesis, the segmental anchoring hypothesis, and the segmental anchorage hypothesis. The study reports on two experiments: the first of which manipulates the syllable patterns of the stressed syllable, and the second of which modifies the distance of the L*+H from the following pitch accent. The findings on the alignment of the low tone (L) are illustrative of the segmental anchoring hypothesis predictions: the L persistently aligns inside the onset consonant, a few milliseconds before the stressed vowel. However, the findings on the alignment of the high tone (H) are both intriguing and unexpected: the alignment of the H depends on the number of unstressed syllables that follow the prenuclear pitch accent. The ‘wandering’ of the H over multiple syllables is extremely rare among languages, and casts doubt on the invariance hypothesis and the segmental anchoring hypothesis, as well as indicating the need for a modified version of the segmental anchorage hypothesis. To address the alignment of the H, we suggest that it aligns within a segmental anchorage (the area that follows the prenuclear pitch accent) in such a way as to protect the paradigmatic contrast between the L*+H prenuclear pitch accent and the L+H* nuclear pitch accent.

    December 01, 2015   doi: 10.1177/0023830915614602   open full text
  • Alveolar and Velarized Laterals in Albanian and in the Viennese Dialect.
    Moosmüller, S., Schmid, C., Kasess, C. H.
    Language and Speech. December 01, 2015

    A comparison of alveolar and velarized lateral realizations in two language varieties, Albanian and the Viennese dialect, has been performed. Albanian distinguishes the two laterals phonemically, whereas in the Viennese dialect, the velarized lateral was introduced by language contact with Czech immigrants. A categorical distinction between the two lateral phonemes is fully maintained in Albanian. Results are not as straightforward in the Viennese dialect. Most prominently, female speakers realize the velarized lateral, if at all, in word-final position, thus indicating the application of a phonetically motivated process. The realization of the velarized lateral by male speakers, on the other hand, indicates that the velarized lateral replaced the former alveolar lateral phoneme. Alveolar laterals are either realized in perceptually salient positions, thus governed by an input-switch rule, or in front vowel contexts, thus subject to coarticulatory influences. Our results illustrate the subtle interplay of phonology, phonetics and sociolinguistics.

    December 01, 2015   doi: 10.1177/0023830915615375   open full text
  • Local Coherence and Preemptive Digging-in Effects in German.
    Paape, D., Vasishth, S.
    Language and Speech. November 17, 2015

    SOPARSE predicts so-called local coherence effects: locally plausible but globally impossible parses of substrings can exert a distracting influence during sentence processing. Additionally, it predicts digging-in effects: the longer the parser stays committed to a particular analysis, the harder it becomes to inhibit that analysis. We investigated the interaction of these two predictions using German sentences. Results from a self-paced reading study show that the processing difficulty caused by a local coherence can be reduced by first allowing the globally correct parse to become entrenched, which supports SOPARSE’s assumptions.

    November 17, 2015   doi: 10.1177/0023830915608410   open full text
  • Co-articulatory Cues for Communication: An Investigation of Five Environments.
    Pycha, A.
    Language and Speech. September 08, 2015

    We hypothesized that speakers adjust co-articulation in vowel–consonant (VC) sequences in order to provide listeners with enhanced perceptual cues to C, and that they do so specifically in those situations where primary cues to C place of articulation tend to be diminished. We tested this hypothesis in a speech production study of American English, measuring the duration and extent of VC formant transitions in five conditioning environments – consonant voicing, phrasal position, sentence accent, vowel quality, and consonant place – that modulate primary cues to C place in different ways. Results partially support our hypothesis. Although speakers did not exhibit greater temporal co-articulation in contexts that tend to diminish place cues, they did exhibit greater spatial co-articulation. This finding suggests that co-articulation serves specific communicative goals.

    September 08, 2015   doi: 10.1177/0023830915603878   open full text
  • Toddlers' Word Recognition in an Unfamiliar Regional Accent: The Role of Local Sentence Context and Prior Accent Exposure.
    van Heugten, M., Johnson, E. K.
    Language and Speech. September 03, 2015

    Adults are generally adept at recognizing familiar words in unfamiliar accents. However, studies testing young children’s abilities to cope with accent-related variation in the speech signal have generated mixed results, with some work emphasizing toddlers’ early competence and other work focusing more on their long-lasting difficulties in this domain. Here, we set out to unify these two perspectives and propose that task demands may play a crucial role in children’s recognition of accented words. To this end, Canadian-English-learning 28-month-olds’ looks to images on a screen were recorded while they were presented with a Scottish-accented speaker instructing them to find a depicted target object. To examine the effect of task demands, both local sentence context and prior accent exposure were manipulated. Overall, Canadian toddlers were found to recognize Scottish-accented words successfully, showing above-chance performance in the identification of words produced in an unfamiliar accent, even when target labels were presented in isolation. However, word recognition was considerably more robust when target words were presented in sentence context. Prior exposure to the unfamiliar Scottish accent in the laboratory did not modulate children’s performance in this task. Taken together, these findings suggest that at least some task-related factors can affect children’s recognition of accented words. Understanding unfamiliar accents, like understanding familiar accents, is thus not an isolated skill but, rather, is susceptible to contextual circumstances. Future models of spoken language processing in toddlerhood should incorporate these early effects of task demands.

    September 03, 2015   doi: 10.1177/0023830915600471   open full text
  • Emotion Word Type and Affective Valence Priming at a Long Stimulus Onset Asynchrony.
    Kazanas, S. A., Altarriba, J.
    Language and Speech. June 29, 2015

    As the division between emotion and emotion-laden words has been viewed as controversial by, for example, Kousta and colleagues, the current study attempted a replication and extension of findings previously described by Kazanas and Altarriba. In their findings, Kazanas and Altarriba reported significant differences in response times (RTs) and priming effects between emotion and emotion-laden words, with faster RTs and larger priming effects with emotion words than with emotion-laden words. These findings were consistent across unmasked (Experiment 1) and masked (Experiment 2) versions of a lexical decision task, where participants either explicitly or implicitly processed the prime words of each prime-target word pair. Findings from Experiment 2 have been previously replicated by Kazanas and Altarriba with a Spanish–English bilingual sample, when tested in English, the participants’ functionally dominant language. The current study was designed to extend these previous findings, using a 1000-ms stimulus onset asynchrony (SOA), which was longer than the 250-ms SOA originally used by Kazanas and Altarriba. Findings from the current study supported the division between emotion and emotion-laden words, as they replicated those previously described by Kazanas and Altarriba. In addition, the current study determined that negative words were processed significantly more slowly in this experiment, with a long SOA (replicating findings by Rossell and Nobre).

    June 29, 2015   doi: 10.1177/0023830915590677   open full text
  • Does Second Language Experience Modulate Perception of Tones in a Third Language?
    Qin, Z., Jongman, A.
    Language and Speech. June 25, 2015

    It is unclear what roles native language (L1) and second language (L2) play in the perception of lexical tones in a third language (L3). In tone perception, listeners with different language backgrounds use different fundamental frequency (F0) cues. While English listeners use F0 height, Mandarin listeners rely more on F0 direction. The present study addresses whether knowledge of Mandarin, particularly as an L2, results in speakers’ reliance on F0 direction in their perception of L3 (Cantonese) tones. Fifteen English-speaking L2 learners of Mandarin constituted the target group, and 15 English monolinguals and 15 native Mandarin speakers, with no background in other tonal languages, were included as control groups. All groups had to discriminate Cantonese tones either by distinguishing a contour tone from a level tone (F0 direction pair) or a level tone from another level tone (F0 height pair). The results showed that L2 learners patterned differently from both control groups by using F0 direction as well as F0 height under the influence of L1 and L2 experience. The acoustics of the tones also affected all listeners’ discrimination. When L2 and L3 are similar in terms of the presence of lexical tone, L2 experience modulates the perception of L3 tones.

    June 25, 2015   doi: 10.1177/0023830915590191   open full text
  • The Hyper-Modular Associative Mind: A Computational Analysis of Associative Responses of Persons with Asperger Syndrome.
    Kenett, Y. N., Gold, R., Faust, M.
    Language and Speech. June 15, 2015

    Rigidity of thought is considered a main characteristic of persons with Asperger syndrome (AS). This rigidity may explain the poor comprehension of unusual semantic relations, frequently exhibited by persons with AS. Research indicates that such deficiency is related to altered mental lexicon organization, but has never been directly examined. The present study used computational network science tools to compare the mental lexicon structure of persons with AS and matched controls. Persons with AS and matched controls generated free associations, and network tools were used to extract and compare the mental lexicon structure of the two groups. The analysis revealed that persons with AS exhibit a hyper-modular semantic organization: their mental lexicon is more compartmentalized compared to matched controls. We argue that this hyper-modularity may be related to the rigidity of thought which characterizes persons with AS and discuss the clinical and more general cognitive implications of our findings.

    June 15, 2015   doi: 10.1177/0023830915589397   open full text
  • Stop and Fricative Devoicing in European Portuguese, Italian and German.
    Pape, D., Jesus, L. M.
    Language and Speech. May 12, 2014

    This paper describes a cross-linguistic production study of devoicing for European Portuguese (EP), Italian, and German. We recorded all stops and fricatives in four vowel contexts and two word positions. We computed time-varying devoicing patterns throughout the stop and fricative durations. Our results show that regarding devoicing behaviour, EP is more similar to German than Italian. While Italian shows almost no devoicing of phonologically voiced consonants, both EP and German show strong and consistent devoicing through the entire consonant. Differences in consonant position showed no effect for EP and Italian, but were significant for German. The height of the vowel context had an effect for German and EP. For EP, we showed that a more posterior place of articulation and a low vowel context lead to significantly more devoicing. However, in contrast to German, we could not find an influence of consonant position on devoicing. The high devoicing for all phonologically voiced stops and fricatives and the vowel context influence are a surprising new result. With respect to voicing maintenance, EP is more like German than other Romance languages.

    May 12, 2014   doi: 10.1177/0023830914530604   open full text
  • Automaticity and Stability of Adaptation to a Foreign-Accented Speaker.
    Witteman, M. J., Bardhan, N. P., Weber, A., McQueen, J. M.
    Language and Speech. May 06, 2014

    In three cross-modal priming experiments we asked whether adaptation to a foreign-accented speaker is automatic, and whether adaptation can be seen after a long delay between initial exposure and test. Dutch listeners were exposed to a Hebrew-accented Dutch speaker with two types of Dutch words: those that contained [i] (globally accented words), and those in which the Dutch [i] was shortened to [i] (specific accent marker words). Experiment 1, which served as a baseline, showed that native Dutch participants showed facilitatory priming for globally accented, but not specific accent, words. In experiment 2, participants performed a 3.5-minute phoneme monitoring task, and were tested on their comprehension of the accented speaker 24 hours later using the same cross-modal priming task as in experiment 1. During the phoneme monitoring task, listeners were asked to detect a consonant that was not strongly accented. In experiment 3, the delay between exposure and test was extended to 1 week. Listeners in experiments 2 and 3 showed facilitatory priming for both globally accented and specific accent marker words. Together, these results show that adaptation to a foreign-accented speaker can be rapid and automatic, and can be observed after a prolonged delay in testing.

    May 06, 2014   doi: 10.1177/0023830914528102   open full text
  • Marked Initial Pitch in Questions Signals Marked Communicative Function.
    Sicoli, M. A., Stivers, T., Enfield, N., Levinson, S. C.
    Language and Speech. May 01, 2014

    In conversation, the initial pitch of an utterance can provide an early phonetic cue of the communicative function, the speech act, or the social action being implemented. We conducted quantitative acoustic measurements and statistical analyses of pitch in over 10,000 utterances, including 2512 questions, their responses, and about 5000 other utterances by 180 total speakers from a corpus of 70 natural conversations in 10 languages. We measured pitch at first prominence in a speaker’s utterance and discriminated utterances by language, speaker, gender, question form, and what social action is achieved by the speaker’s turn. Applying multivariate logistic regression, we found that initial pitch that significantly deviated from the speaker’s median pitch level was predictive of the social action of the question. In questions designed to solicit agreement with an evaluation rather than information, pitch diverged predictably from a speaker’s median, falling in the top 10% of the speaker’s range. This latter finding reveals a kind of iconicity in the relationship between prosody and social action in which a marked pitch correlates with a marked social action. Thus, we argue that speakers rely on pitch to provide an early signal for recipients that the question is not to be interpreted through its literal semantics but rather through an inference.

    May 01, 2014   doi: 10.1177/0023830914529247   open full text
  • Inferring Difficulty: Flexibility in the Real-time Processing of Disfluency.
    Heller, D., Arnold, J. E., Klein, N., Tanenhaus, M. K.
    Language and Speech. April 22, 2014

    Upon hearing a disfluent referring expression, listeners expect the speaker to refer to an object that is previously unmentioned, an object that does not have a straightforward label, or an object that requires a longer description. Two visual-world eye-tracking experiments examined whether listeners directly associate disfluency with these properties of objects, or whether disfluency attribution is more flexible and involves situation-specific inferences. Since in natural situations reference to objects that do not have a straightforward label or that require a longer description is correlated with both production difficulty and with disfluency, we used a mini-artificial lexicon to dissociate difficulty from these properties, building on the fact that recently learned names take longer to produce than existing words in one’s mental lexicon. The results demonstrate that disfluency attribution involves situation-specific inferences; we propose that in new situations listeners spontaneously infer what may cause production difficulty. However, the results show that these situation-specific inferences are limited in scope: listeners assessed difficulty relative to their own experience with the artificial names, and did not adapt to the assumed knowledge of the speaker.

    April 22, 2014   doi: 10.1177/0023830914528107   open full text
  • Effects of Age, Sex and Syllable Number on Voice Onset Time: Evidence from Children's Voiceless Aspirated Stops.
    Yu, V. Y., De Nil, L. F., Pang, E. W.
    Language and Speech. March 18, 2014

    Voice onset time (VOT) is a temporal acoustic parameter that reflects motor speech coordination skills. This study investigated the patterns of age and sex differences across development of voice onset time in a group of 70 English-speaking children, ranging in age from 4.1 to 18.4 years, and 12 young adults. The effect of the number of syllables on VOT patterns was also examined. Speech samples were elicited by having participants produce the syllables /pa/ and /pataka/. Results supported previous findings showing that younger children produce longer VOT values with higher levels of variability. Markedly higher VOT values and increased variability were found for boys between the ages of 8 and 11 years, confirming sex differences in VOT patterns and patterns of variability. In addition, all participants consistently produced shorter VOT with higher variability for multisyllables than monosyllables, indicating an effect of syllable number. Possible explanations for these findings and clinical implications are discussed.

    March 18, 2014   doi: 10.1177/0023830914522994   open full text
  • The Contribution of Segmental and Tonal Information in Mandarin Spoken Word Processing.
    Sereno, J. A., Lee, H.
    Language and Speech. March 10, 2014

    Two priming experiments examined the separate contribution of lexical tone and segmental information in the processing of spoken words in Mandarin Chinese. Experiment 1 contrasted four types of prime–target pairs: tone-and-segment overlap (ru4-ru4), segment-only overlap (ru3-ru4), tone-only overlap (sha4-ru4) and unrelated (qin1-ru4) in an auditory lexical decision task with 48 native Mandarin listeners. Experiment 2 further investigated the minimal segmental overlap needed to trigger priming when tonal information is present. Four prime–target conditions were contrasted: tone-and-segment overlap (ru4-ru4), only onset segment overlap (re4-ru4), only rime overlap (pu4-ru4) and unrelated (qin1-ru4) in an auditory lexical decision task with 68 native Mandarin listeners. The results showed significant priming effects when both tonal and segmental information overlapped or, although to a lesser extent, when only segmental information overlapped, with no priming found when only tones matched. Moreover, any partial segmental overlap, even with matching tonal cues, resulted in significant inhibition. These data clearly indicate that lexical tones are processed differently from segments, with syllabic structure playing a critical role. These findings are discussed in terms of the overall architecture of the processing system that emerges in Mandarin lexical access.

    March 10, 2014   doi: 10.1177/0023830914522956   open full text
  • Language Familiarity, Expectation, and Novice Musical Rhythm Production.
    Neuhoff, J. G., Lidji, P.
    Language and Speech. February 12, 2014

    The music of expert musicians reflects the speech rhythm of their native language. Here, we examine this effect in amateur and novice musicians. English- and French-speaking participants were both instructed to produce simple "English" and "French" tunes using only two keys on a keyboard. All participants later rated the rhythmic variability of English and French speech samples. The rhythmic variability of the "English" and "French" tunes that were produced reflected the perceived rhythmic variability in English and French speech samples. Yet, the pattern was different for English and French participants and did not correspond to the actual measured speech rhythm variability of the speech samples. Surprise recognition tests two weeks later confirmed that the music–speech relationship remained over time. The results show that the relationship between music and speech rhythm is more widespread than previously thought and that musical rhythm production by amateurs and novices is concordant with their rhythmic expectations in the perception of speech.

    February 12, 2014   doi: 10.1177/0023830914520837   open full text
  • Effects of Compatible versus Competing Rhythmic Grouping on Errors and Timing Variability in Speech.
    Katsika, A., Shattuck-Hufnagel, S., Mooshammer, C., Tiede, M., Goldstein, L.
    Language and Speech. December 23, 2013

    In typical speech words are grouped into prosodic constituents. This study investigates how such grouping interacts with segmental sequencing patterns in the production of repetitive word sequences. We experimentally manipulated grouping behavior using a rhythmic repetition task to elicit speech for perceptual and acoustic analysis to test the hypothesis that prosodic structure and patterns of segmental alternation can interact in the production planning process. Talkers produced alternating sequences of two words (top cop) and non-alternating controls (top top and cop cop), organized into six-word sequences. These sequences were further organized into prosodic groupings of three two-word pairs or two three-word triples by means of visual cues and audible metronome clicks. Results for six speakers showed more speech errors in triples, that is, when pairwise word alternation was mismatched with prosodic subgrouping in triples. This result suggests that the planning process for the segmental units of an utterance interacts with the planning process for the prosodic grouping of its words. It also highlights the importance of extending commonly used experimental speech elicitation methods to include more complex prosodic patterns, in order to evoke the kinds of interaction between prosodic structure and planning that occur in the production of lexical forms in continuous communicative speech.

    December 23, 2013   doi: 10.1177/0023830913512776   open full text
  • White Bear Effects in Language Production: Evidence from the Prosodic Realization of Adjectives.
    Kaland, C., Krahmer, E., Swerts, M.
    Language and Speech. December 23, 2013

    A central problem in recent research on speech production concerns the extent to which speakers adapt their linguistic expressions to the needs of their addressees. It is claimed that speakers sometimes leak information about objects that are visible only to them and not to their listeners. Previous research takes only the occurrence of adjectives as evidence for the leakage of privileged information. The present study hypothesizes that leaked information is also encoded in the prosody of those adjectives. A production experiment elicited adjectives that leak information and adjectives that do not leak information. An acoustic analysis and prominence rating task showed that adjectives that leak information were uttered with a higher pitch and perceived as more prominent compared to adjectives that do not leak information. Furthermore, a guessing task suggested that the adjectives’ prosody relates to how listeners infer possible privileged information.

    December 23, 2013   doi: 10.1177/0023830913513710   open full text
  • OCP-PLACE in Speech Segmentation.
    Boll-Avetisyan, N., Kager, R.
    Language and Speech. November 27, 2013

    OCP-PLACE, a cross-linguistically well-attested constraint against pairs of consonants with shared [place], is psychologically real. Studies have shown that the processing of words violating OCP-PLACE is inhibited. Functionalists assume that OCP arises as a consequence of low-level perception: a consonant following another with the same [place] cannot be faithfully perceived as an independent unit. If functionalist theories were correct, then lexical access would be inhibited if two homorganic consonants conjoin at word boundaries—a problem that can only be solved with lexical feedback.

    Here, we experimentally challenge the functional account by showing that OCP-PLACE can be used as a speech segmentation cue during pre-lexical processing without lexical feedback, and that the use relates to distributions in the input.

    In Experiment 1, native listeners of Dutch located word boundaries between two labials when segmenting an artificial language. This indicates a use of OCP-LABIAL as a segmentation cue, implying a full perception of both labials. Experiment 2 shows that segmentation performance cannot solely be explained by well-formedness intuitions. Experiment 3 shows that knowledge of OCP-PLACE depends on language-specific input: in Dutch, co-occurrences of labials are under-represented, but co-occurrences of coronals are not. Accordingly, Dutch listeners fail to use OCP-CORONAL for segmentation.

    November 27, 2013   doi: 10.1177/0023830913508074   open full text
  • Effects of Age of Learning on Voice Onset Time: Categorical Perception of Swedish Stops by Near-native L2 Speakers.
    Stölten, K., Abrahamsson, N., Hyltenstam, K.
    Language and Speech. November 26, 2013

    This study examined the effects of age of onset (AO) of L2 acquisition on the categorical perception of the voicing contrast in Swedish word-initial stops varying in voice onset time (VOT). Three voicing continua created on the basis of natural Swedish word pairs with /p-b/, /t-d/, /k-ɡ/ in initial position were presented to 41 Spanish early (AO < 12) and late (AO > 12) near-native speakers of L2 Swedish. Fifteen native speakers of Swedish served as controls. Categorizations were influenced by AO and listener status as L1/L2 speaker, in that the late learners deviated the most from native-speaker perception. In addition, only a small minority of the late learners perceived the voicing contrast in a way comparable to native-speaker categorization, while most early L2 learners demonstrated nativelike categorization patterns. However, when the results were combined with the L2 learners’ production of Swedish voiceless stops (Stölten, 2005; Stölten, Abrahamsson & Hyltenstam, in press), nativelike production and perception was never found among the late learners, while a majority of the early learners still exhibited nativelike production and perception. It is concluded that, despite their being perceived as mother-tongue speakers of Swedish by native listeners, the late learners do not, after detailed phonetic scrutiny, exhibit a fully nativelike command of Swedish VOT. Consequently, being near-native rather than nativelike speakers of their second language, these individuals do not constitute the evidence necessary to reject the hypothesis of one or several critical (or sensitive) periods for language acquisition.

    November 26, 2013   doi: 10.1177/0023830913508760   open full text
  • Multiple Functional Units in the Preattentive Segmentation of Speech in Japanese: Evidence from Word Illusions.
    Nakamura, M., Kolinsky, R.
    Language and Speech. November 21, 2013

    We explored the functional units of speech segmentation in Japanese using dichotic presentation and a detection task requiring no intentional sublexical analysis. Indeed, illusory perception of a target word might result from preattentive migration of phonemes, morae, or syllables from one ear to the other. In Experiment 1, Japanese listeners detected targets presented in hiragana and/or kanji. Phoneme migrations did occur, suggesting that orthography-independent sublexical constituents play some role in segmentation. However, syllable and especially mora migrations were more numerous. This pattern of results was not observed in French speakers (Experiment 2), suggesting that it reflects native segmentation in Japanese. To control for the intervention of kanji representations (many words are written in kanji, and one kanji often corresponds to one syllable), in Experiment 3, Japanese listeners were presented with target loanwords that can be written only in katakana. Again, phoneme migrations occurred, while the first mora and syllable led to similar rates of illusory percepts. No migration occurred for the second, "special" mora (/J/ or /N/), probably because this constitutes the latter part of a heavy syllable. Overall, these findings suggest that multiple units, such as morae, syllables, and even phonemes, function independently of orthographic knowledge in Japanese preattentive speech segmentation.

    November 21, 2013   doi: 10.1177/0023830913508077   open full text
  • Phonetic Detail and Dimensionality in Sound-shape Correspondences: Refining the Bouba-Kiki Paradigm.
    D'Onofrio, A.
    Language and Speech. November 15, 2013

    Sound symbolism is the process by which speakers link phonetic features with meanings non-arbitrarily. For instance, speakers across languages associate non-words containing rounded vowels, like bouba, with round shapes, and non-words without rounded vowels, like kiki, with spiky shapes. Researchers have posited that this link results from a cognitive association between sounds and visual or proprioceptive cues made in their production (e.g. sounds of rounded vowels cue the image of rounded lips, which is mapped to rounded shapes). However, non-words used in previous studies differ from one another along multiple phonetic dimensions, some showing no clear iconic mapping to shape. This study teases apart these features, finding that vowel backness, consonant voicing, and consonant place of articulation each elicit a sound symbolic effect, which is amplified when these dimensions are combined. This investigation also probes object properties that can be involved in sound symbolic association, bringing the "bouba-kiki" paradigm, typically involving the use of abstract shapes, into the realm of real-world objects. To shed light on ways that sound symbolism may operate in natural language, this study suggests that future research in this paradigm would benefit from consideration of both more detailed phonetic correlates and more refined object properties.

    November 15, 2013   doi: 10.1177/0023830913507694   open full text
  • Dynamic Spectral Structure Specifies Vowels for Adults and Children.
    Nittrouer, S., Lowenstein, J. H.
    Language and Speech. November 07, 2013

    The dynamic specification account of vowel recognition suggests that formant movement between vowel targets and consonant margins is used by listeners to recognize vowels. This study tested that account by measuring contributions to vowel recognition of dynamic (i.e., time-varying) spectral structure and coarticulatory effects on stationary structure. Adults and children (four- and seven-year-olds) were tested with three kinds of consonant-vowel-consonant syllables: (1) unprocessed; (2) sine waves that preserved both stationary coarticulated and dynamic spectral structure; and (3) vocoded signals that primarily preserved the stationary, but not the dynamic, structure. Sections of two lengths were removed from syllable middles: (1) half the vocalic portion; and (2) all but the first and last three pitch periods. Adults performed accurately with unprocessed and sine-wave signals, as long as half the syllable remained; their recognition was poorer for vocoded signals, but above chance. Seven-year-olds performed more poorly than adults with both sorts of processed signals, but disproportionately worse with vocoded than sine-wave signals. Most four-year-olds were unable to recognize vowels at all with vocoded signals. Conclusions were that both dynamic and stationary coarticulated structures support vowel recognition for adults, but children attend to dynamic spectral structure more strongly because early phonological organization favors whole words.

    November 07, 2013   doi: 10.1177/0023830913508075   open full text
  • A Corpus-based Study of Fillers among Native Basque Speakers and the Role of Zera.
    Urizar, X., Samuel, A. G.
    Language and Speech. October 28, 2013

    Although speakers often transmit their messages clearly and concisely, their speech also includes disfluencies such as filler words. We have analyzed the kinds of filler-like words (hereafter fillers) that native Basque speakers produce and the role that these fillers have within the discourse. We recorded six Basque L1 speakers in a natural setting designed to trigger spontaneous speech. Because Basque is an agglutinative language, it may offer speakers certain options for filler use that have not been observed in studies of languages that do not have such a rich agglutinative morphology (e.g. English). When speakers are close to the retrieval of a to-be-produced word, but not quite able to access it, they may use the agglutinative morphology to give the listener clues to the syntactic category of the intended word. In Basque such clues could be provided by modifying the surface form of a filler.

    Our corpus includes approximately 300 filler tokens. We provide analyses of the kinds of fillers this population produces and the contexts in which these appear. Certain fillers tend to be produced before beginning large units (e.g. sentences), whereas others usually precede smaller units. One filler (/zera/) behaves differently than the others. In particular, it assumes context-based forms that offer listeners partial information about the almost-retrieved word.

    October 28, 2013   doi: 10.1177/0023830913506422   open full text
  • English Listeners' Use of Distributional and Acoustic-Phonetic Cues to Liaison in French: Evidence from Eye Movements.
    Tremblay, A., Spinelli, E.
    Language and Speech. October 08, 2013

    This study investigates English listeners’ use of distributional and acoustic-phonetic cues to liaison in French. Liaison creates a misalignment of the syllable and word boundaries, but is signaled by distributional cues (/z/ is a frequent liaison but not a frequent word onset; /t/ is a frequent word onset but a less frequent liaison) and acoustic-phonetic cues (liaison consonants are 15 per cent shorter than word-initial consonants). English-speaking French learners completed a visual-world eye-tracking experiment in which they heard adjective-noun sequences where the pivotal consonant was /t/ (expected advantage for consonant-initial words) or /z/ (expected advantage for liaison-initial words). Their results were compared to those of native French speakers. Both groups showed an advantage for consonant-initial targets with /t/ but no advantage for consonant- or liaison-initial targets with /z/. Both groups’ competitor fixations were modulated by the duration of the pivotal consonant, but only the learners’ fixations to liaison-initial targets showed this durational modulation. This suggests that English listeners use both top-down (distributional) and bottom-up (acoustic-phonetic) cues to liaison in French. Their greater reliance on acoustic-phonetic cues is hypothesized to stem in part from English, where such cues play an important role for locating word boundaries.

    October 08, 2013   doi: 10.1177/0023830913504569   open full text
  • Exploring Interactional Features with Prosodic Patterns.
    Zellers, M., Ogden, R.
    Language and Speech. October 07, 2013

    This study adopts a multiple-methods approach to the investigation of prosody, drawing on insights from a quantitative methodology (experimental prosody research) as well as a qualitative one (conversation analysis). We use a k-means cluster analysis to investigate prosodic patterns in conversational sequences involving lexico-semantic contrastive structures. This combined methodology demonstrates that quantitative/statistical methods are a valuable tool for making relatively objective characterizations of acoustic features of speech, while qualitative methods are essential for interpreting the quantitative results. We find that in sequences that maintain global prosodic characteristics across contrastive structures, participants orient to interactional problems, such as determining who has the right to the floor, or avoiding disruption of an ongoing interaction. On the other hand, in sequences in which the global prosody is different across contrastive structures, participants do not generally appear to be orienting to such problems of alignment. Our findings expand the interpretation of "contrastive prosody" that is commonly used in experimental prosody approaches, while providing a way for conversation-analytic research to improve quantification and generalizability of findings.
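    The k-means step described above can be illustrated with a minimal, generic implementation. Everything below is a sketch with invented, normalized prosodic feature vectors (hypothetical f0-slope and f0-range values), not the study's actual features, preprocessing, or data:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[j])))
            clusters[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster emptied.
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*c)) if c
                     else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids, clusters

# Invented (f0 slope, f0 range) vectors forming two prosodic "patterns".
points = [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18),
          (0.90, 0.80), (0.85, 0.90), (0.95, 0.85)]
centroids, clusters = kmeans(points, 2)
assert sorted(len(c) for c in clusters) == [3, 3]
```

In the combined methodology the abstract describes, a clustering like this supplies the relatively objective grouping of acoustic features; interpreting what each cluster does interactionally remains a qualitative, conversation-analytic task.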

    October 07, 2013   doi: 10.1177/0023830913504568   open full text
  • Rhythmic Patterning in Malaysian and Singapore English.
    Tan, R. S. K., Low, E.-L.
    Language and Speech. July 31, 2013

    Previous work on the rhythm of Malaysian English has been based on impressionistic observations. This paper utilizes acoustic analysis to measure the rhythmic patterns of Malaysian English. Recordings of the read speech and spontaneous speech of 10 Malaysian English speakers were analyzed and compared with recordings of an equivalent sample of Singaporean English speakers. Analysis was done using two rhythmic indexes, the PVI and VarcoV. It was found that although the rhythm of read speech of the Singaporean speakers was syllable-based as described by previous studies, the rhythm of the Malaysian speakers was even more syllable-based. Analysis of syllables in specific utterances showed that Malaysian speakers did not reduce vowels as much as Singaporean speakers did in syllables that normally trigger vowel reduction. Results for the spontaneous speech confirmed the findings for the read speech; that is, the same rhythmic patterning was found.
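    The two rhythm indexes named above have standard formulations: the normalized PVI (nPVI) averages the absolute difference between successive vocalic durations, normalized by their local mean, and VarcoV is the coefficient of variation of vocalic durations times 100. A minimal sketch (the duration values are invented for illustration, not the study's measurements):

```python
from statistics import mean, pstdev

def npvi(durations):
    """Normalized PVI: mean of |d_k - d_{k+1}| / ((d_k + d_{k+1}) / 2)
    over successive intervals, scaled by 100."""
    pairs = list(zip(durations, durations[1:]))
    return 100 * mean(abs(a - b) / ((a + b) / 2) for a, b in pairs)

def varco_v(durations):
    """VarcoV: 100 * standard deviation / mean of vocalic durations
    (a speech-rate-normalized variability measure)."""
    return 100 * pstdev(durations) / mean(durations)

# Invented vocalic durations (ms). Alternating long/short vowels, as in
# more stress-based rhythm, yield higher nPVI than near-equal durations.
stress_timed = [120, 50, 130, 45, 110]
syllable_timed = [90, 85, 95, 88, 92]
assert npvi(stress_timed) > npvi(syllable_timed)
```

On these indexes, less vowel reduction (as reported for the Malaysian speakers) translates into more uniform vocalic durations and hence lower, more syllable-based index values.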

    July 31, 2013   doi: 10.1177/0023830913496058   open full text
  • Syntactic Priming without Lexical Overlap in Reading Comprehension.
    Kim, C. S., Carbary, K. M., Tanenhaus, M. K.
    Language and Speech. July 31, 2013

    Syntactic priming without lexical overlap is well-documented in language production. In contrast, reading-time comprehension studies, which typically use locally ambiguous sentences, generally find syntactic priming only with lexical overlap. This asymmetry has led some researchers to propose that distinct mechanisms underlie the comprehension and production of syntactic structure. Instead, we propose that methodological differences in how priming is assessed are largely responsible for the asymmetry: in comprehension, lexical biases in a locally ambiguous target sentence may overwhelm the influence of syntactic priming effects on a reader’s interpretation. We addressed these issues in a self-paced reading study by (1) using target sentences containing global attachment ambiguities, (2) examining a syntactic structure which does not involve an argument of the verb, and (3) factoring out the unavoidable lexical biases associated with the target sentences in a mixed-effects regression model. Under these conditions, syntactic priming affected how ambiguous sentences were parsed, and facilitated reading times when target sentences were parsed using the primed structure. This resolves discrepancies among previous findings, and suggests that the same mechanism underlies syntactic priming in comprehension and production.

    July 31, 2013   doi: 10.1177/0023830913496052   open full text
  • On the Intonation of German Intonation Questions: The Role of the Prenuclear Region.
    Petrone, C., Niebuhr, O.
    Language and Speech. July 26, 2013

    German questions and statements are distinguished not only by lexical and syntactic but also by intonational means. This study revisits, for Northern Standard German, how questions are signalled intonationally in utterances that have neither lexical nor syntactic cues. Starting from natural productions of such ‘intonation questions’, two perception experiments were run. Experiment I is based on a gating paradigm, which was applied to naturally produced questions and statements. Experiment II includes two indirect-identification tasks. Resynthesized stimuli were judged in relation to two context utterances, each of which was compatible with only one sentence mode interpretation. Results show that utterances with a finally falling nuclear pitch-accent contour can also trigger question perception. An utterance-final rise is not mandatory. Also, question and statement cues are not restricted to the intonational nucleus. Rather, listeners can refer to shape, slope, and alignment differences of the preceding prenuclear pitch accent to identify sentence mode. These findings are in line with studies suggesting that the utterance-final rise versus fall contrast is not directly related to sentence modality, but represents a separate attitudinal meaning dimension. Moreover, the findings support that both prenuclear and nuclear fundamental frequency (F0) patterns must be taken into account in the analysis of tune meaning.

    July 26, 2013   doi: 10.1177/0023830913495651   open full text
  • Functional Load and the Lexicon: Evidence that Syntactic Category and Frequency Relationships in Minimal Lemma Pairs Predict the Loss of Phoneme contrasts in Language Change.
    Wedel, A., Jackson, S., Kaplan, A.
    Language and Speech. July 03, 2013

    All languages use individually meaningless, contrastive categories in combination to create distinct words. Despite their central role in communication, these "phoneme" contrasts can be lost over the course of language change. The century-old functional load hypothesis proposes that loss of a phoneme contrast will be inhibited in relation to the work that it does in distinguishing words. In a previous work we showed for the first time that a simple measure of functional load does significantly predict patterns of contrast loss within a diverse set of languages: the more minimal word pairs that a phoneme contrast distinguishes, the less likely those phonemes are to have merged over the course of language change. Here, we examine several lexical properties that are predicted to influence the uncertainty between word pairs in usage. We present evidence that (a) the lemma rather than surface-form count of minimal pairs is more predictive of merger; (b) the count of minimal lemma pairs that share a syntactic category is a stronger predictor of merger than the count of those with divergent syntactic categories, and (c) that the count of minimal lemma pairs with members of similar frequency is a stronger predictor of merger than that of those with more divergent frequencies. These findings support the broad hypothesis that properties of individual utterances influence long-term language change, and are consistent with findings suggesting that phonetic cues are modulated in response to lexical uncertainty within utterances.
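    The simple functional load measure the abstract refers to, a count of minimal pairs distinguished by a phoneme contrast, can be sketched concretely. The mini-lexicon and the /p/–/b/ contrast below are invented for illustration and are not the paper's data:

```python
def minimal_pairs(lexicon, ph_a, ph_b):
    """Count word pairs distinguished solely by the ph_a/ph_b contrast:
    equal length, identical except at one position where one word has
    ph_a and the other ph_b."""
    count = 0
    words = sorted(lexicon)
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            if len(w1) != len(w2):
                continue
            diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
            if len(diffs) == 1 and set(diffs[0]) == {ph_a, ph_b}:
                count += 1
    return count

# Invented mini-lexicon: words as tuples of phoneme symbols.
lexicon = {("p", "a", "t"), ("b", "a", "t"),
           ("p", "i", "t"), ("b", "i", "t"),
           ("k", "a", "t")}
# /p/-/b/ distinguishes pat~bat and pit~bit, so its load here is 2.
assert minimal_pairs(lexicon, "p", "b") == 2
```

The paper's refinements would then condition this count, e.g. counting over lemmas rather than surface forms, or restricting to pairs sharing a syntactic category or of similar frequency.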

    July 03, 2013   doi: 10.1177/0023830913489096   open full text
  • Introduction to the Special Issue: Parsimony and Redundancy in Models of Language.
    Wiechmann, D., Kerz, E., Snider, N., Jaeger, T. F.
    Language and Speech. June 28, 2013
    There is no abstract available for this paper.
    June 28, 2013   doi: 10.1177/0023830913490877   open full text
  • Sidestepping the Combinatorial Explosion: An Explanation of n-gram Frequency Effects Based on Naive Discriminative Learning.
    Baayen, R. H., Hendrix, P., Ramscar, M.
    Language and Speech. May 20, 2013

    Arnon and Snider (2010; More than words: Frequency effects for multi-word phrases, Journal of Memory and Language, 62, 67–82) documented frequency effects for compositional four-grams independently of the frequencies of lower-order n-grams. They argue that comprehenders apparently store frequency information about multi-word units. We show that n-gram frequency effects can emerge in a parameter-free computational model driven by naive discriminative learning, trained on a sample of 300,000 four-word phrases from the British National Corpus. The discriminative learning model is a full decomposition model, associating orthographic input features straightforwardly with meanings. The model does not make use of separate representations for derived or inflected words, nor for compounds, nor for phrases. Nevertheless, frequency effects are correctly predicted for all these linguistic units. Naive discriminative learning provides the simplest and most economical explanation for frequency effects in language processing, obviating the need to posit counters in the head for, and the existence of, hundreds of millions of n-gram representations.
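    Naive discriminative learning rests on the Rescorla–Wagner update rule: on each learning event, the weight from every active cue (e.g. an orthographic unigram or bigram) to each outcome (a meaning) is nudged by the difference between the outcome's presence and its current summed activation. The toy cues, outcomes, and events below are invented for illustration; this is not the paper's model or corpus:

```python
from collections import defaultdict

def rescorla_wagner(events, rate=0.1, epochs=200):
    """Train cue->outcome weights with the Rescorla-Wagner rule.
    Each event is (set_of_active_cues, set_of_present_outcomes)."""
    w = defaultdict(float)  # (cue, outcome) -> association weight
    outcomes = {o for _, outs in events for o in outs}
    for _ in range(epochs):
        for cues, outs in events:
            for o in outcomes:
                activation = sum(w[(c, o)] for c in cues)
                target = 1.0 if o in outs else 0.0
                delta = rate * (target - activation)
                for c in cues:
                    w[(c, o)] += delta
    return w

# Invented events: letter-bigram cues paired with word meanings.
events = [({"#h", "ha", "nd"}, {"HAND"}),
          ({"#h", "ha", "at"}, {"HAT"})]
w = rescorla_wagner(events)
# "nd" occurs only with HAND, so it ends up more strongly associated
# with HAND than the shared, non-discriminative cue "#h".
assert w[("nd", "HAND")] > w[("#h", "HAND")]
```

The key property for the frequency-effect argument is that nothing here stores an n-gram as a unit: frequency effects fall out of the accumulated cue–outcome weights alone.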

    May 20, 2013   doi: 10.1177/0023830913484896   open full text
  • More than Words: The Effect of Multi-word Frequency and Constituency on Phonetic Duration.
    Arnon, I., Cohen Priva, U.
    Language and Speech. May 09, 2013

    There is mounting evidence that language users are sensitive to the distributional properties of multi-word sequences. Such findings expand the range of information speakers are sensitive to and call for processing models that can represent larger chains of relations. In the current paper we investigate the effect of multi-word statistics on phonetic duration using a combination of experimental and corpus-based research. We ask (a) if phonetic duration is affected by multi-word frequency in both elicited and spontaneous speech, and (b) if syntactic constituency modulates the effect. We show that phonetic durations are reduced in higher frequency sequences, regardless of constituency: duration is shorter for more frequent sequences within and across syntactic boundaries. The effects are not reducible to the frequency of the individual words or substrings. These findings open up a novel set of questions about the interaction between surface distributions and higher order properties, and the resulting need (or lack thereof) to incorporate higher order properties into processing models.

    May 09, 2013   doi: 10.1177/0023830913484891   open full text
  • Bayesian Tree Substitution Grammars as a Usage-based Approach.
    Post, M., Gildea, D.
    Language and Speech. May 09, 2013

    Tree substitution grammar (TSG) is a generalization of context-free grammar (CFG) that permits non-terminals to rewrite as fragments of arbitrary size, instead of just depth-one productions. We discuss connections between the TSG framework and the larger family of usage-based approaches to language, showing how TSG allows us to make some of the claims of these approaches sufficiently concrete for computational modeling.

    A fundamental difficulty in defining a TSG is to determine the set of fragments for the grammar, because the set of possible fragments is exponential in the size of the parse trees from which TSGs are typically learned. We describe a model-based approach that learns a TSG using Gibbs sampling with a non-parametric prior to control fragment size, yielding grammars that contain mostly small fragments but that include larger ones as the data permits. We evaluate these grammars on two tasks (parsing accuracy and grammaticality classification), and find that these Bayesian TSGs achieve excellent performance on both tasks relative to a set of heuristically extracted TSGs spanning the spectrum of representations, from a standard depth-one context-free Treebank grammar to explicit approximations of the Data-Oriented Parsing model.

    May 09, 2013   doi: 10.1177/0023830913484901   open full text
  • Three Design Principles of Language: The Search for Parsimony in Redundancy.
    Beekhuizen, B., Bod, R., Zuidema, W.
    Language and Speech. April 30, 2013

    In this paper we present three design principles of language – experience, heterogeneity and redundancy – and discuss recent developments in a family of models incorporating them, namely Data-Oriented Parsing/Unsupervised Data-Oriented Parsing. Although the idea of some form of redundant storage has become part and parcel of parsing technologies and usage-based linguistic approaches alike, the question of how much of it is cognitively realistic and/or computationally optimally efficient is an open one. We argue that a segmentation-based approach (Bayesian Model Merging) combined with an all-subtrees approach reduces the number of rules needed to achieve an optimal performance, thus making the parser more efficient. At the same time, starting from unsegmented wholes comes closer to the acquisitional situation of a language learner, and thus adds to the cognitive plausibility of the model.

    April 30, 2013   doi: 10.1177/0023830913484897   open full text
  • Representing Idioms: Syntactic and Contextual Effects on Idiom Processing.
    Holsinger, E.
    Language and Speech. April 30, 2013

    Recent work on the processing of idiomatic expressions argues against the idea that idioms are simply big words. For example, hybrid models of idiom representation, originally investigated in the context of idiom production, propose a priority of literal computation, and a principled relationship between the conceptual meaning of an idiom, its literal lemmas and its syntactic structure. We examined the predictions of the hybrid representation hypothesis in the domain of idiom comprehension. We conducted two experiments to examine the role of syntactic, lexical and contextual factors on the interpretation of idiomatic expressions. Experiment 1 examines the role of syntactic compatibility and lexical compatibility on the real-time processing of potentially idiomatic strings. Experiment 2 examines the role of contextual information on idiom processing and how context interacts with lexical information during processing. We find evidence that literal computation plays a causal role in the retrieval of idiomatic meaning and that contextual, lexical and structural information influence the processing of idiomatic strings at early stages of processing, which provides support for the hybrid model of idiom representation in the domain of idiom comprehension.

    April 30, 2013   doi: 10.1177/0023830913484899   open full text
  • Implicit Schemata and Categories in Memory-based Language Processing.
    van den Bosch, A., Daelemans, W.
    Language and Speech. April 30, 2013

    Memory-based language processing (MBLP) is an approach to language processing based on exemplar storage during learning and analogical reasoning during processing. From a cognitive perspective, the approach is attractive as a model for human language processing because it does not make any assumptions about the way abstractions are shaped, nor any a priori distinction between regular and exceptional exemplars, allowing it to explain fluidity of linguistic categories, and both regularization and irregularization in processing. Schema-like behaviour and the emergence of categories can be explained in MBLP as by-products of analogical reasoning over exemplars in memory. We focus on the reliance of MBLP on local (versus global) estimation, which is a relatively poorly understood but unique characteristic that separates the memory-based approach from globally abstracting approaches in how the model deals with redundancy and parsimony. We compare our model to related analogy-based methods, as well as to example-based frameworks that assume some systemic form of abstraction.

    April 30, 2013   doi: 10.1177/0023830913484902   open full text
  • Children's Expression of Uncertainty in Collaborative and Competitive Contexts.
    Visser, M., Krahmer, E., Swerts, M.
    Language and Speech. March 25, 2013

    We studied the effect of two social settings (collaborative versus competitive) on the visual and auditory expressions of uncertainty by children in two age groups (8 and 11). We conducted an experiment in which children played a quiz game in pairs. They either had to collaborate or compete with each other. We found that the Feeling-of-Knowing of eight-year-old children did not seem to be affected by the social setting, contrary to the Feeling-of-Knowing of 11-year-old children. In addition, we labelled children’s expressions in clips taken from the experiment for various visual and auditory features. We found that children used some of these features to signal uncertainty and that older children exhibited clearer cues than younger children. In a subsequent perception test, adults rated children’s certainty in clips used for labelling. It appeared that older children and children in competition expressed their confidence level more clearly than younger children and children in collaboration.

    March 25, 2013   doi: 10.1177/0023830913479117   open full text
  • Implications of an Exemplar-theoretic Model of Phoneme Genesis: A Velar Palatalization Case Study.
    Morley, R. L.
    Language and Speech. March 24, 2013

    Diachronic velar palatalization is taken as the case study for modeling the emergence of a new phoneme category. The spread of a palatalized variant through the lexicon is treated as a stochastic classification task for the listener/learner. The model combines two measures of similarity to determine classification within an exemplar-theoretic framework: acoustic distance and phonotactic expectation. There are three model outcomes: contrast, allophony, or contextual neutralization between the plain and palatalized velars. It is shown, through a series of simulations, that these can be predicted from the distribution of sounds within the pre-change lexicons, namely, the ratio of the /k-vowel/ sequences containing naturally palatalizing vowels (i, ɪ, e), to those containing non-palatalizers. "Unnatural" phonotactic associations can arise in individual lexicons, but are sharply limited due to the large size of the lexicon and the local nature of the phoneme changes. "Anti-natural" distributions, which categorically violate the proposed implicational relationship between palatalization and frontness/height, are absent. This work provides an explicit and restrictive model of phoneme change. The results also serve as an existence proof for an outcome-blind mechanism of avoiding over-generation.

    March 24, 2013   doi: 10.1177/0023830913478926   open full text
  • Use of Syntax in Perceptual Compensation for Phonological Reduction.
    Tuinman, A., Mitterer, H., Cutler, A.
    Language and Speech. March 20, 2013

    Listeners resolve ambiguity in speech by consulting context. Extensive research on this issue has largely relied on continua of sounds constructed to vary incrementally between two phonemic endpoints. In this study we presented listeners instead with phonetic ambiguity of a kind with which they have natural experience: varying degrees of word-final /t/-reduction. In two experiments, Dutch listeners decided whether or not the verb in a sentence such as Maar zij ren(t) soms ‘But she sometimes run(s)’ ended in /t/. In Dutch, presence versus absence of final /t/ distinguishes third- from first-person singular present-tense verbs. Acoustic evidence for /t/ varied from clear to absent, and immediately preceding phonetic context was consistent with more versus less likely deletion of /t/. In both experiments, listeners reported more /t/s in sentences in which /t/ would be syntactically correct. In Experiment 1, the disambiguating syntactic information preceded the target verb, as above, while in Experiment 2, it followed the verb. The syntactic bias was greater for fast than for slow responses in Experiment 1, but no such difference appeared in Experiment 2. We conclude that syntactic information does not directly influence pre-lexical processing, but is called upon in making phoneme decisions.

    March 20, 2013   doi: 10.1177/0023830913479106   open full text
  • Phonological Variant Recognition: Representations and Rules.
    Pinnow, E., Connine, C. M.
    Language and Speech. March 13, 2013

    The current research explores the role of lexical representations and processing in the recognition of phonological variants. Two alternative approaches for variant recognition are considered: a representational approach that posits frequency-graded lexical representations for variant forms and inferential processes that mediate between the spoken variant and the lexical representation. In a lexical decision task (Experiment 1) and in a phoneme identification task (Experiment 2) using real words, low-frequency variants, but not high-frequency variants, show improved recognition rates following additional experience with the variants. This knowledge generalized to novel variant forms. Experiment 3 replicated these results using an artificial lexicon and showed that recognition of low-frequency variants was influenced by similarity to a high-frequency variant form. Similarity to a high-frequency variant alone, however, was insufficient to explain recognition of the infrequent variants (Experiments 4 and 5). The results support a hybrid account of variant recognition that relies on both multiple frequency-graded representations and inference processes.

    March 13, 2013   doi: 10.1177/0023830913479105   open full text
  • Lexical Selection in Action: Evidence from Spontaneous Punning.
    Otake, T., Cutler, A.
    Language and Speech. March 10, 2013

    Analysis of a corpus of spontaneously produced Japanese puns from a single speaker over a two-year period provides a view of how a punster selects a source word for a pun and transforms it into another word for humorous effect. The pun-making process is driven by a principle of similarity: the source word should as far as possible be preserved (in terms of segmental sequence) in the pun. This renders homophones (English example: band–banned) the pun type of choice, with part–whole relationships of embedding (cap–capture), and mutations of the source word (peas–bees) rather less favored. Similarity also governs mutations in that single-phoneme substitutions outnumber larger changes, and in phoneme substitutions, subphonemic features tend to be preserved. The process of spontaneous punning thus applies, on line, the same similarity criteria as govern explicit similarity judgments and offline decisions about pun success (e.g., for inclusion in published collections). Finally, the process of spoken-word recognition is word-play-friendly in that it involves multiple word-form activation and competition, which, coupled with known techniques in use in difficult listening conditions, enables listeners to generate most pun types as offshoots of normal listening procedures.

    March 10, 2013   doi: 10.1177/0023830913478933   open full text
  • Prominence in Triconstituent Compounds: Pitch Contours and Linguistic Theory.
    Kosling, K., Kunter, G., Baayen, H., Plag, I.
    Language and Speech. March 10, 2013

    According to the widely accepted Lexical Category Prominence Rule (LCPR), prominence assignment to triconstituent compounds depends on the branching direction. Left-branching compounds, that is, compounds with a left-hand complex constituent, are held to have highest prominence on the left-most constituent, whereas right-branching compounds have highest prominence on the second of the three constituents. The LCPR is, however, only weakly supported empirically. The present paper tests a new hypothesis concerning the prominence of triconstituent compounds and suggests a new methodology for the empirical investigation of compound prominence. According to this hypothesis, the prominence pattern of the embedded compound has a decisive influence on the prominence of the whole compound. Using a mixed-effects generalized additive model for the analysis of the pitch movements, it is shown that all triconstituent compounds have an accent on the first constituent irrespective of branching, and that the placement of a second, or even a third, accent is dependent on the prominence pattern of the embedded compound. The LCPR is wrong.

    March 10, 2013   doi: 10.1177/0023830913478914   open full text
  • Intonational Means to Mark Verum Focus in German and French.
    Turco, G., Dimroth, C., Braun, B.
    Language and Speech. November 18, 2012

    German and French differ in a number of aspects. Regarding the prosody-pragmatics interface, German is said to have a direct focus-to-accent mapping, which is largely absent in French – owing to strong structural constraints. We used a semi-spontaneous dialogue setting to investigate the intonational marking of Verum Focus, a focus on the polarity of an utterance in the two languages (e.g. the child IS tearing the banknote as an opposite claim to the child is not tearing the banknote). When Verum Focus applies to auxiliaries, pragmatic aspects (i.e. highlighting the contrast) directly compete with structural constraints (e.g. avoiding an accent on phonologically weak elements such as monosyllabic function words). Intonational analyses showed that auxiliaries were predominantly accented in German, as expected. Interestingly, we found a high number of (as yet undocumented) focal accents on phrase-initial auxiliaries in French Verum Focus contexts. When French accent patterns were equally distributed across information structural contexts, relative prominence (in terms of peak height) between initial and final accents was shifted towards initial accents in Verum Focus compared to non-Verum Focus contexts. Our data hence suggest that French also may mark Verum Focus by focal accents but that this tendency is partly overridden by strong structural constraints.

    November 18, 2012   doi: 10.1177/0023830912460506   open full text
  • Examining the Acquisition of Phonological Word Forms with Computational Experiments.
    Vitevitch, M. S., Storkel, H. L.
    Language and Speech. October 23, 2012

    It has been hypothesized that known words in the lexicon strengthen newly formed representations of novel words, resulting in words with dense neighborhoods being learned more quickly than words with sparse neighborhoods. Tests of this hypothesis in a connectionist network showed that words with dense neighborhoods were learned better than words with sparse neighborhoods when the network was exposed to the words all at once (Experiment 1), or gradually over time, like human word-learners (Experiment 2). This pattern was also observed despite variation in the availability of processing resources in the networks (Experiment 3). A learning advantage for words with sparse neighborhoods was observed only when the network was initially exposed to words with sparse neighborhoods and exposed to dense neighborhoods later in training (Experiment 4). The benefits of computational experiments for increasing our understanding of language processes and for the treatment of language processing disorders are discussed.

    October 23, 2012   doi: 10.1177/0023830912460513   open full text
  • Football versus football: Effect of topic on /r/ realization in American and English sports fans.
    Love, J., Walker, A.
    Language and Speech. September 11, 2012

    Can the topic of a conversation, when heavily associated with a particular dialect region, influence how a speaker realizes a linguistic variable? We interviewed fans of English Premier League soccer at a pub in Columbus, Ohio. Nine speakers of British English and eleven speakers of American English were interviewed about their favorite American football and English soccer teams. We present evidence that the soccer fans in this speech community produce variants more consistent with Standard American English when talking about American football than English soccer. Specifically, speakers were overall more /r/-ful (F3 values were lower in rhotic environments) when talking about their favorite American football team. Numeric trends in the data also suggest that exposure to both American and British English, being a fan of both sports, and task may mediate these effects.

    September 11, 2012   doi: 10.1177/0023830912453132   open full text
  • Identifying Nonwords: Effects of Lexical Neighborhoods, Phonotactic Probability, and Listener Characteristics.
    Janse, E., Newman, R. S.
    Language and Speech. July 11, 2012

    Listeners find it relatively difficult to recognize words that sound similar to other known words. In contrast, when asked to identify spoken nonwords, listeners perform better when the nonwords are similar to many words in their language. These effects of sound similarity have been assessed in multiple ways, and both sublexical (phonotactic probability) and lexical (neighborhood) effects have been reported, leading to models that incorporate multiple stages of processing. One prediction that can be derived from these models is that there may be differences among individuals in the size of these similarity effects as a function of working memory abilities. This study investigates how item-specific characteristics of nonwords (both phonotactic probability and neighborhood density) interact with listener-specific characteristics (such as cognitive abilities and hearing sensitivity) in the perceptual identification of nonwords. A set of nonwords was used in which neighborhood density and phonotactic probability were not correlated. In our data, neighborhood density affected identification more reliably than did phonotactic probability. The first study, with young adults, showed that higher neighborhood density particularly benefits nonword identification for those with poorer attention-switching control. This suggests that it may be easier to focus attention on a novel item if it activates and receives support from more similar-sounding neighbors. A similar study on nonword identification with older adults showed increased neighborhood density effects for those with poorer hearing, suggesting that activation of long-term linguistic knowledge is particularly important for backing up auditory representations that are degraded as a result of hearing loss.

    July 11, 2012   doi: 10.1177/0023830912447914   open full text