Visual Statistical Learning With Stimuli Presented Sequentially Across Space and Time in Deaf and Hearing Adults
Cognitive Science
Published online on October 15, 2018
Abstract
This study investigated visual statistical learning (VSL) in 24 deaf signers and 24 hearing non‐signers. Previous research with hearing individuals suggests that statistical learning mechanisms support literacy. Our first goal was to assess whether VSL was associated with reading ability in deaf individuals, and whether this relation was sustained by a link between VSL and sign language skill. Our second goal was to test the Auditory Scaffolding Hypothesis, which predicts that deaf people should be impaired in sequential processing tasks. For the VSL task, we adopted a modified version of the triplet learning paradigm, with stimuli presented sequentially across space and time. Results revealed that measures of sign language skill (sentence comprehension/repetition) did not correlate with VSL scores, possibly due to the sequential nature of our VSL task. Reading comprehension scores (PIAT‐R) were a significant predictor of VSL accuracy in hearing but not deaf people. This finding might reflect the sequential nature of the VSL task and the less salient role of sequential orthography‐to‐phonology mapping in deaf readers compared to hearing readers. The two groups did not differ in VSL scores. However, when reading ability was taken into account, VSL scores were higher for the deaf group than the hearing group. Overall, this evidence is inconsistent with the Auditory Scaffolding Hypothesis, suggesting that humans can develop efficient sequencing abilities even in the absence of sound.