Using eye movements to model the sequence of text–picture processing for multimedia comprehension

Journal of Computer Assisted Learning

Abstract

This study used eye movement modeling examples (EMME) to support students' integrative processing of verbal and graphical information while reading an illustrated text. EMME consists of a replay of a model's eye movements superimposed onto the materials being processed for the task. Specifically, the study investigated the effects of modeling the temporal sequence of text and picture processing as shown in different replays of a model's gazes. Eighty‐four 7th graders were randomly assigned to one of four experimental conditions: a text‐first processing sequence (text‐first EMME), a picture‐first processing sequence (picture‐first EMME), a picture‐last processing sequence (picture‐last EMME), and no EMME (control). Both online and offline measures were used. Eye movement indices indicate that only readers in the picture‐first EMME condition spent significantly longer processing the picture and showed stronger integrative processing of verbal and graphical information than students in the no‐EMME condition. Moreover, readers in all EMME conditions outperformed those in the control condition on recall. For learning and transfer, however, only readers in the picture‐first EMME condition were significantly superior to readers in the control condition. Furthermore, both the frequency and the duration of integrative processing of verbal and graphical information mediated the effect of condition on learning outcomes.