Extracting audio summaries to support effective spoken document search
Journal of the American Society for Information Science and Technology
Published online: June 28, 2017
Abstract
We address the challenge of extracting query‐biased audio summaries from podcasts to support users in making relevance decisions in spoken document search via an audio‐only communication channel. We performed a crowdsourced experiment demonstrating that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries, or “snippets,” for supporting users in making relevance judgments against a query. In particular, the results show that summaries generated from ASR transcripts are comparable, in utility and user‐judged preference, to spoken summaries generated from error‐free manual transcripts of the same collection. We also observed that content‐based audio summaries are preferred at least as strongly as synthesized summaries obtained from manually curated metadata, such as title and description. Finally, we describe a methodology for constructing a new test collection, which we have made publicly available.