Interrogating Feature Learning Models to Discover Insights Into the Development of Human Expertise in a Real‐Time, Dynamic Decision‐Making Task

Topics in Cognitive Science

Abstract

Tetris provides a difficult, dynamic task environment within which some people are novices and others, after years of work and practice, become extreme experts. Here we study two core skills: namely, (a) choosing the goal or objective function that will maximize performance and (b) performing a feature-based analysis of the current game board to determine where to place the currently falling zoid (i.e., Tetris piece) so as to maximize that goal. In Study 1, we build cross-entropy reinforcement learning (CERL) models (Szita & Lorincz, 2006) to determine whether different goals result in different feature weights. Two of these optimization strategies quickly rise to performance plateaus, whereas two others continue toward higher but more jagged (i.e., variable) heights. In Study 2, we compare the zoid placement decisions made by our best CERL models with those made by 67 human players. Across 370,131 human game episodes, two CERL models picked the same zoid placements as our lowest-scoring human for 43% of the placements and as our three best-scoring experts for 65% of the placements. Our findings suggest that people focus on maximizing points, not the number of lines cleared or the number of levels reached. They also show that goal choice influences the choice of zoid placements for CERLs and suggest that the same is true of humans. Tetris has a repetitive task structure that makes it more tractable, and more like a traditional experimental psychology paradigm, than many more complex games or tasks. Hence, although complex, Tetris is not overwhelmingly so and presents a right-sized challenge to cognitive theories, especially those of integrated cognitive systems.
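
For readers unfamiliar with the CERL approach cited above (Szita & Lorincz, 2006), the sketch below illustrates the general cross-entropy method for tuning the weights of a feature-based placement policy under a chosen objective. It is a minimal illustration, not the authors' implementation: the feature count, the `play_episode` stand-in (which would normally play a full Tetris game with a linear, feature-weighted placement evaluator and return the chosen objective, such as points scored), and all parameter values are assumptions made for the example.

```python
import numpy as np

# Assumed number of board features (e.g., pile height, holes, wells);
# the paper's actual feature set is not reproduced here.
N_FEATURES = 8

def play_episode(weights, rng):
    """Hypothetical stand-in for playing one Tetris game with a linear,
    feature-weighted placement policy and returning the chosen objective.
    A dummy quadratic objective is used so the sketch runs end to end."""
    target = np.linspace(-1.0, 1.0, N_FEATURES)  # arbitrary stand-in optimum
    return -np.sum((weights - target) ** 2) + rng.normal(scale=0.1)

def cross_entropy_search(n_iters=50, pop_size=100, elite_frac=0.1, seed=0):
    """Cross-entropy method: sample weight vectors from a Gaussian,
    evaluate each on the objective, and refit the Gaussian to the
    top-scoring (elite) samples."""
    rng = np.random.default_rng(seed)
    mean = np.zeros(N_FEATURES)
    std = np.ones(N_FEATURES)
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(n_iters):
        samples = rng.normal(mean, std, size=(pop_size, N_FEATURES))
        scores = np.array([play_episode(w, rng) for w in samples])
        elite = samples[np.argsort(scores)[-n_elite:]]        # keep best
        mean = elite.mean(axis=0)
        std = elite.std(axis=0) + 1e-3                         # noise floor
    return mean

if __name__ == "__main__":
    weights = cross_entropy_search()
    print("learned feature weights:", np.round(weights, 2))
```

Swapping in different objective functions for `play_episode` (points, lines cleared, levels reached) is what would yield the different learned feature weights compared in Study 1.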