
Naïve and Robust: Class‐Conditional Independence in Human Classification Learning


Cognitive Science


Abstract

Humans excel at categorization. Yet, from a computational standpoint, learning a novel probabilistic classification task poses severe challenges. The present paper investigates one way to address these challenges: assuming class‐conditional independence of features. This feature independence assumption simplifies the inference problem, allows informed inferences about novel feature combinations, and performs robustly across different statistical environments. We designed a new Bayesian classification learning model (the dependence‐independence structure and category learning model, DISC‐LM) that incorporates varying degrees of prior belief in class‐conditional independence, learns whether or not independence holds, and adapts its behavior accordingly. Theoretical results from two simulation studies demonstrate that classification behavior can start out simple yet adapt effectively to unexpected task structures. Two experiments, designed using optimal experimental design principles, were conducted with human learners. The classification decisions of most participants were best accounted for by a version of the model with a very high initial prior belief in class‐conditional independence, which then adapted to the true environmental structure. Class‐conditional independence may be a strong and useful default assumption in category learning tasks.
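
As a brief illustration of the core assumption (in our own notation, not necessarily the paper's): for a class c and features x_1, ..., x_n, class‐conditional independence factorizes the posterior as

    P(c \mid x_1, \dots, x_n) \propto P(c) \prod_{i=1}^{n} P(x_i \mid c)

so the learner needs to estimate only each P(x_i | c) separately rather than the full joint distribution over feature combinations, which is what licenses informed inferences about combinations of feature values never observed together. A learner that is unsure whether independence actually holds can also update its belief in the independence hypothesis itself,

    P(\text{indep} \mid \text{data}) \propto P(\text{data} \mid \text{indep}) \, P(\text{indep})

which is one way to read the adaptive behavior of DISC‐LM sketched in the abstract; the paper's own formalization may differ in its details.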