Individual differences inherent in human perceptual and behavioral data pose challenges for researchers who aim to develop standardized models of phenomena and procedures for normative assessment. A common approach when modeling individual variation is to adopt criteria for identifying and excluding outliers' data. We present investigations that use an alternative approach to analyzing response variation, one that makes use of individual differences in data to define a robust process model of both response variation and the information shared
by individuals in a group. Crowdsourced perceptual identification tasks and formal analysis methods – Cultural Consensus Theory (CCT) – are employed to evaluate participants’ responses to transcription tasks, with the aim of digitizing approximately 23,000 handwritten
pages of an irreplaceable cross-cultural color categorization survey by Robert E. MacLaury. Preliminary results show (1) utility of several original crowdsourced tasks for database transcription, (2) the appropriateness of CCT as a formal model for aggregating transcription data, (3) novel
ways of addressing “expertise” using CCT analyses, and (4) the accurate derivation of correct transcription “answer keys”, suggesting the potential for CCT methods to contribute to accurate transcription results even in the presence of large individual differences in participants’ responses. The research presented suggests that crowdsourcing in conjunction with CCT considerably reduces, without loss of accuracy, the number of participants needed for expeditious transcription of large handwritten corpora.