In 2 studies, we evaluated the effectiveness of training raters with a short version of the Aberdeen Report Judgment Scales (ARJS-STV-S) in assessing the truthfulness of transcribed accounts. Participants gave both truthful and deceptive accounts of illegal or immoral actions. In the truthful accounts, participants described their own misdeeds honestly (true confessions); in the deceptive accounts, they described their own misdeeds but attributed them to someone else (false accusations). In Study 1, guided (n = 32) and unguided (n = 32) raters evaluated 64 transcribed accounts (16 per rater). Only a few ARJS-STV-S criteria differed significantly between false and true accounts. In Study 2 (N = 29), guided raters evaluated the same transcripts using only the most promising criteria from Study 1. Judgments in Study 2 were less biased (in terms of signal detection theory), and the classification of deceptive accounts was significantly better than in a no-guidance control group and in the guided group of Study 1. A Brunswikian lens model analysis showed that the smaller set of cues yielded a closer correspondence between the ecological validities and the subjective cue utilities, which may explain the higher accuracy rates. When criteria have little or no diagnostic value, or when true and false stories are very similar, providing raters with a larger set of truth criteria does not increase accuracy but may instead bias raters toward truth judgments. Practical implications for content-based training programs are outlined.