Scholars have noted that effective use of CBTIs requires special expertise in integrating computer-based descriptors or predictions with ancillary information acquired through direct observation, clinical interview, or additional psychological measures from the respondent. As one observation put it, “Even the very best psychological reports, whether they are written by a clinician or generated by a computer, are of little value without a competent clinician who can put the results into the context of the patient’s life.”
General issues regarding the use of psychological testing for treatment planning lie beyond the scope of this article. Excellent references on this topic are readily available. With regard to implementing CBTI results, consumers should pursue test-specific expertise by studying case examples provided in test manuals, CBTI user guides, and other published resources that document specific strategies for integrating computer-based narratives with clinical assessment and intervention.
Continuing-education workshops emphasizing CBTI systems for various instruments offer an additional valuable resource. An issue often ignored in discussions of implementing CBTI results is the risk that the validity of clinical judgments and decisions will suffer when clinicians modify actuarially derived predictions on the basis of their own subjective (and often unreliable) impressions. On the one hand, consumers of computer-based assessments must consider potential moderators affecting the reliability or validity of computer-based interpretations.
All psychometric indices reported in the literature should be considered contingent on the extent to which characteristics of the test respondent, clinical setting, and purposes of the test administration resemble the conditions from which those indices were derived. On the other hand, modification of computer-based interpretations or decisions by the clinician on the basis of non-representative or biased observations will lower their accuracy (Mirchandani 1999).

Computer-Based Decision Making
Since the classic text comparing clinical versus statistical prediction, research has generally supported the conclusion that “mechanical” prediction (whether from statistical, actuarial, or other algorithmic bases) has the potential to outperform clinical prediction, in which individual judges subjectively process the same data, given reasonably reliable and valid indicators of the criterion. This conclusion generally holds regardless of whether mechanical prediction is based on proper (i.e., statistically derived) or improper (i.e., theoretically or rationally derived) models.
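To make the proper–improper distinction concrete, the following is a minimal sketch, using entirely hypothetical data and variable names, of a “proper” model whose weights are estimated statistically against an “improper” model whose weights are fixed on rational grounds (here, simple unit weights):

```python
# Illustrative sketch only (not from the article): hypothetical predictors and criterion.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical standardized test scales (predictors) and a dichotomous criterion.
n_cases, n_scales = 500, 4
X = rng.normal(size=(n_cases, n_scales))
assumed_weights = np.array([0.8, 0.5, 0.3, 0.1])  # assumed data-generating weights
y = (X @ assumed_weights + rng.normal(scale=1.0, size=n_cases) > 0).astype(int)

# Proper model: weights statistically estimated from the data.
proper = LogisticRegression().fit(X, y)
proper_acc = proper.score(X, y)

# Improper model: unit weights chosen on rational grounds, with no fitting.
improper_pred = (X.sum(axis=1) > 0).astype(int)
improper_acc = (improper_pred == y).mean()

print(f"proper (fitted) accuracy:      {proper_acc:.2f}")
print(f"improper (unit-weight) accuracy: {improper_acc:.2f}")
```

Both are “mechanical” in the sense intended here: each applies an explicit rule the same way to every case, whatever the origin of its weights.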
Indeed, prediction models that emulate experts’ decision-making processes have consistently been shown to outperform decisions reached by those same experts. To readers already familiar with this literature, the findings reported by Grove (1996) may not seem particularly surprising. What is compelling about this work is Grove et al.’s meta-analytic approach to a large sample of previous studies on clinical versus mechanical prediction from psychology and medicine, with multiple analyses evaluating the extent to which the superiority of mechanical methods might result from various study design characteristics.
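Before turning to Grove et al.’s results, the idea of a model that emulates the expert can be illustrated with a brief, purely hypothetical sketch: a linear “model of the judge” is fit to an expert’s own ratings and then applied in place of the expert, reproducing the expert’s weighting of the cues without the expert’s case-to-case inconsistency.

```python
# Illustrative sketch only: hypothetical cues, policy weights, and ratings.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

n_cases, n_cues = 200, 5
cues = rng.normal(size=(n_cases, n_cues))             # information available to the judge
assumed_policy = np.array([0.6, 0.4, 0.3, 0.2, 0.1])  # assumed implicit weighting of the cues
inconsistency = rng.normal(scale=0.8, size=n_cases)   # random error in applying that policy
expert_ratings = cues @ assumed_policy + inconsistency

# Fit a model of the judge to the expert's own ratings.
judge_model = LinearRegression().fit(cues, expert_ratings)

# For new cases, the model applies the recovered policy with perfect consistency,
# which is one reason such models can match or exceed the expert's own accuracy.
new_cues = rng.normal(size=(10, n_cues))
print(judge_model.predict(new_cues).round(2))
```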
Grove et al. demonstrated that, on average, mechanical prediction resulted in 10% greater accuracy than clinical prediction. This finding generally held regardless of judgment task, types of data being analyzed, or other methodological considerations. However, it also bears noting that in nearly half of the studies analyzed, clinicians performed as well as mechanical methods and, in eight cases, outperformed mechanical prediction. In seven of those eight studies in which clinical prediction proved superior, clinicians had received more data than the mechanical method.
Again, however, the reliability of additional predictors warrants special consideration. For example, when clinicians had access to clinical interview data, they did relatively worse compared with mechanical prediction. What conclusions should be drawn from these findings? The assertion that “… these results indicate that computers should be used to make judgments and decisions” seems overly sweeping. First, for many decisions, inferences, choices, and problems, appropriate mechanical-prediction models are not available.
Three situations have been noted as giving rise to this problem: (a) “unanalyzed stimulus-equivalences,” or circumstances in which explicit prediction rules have not yet been formulated; (b) empty cells, in which highly relevant factors occur so infrequently as to preclude actuarial prediction; and (c) highly configural functions (such as pattern recognition) that resist explication for mechanical prediction. Second, not all mechanical-prediction models are good ones.
For example, it has been noted that such models are (a) often based on limited information that has not been demonstrated to be optimal and (b) almost never shown to be powerful, particularly for making causal judgments and treatment decisions. Third, some assessment situations may favor clinicians on average (e.g., prediction from qualitative contextual variables that are difficult to quantify), and other situations may favor a given clinician specifically (e.g., one with unique expertise with a given assessment instrument in a specific prediction context).
Moreover, although clinicians’ inferential processes are flawed, at least some of these flaws are remediable with proper training. Rather than framing the critical issue as clinical versus mechanical prediction, a more profitable approach would address the fundamental challenges of how to integrate clinical and mechanical-prediction methods more effectively and how to promote a more productive partnership between practitioners and researchers. Research cited earlier in this article concerning the incremental utility of CBTIs reflects one avenue for examining this issue (Raghunathan 1999).
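As a purely illustrative sketch of one such integration strategy, again with hypothetical data and names, nested regression models can be compared to ask whether clinician judgments add incremental validity beyond a mechanical score, and vice versa:

```python
# Illustrative sketch only: hypothetical mechanical scores, clinician ratings, and criterion.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 300

mechanical_score = rng.normal(size=n)
clinician_rating = 0.5 * mechanical_score + rng.normal(scale=1.0, size=n)
criterion = 0.7 * mechanical_score + 0.2 * clinician_rating + rng.normal(size=n)

def r2(*predictors):
    """In-sample R^2 for a linear model built from the given predictors."""
    X = np.column_stack(predictors)
    return LinearRegression().fit(X, criterion).score(X, criterion)

r2_mech = r2(mechanical_score)
r2_clin = r2(clinician_rating)
r2_both = r2(mechanical_score, clinician_rating)

print(f"R^2, mechanical only: {r2_mech:.3f}")
print(f"R^2, clinician only:  {r2_clin:.3f}")
print(f"Increment of clinician over mechanical: {r2_both - r2_mech:.3f}")
print(f"Increment of mechanical over clinician: {r2_both - r2_clin:.3f}")
```

Framed this way, the question is not which source of judgment to discard but how much unique predictive value each contributes to the combination.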