Legal Update

Can Case Outcome Predictions Be Improved?

The reputation and success of a trial lawyer depend, in part, on how well they predict case outcomes.

By Christina A. Studebaker, PhD, MLS

At one time or another, we have probably all tried to predict the outcome of a case – maybe whether O.J. Simpson or Casey Anthony would be found guilty, or whether Samsung would be found liable for infringing Apple’s intellectual property. Despite our involvement in the study of psychology and law, for most of us, whether our predictions about these cases turn out to be correct has little, if any, impact on our livelihood. Trial lawyers, however, routinely face the task of predicting case outcomes, and their reputation and success depend in part on how well they do it. The need for case assessment arises as early as the initial investigation into whether there are sufficient grounds on which to file (or defend) a lawsuit, and it can continue through trial, where settlement or a plea bargain remains an option up until the time a verdict is reached.

At every stage along the way, the lawyers on each side of a case have to consider the possible outcomes (e.g., summary judgment, settlement, plea bargain, jury trial, bench trial) and estimate the probability of a successful outcome for their client. A client’s decision about whether to proceed with litigation can be driven in large part by the lawyer’s predictions about case outcomes. For example, a lawyer in a civil lawsuit who predicts an unfavorable outcome at trial can strongly recommend and pursue pretrial settlement. A lawyer who predicts a favorable trial outcome can recommend that a client accept or reject settlement offers in light of the expected damage award at trial.

Inaccurate predictions, whether due to overconfidence or underconfidence, can be costly. A lawyer who overestimates the likely outcome of a case (e.g., winning at trial and/or receiving a large damage award) may advise a client to reject reasonable settlement offers. A lawyer who underestimates the likely outcome may advise a client to accept a settlement offer significantly lower than what could have been achieved through a favorable trial verdict.

A recent study of 481 trial lawyers from 44 states across the United States found that 68% of the lawyers were inaccurate in their predictions of the outcome that would be achieved in a particular case (Goodman-Delahunty, Granhag, Hartwig, & Loftus, 2010). More specifically, 44% of the lawyers were overconfident in their predictions (i.e., they predicted case outcomes better than what was actually achieved) and 24% were underconfident. The researchers hypothesized that overconfidence would be attenuated if lawyers were asked to generate reasons why they might not achieve their litigation goals at the same time they were asked to assess the probability of achieving a particular outcome or something better, but the results did not support this hypothesis.

Alternative methods of prediction that involve aggregating the estimates or predictions of many individuals might yield more accurate case outcome predictions. Examples include prediction markets, the Delphi method, and simple averaging.

Prediction markets are futures markets (i.e., auction markets that require traders to buy or sell assets at a set price at a set date in the future) created for the purpose of making predictions. As described by Graefe and Armstrong (2011), “The idea is to set up a contract whose payoff depends on the outcome of an uncertain future event. This contract, which can be interpreted as a bet on the outcome of the underlying future event, can then be traded by participants. As soon as the outcome is known, the participants are paid off in exchange for the contracts they hold. Based on their individual performances, participants can win money. If one thinks that the current group estimate is too low (high), one will buy (sell) stocks. Thus, through the prospect of gaining money, the participants have an incentive to become active in the group process whenever they expect the group estimate to be inaccurate.”

The Iowa Electronic Markets (“IEM”) is operated by University of Iowa College of Business faculty as an educational and research project. The IEM’s contract payoffs depend on economic and political events such as elections. A comparison of IEM’s market predictions and 964 national opinion polls for the 1988 through 2004 presidential elections showed that the market provided more accurate predictions 74% of the time (Berg, Nelson, & Rietz, 2008). For the 1988 through 2000 presidential elections, the markets predicted vote shares for the Democratic and Republican candidates in the week leading up to the election with an average absolute error of about 1.5 percentage points, whereas the final Gallup poll for each election had an average absolute error of about 2.1 percentage points.
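
To make the trading mechanics concrete, here is a minimal sketch of a binary prediction market for a case outcome (e.g., a contract paying $1 if the plaintiff wins at trial). The article does not prescribe a particular market design; this sketch assumes Hanson’s logarithmic market scoring rule (LMSR), a common automated market maker, and the class name, liquidity parameter, and trades are purely illustrative.

    import math

    class LMSRMarket:
        # Minimal binary prediction market using Hanson's logarithmic
        # market scoring rule (LMSR). Illustrative only; not the
        # mechanism used by the IEM.
        def __init__(self, liquidity=100.0):
            self.b = liquidity      # higher b = prices move more slowly
            self.q = [0.0, 0.0]     # shares outstanding for [YES, NO]

        def _cost(self, q):
            # Cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
            return self.b * math.log(sum(math.exp(qi / self.b) for qi in q))

        def price(self, outcome):
            # Instantaneous price = current market probability of `outcome`
            exps = [math.exp(qi / self.b) for qi in self.q]
            return exps[outcome] / sum(exps)

        def buy(self, outcome, shares):
            # A trader pays the change in the cost function; each share
            # pays $1 if `outcome` occurs, $0 otherwise.
            new_q = list(self.q)
            new_q[outcome] += shares
            cost = self._cost(new_q) - self._cost(self.q)
            self.q = new_q
            return cost

    # A trader who believes P(plaintiff wins) exceeds the market price buys YES.
    market = LMSRMarket(liquidity=100.0)
    print(f"P(plaintiff wins) before: {market.price(0):.3f}")   # 0.500
    paid = market.buy(0, 30)                                    # buy 30 YES shares
    print(f"paid ${paid:.2f}; P(plaintiff wins) now: {market.price(0):.3f}")

As the quoted passage describes, a trader who thinks the market probability is too low buys (pushing the price up) and one who thinks it is too high sells, so the standing price aggregates the group’s estimate.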

The Delphi method aggregates the opinions of experts over the course of an iterative process to achieve greater prediction accuracy than that of any single expert (Helmer, 1967). The method begins with a forecasting question posed individually to each member of a panel of experts who are physically dispersed and do not meet or communicate beyond what is necessary to complete the technique. The initial responses are analyzed, and the median and interquartile range (i.e., the interval containing the middle 50% of responses) are identified and summarized. A follow-up questionnaire is then provided to each participant along with this summary of the initial responses. Each participant is asked to reconsider their previous answer and is given the opportunity to revise it. Any participant whose second-round response falls outside the interquartile range is asked to explain it. In the third round, the median and interquartile range of the second-round responses are provided to participants along with a summary of the reasons given in support of extreme positions. Participants are again asked to answer (revising if they wish), and those responding outside the interquartile range are again asked to explain. A final fourth round follows the same procedure, and the median of the final responses is taken as the group prediction.

In a laboratory study comparing responses to almanac-style questions, such as estimating the population of Australia or estimating the percentage of the U.S. population that was age 65 or over in 2000, the Delphi method was found to produce more accurate responses than prediction markets (Graefe & Armstrong, 2011).
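
As an illustration of the round structure just described, the sketch below simulates a four-round Delphi panel. The revision rule (panelists outside the interquartile range move partway toward the median) is an assumption made for the simulation; in the actual method, those panelists justify their positions and revise however they see fit.

    import statistics

    def delphi_group_prediction(initial_estimates, rounds=4, pull=0.5):
        # Stylized Delphi simulation. Each round, panelists see the
        # median and interquartile range (IQR) of the prior round.
        # Assumed revision rule: responses outside the IQR move a
        # fraction `pull` toward the median; others stay put.
        estimates = list(initial_estimates)
        for _ in range(rounds - 1):
            med = statistics.median(estimates)
            q1, _, q3 = statistics.quantiles(estimates, n=4)
            estimates = [e + pull * (med - e) if (e < q1 or e > q3) else e
                         for e in estimates]
        return statistics.median(estimates)  # final median = group prediction

    # Hypothetical panel estimates of a damage award (in $1,000s)
    panel = [150, 220, 90, 500, 310, 180, 260, 1200]
    print(delphi_group_prediction(panel))

Note that the final answer is the median of individual responses; panelists never meet, which is intended to prevent dominant personalities from swaying the group.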

Jacobson and colleagues (2011) examined whether a modified Delphi-type procedure would increase the accuracy of predictions of civil jury verdicts. In the study, law students and experienced plaintiff attorneys were provided basic summaries of actual civil cases (for which verdict awards were known) and asked to estimate the amount of non-economic damages awarded by jurors. Pairs of law students and pairs of attorneys were then shown the estimate provided by their partner, and each individual was given the opportunity to revise their initial estimate. In the third round of the task, each dyad was given time to discuss and agree on a single joint estimate. In the fourth round, participants were again asked to provide individual estimates, which could be the same as or different from the estimates they had agreed on with their partners. The accuracy of both law students and attorneys significantly improved after they were presented with a partner’s initial estimate and after they agreed on a joint estimate with a partner. However, the most accurate estimates were obtained when means were calculated for “statisticized groups” (i.e., the estimates of randomly selected sets of participants were aggregated into groups of 2, 4, 6, 8, and so on, up to the total available sample size). Aggregating estimates reduced error for both law students and attorneys, because averaging reduces the impact of random error on the result. Interestingly, the aggregation of responses in the Jacobson et al. (2011) study even made up for a lack of real-world experience: the mean estimate of 15 law students was more accurate than that of a single experienced attorney.
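
The “statisticized group” idea is easy to demonstrate in miniature: average randomly drawn sets of individual estimates and watch the error of the group mean shrink as group size grows. The simulated estimates and the true award value below are invented for illustration; they are not data from the study.

    import random
    import statistics

    def group_error(estimates, true_value, group_size, trials=2000, seed=0):
        # Mean absolute error of "statisticized groups": draw
        # `group_size` estimates at random, average them, and compare
        # the group mean to the known verdict.
        rng = random.Random(seed)
        errors = [abs(statistics.mean(rng.sample(estimates, group_size)) - true_value)
                  for _ in range(trials)]
        return statistics.mean(errors)

    # Simulated individual estimates of a known $250K award (in $1,000s):
    # unbiased but noisy, like the study's independent estimators.
    rng = random.Random(1)
    estimates = [rng.gauss(250, 120) for _ in range(40)]

    for k in (1, 2, 4, 8, 16):
        print(f"group size {k:2d}: mean abs error = {group_error(estimates, 250, k):.1f}")

Because the simulated errors are random rather than systematic, they partially cancel when averaged, so the error of the group mean falls steadily as group size increases, which mirrors the study’s finding.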

More accurate case outcome predictions may thus be achievable through one or more techniques that aggregate the estimates of multiple individuals, most likely lawyers. The individuals need sufficient experience and expertise to be familiar with the range of possible outcomes, and they should be diverse in their opinions and knowledge and able to provide independent opinions.

Given the importance of accurately predicting case outcomes, psychology/law researchers should be studying ways to improve prediction. And improvements in prediction may come from research and statistical methods that do not focus on underlying causal explanations. Reliably and accurately predicting outcomes is a worthwhile contribution to the field in and of itself.

References

Berg, J. E., Forsythe, R., Nelson, F. D., & Rietz, T. A. (2008). Results from a dozen years of election futures market research. In C. R. Plott & V. L. Smith (Eds.), Handbook of Experimental Economics Results (Vol. 1, pp. 742-751).

Berg, J. E., Nelson, F. D., & Rietz, T. A. (2008). Prediction market accuracy in the long run. International Journal of Forecasting, 24, 283-298.

Goodman-Delahunty, J., Granhag, P. A., Hartwig, M., & Loftus, E. F. (2010). Insightful or wishful: Lawyers’ ability to predict case outcomes. Psychology, Public Policy, and Law, 16, 133-157.

Graefe, A., & Armstrong, J. S. (2011). Comparing face-to-face meeting, nominal groups, Delphi, and prediction markets on an estimation task. International Journal of Forecasting, 27, 183-195.

Helmer, O. (1967). Analysis of the future: The Delphi method. Santa Monica, CA: RAND Corporation.

Jacobson, J., Dobbs-Marsh, J., Liberman, V., & Minson, J. A. (2011). Predicting civil jury verdicts: How to use (and misuse) a second opinion. Journal of Empirical Legal Studies, 8, 99-119.