The Nuclear Verdict blog series

Part IV - Science and research to prevent or suppress nuclear verdicts

George R. Speckart, Ph.D. & Bill Kanasky, Jr., Ph.D.

In the first three parts of our nuclear verdicts series, we covered the definition of a nuclear verdict, some historical context, and the five causative factors of nuclear verdicts. In Part IV, we outline – with specific examples – how a scientific approach to litigation research has resulted in the prevention and suppression of nuclear verdicts.


An Easily Identifiable Goal – Control


Scientific research designed to conclusively identify the causative factors that give rise to “nuclear verdicts” has not, to our knowledge, been undertaken, likely because of some very fundamental obstacles pertaining to labeling and identification. While the notion of inappropriately high damages seems intuitively reasonable, closer scrutiny indicates that a precise definition is elusive, particularly as regards what is “reasonable” or “rational.”

 

For example, what precisely is a “nuclear verdict”? Does the Exxon Valdez case, a $5 billion award, constitute a “nuclear verdict”? Exxon’s stock went up after the award because Wall Street had expected the amount to be $10-15 billion, so in some respects the verdict was less than expected.

 

Is a $1 million verdict for falling into an uncovered manhole a “nuclear verdict”? If so, at what point does a verdict stop being “nuclear”? At $500,000? $250,000? Is the McDonald’s hot coffee case a nuclear verdict?

 

One can therefore readily appreciate the obstacles to studying this phenomenon – namely, the foundational difficulty of establishing, in an uncontroverted manner, what a nuclear verdict actually is. However, from the standpoint of the defense bar, insurers, and defense litigators, we do know one thing: we do not want them to happen. In other words, we need to control and suppress damage awards; to do this we need prediction – knowing when excessively high damages are coming and when they are not – and to obtain prediction, we need science.

 

An approach aimed simply at suppressing damages circumvents the labeling problem of defining precisely what a nuclear verdict is: to minimize damages, one need not determine whether the case falls into any specific category – one only needs to ascertain the probable range of damages and then make the most appropriate strategy decision based on the circumstances of the case. These considerations do not, however, obviate the need for prediction, and therefore science.

 

At this juncture we ask the reader to bear with us as we take a brief detour into uncharted territory, namely, the nexus between litigation and scientific method – a nexus that rarely, if ever, is explored or utilized in the practice of litigation.


Litigation and Scientific Method

 

As a scientific endeavor, prediction rests at the highest level of achievement. Recall from basic science classes the image of Newton sitting under the apple tree: he sees the apple fall (observation), derives an initial explanation to be tested (hypothesis), and, once the initial idea has been tested sufficiently, it evolves to the status of theory. A good theory will then predict accurately. Prediction is the holy grail, the final objective of science, because from prediction comes control (the desirability of which need not be explicated). When research generates results that predict accurately, we say that the results have predictive validity.

 

Rather than drifting off into a realm that appears unnecessarily arcane, it is helpful to conceptualize science as simply society’s preferred means of reliably ascertaining what can be known. The use of science – or, more precisely, psychological technology, the application of science – to predict the behavior of jurors is therefore nothing other than an all-out assault on the question of exactly what jurors are going to do with a case.

 

It would seem reasonable, therefore, given the stakes involved in litigation, that such an “all-out assault” would be rather commonplace. Millions of dollars can hang in the balance based on juror behavior, and the only route to valid information on this behavior in advance is scientifically designed jury research (unless, of course, there is a prior, identical case). In many cases post-trial jury interviews are impossible, and the only way to know what jurors are thinking, and how they make decisions, is through jury research.

 

Recent judicial opinion holds that the legal industry actively avoids science (Jackson v. Pollion, 7th Cir., Oct. 28, 2013: www.ims-expertservices.com/bullseye-blog/november-2013/7th-circuit-excoriates-lawyers,-judges-for-fear-of-science/). The 7th Circuit, in a remarkable statement, charged the legal industry with “fear and loathing of science” – fear and loathing of that which separates fiction from truth, or clever from correct. In his opinion, Judge Posner cites several other prominent writers who came to a similar conclusion.

 

We have thus found the reason why we are forced to explore an uncharted nexus between litigation and scientific method. In fact, it is uncharted for the same reason that there are so few tourists in Turkmenistan – no one wants to go there. (Nor can the jury research industry be counted on to provide scientific method – for details on this issue, see Speckart, G., “Trial by science,” Risk & Insurance, 2008, vol. 19.)


Suppression

 

The suppression of “nuclear verdicts” has already been accomplished when people insist on, and put their trust in, scientific research methodology addressed to the issue of containing verdicts. The following real-life examples illustrate:

 

In 1994, working on the Exxon Valdez matter (specifically, the study of punitive damages), four juries in the multi-day mock trial awarded $2, $3, $4 and $12 billion – an average of $5.25 billion. When the actual award came in at $5 billion, it was obvious that the research had provided predictive results, but it was not so obvious that such success could be replicated in efforts that were less well funded and comprehensive. The work of perfecting the scientific research methodology therefore continued unabated.
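The arithmetic underlying the research-derived estimate is straightforward (the hard part is the research methodology that produces the mock awards). A minimal sketch in Python, using the dollar figures reported above; the variable names are ours:

```python
# Mock jury awards from the 1994 Exxon Valdez punitive damages research,
# in billions of dollars (four juries, multi-day mock trial).
mock_awards = [2, 3, 4, 12]

# The research-derived damages estimate is the mean of the mock awards.
estimate = sum(mock_awards) / len(mock_awards)
print(f"estimate: ${estimate} billion")  # estimate: $5.25 billion

# The spread of the mock awards bounds the probable range of damages.
print(f"range: ${min(mock_awards)}-{max(mock_awards)} billion")  # range: $2-12 billion
```

The actual award of $5 billion fell squarely within this range, close to the mean.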

 

By 2003, working for one of the world’s largest heavy equipment manufacturers in a Los Angeles case, three mock juries (in a 2-day mock trial) awarded $25, $37 and $112 million. Our client settled out in advance, and the real jury awarded $58 million against the remaining defendants. This was the highest personal injury award in the state’s history at that time – and the average award by the mock juries was also $58 million. (The verdict is a matter of record, and the dated research report is still in our files.) By this time, with perfect prediction in hand, we realized we had a moral obligation – let us repeat that, a moral obligation – to inform the industry that research, when scientifically implemented, could reliably predict damages. We use the term “moral obligation” because there were huge amounts of money to be saved by identifying in advance, and precluding, an oncoming nuclear verdict, as our client had just done.

 

Two years later, in 2005, we had another catastrophic injury case with the same heavy equipment client, this time in Philadelphia, with average mock jury damages in the vicinity of $500 million – a nuclear verdict if there ever was one (only one person died). Apparently, plaintiff counsel had no idea of the worth of his case, as he accepted a settlement offer of just under $2 million. If he had held out for $5, $10, $15, even $20 million, our client would have had to pay it – but armed with science, a fortune was saved.

 

In 2008 the Great Recession arrived, and this client decided to discontinue the research program (against our advice). The nuclear verdict suppression program had been an unqualified success: from 1985 to 2008 – 23 years – the highest verdict sustained by the company had been $4.2 million, with no punitives in that entire time span.

 

By February of 2009 – two months after the cessation of the research program – the company had been hit for $57 million in San Antonio for a simple back injury.

Science has achieved other successes in heading off nuclear verdicts as well. In East Texas patent litigation, where 8-, 9- and even 10-figure verdicts had been commonplace, the imposition of scientific methods suppressed verdicts down to the $1-2 million range (on average, over 14 verdicts), along with another 10 defense verdicts. This chain of events was described in a Law.com article entitled “Taming Texas” (Raymond, Nate, “Taming Texas,” Law.com, 2008), in which one of the current authors is mentioned by name.

 

Later, working on the plaintiff side in a legal malpractice case, three mock juries awarded an average of $82 million. At the end of the real trial, the defense wanted to settle and proffered a check for $20 million during jury deliberations. Defense counsel claimed, “$20 million – that’s as high as this jury is going to go.” Going back to the research results and the three mock jury awards, however, we found that $20 million was representative of the lowest award – not the highest. We rejected the $20 million check (not an easy thing to do). The jury came back at $73 million – one of the largest verdicts in the country that year (2009).
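The decision logic in that moment reduces to comparing the offer against the distribution of mock awards rather than against counsel’s bluster. A sketch, using hypothetical individual awards consistent with the reported $82 million average and $20 million low end (the individual figures and the function are ours, for illustration only):

```python
def evaluate_offer(offer, mock_awards):
    """Compare a settlement offer to the mock jury award distribution.
    An offer near the minimum mock award leaves most of the case's value
    on the table; only an offer at or above the average is a strong one."""
    average = sum(mock_awards) / len(mock_awards)
    return "accept" if offer >= average else "reject"

# Hypothetical awards averaging $82 million, with a $20 million low end.
mock_awards = [20, 82, 144]
print(evaluate_offer(20, mock_awards))  # reject -- the real jury returned $73 million
```

Against these numbers, a $20 million offer sits at the very bottom of the distribution, which is why rejecting it was the science-driven choice.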

 

Again, we have an exemplar of control – knowing where the “true bottom” is – and of how to navigate the pomp and bluster of settlement negotiations using science instead of clever ideas, this time creating a nuclear verdict rather than pre-empting one. A friend of ours noted in response to this case, “When you go up against science you incur heavy losses.”

 

It is important to note that “prediction” as discussed here does not, and cannot, achieve absolute certainty. Unpredictable court rulings, intractable witnesses, and the “luck of the draw” in jury selection can each play a role in changing trial outcomes. The point is that the accuracy of scientifically derived estimates far exceeds that of the hunches and intuition typically used to value and settle cases – consider the divergence between the Wall Street estimate ($10-15 billion) and the research-derived estimate ($5.25 billion) in the Valdez case. Our research demonstrates unequivocally that guessing in settling cases is not merely more expensive than the research – it is far more expensive, when the research is based on scientific method and theory (see Speckart, G., “Do mock trials predict actual trial outcomes?” In House, 2010, vol. 5).