762 N.W.2d 24 | Neb. | 2009
Vicki KING, Special Administratrix of the Estate of Bradley B. King, Deceased, Appellant,
v.
BURLINGTON NORTHERN SANTA FE RAILWAY COMPANY, a Delaware corporation, Appellee.
Supreme Court of Nebraska.
*30 Richard J. Dinsmore and Jayson D. Nelson, of Law Offices of Richard J. Dinsmore, P.C., Omaha, for appellant.
Nichole S. Bogen and James A. Snowden, of Wolfe, Snowden, Hurd, Luers & Ahl, L.L.P., Lincoln, for appellee.
HEAVICAN, C.J., WRIGHT, CONNOLLY, GERRARD, McCORMACK, and MILLER-LERMAN, JJ.
*31 CONNOLLY, J.
I. SUMMARY
Bradley B. King brought this toxic tort action under the Federal Employers' Liability Act (FELA) against the appellee, Burlington Northern Santa Fe Railway Company (BNSF). He alleged that he contracted multiple myeloma during his employment with BNSF because of exposure to diesel exhaust emissions. Multiple myeloma is a cancer originating in the bone marrow plasma cells.[1] After Bradley died in 2002, his wife, Vicki King, revived the action in her name.
BNSF moved to exclude the testimony of King's expert witness. Each party presented dueling experts. Differing epidemiological studies supported the experts' deposition testimony. King's expert, Dr. Arthur Frank, blamed Bradley's multiple myeloma on his exposure to diesel exhaust. Of course, BNSF's expert, Dr. Peter G. Shields, disagreed. He believed that the causes were unknown and that the majority of epidemiological studies failed to show that diesel exhaust can cause multiple myeloma. The district court sustained BNSF's motion to exclude Frank's testimony, concluding that it failed to pass muster under our Daubert/Schafersman[2] framework. It reasoned that his methodology was unreliable because the studies he relied on failed to conclusively state that exposure to diesel fuel exhaust causes multiple myeloma. The court later sustained BNSF's motion for summary judgment. The Nebraska Court of Appeals affirmed.[3] We granted King's petition for further review.
The issues at the trial level were whether the studies Frank relied on were sufficient to support his causation opinion and whether he based his opinion on a reliable methodology. We do not reach these issues because we conclude that the district court applied the wrong standard in determining them. We reverse the decision of the Court of Appeals with directions to remand the cause to the district court for further proceedings consistent with this opinion.
II. BACKGROUND
In 1972, at age 20, Bradley started working for BNSF, and, over 28 years, he worked as a brakeman, switchman, conductor, and engineer. He testified that his work exposed him to diesel exhaust, especially his work as a brakeman. Bradley stated that his exposure caused him to experience headaches and nausea and, at times, to feel thick tongued. The record also shows that Bradley smoked about a pack of cigarettes per day for 33 years until he quit because of his illness.
1. KING'S EXPERTS
Dr. Michael Ellenbecker is a certified industrial hygienist and professor of industrial hygiene at the University of Massachusetts Lowell. He testified regarding a proposed industrial hygiene standard for workers' diesel exhaust exposure. The proposed standard called for a worker's maximum allowable exposure to diesel exhaust not to exceed the general population's exposure to diesel exhaust. He stated that the organization proposing the standard had set this limit because diesel exhaust is a suspected *32 human carcinogen. He further stated that industrial hygiene standards called for industries to minimize carcinogen exposure to below the permissible exposure limit because any exposure increases the risk of developing cancer.
Ellenbecker had examined a study showing that railroad workers in job categories like Bradley's had exposure to diesel exhaust significantly above the general population's exposure. He had reviewed BNSF's industrial hygiene samples from 1983, 2000, and 2002, and concluded that Bradley had a significant exposure to diesel exhaust. He believed the greatest exposure occurred in Bradley's early years of employment.
Frank is board certified in internal medicine and occupational medicine. At Drexel University, he is chair of the department of environmental and occupational health. Frank stated that benzene is in diesel exhaust and that the scientific evidence supports his opinion that benzene alone and diesel exhaust can cause multiple myeloma. He conceded that contrary statements existed in the scientific literature and that he did not know of any studies explicitly stating that either benzene or diesel exhaust causes multiple myeloma. He explained that scientific studies usually do not state that a definite causal relationship exists or even that the relationship appears to be causal; instead, the studies usually "point to" a causal relationship. He believed that the risk of disease would increase with increased exposure. But he rejected the idea that a minimum exposure level had to be reached before there was a risk.
Frank conceded that he had not conducted his own research, nor had he published his opinion that diesel exhaust can cause multiple myeloma. He stated that benzene was the only diesel exhaust component that has been separately studied as an agent of disease. Frank did not believe that any other diesel exhaust component was a known cause of multiple myeloma. He admitted that he had not found or performed a meta-analysis (a method of pooling the results of smaller studies) showing a relationship between multiple myeloma and diesel exhaust. Nor had he found studies comprehensively analyzing animal experiments, toxicology studies, and epidemiological studies.
Regarding the specific cause of Bradley's cancer, Frank believed that Bradley's extraordinary exposure level to diesel exhaust made it more likely than not that his exposure was a contributing cause of his disease. Moreover, after reviewing Bradley's medical history and deposition, Frank stated that in his experience as an occupational physician for 30 years, he had never seen a history of that much exposure.
Frank stated that there were few known causes of multiple myeloma. He ruled out radiation exposure as a potential cause because he failed to find evidence of unusual radiation exposure. Similarly, he ruled out diabetes as a possible causative agent because Bradley did not have this disease. Regarding Bradley's possible exposure to pesticides, Frank knew conflicting studies existed on the association between multiple myeloma and pesticide exposure. He did not believe, however, that these associations showed causation to a medical certainty. Likewise, he knew studies existed showing an association with smoking, but he did not believe the evidence supported a causal link to multiple myeloma.
2. BNSF'S EXPERTS
Shields is board certified in oncology and internal medicine. At Georgetown University, he is a professor of oncology and associate director of cancer control and population studies. Shields had also reviewed the studies Frank relied on and *33 disagreed with Frank's opinion. He concluded that regardless of the exposure level, researchers had not established a causal relationship between diesel exhaust or benzene and multiple myeloma. He believed that besides radiation exposure, experts did not know the causes of multiple myeloma. In sum, Shields did not believe that a few studies showing a positive association could support a causation opinion when the majority of studies had failed to show a positive association. Frank disagreed. He believed that scientific knowledge was improving and that scientific evidence from different disciplines did support a causal relationship.
3. DISTRICT COURT EXCLUDES FRANK'S TESTIMONY
The district court concluded that Frank was eminently qualified to give expert medical testimony. But in sustaining BNSF's motion to exclude Frank's testimony, it concluded that his opinion was unreliable because it did not have general acceptance in the field. The court also concluded that Frank's opinion regarding multiple myeloma was unreliable because of his methodology. The court stated that Frank relied on one study that showed a significant association between diesel exhaust and multiple myeloma. But it concluded that Frank could "point to no single study that conclusively states that exposure to diesel exhaust/benzene causes multiple myeloma."
In discussing Frank's differential etiology, the district court determined that it was also unreliable for three reasons: (1) The record did not show what causes "other th[a]n diesel exhaust exposure" Frank considered in his differential etiology; (2) "Frank `ruled in' diesel exhaust exposure as a possible cause, even though no medical or scientific study concluded that such exposure causes multiple myeloma"; and (3) Frank failed to explain why he "`ruled out'" any other potential causes. The court stated that Frank's opinion "merely concludes that diesel exhaust exposure is [the] most probable [agent], even though no medical or scientific study authorizes such a conclusion."
The court sustained BNSF's motion for summary judgment. The court concluded that BNSF had satisfied its burden of demonstrating that no causal connection existed between Bradley's employment, including his exposure to diesel exhaust, and his development of multiple myeloma.
4. COURT OF APPEALS' DECISION
The Court of Appeals affirmed.[4] It recognized that it had previously accepted Frank's expert opinion testimony in another case.[5] It concluded, however, that the earlier case was distinguishable. The court did not explain why Frank's testimony was different here. Instead, it relied on the district court's conclusion that Frank had not performed a reliable differential etiology and found no abuse of discretion.[6]
III. ASSIGNMENTS OF ERROR
Although King assigns several errors, in our order granting King's petition for further review, we limited our review to two issues: (1) whether the district court and Court of Appeals erred in requiring Frank to present studies conclusively stating that diesel exhaust causes multiple myeloma and (2) whether the lower courts erred in *34 concluding that Frank did not perform a reliable differential etiology.
IV. STANDARD OF REVIEW
[1,2] We review de novo whether the trial court applied the correct legal standards for admitting an expert's testimony.[7] We review for abuse of discretion how the trial court applied the appropriate standards in deciding whether to admit or exclude an expert's testimony.[8]
[3,4] A court should grant summary judgment when the pleadings and evidence admitted show that no genuine issue exists regarding any material fact or the ultimate inferences that may be drawn from those facts and that the moving party is entitled to judgment as a matter of law.[9] In reviewing a summary judgment, we view the evidence in a light most favorable to the party against whom the judgment is granted and give such party the benefit of all reasonable inferences deducible from the evidence.[10]
V. ANALYSIS
This appeal presents our first opportunity to address the legal standards governing the reliability of expert opinion testimony based on epidemiological studies. Unfortunately, these types of cases require trial judges and this court to grapple with scientific and medical issues beyond our normal professional experiences. So we believe it would help to set out a brief, but by no means exhaustive, discussion of the scientific terms and concepts gleaned from scientific literature. Also, we will explain how researchers determine that an association exists between a suspected agent and a disease and how experts interpret those studies to determine whether the relationship is causal.
1. GENERAL VERSUS SPECIFIC CAUSATION
[5,6] In Carlson v. Okerstrom,[11] we alluded to the distinction between general causation and specific causation. Other courts have similarly distinguished between general and specific causation. In a toxic tort case, general causation addresses whether a substance is capable of causing a particular injury or condition in a population, while specific causation addresses whether a substance caused a particular individual's injury.[12] To prevail, a plaintiff must show both general and specific causation. But a court should first consider whether a party has presented admissible general causation evidence before considering the issue of admissible specific causation evidence.[13]
The Federal Judicial Center's Reference Manual on Scientific Evidence (Reference Manual)[14] explains that epidemiology focuses on general causation rather than *35 specific causation.[15] Plaintiffs do not always need epidemiological studies to prove causation.[16] Yet, frequently, plaintiffs find epidemiological studies indispensable in toxic tort cases when direct proof of causation is lacking.[17]
2. EPIDEMIOLOGICAL EVIDENCE
(a) General Concepts
Epidemiological evidence identifies agents that are associated with an increased disease risk in groups of individuals; it quantifies the excess disease that is associated with an agent; and it provides a profile of an individual who is likely to contract a disease after being exposed to the agent.[18] In short, "[e]pidemiological studies examine existing populations to attempt to determine if there is an association between a disease or condition and a factor suspected of causing that disease or condition."[19] And a study may show a positive or negative association or no association.
Epidemiologists use three types of studies to determine whether an association exists between a suspected agent and a disease: (1) experimental trials, (2) cohort studies, and (3) case-control studies. The latter two types are observational studies. Here, the experts relied on observational studies.
In observational studies, researchers "`observe' a group of individuals who have been exposed to an agent of interest, such as cigarette smoking or an industrial chemical."[20] They then compare the exposed group's rate of disease or death incidences to the rate in another group of individuals who have not been exposed.[21] In cohort studies, researchers first identify an exposed group and an unexposed group. They then compare the rates of disease in each group.[22] In contrast, in case-control studies, researchers first identify a group of individuals with the disease and select a comparison group of individuals without the disease. They then compare the past exposures of both groups to see if an association exists between the past exposures and incidences of disease.[23]
In sum, epidemiological studies assess the existence and strength of associations between a suspected agent and a disease or condition. But an association is not equivalent to causation.[24] "[E]pidemiology cannot objectively prove causation."[25] Instead, epidemiological studies show the "degree of statistical relationship between two or more events or variables. Events are said to be associated when they occur more or less frequently together than one would expect by chance."[26] In contrast, "[e]pidemiologists use causation to mean that an increase in the incidence of disease *36 among the exposed subjects would not have occurred had they not been exposed to the agent."[27]
[7] Although epidemiological studies cannot prove causation, they can provide a foundation for an epidemiologist to infer and opine that a certain agent can cause a disease. Epidemiologists and other experts who are qualified to interpret the data and results of these studies assess causality by looking at a study's strengths and weaknesses. They then judge how the study's findings fit with other scientific knowledge on the subject.[28]
[8] We discussed epidemiology and causation in Schafersman v. Agland Coop.[29] We stated that when a party uses epidemiological evidence in legal disputes, the study's methodological soundness and its use in resolving causation require answering three questions. First, does the study reveal an association between an agent and disease? Second, did any errors in the study contribute to an inaccurate result? Third, is the relationship between the agent and the disease causal?[30]
(b) Measuring the Strength of an Association in Epidemiological Studies
When an epidemiological study shows an association, experts often report its strength as the "relative risk."[31] "The relative risk is one of the cornerstones for causal inferences."[32] It refers to the increased probability for an individual in an exposed population to develop a disease.[33] Experts describe relative risk as a ratio of the incidence rate of disease in the exposed group to the incidence rate in the unexposed group: i.e., the incidence rate in the exposed group divided by the incidence rate in the unexposed group.[34]
For example, if a study found that 10 out of 1000 women with breast implants were diagnosed with breast cancer and 5 out of 1000 women without implants (the "control" group) were diagnosed with breast cancer, the relative risk associated with implants is 2.0, or twice as great as the risk of breast cancer without implants. This is so because the proportion of women in the implant group with breast cancer is 0.01 (10/1000) and the proportion of women in the non-implant group with breast cancer is 0.005 (5/1000). And 0.01 divided by 0.005 is 2.0.[35]
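The arithmetic in that breast implant hypothetical can be sketched as a short calculation. The sketch below merely restates the hypothetical figures; it is not drawn from the record:

```python
# Hypothetical figures from the breast implant example.
exposed_cases, exposed_total = 10, 1000      # implant group
unexposed_cases, unexposed_total = 5, 1000   # control group

# Incidence (proportion diagnosed) in each group: 10/1000 and 5/1000.
exposed_rate = exposed_cases / exposed_total
unexposed_rate = unexposed_cases / unexposed_total

# Relative risk: incidence in the exposed group divided by
# incidence in the unexposed group.
relative_risk = exposed_rate / unexposed_rate
print(relative_risk)  # 2.0
```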
If both groups have the same incidence rate, the relative risk is 1.0, meaning that no association exists between the agent and the disease. If the study shows a relative risk less than 1.0, the association is negative. This means that the risk to the exposed population is less than the risk to the unexposed population.[36] If the study shows a relative risk greater than 1.0, a positive association exists, which could be causal, because the risk to the exposed population is greater than the risk to the unexposed group.[37] So to support a *37 causal inference, the relative risk must be greater than 1.0. And "[t]he higher the relative risk, the greater the likelihood that the relationship is causal."[38] Some studies, however, use different measurements to express a relationship between an agent and disease.[39] For example, in a case-control study, an "odds ratio" measurement provides essentially the same information as relative risk.[40]
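The odds ratio used in case-control studies can be sketched in the same fashion. The counts below are hypothetical, chosen only to illustrate the calculation:

```python
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds ratio from a case-control study: the odds of past exposure
    among the diseased cases divided by the odds of past exposure
    among the disease-free controls."""
    case_odds = exposed_cases / unexposed_cases
    control_odds = exposed_controls / unexposed_controls
    return case_odds / control_odds

# Hypothetical counts: 30 of 100 cases were exposed, versus 15 of 100 controls.
print(round(odds_ratio(30, 70, 15, 85), 2))  # 2.43
```

As with relative risk, a value above 1.0 indicates a positive association; a value of 1.0 indicates none.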
A trial judge might also have to consider whether an expert properly relied on a "meta-analysis." Researchers and experts sometimes use meta-analyses to pool the results of smaller studies that fail to support definitive conclusions.[41] A meta-analysis combines and analyzes the data from several epidemiological studies to arrive at a single figure to represent all of the studies reviewed.[42]
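In highly simplified form, a meta-analysis can be sketched as an inverse-variance weighted average of the individual studies' results. This fixed-effect approach is one standard textbook method, not one prescribed by the literature discussed here, and the study figures below are hypothetical:

```python
import math

def pooled_relative_risk(studies):
    """Fixed-effect pooling: weight each study's log relative risk by the
    inverse of its variance, so larger (more precise) studies count more.
    Each study is a pair (relative_risk, standard_error_of_log_rr)."""
    weights = [1 / se ** 2 for _, se in studies]
    log_rrs = [math.log(rr) for rr, _ in studies]
    pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    return math.exp(pooled_log)

# Three hypothetical small studies, none conclusive on its own,
# pooled into a single summary relative risk.
summary = pooled_relative_risk([(1.4, 0.30), (1.2, 0.25), (1.8, 0.40)])
print(round(summary, 2))
```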
If a study shows a relative risk of 2.0, "the agent is responsible for an equal number of cases of disease as all other background causes."[43] This finding "implies a 50% likelihood that an exposed individual's disease was caused by the agent."[44] If the relative risk is greater than 2.0, the study shows a greater than 50-percent likelihood that the agent caused the disease. Thus, some courts have permitted a relative risk greater than 2.0 to support an inference of specific causation.[45] Lower relative risks can also reflect general causation, but epidemiologists scrutinize weak associations because they have a greater chance of being explained by another factor or an error in the study.[46] But remember, before experts reach any type of causative conclusion based on observational studies, they rule out potential sources of error in the supporting studies.
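The link between relative risk and the likelihood of specific causation can be sketched through the "attributable fraction," a standard epidemiological simplification that assumes an unbiased and unconfounded study:

```python
def attributable_probability(relative_risk):
    """Probability that an exposed individual's disease is attributable to
    the agent, under the common simplification (RR - 1) / RR."""
    return (relative_risk - 1) / relative_risk

print(attributable_probability(2.0))        # 0.5 -> the 50% threshold
print(attributable_probability(3.0) > 0.5)  # True -> greater than 50% likelihood
print(attributable_probability(1.5) > 0.5)  # False -> below the 50% mark
```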
(c) Potential Sources of Error
Researchers study a small part of the relevant population. Thus, the findings in an epidemiological study could differ from the true association in the larger population because of random variations, or chance, in the selected sample.[47] Epidemiologists refer to this problem as a "sampling error."[48] When researchers find an association (positive or negative), they use significance testing to assess the likelihood of a sampling error.[49] A statistically significant result is unlikely to be the result of random variations in a selected population sample.[50]
In evaluating whether a sampling error caused a study's results, experts often use a convention called the p-value.[51] The p-value *38 is a calculation, based on a study's data, of the probability that a positive association in the study would have resulted from a sampling error when no real association existed.[52] If the p-value falls below a preselected, acceptable significance level, the study's results are statistically significant.[53] Epidemiologists generally consider a p-value that falls below a significance level of .05 to be statistically significant.[54] A significance level of .05 presents a 5-percent probability that researchers observed an association because of chance variations.[55]
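Significance testing of this kind can be illustrated with a Pearson chi-square test on a two-by-two exposure/disease table. This is a standard statistical method, not one drawn from the record, and the counts are hypothetical:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table:
       a = exposed diseased,   b = exposed healthy,
       c = unexposed diseased, d = unexposed healthy."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts: 40 of 1,000 exposed workers diseased versus 20 of 1,000 unexposed.
statistic = chi_square_2x2(40, 960, 20, 980)

# At the conventional .05 significance level, the chi-square critical value
# with one degree of freedom is about 3.84.
print(statistic > 3.84)  # True -> the association is unlikely to be a sampling error
```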
But statistical significance addresses only the likelihood that a relative risk would have resulted from chance even if no real association existed between the disease and agent. Statistical significance does not show an association's magnitude.[56] So researchers often express a study's results through confidence intervals. Confidence intervals show the association's magnitude and how statistically stable the association is.[57]
Using the study's relative risk and preselected significance level, researchers calculate the range of values within which the study's results would likely fall if researchers repeated the study many times.[58] Graphically, the calculation is an asymmetrical bell curve around the relative risk point, showing the distribution of possible results. The confidence interval is the range of values between the boundaries of the curve on a numerical axis.[59] If researchers selected .05 for the study's significance level, then the study will show a corresponding 95-percent confidence level in the plotted confidence interval.[60] This means that
if a confidence level of .95 is selected for a study, 95% of similar studies would result in the true relative risk falling within the confidence interval.... [T]he narrower the confidence interval, the greater the confidence in the relative risk estimate found in the study. Where the confidence interval contains a relative risk of 1.0, the results of the study are not statistically significant.[61]
For example, a trial judge might see a hypothetical study stating its results as follows: "relative risk of 1.6 (95% confidence interval = 1.1 to 2.4)." This statement indicates that the study's positive association (greater than 1.0) is statistically significant because the confidence interval does not include 1.0 or less. That is, the confidence interval, with 95-percent confidence, excludes the possibility of no *39 association or a negative association. Conversely, another hypothetical study showing a "relative risk of 1.6 (95% confidence interval = 0.9 to 2.9)" is not statistically significant because the confidence interval includes the possibility that no association exists between the agent and the disease. This logic can be applied to other statistical measures of association.[62]
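This confidence-interval reasoning reduces to a simple check of whether the interval excludes a relative risk of 1.0. The sketch below illustrates the check only; it is no substitute for evaluating the study itself:

```python
def significant_at_interval(ci_lower, ci_upper):
    """An association is statistically significant at the study's chosen
    confidence level only if the interval excludes 1.0 entirely."""
    return ci_lower > 1.0 or ci_upper < 1.0

# An interval lying wholly above 1.0 -> statistically significant.
print(significant_at_interval(1.1, 2.4))  # True
# An interval that straddles 1.0 cannot rule out "no association."
print(significant_at_interval(0.9, 2.9))  # False
```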
But significance testing shows only that random chance probably did not produce the observed association.[63] Experts also consider whether a data collection error or design error affected the study's results. Also, they ask whether researchers failed to consider some other exposure or characteristic that varies between the groups and could explain the incidence of disease. Experts refer to these types of errors as bias and uncontrolled confounding, respectively.[64] A poorly conceived or conducted study that is statistically significant could be far less reliable than a well-conceived and conducted study that is not statistically significant.[65]
(d) Determining General Causation
While important, a positive association presents only one piece of the causation puzzle. "Once an association has been found between exposure to an agent and development of a disease, researchers consider whether the association reflects a true cause-effect relationship."[66] As noted, "[e]pidemiologists use causation to mean that an increase in the incidence of disease among the exposed subjects would not have occurred had they not been exposed to the agent."[67] But determining causation differs from the objective inquiry into relative risk. An assessment of a causal relationship is not a scientific methodology as that term is used to describe logic (like a syllogism) and analytic methods. Instead, it involves subjective judgment. Experts consider several factors under different sets of criteria that can point to causation. Relative risk presents only one factor that they consider[68]:
Drawing causal inferences after finding an association and considering [causation] factors requires judgment and searching analysis, based on biology, of why a factor or factors may be absent despite a causal relationship, and vice-versa. While the drawing of causal inferences is informed by scientific expertise, it is not a determination that is made by using scientific methodology.[69]
For example, government agencies and some experts use a weight-of-the-evidence methodology. That methodology comprehensively analyzes the data from different scientific fields, primarily animal tests and epidemiological studies, to assess carcinogenic risks.[70] As Justice Stevens has noted, it cannot be "intrinsically `unscientific' for experienced professionals to arrive at a conclusion by weighing all available scientific evidence" when the Environmental *40 Protection Agency uses this methodology to assess risks.[71] But no generally agreed-upon method exists for determining how much weight to apply to particular types of studies.[72]
Alternatively, the Reference Manual sets out the "Bradford Hill" factors that epidemiologists consider to assess general causation. The U.S. Surgeon General first suggested these criteria in 1964; in 1965, Sir Austin Bradford Hill expanded on them.[73] The factors include (1) temporal relationship, (2) strength of the association, (3) dose-response relationship, (4) replication of the findings, (5) biological plausibility, (6) consideration of alternative explanations, (7) cessation of exposure, (8) specificity of the association, and (9) consistency with other knowledge.[74] The Reference Manual explains that one or more causation factors may be absent even when a true causal relationship exists.[75] In addition, experts emphasize that
[s]ince causal actions of exposures are neither observable nor provable, a subjective element is present in judging whether, for a given exposure, such an action exists. As a result, scientists may differ both in terms of interpretation of available evidence in support of criteria used to aid causal inference, and in relative weight assigned to each criteria.[76]
Here, we comment only on the factors that could raise questions on remand.
(i) Strength of Association
Remember, regarding an association's strength, the higher the relative risk, the greater the likelihood that a relationship is causal.[77] Yet lower relative risks can reflect causality. But researchers and experts using the data will scrutinize these studies to ensure that the associations are not attributable to uncontrolled confounding factors or biases.[78]
(ii) Dose-Response Relationship
A dose-response relationship is primarily a hallmark of toxicology.[79] If higher exposures to the agent increase the incidence of disease, the evidence strongly suggests a causal relationship.[80] "For example, lung cancer risk increases in relation to the number of cigarettes smoked per day."[81] Based on this principle, some courts have held that a plaintiff cannot recover without showing (1) the level of exposure to an agent that is dangerous to human health and (2) the plaintiff's actual exposure to a level of the defendant's toxic substance that is known to cause harm.[82]
*41 In contrast, the Reference Manual states that a dose-response relationship presents strong but not essential evidence of a causal relationship.[83] Often, a physician will not have measures of the environmental exposure. An expert, however, can infer the exposure level from industrial hygiene studies or records and the patient's description of the work environment, duration of exposure, and his or her reactions.[84] Ellenbecker used this kind of data to estimate Bradley's exposure in his testimony.
Relying on the Reference Manual, the Fourth Circuit has held that precise information about the exposure necessary to cause harm and the plaintiff's exact exposure level are not always necessary "to demonstrate that a substance is toxic to humans given substantial exposure."[85] The court reasoned that in occupational settings, humans are rarely "`exposed to chemicals in a manner that permits quantitative determination of adverse outcomes.'"[86]
Similarly, the Eighth Circuit has held that a plaintiff need not produce "`"a mathematically precise table equating levels of exposure with levels of harm"'" to show that she was exposed to a toxic level of a substance.[87] The court concluded that a plaintiff's claim does not fail simply because the medical literature had not yet conclusively shown the connection between the toxic substance and the plaintiff's condition. Thus, the court held that a plaintiff adduces sufficient evidence if a reasonable person could conclude that the plaintiff's exposure probably caused her injuries.[88]
We have similarly upheld an expert's reliance on evidence of the plaintiff's substantial exposure to a known toxic substance.[89] So allowing semiquantitative or qualitative estimates of exposure from occupational studies and the plaintiff's testimony seems appropriate here. The evidence shows that the safe exposure levels to diesel exhaust are set low because it can unquestionably cause some diseases.[90]
(iii) Replication of Findings
Experts also consider replication of findings in assessing causation. The Reference Manual points out that "[r]arely, if ever, does a single study conclusively demonstrate a cause-effect relationship. It is important that a study be replicated in different populations and by different investigators" before epidemiologists and other scientists accept a causal relationship.[91]
(iv) Biological Plausibility
When experts know how a disease develops, an association should show biological consistency with that knowledge.[92] But "`"[w]hat is biologically plausible depends upon the biological knowledge of the *42 day."'"[93] An expert's inability to explain a disease's pathology or progression goes to the weight of the evidence, not to its admissibility.[94]
With these principles and terms in mind, we turn to the parties' contentions, the legal standards for determining the reliability of expert opinion testimony generally, and the standards for determining the reliability of epidemiological expert opinion.
3. PARTIES' CONTENTIONS
The district court did not have the benefit of our decision in Epp v. Lauby.[95] In Epp, we clarified that when an expert bases his or her opinion on a reliable methodology, a court should not exclude it solely because a disagreement exists between the parties' qualified experts. King contends that under Daubert v. Merrell Dow Pharmaceuticals, Inc.,[96] it is unreasonable to require experts to present peer-reviewed studies with absolute conclusions on causation because scientific studies do not address absolute causation. BNSF counters that the court simply found no reliable support for Frank's opinion in the studies on which he relied.
[9] To recover for exposure to a toxic substance in a FELA action, an employee must present expert testimony supporting an inference that the employee's injuries were caused by exposure to the substance attributable to the railroad's negligent act or omission.[97]
4. GENERAL ADMISSIBILITY STANDARDS FOR EXPERT TESTIMONY
[10,11] Before admitting expert opinion testimony under Neb. Evid. R. 702,[98] a trial court must determine whether the expert's knowledge, skill, experience, training, and education qualify the witness as an expert.[99] If the opinion involves scientific or specialized knowledge, trial courts must also determine whether the reasoning or methodology underlying the expert's opinion is scientifically valid.[100] Under Daubert, evidentiary reliability depends on scientific validity.[101] Normally, after a court finds that the expert's methodology is valid, it must also determine whether the expert reliably applied the methodology.[102] Finally, under Neb. Evid. R. 403,[103] the court weighs whether the expert's evidence and opinions are more probative than prejudicial.[104]
[12] Here, the parties do not dispute Frank's qualification to give expert medical testimony or to interpret epidemiological studies. We see the broad issue as *43 whether under our Daubert/Schafersman framework, Frank based his opinion on a reliable, or scientifically valid, methodology. Under that framework, the proponent of expert testimony must answer two preliminary questions by a preponderance of the evidence. First, is the expert's reasoning or methodology underlying his or her testimony scientifically valid? Second, can the finder of fact properly apply that reasoning or methodology to the facts?[105]
[13] In determining the admissibility of an expert's opinion, the court must focus on the validity of the underlying principles and methodology, not the conclusions that they generate.[106] And reasonable differences in scientific evaluation should not exclude an expert witness' opinion.[107] The trial court's role as the evidentiary gatekeeper is not intended to replace the adversary system but to ensure that "`an expert, whether basing testimony upon professional studies or personal experience, employs in the courtroom the same level of intellectual rigor that characterizes the practice of an expert in the relevant field.'"[108] In sum, while the trial court acts as the evidentiary gatekeeper, it is not a goalkeeper.
[14,15] But a trial court has discretion to exclude expert testimony if "there is simply too great an analytical gap between the data and the opinion proffered."[109] An expert's opinion must be based on good grounds, not mere "subjective belief or unsupported speculation."[110] "Good grounds" mean an inference or assertion derived by scientific method and supported by appropriate validation.[111] "[T]he expert must have `good grounds' for his or her belief" in every step of the analysis.[112] Yet courts should not require absolute certainty.[113] "[A] trial court should admit expert testimony `if there are "good grounds" for the expert's conclusion' notwithstanding the judge's belief that there are better grounds for some alternative conclusion."[114]
5. RELIABILITY FACTORS
[16] We have previously set out the factors for assessing the reliability or scientific validity of an expert's opinion. The factors are whether (1) the theory or technique can be, or has been, tested; (2) the theory or technique has been subjected to peer review and publication; (3) there is a known or potential rate of error; (4) there are standards controlling the technique's *44 operation; and (5) the theory or technique enjoys general acceptance within the relevant scientific community.[115]
[17] But these nonexclusive reliability factors do not bind a trial court. And as we have previously stated, additional factors may prove more significant in different cases, and additional factors may prove relevant under particular circumstances.[116] Under the Daubert/Schafersman framework, a trial court should not require general acceptance of the causal link between an agent and a disease or condition if the expert otherwise bases his or her opinion on a reliable methodology.[117]
[18] Here, Frank had not published his opinion that diesel exhaust can cause multiple myeloma and had not personally conducted research on this subject. These factors are relevant, but not fatal.[118] Absent evidence that an expert's testimony grows out of the expert's own prelitigation research or that an expert's research has been subjected to peer review, experts must show that they reached their opinions by following an accepted scientific method or procedure as it is practiced by others in their field.[119]
[19] Epidemiological statistical techniques for testing a causation theory have been subject to peer review and are generally accepted in the scientific community.[120] The studies Frank relied upon were subject to peer review, and the researchers did not develop the statistical techniques used in the studies for this litigation. Often, a medical expert's reliance on peer-reviewed literature can appropriately support a general causation opinion.[121] And once the expert has established that he or she reliably assessed the data, the weight of the expert's conclusion is an issue for the jury to resolve. Accordingly, the district court needed to consider only two issues regarding Frank's opinion on general causation. Were the results of the epidemiological studies Frank relied on sufficient to support his opinion regarding general causation? And did he review the scientific literature or data in a reliable manner? In other words, did too great an analytical gap exist between the data and Frank's opinion? To determine the appropriate standard for this question, we look to Neb. Evid. R. 703.[122]
6. EXCLUSION TEST FOR EXPERT'S UNREASONABLE RELIANCE ON UNDERLYING STUDIES
[20] In Daubert, the Court required trial judges assessing a proffer of expert scientific testimony under Fed.R.Evid. 702 to consider other evidentiary rules.[123] The Court specifically mentioned Fed.R.Evid. 703, which contains the same language as Nebraska's rule 703.[124] The Court stated *45 that under federal rule 703, "expert opinions based on otherwise inadmissible hearsay are to be admitted only if the facts or data are `of a type reasonably relied upon by experts in the particular field in forming opinions or inferences upon the subject.'"[125] Relying on this language, many courts dealing with professional studies have adopted the following standard for a court's exclusion of an expert's opinion: "If the underlying data are so lacking in probative force and reliability that no reasonable expert could base an opinion on them, an opinion which rests entirely upon them must be excluded."[126]
We agree with this general standard. We next set out the standards for its application more fully.
7. STANDARDS FOR DETERMINING THE RELIABILITY OF EPIDEMIOLOGICAL OPINION TESTIMONY
Although we have discussed epidemiological evidence in two other cases,[127] we do not consider either case controlling here. In neither case did we discuss what epidemiological studies must show to support an expert's general causation opinion based primarily on such evidence.
Since Daubert, the law governing expert opinion testimony based on epidemiological evidence has remained in flux. Despite these shifting sands, we set out four broad standards to assist trial courts in determining the reliability of expert testimony based on epidemiological evidence.
(a) Strength of Association
Scientists' determinations of causation are inherently tentative because they must always remain open to future knowledge.[128] Generally, researchers conservatively assess causal relationships, and they often call for stronger evidence and more research before drawing a conclusion.[129] One study of a particular population sample would not normally contain a conclusion on a causal relationship.[130] So how strong must a relative risk be before an expert can rely on it to support a general causation opinion?
We acknowledge that courts disagree on the appropriate relative risk threshold that a study must satisfy to support a general causation theory. Some courts have required a study to have a relative risk of 2.0 or greater to support a causation opinion.[131] These courts have generally reasoned that "`a relative risk greater than "2" means that the disease more likely than not was caused by the event [under investigation].'"[132] Namely, they equate *46 the relative risk requirement to a plaintiff's preponderance burden of proof in tort cases. Yet, in many of these cases, the courts failed to distinguish between general causation and specific causation. Moreover, epidemiological evidence appears to have been the only evidence supporting specific causation.[133] One of these courts, the Ninth Circuit, later reversed its position for claims in which the investigated substance is known to cause many adverse health effects.[134] For this type of claim, the Ninth Circuit now applies the "capable of causing" standard for evidence supporting general causation, instead of the doubling of the risk standard it had applied in two earlier cases.[135]
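The arithmetic behind the doubling-of-the-risk rationale can be sketched as follows. This is a standard epidemiological derivation offered for illustration; it is not part of the court's opinion:

```latex
% Relative risk (RR): the ratio of disease incidence among the exposed
% to disease incidence among the unexposed.
RR = \frac{I_{\text{exposed}}}{I_{\text{unexposed}}}

% Attributable fraction among the exposed: the share of exposed cases
% in excess of the background rate.
AF = \frac{RR - 1}{RR}

% At RR = 2, exactly half of the exposed cases are attributable to the
% exposure; above 2, more than half are:
RR = 2 \;\Rightarrow\; AF = \frac{2 - 1}{2} = 0.5
```

On this reasoning, a relative risk above 2.0 makes it "more likely than not" that a randomly selected exposed case was caused by the exposure, which is why some courts map that threshold onto the civil preponderance standard. Any relative risk above 1.0, by contrast, reflects some positive association.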
Other courts have similarly recognized that a relative risk less than 2.0 can support an expert's general causation opinion.[136] In contrast, the Eleventh Circuit has held that a district court did not abuse its discretion in finding that a relative risk of 1.24 was insufficient to support a general causation opinion.[137]
Despite this disagreement among the courts, we believe that requiring a study to show a relative risk of 2.0 or greater is too restrictive when the expert relies on the study to support an opinion on general causation. As noted, some courts have held that a relative risk above 2.0 is even sufficient to support an opinion on specific causation: that is, sufficient to support an inference that an agent caused the particular plaintiff's disease.[138] And, remember, weak associations can indicate a causal relationship, depending upon the presence of other factors.[139] Finally, some experts have stated that workplace studies can understate the true relative risk of toxic exposures. They have questioned the validity of requiring a relative risk greater than 2.0 to show general causation.[140]
[21] So we decline to set a minimum threshold for relative risk, or any other statistical measurement, above the minimum requirement that the study show a relative risk greater than 1.0. We agree that "it would be far preferable for the district court to instruct the jury on statistical significance and then let the jury decide whether many studies over the 1.0 mark have any significance in combination."[141] In short, the significance of epidemiological studies with weak positive associations is a question of weight, not *47 admissibility.[142]
(b) Ruling Out Potential Sources of Error
Likewise, disagreements exist among courts regarding the importance of statistical significance. Some courts have required the relative risk in epidemiological studies to be statistically significant.[143] And the U.S. Supreme Court affirmed a district court's exclusion of an expert's opinion, in part, because one supporting study failed to find an association between the agent and the disease and another study failed to show that the increased risk of the disease was statistically significant.[144]
[22] We agree that statistical significance is the most obvious way for a court to determine that researchers properly ruled out the possibility that random variation in the population sample accounts for the result. But those decisions requiring a study's relative risk to be statistically significant have come under fire. Experts have pointed out that the lack of statistical significance does not demonstrate that there is no relationship.[145] So not all courts impose a requirement of statistical significance.[146] We also decline to impose a statistical significance requirement if an expert shows that others in the field would nonetheless rely on the study to support a causation opinion and that the probability of chance causing the study's results is low.
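Statistical significance for a relative risk is conventionally judged with a confidence interval. The following is the standard epidemiological formulation, included for illustration and not drawn from the opinion:

```latex
% A 95% confidence interval for a relative risk is computed on the
% logarithmic scale, using the standard error of ln(RR):
95\%\ \mathrm{CI} = \exp\!\bigl(\ln RR \pm 1.96 \cdot SE(\ln RR)\bigr)

% The association is statistically significant at the 0.05 level only
% if the interval excludes 1.0, the value indicating no association.
```

A study whose interval narrowly includes 1.0 fails the conventional significance test even though its data may still point toward a positive association, which is the objection the experts cited in the text raise against a rigid significance requirement.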
[23] We also recognize that bias and uncontrolled confounding can present serious flaws in a study. But, as a practical matter, we do not expect trial courts to delve into every possible error in an expert's underlying data unless a party raises it:
[W]here one party alleges that an expert's conclusions do not follow from a given data set, the responsibility ultimately falls on that challenging party to inform (via the record) those of us who are not experts on the subject with an understanding of precisely how and why the expert's conclusions fail to follow from the data set.[147]
[24] Moreover, no study is without some errors of this nature and many prove inconsequential.[148] Thus, a court should normally not question a published epidemiological study's results over the mere possibility of error unless the study's findings plausibly appear attributable to unrecognized error.[149]
*48 (c) Number of Studies
[25] Epidemiological studies assume an important role in determining causation when they are available, and particularly when they are numerous and span a significant period.[150] Courts should normally require more than one epidemiological study showing a positive association to establish general causation, because a study's results must be capable of replication.[151] But courts are understandably reluctant to set a specified minimum number of studies showing a positive association before an expert can reliably base an opinion on them, particularly when there are other, nonepidemiological studies also supporting the expert's opinion.[152]
But we do not preclude a trial court from considering as part of its reliability inquiry whether an expert has cherry-picked a couple of supporting studies from an overwhelming contrary body of literature. Here, however, we need not determine whether Frank relied on a sufficient number of epidemiological studies. While BNSF contests Frank's studies on other grounds, it acknowledges that several studies have shown positive associations between multiple myeloma and exposure to diesel exhaust or benzene.[153]
(d) Method for Reliably Analyzing Body of Evidence
[26] A meta-analysis of observational studies can present problems if the methodologies used in the combined studies differ.[154] Thus, if an epidemiological expert has performed or relied on an unpublished meta-analysis of observational studies, the expert should show that the methodology used is generally accepted in the field. Similarly, if an expert's causation opinion has not been subjected to peer review, the expert should explain the accepted criteria that he or she has used to conclude that an agent can cause the plaintiff's disease in the general population[155]: e.g., the Bradford Hill criteria or another set of criteria for determining causal relationships.
Having determined the basic reliability standards for an expert's general causation opinion based on epidemiological evidence, we now decide whether the district court applied the proper standard.
8. DISTRICT COURT IMPROPERLY REQUIRED STUDIES TO SHOW DEFINITE CONCLUSION ON CAUSATION
[27,28] We believe the district court erred in concluding that Frank's causation opinion was unreliable because Frank could not "point to a study that concludes exposure to diesel exhaust causes multiple myeloma." As explained, individual epidemiological studies need not draw definitive conclusions on causation before experts can conclude that an agent can cause a disease.[156] If the expert's methodology appears otherwise consistent with the standards set out above, the court should admit the expert's opinion. But here, the *49 court did not inquire into Frank's methodology.
Instead, the court summarily dismissed Frank's testimony as showing his reliance "on the `totality of information regarding multiple myeloma, benzene and diesel exhaust' to reach his own subjective conclusions." Yet Frank, while admitting that studies existed finding no relationship, testified that a body of evidence supported his conclusion that diesel exhaust can cause multiple myeloma. The evidence he cited included human data studies, animal studies, and toxicology studies. Contrary to the district court's finding, Frank's testimony did not reflect a disconnect between an expert opinion and the underlying data. Frank's inquiry required him to consult the relevant scientific literature and draw a conclusion. We recognize that we have not previously set out legal standards for trial courts to follow in these cases. But, here, the court only considered whether the studies Frank relied upon showed a definite conclusion on a causal relationship. The court erred in applying a "conclusive study" standard.
It is true that King's evidence has some deficiencies. For some of the supporting studies Frank relied on, King only submitted to the court an abstract, or synopsis, of the study. And Frank failed to explain the criteria he used to reach his conclusion on causation. But these failures do not prove fatal here.
Although Frank did not personally conduct studies on the relationship between diesel exhaust and multiple myeloma, he was qualified to interpret studies on that relationship. And his reasoning appears consistent with the causation criteria discussed above. More important, these deficiencies played no role in the district court's decision because it only considered whether a study's results showed a conclusive causal relationship. We reverse the decision of the Court of Appeals with directions to remand the cause to the district court for further proceedings, and the parties can present methodology evidence on remand.
We recognize that a court's wrestling with this type of evidence is no small task. On remand, however, the district court may conduct a Daubert/Schafersman hearing. It should resolve any questions that it has or that BNSF raises regarding the sufficiency of the underlying studies or the reliability of Frank's opinion testimony. But the court should remember that regarding the sufficiency of the underlying studies, it should focus on whether no reasonable expert would rely on the studies to find a causal relationship, not whether the parties dispute their force or validity. And regarding the admissibility of Frank's opinion, the focus must be on the validity of his methodology and whether good grounds exist for his opinion, not whether his ultimate conclusion differs from that of other experts.[157]
9. SPECIFIC CAUSATION
[29] As discussed, the district court also determined that Frank's differential etiology proved unreliable. We pause here to note that courts, including this court, have not always been careful to distinguish between differential diagnosis and differential etiology. But differential diagnosis refers to a physician's "determination of which one of two or more diseases or conditions a patient is suffering from, by systematically comparing and contrasting their clinical findings."[158] In contrast, *50 etiology refers to determining the causes of a disease or disorder.[159]
The court gave three reasons for its conclusion: (1) The record did not show what causes "other th[a]n diesel exhaust exposure" Frank considered in his differential etiology; (2) "Frank `ruled in' diesel exhaust exposure as a possible cause, even though no medical or scientific study concluded that such exposure causes multiple myeloma"; and (3) Frank failed to explain why he "`ruled out'" any other potential causes.
[30,31] If an expert's general causation opinion is admissible to show that a suspected agent should be ruled in as a possible cause of the plaintiff's disease, the court must next determine whether the expert performed a reliable differential etiology.[160] To perform a reliable differential etiology, a medical expert must first compile a comprehensive list of hypotheses that might explain the set of salient clinical findings under consideration.[161] At the ruling-in stage of the analysis, an expert's opinion is not reliable if the expert considers a suspected agent that cannot cause the patient's disease.[162] Nor is the opinion reliable if the expert "completely fails to consider a cause that could explain the patient's symptoms."[163]
[32] Next, the expert engages in a process of elimination, based on the evidence, to reach a conclusion regarding the most likely cause of the disease.[164] At the ruling-out stage of the analysis, the court should focus on whether the expert had a reasonable basis for concluding that one of the plausible causative agents was the most likely culprit for the patient's symptoms.[165] The expert must have good grounds for eliminating potential hypotheses.[166] Unsupported speculation will not suffice.[167] But "[w]hat constitutes good grounds for eliminating other potential hypotheses will vary depending upon the circumstances of each case."[168]
Under this framework, the district court's first reason was incorrect. Frank's testimony shows that he considered other possible causes of multiple myeloma, including radiation exposure, diabetes, pesticide exposure, and cigarette smoking. The court's second rationale also proves incorrect. Here, the court relied on its finding that Frank improperly ruled in diesel exhaust exposure as the cause of Bradley's cancer "even though no medical or scientific study authorizes such a conclusion." We have already determined that the court applied an erroneous standard in ruling that Frank lacked good grounds for believing that Bradley's exposure to diesel exhaust likely caused his multiple myeloma.
Finally, the court incorrectly determined that Frank failed to give reasons for ruling out other possible hypotheses. Frank ruled out diabetes and radiation exposure based on Bradley's medical and personal history. In performing a differential etiology, a decision to eliminate an alternative hypothesis based on information gathered *51 by using the traditional tools of clinical medicine will usually have the hallmarks of reliability required under the Daubert/Schafersman framework. These tools include physical examinations, medical and personal histories, and medical testing.[169]
Frank explained his reasons for ruling out Bradley's possible pesticide exposure as a teenager and his cigarette smoking. Frank had reviewed epidemiological studies of these agents and believed that they failed to show a causal relationship with multiple myeloma. We emphasized in Carlson v. Okerstrom that the traditional tools for ruling out potential hypotheses in a differential etiology are "just guideposts and that often, an expert's decision to rule out an alternative hypothesis will depend on other factors for which clear rules are not available."[170]
Here, the evidence does not show that Frank failed to consider other possible hypotheses for Bradley's cancer or to explain why his causation opinion was sound despite BNSF's suggestions of alternative hypotheses. Thus, BNSF's alternative suggestions affect the weight, not the admissibility, of Frank's testimony.[171] Accordingly, on remand, the primary admissibility issue for Frank's opinion on specific causation is whether he had good grounds for ruling in Bradley's diesel exhaust exposure as a plausible cause of his cancer.
VI. CONCLUSION
We conclude that the district court applied an erroneous standard for excluding an expert's opinion testimony based on epidemiological studies. Thus, the summary judgment was improper. We therefore reverse the decision of the Court of Appeals, which affirmed the district court's decision. We remand the cause to the Court of Appeals with directions to remand the cause to the district court for further proceedings consistent with this opinion.
REVERSED AND REMANDED WITH DIRECTIONS.
STEPHAN, J., not participating.
NOTES
[1] See, 4 J.E. Schmidt, M.D., Attorney's Dictionary of Medicine and Word Finder M-280 (1998); Richard Sloane, The Sloane-Dorland Annotated Medical-Legal Dictionary 470 (1987 & Supp. 1992).
[2] See, Daubert v. Merrell Dow Pharmaceuticals, Inc., 509 U.S. 579, 113 S. Ct. 2786, 125 L. Ed. 2d 469 (1993); Schafersman v. Agland Coop, 262 Neb. 215, 631 N.W.2d 862 (2001).
[3] King v. Burlington Northern Santa Fe Ry. Co., 16 Neb.App. 544, 746 N.W.2d 383 (2008).
[4] See id.
[5] See Boren v. Burlington Northern & Santa Fe Ry. Co., 10 Neb.App. 766, 637 N.W.2d 910 (2002).
[6] See King, supra note 3.
[7] See, e.g., Borawick v. Shay, 68 F.3d 597 (2d Cir.1995); Winters v. Fru-Con Inc., 498 F.3d 734 (7th Cir.2007); U.S. v. Abdush-Shakur, 465 F.3d 458 (10th Cir.2006).
[8] See Schafersman v. Agland Coop, 268 Neb. 138, 681 N.W.2d 47 (2004). See, also, Winters, supra note 7; Abdush-Shakur, supra note 7.
[9] See McNeel v. Union Pacific RR. Co., 276 Neb. 143, 753 N.W.2d 321 (2008).
[10] Id.
[11] Carlson v. Okerstrom, 267 Neb. 397, 675 N.W.2d 89 (2004).
[12] See, Knight v. Kirby Inland Marine Inc., 482 F.3d 347 (5th Cir.2007); Bonner v. ISP Technologies, Inc., 259 F.3d 924 (8th Cir. 2001); In re Hanford Nuclear Reservation Litigation, 292 F.3d 1124 (9th Cir.2002); Goebel v. Denver and Rio Grande Western R. Co., 346 F.3d 987 (10th Cir.2003).
[13] See Knight, supra note 12.
[14] Reference Manual on Scientific Evidence (Federal Judicial Center 2d ed. 2000).
[15] See Michael D. Green et al., Reference Guide on Epidemiology, in Reference Manual, supra note 14 at 335-36.
[16] See, e.g., Benedi v. McNeilP.P.C., Inc., 66 F.3d 1378 (4th Cir.1995); Wells v. Ortho Pharmaceutical Corp., 788 F.2d 741 (11th Cir. 1986).
[17] See, e.g., In re Joint Eastern & Southern Dist. Asbestos Lit., 52 F.3d 1124 (2d Cir. 1995).
[18] Reference Manual, supra note 15 at 335-36.
[19] Merrell Dow Pharmaceuticals v. Havner, 953 S.W.2d 706, 715 (Tex. 1997).
[20] Reference Manual, supra note 15 at 339.
[21] Id.
[22] Id. at 340.
[23] Id. at 342.
[24] Id. at 336.
[25] Id. at 374.
[26] Id. at 387.
[27] Id. at 374.
[28] See id. at 336-37, 374.
[29] Schafersman, supra note 2.
[30] Id.
[31] See Reference Manual, supra note 15 at 348, 350-51.
[32] Id. at 376.
[33] See id. at 348.
[34] See id.
[35] In re Silicone Gel Breast Impl. Prod. Liab. Lit., 318 F. Supp. 2d 879, 892 (C.D.Cal.2004).
[36] Reference Manual, supra note 15 at 349.
[37] Id. See, also, In re Bextra and Celebrex Marketing Sales Practices, 524 F. Supp. 2d 1166 (N.D.Cal.2007).
[38] Reference Manual, supra note 15 at 376.
[39] See id. at 350-54.
[40] See, 2 Michael Dore, Law of Toxic Torts § 28:23 (2008); Reference Manual, supra note 15 at 350.
[41] Reference Manual, supra note 15 at 380. See, also, In re Bextra and Celebrex Marketing Sales Practices, supra note 37.
[42] See, In re Paoli R.R. Yard PCB Litigation, 916 F.2d 829 (3d Cir. 1990); Intern. Un. Loc. 68 Welf. Fund v. Merck, 192 N.J. 372, 929 A.2d 1076 (2007).
[43] Reference Manual, supra note 15 at 384.
[44] Id.
[45] In re Bextra and Celebrex Marketing Sales Practices, supra note 37.
[46] See Reference Manual, supra note 15 at 377.
[47] Id. at 354.
[48] Id.
[49] See, DeLuca v. Merrell Dow Pharmaceuticals, Inc., 911 F.2d 941 (3d Cir.1990), abrogated on other grounds, In re Paoli R.R. Yard PCB Litigation, 35 F.3d 717 (3d Cir.1994).
[50] Reference Manual, supra note 15 at 396.
[51] See id. at 357.
[52] Id. at 357. See, also, David H. Kaye & David A. Freedman, Reference Guide on Statistics, in Reference Manual, supra note 14 at 156; Richard Scheines, Causation, Statistics, and the Law, 16 J.L. & Pol'y 135 (2007).
[53] See, Reference Manual, supra note 15 at 357; Scheines, supra note 52 at 149.
[54] See, Reference Manual, supra note 15 at 357-58; Scheines, supra note 52 at 149.
[55] See Reference Manual, supra note 15 at 358.
[56] See id. at 359.
[57] See, id. at 360; Kenneth J. Rothman, Modern Epidemiology 119 (1986).
[58] Reference Manual, supra note 15 at 360, 389.
[59] See, Reference Manual, supra note 15 at 361; Rothman, supra note 57. See, also, John F. Costello, Jr., Comment, Mandamus as a Weapon of "Class Warfare" in Sixth Amendment Jurisprudence: A Case Comment on United States v. Santos, 36 J. Marshall L.Rev. 733 (2003).
[60] Reference Manual, supra note 15 at 361; Rothman, supra note 57.
[61] Reference Manual, supra note 15 at 389.
[62] See Scheines, supra note 52.
[63] See, 3 David L. Faigman et al., Modem Scientific Evidence § 23:42 (2007); Scheines, supra note 52.
[64] See Reference Manual, supra note 15 at 363-73.
[65] See, e.g., DeLuca, supra note 49.
[66] See Reference Manual, supra note 15 at 374.
[67] Id.
[68] See id. at 376.
[69] Id. at 375. See, also, Douglas L. Weed, Evidence Synthesis and General Causation: Key Methods and an Assessment of Reliability, 54 Drake L.Rev. 639 (2006).
[70] See, Magistrini v. One Hour Martinizing Dry Cleaning, 180 F. Supp. 2d 584 (D.N.J. 2002).
[71] See, General Electric Co. v. Joiner, 522 U.S. 136, 153-54, 118 S. Ct. 512, 139 L. Ed. 2d 508 (1997) (Stevens, J., concurring).
[72] See Weed, supra note 69, citing Sheldon Krimsky, The Weight of Scientific Evidence in Policy and Law, 95 Am. J. Pub. Health 129 (Supp. 1 2005).
[73] See Reference Manual, supra note 15 at 375-76.
[74] Id. at 375.
[75] Id. at 376.
[76] 3 Faigman et al., supra note 63, § 23:45 at 263.
[77] See Reference Manual, supra note 15 at 376.
[78] See id. at 377.
[79] See id. at 403. See, also, Louderback v. Orkin Exterminating Co., Inc., 26 F. Supp. 2d 1298 (D.Kan. 1998); David L. Eaton, Scientific Judgment and Toxic Torts: A Primer in Toxicology for Judges and Lawyers, 12 J.L. & Pol'y 5, 15 (2003).
[80] Reference Manual, supra note 15 at 377.
[81] 3 Faigman et al., supra note 63, § 23:45 at 262.
[82] See, e.g., Wright v. Willamette Industries, Inc., 91 F.3d 1105 (8th Cir.1996); Mitchell v. Gencorp Inc., 165 F.3d 778 (10th Cir.1999).
[83] Reference Manual, supra note 15 at 377. See, also, Louderback, supra note 79.
[84] See Reference Manual, supra note 15 at 454-55.
[85] See Westberry v. Gislaved Gummi AB, 178 F.3d 257, 264 (4th Cir. 1999).
[86] Id. See, also, Kannankeril v. Terminix Intern., Inc., 128 F.3d 802 (3d Cir.1997).
[87] Bonner, supra note 12, 259 F.3d at 928, quoting Bednar v. Bassett Furniture Mfg. Co., Inc., 147 F.3d 737 (8th Cir.1998).
[88] Id.
[89] See Sheridan v. Catering Mgmt., Inc., 252 Neb. 825, 566 N.W.2d 110 (1997).
[90] See Eaton, supra note 79.
[91] Reference Manual, supra note 15 at 377.
[92] See id. at 378.
[93] Marcum v. Adventist Health System/West, 345 Or. 237, 193 P.3d 1 (2008), quoting Sir Austin Bradford Hill, The Environment and Disease: Association or Causation? 58 Proc. R. Soc. Med. 295 (1965).
[94] See Marcum, supra note 93.
[95] Epp v. Lauby, 271 Neb. 640, 715 N.W.2d 501 (2006).
[96] See Daubert, supra note 2.
[97] See McNeel, supra note 9.
[98] Neb.Rev.Stat. § 27-702 (Reissue 2008).
[99] See, State v. Mason, 271 Neb. 16, 709 N.W.2d 638 (2006); Carlson, supra note 11.
[100] Epp, supra note 95; Mason, supra note 99.
[101] See McNeel, supra note 9, citing Daubert, supra note 2.
[102] See, Epp, supra note 95; Mason, supra note 99; Carlson, supra note 11. But see McNeel, supra note 9.
[103] Neb.Rev.Stat. § 27-403 (Reissue 2008).
[104] See, Epp, supra note 95; Mason, supra note 99.
[105] See, Daubert, supra note 2; McNeel, supra note 9. See, also, Cooper v. Smith & Nephew, Inc., 259 F.3d 194 (4th Cir.2001); Sigler v. American Honda Motor Co., 532 F.3d 469 (6th Cir.2008); Lauzon v. Senco Products, Inc., 270 F.3d 681 (8th Cir.2001); Cook ex rel. Tessier v. Sheriff of Monroe County, 402 F.3d 1092 (11th Cir.2005).
[106] See, Daubert, supra note 2; Schafersman, supra note 2.
[107] See Schafersman, supra note 2.
[108] See Schafersman, supra note 8, 268 Neb. at 148, 681 N.W.2d at 55, quoting Kumho Tire Co. v. Carmichael, 526 U.S. 137, 119 S. Ct. 1167, 143 L. Ed. 2d 238 (1999).
[109] Joiner, supra note 71, 522 U.S. at 146, 118 S. Ct. 512.
[110] Daubert, supra note 2, 509 U.S. at 590, 113 S. Ct. 2786.
[111] Id.
[112] In re Paoli R.R. Yard PCB Litigation, supra note 49, 35 F.3d at 742, quoting Daubert, supra note 2.
[113] See, Daubert, supra note 2; Epp, supra note 95.
[114] Magistrini, supra note 70, 180 F. Supp. 2d at 595, quoting Heller v. Shaw Industries, Inc., 167 F.3d 146 (3d Cir.1999).
[115] See, Epp, supra note 95; Carlson, supra note 11.
[116] Epp, supra note 95; Carlson, supra note 11; Schafersman, supra note 2.
[117] See Epp, supra note 95.
[118] See Daubert, supra note 2.
[119] Daubert v. Merrell Dow Pharmaceuticals, Inc., 43 F.3d 1311 (9th Cir.1995).
[120] See, e.g., Goebel, supra note 12; In re Silicone Gel Breast Impl. Prod. Liab. Lit., supra note 35; Epp, supra note 95.
[121] See, e.g., Ambrosini v. Labarraque, 101 F.3d 129 (D.C.Cir.1996); Goebel, supra note 12.
[122] See Neb.Rev.Stat. § 27-703 (Reissue 2008).
[123] See Daubert, supra note 2.
[124] See § 27-703. See, also, State v. Draganescu, 276 Neb. 448, 755 N.W.2d 57 (2008).
[125] Daubert, supra note 2, 509 U.S. at 595, 113 S. Ct. 2786.
[126] In re Agent Orange Product Liability Lit., 611 F. Supp. 1223, 1245 (E.D.N.Y.1985). Accord, e.g., In re Paoli R.R. Yard PCB Litigation, supra note 42; Bouchard v. American Home Products Corp., 213 F. Supp. 2d 802 (N.D.Ohio 2002); Smith v. Ortho Pharmaceutical Corp., 770 F. Supp. 1561 (N.D.Ga.1991); Havner, supra note 19.
[127] See, Schafersman, supra note 2; Epp, supra note 95.
[128] Reference Manual, supra note 15 at 374. See, also, Daubert, supra note 2.
[129] Reference Manual, supra note 15 at 375.
[130] See id. See, also, Berry v. CSX Transp., Inc., 709 So. 2d 552 (Fla.App.1998).
[131] See, e.g., DeLuca, supra note 49; Daubert, supra note 119; In re Breast Implant Litigation, 11 F. Supp. 2d 1217 (D.Colo.1998). See, also, Russellyn S. Carruth & Bernard D. Goldstein, Relative Risk Greater Than Two in Proof of Causation in Toxic Tort Litigation, 41 Jurimetrics J. 195 (2001); Reference Manual, supra note 15 at 359 n. 73 (citing cases).
[132] DeLuca, supra note 49, 911 F.2d at 959 (emphasis omitted), quoting Manko v. United States, 636 F. Supp. 1419 (W.D.Mo.1986).
[133] See, e.g., DeLuca, supra note 49; Daubert, supra note 119.
[134] See In re Hanford Nuclear Reservation Litigation, supra note 12.
[135] Id. at 1134.
[136] See, In re Joint Eastern & Southern Dist. Asbestos Lit., supra note 17; In re Bextra and Celebrex Marketing Sales Practices, supra note 37; In re Silicone Gel Breast Impl. Prod. Liab. Lit., supra note 35; Miller v. Pfizer, Inc., 196 F. Supp. 2d 1062 (D.Kan.2002); Pick v. American Medical Systems, 958 F. Supp. 1151 (E.D.La.1997). See, Magistrini, supra note 70; Grassis v. Johns-Manville Corp., 248 N.J.Super. 446, 591 A.2d 671 (1991). Compare Ambrosini, supra note 121.
[137] See Allison v. McGhan Medical Corp., 184 F.3d 1300 (11th Cir.1999).
[138] See, Reference Manual, supra note 15 at 384; In re Bextra and Celebrex Marketing Sales Practices, supra note 37; In re Silicone Gel Breast Impl. Prod. Liab. Lit., supra note 35.
[139] See, Reference Manual, supra note 15 at 376; Rothman, supra note 57. See, also, U.S. v. Philip Morris USA, Inc., 449 F. Supp. 2d 1 (D.D.C.2006).
[140] See, e.g., Carruth & Goldstein, supra note 131.
[141] See In re Joint Eastern & Southern Dist. Asbestos Lit., supra note 17, 52 F.3d at 1134 (emphasis omitted).
[142] See id.
[143] See, In re TMI Litigation, 193 F.3d 613 (3d Cir.1999); DeLuca, supra note 49; Brock v. Merrell Dow Pharmaceuticals, Inc., 884 F.2d 167 (5th Cir.1989); Magistrini, supra note 70. See, also, Reference Manual, supra note 15 at 359 n. 73 (citing cases).
[144] Joiner, supra note 71.
[145] See, DeLuca, supra note 49; Michael D. Green, Expert Witnesses and Sufficiency of Evidence in Toxic Substances Litigation: The Legacy of Agent Orange and Bendectin Litigation, 86 Nw. U.L.Rev. 643 (1992). See, also, Rothman, supra note 57.
[146] See, e.g., Turpin v. Merrell Dow Pharmaceuticals, Inc., 959 F.2d 1349 (6th Cir.1992); Philip Morris USA, Inc., supra note 139; Allen v. United States, 588 F. Supp. 247 (D.Utah 1984), reversed on other grounds 816 F.2d 1417 (10th Cir.1987); Berry, supra note 130.
[147] Goebel, supra note 12, 346 F.3d at 990. Accord State v. King, 269 Neb. 326, 693 N.W.2d 250 (2005).
[148] See, Berry, supra note 130; 3 Faigman et al., supra note 63, § 23:34; Reference Manual, supra note 15 at 363, 365, 369, 395.
[149] Reference Manual, supra note 15 at 372.
[150] See, Richardson by Richardson v. Richardson-Merrell, Inc., 857 F.2d 823 (D.C.Cir. 1988); In re Silicone Gel Breast Impl. Prod. Liab. Lit., supra note 35.
[151] See Reference Manual, supra note 15 at 377.
[152] See, e.g., Ambrosini, supra note 121; In re Joint Eastern & Southern Dist. Asbestos Lit., supra note 17.
[153] See Beck v. Koppers, Inc., No. 3:03CV60-P-D, 3:04CV160-P-D, 2006 WL 270260 (N.D.Miss. Feb. 2, 2006) (unpublished decision).
[154] See Reference Manual, supra note 15 at 361 n. 76 & 380.
[155] See Daubert, supra note 119.
[156] See Ambrosini, supra note 121.
[157] See, Daubert, supra note 2; Epp, supra note 95.
[158] Dorland's Illustrated Medical Dictionary 458 (28th ed. 1994).
[159] See id. at 585.
[160] See Carlson, supra note 11.
[161] See id.
[162] See id.
[163] Id., 267 Neb. at 414, 675 N.W.2d at 105 (emphasis omitted).
[164] See id.
[165] Id.
[166] See id.
[167] Id.
[168] Id. at 414-15, 675 N.W.2d at 106.
[169] Carlson, supra note 11; Mary Sue Henifin et al., Reference Guide on Medical Testimony, in Reference Manual, supra note 15 at 439, 452-53.
[170] Carlson, supra note 11, 267 Neb. at 415, 675 N.W.2d at 106.
[171] See, In re Paoli R.R. Yard PCB Litigation, supra note 49; Heller, supra note 114; Westberry, supra note 85.