A jury found appellant guilty of, among other things, first-degree felony murder (burglary) while armed, first-degree sexual abuse while armed, first-degree theft of a motor vehicle from a senior citizen, and related lesser included offenses. The principal issue on appeal is whether the trial judge erroneously admitted the expert opinion of an FBI forensic document examiner that a piece of handwriting left on the body of the murder victim had been written by appellant. Specifically, we must decide whether opinion evidence of this kind based on comparison of “known” and “questioned” handwritings, resulting in the opinion that the same individual wrote both documents, meets the test of “general acceptance of a particular scientific methodology,” Ibn-Tamas v. United States,
Rejecting as well appellant’s remaining assignment of error, see part III., infra, we affirm the judgments of conviction except for those the parties agree must be vacated on remand under merger principles.
I.
The government’s proof allowed the jury to find beyond a reasonable doubt that appellant sexually assaulted and killed 78-year-old Martha Byrd in her home (next to his family’s home) within a day or two before September 4, 2004, when her body was found lying in her bed. Ms. Byrd had been strangled from the rear and made unconscious with a cloth ligature wrapped around her neck, and stabbed five times from the front in the torso. Semen found on her thighs and material taken from her vagina contained sperm matching appellant’s DNA profile by an overwhelming statistical probability. His left ring-finger print was found on the inside frame of a sliding glass door that had been forced open to allow entry to the home. The night before Ms. Byrd’s body was found, appellant had been seen “[s]peeding up and down the street” in her car, and his palm print was later lifted from the car interior. Pieces of black cloth found on the rear floorboard of the vehicle, in the opinion of a forensic fiber analyst, could have originated from the same textile garment as the ligature used to strangle the victim.
An envelope found on Ms. Byrd’s body contained a handwritten note that read: “You souldins [sic] have cheated on me.”
II.
The government’s case relied on multiple forms of forensic evidence comparison and resulting expert opinion testimony— DNA, fiber, fingerprint, and handwriting comparison (not to say medical examiner analysis) — but appellant and amicus challenge only the admissibility of Maldonado’s opinion that appellant wrote the note found on Ms. Byrd’s body. This challenge, however, particularly as made in the briefs of amicus (who chiefly represented appellant on this issue at oral argument), amounts to an attack on the “ ‘pattern-matching’ [forensic] disciplines in general” except for “nuclear DNA analysis.” Reply Br. for Amicus Curiae at 11-12. According to amicus, the recent NRC Report— which we discuss in part II. E., infra — has concluded that “none of the pattern-matching disciplines, including handwriting identification, satisf[y the basic] requirements” of science. Id. at 12 (emphasis by amicus). But the issue before this division is the admissibility of expert opinions derived from one such discipline, forensic handwriting identification, and we imply nothing about whether other pattern-matching disciplines meet the foundational test for admissibility in this jurisdiction.
We first summarize the standards governing the admission of evidence of this kind; then recite in detail the evidence received by the trial court (Judge Kravitz) at the pretrial hearing on the issue, and explain our agreement with his conclusion that such evidence is admissible; and lastly discuss why the NRC Report, not available to Judge Kravitz at the time, does not alter our conclusion of admissibility.
“In the District of Columbia,” we explained recently,
before expert testimony about a new scientific principle [may] be admitted, the testing methodology must have become sufficiently established to have gained general acceptance in the particular field in which it belongs. The issue is consensus versus controversy over a particular technique, not its validity. Moreover, general acceptance does not require unanimous approval. Once a technique has gained such general acceptance, we will accept it as presumptively reliable and thus generally admissible into evidence. The party opposing the evidence, of course, may challenge the weight the jury ought to give it.
(Ricardo) Jones v. United States,
The government nonetheless concedes that, as this court has never decided whether handwriting identification meets Frye’s general acceptance standard, it is proper for us to do so here — especially given the trial court’s lengthy consideration of the issue — whether or not handwriting identification is a “novel” forensic science. See Br. for Appellee at 15 n. 15. We therefore proceed to the merits of appellant’s challenge, guided by three general principles. First, since the government asks us “to establish the law of the jurisdiction for future cases” involving admissibility of handwriting identification, we “review the trial court’s analysis de novo” rather than for abuse of discretion. (Nathaniel) Jones v. United States,
B.
The government’s main witness at the admissibility hearing was FBI supervisory document analyst Diana Harrison, a certified forensic document examiner. Besides her experience of ten years as an FBI document analyst, Harrison was a member of the Mid-Atlantic Association of Forensic Scientists, ASTM International (ASTM standing for the American Society for Testing and Materials), and the Scientific Working Group for Documents (SWGDOC), an organization of forensic document examiners from federal, state, and private document examination laboratories that develops standards for the document examination field.
Harrison testified that professional standards for forensic document examiners are published through ASTM International and subject to peer review.
According to Harrison, document examiners operate on the principle that handwriting is unique, meaning that “no two people share the same writing characteristics,” although “some of the handwriting” of twins has been shown to be “very similar.” Further, since no one writes “with machine[-]like precision,” variations are seen within a person’s own handwriting even within the same document. But “given [a] sufficient quantity and quality of
Harrison described the four-step process followed generally (and by FBI document examiners) in expert comparison of handwriting. The procedure, known as the ACE-V method (Analysis, Comparison, Evaluation, and Verification), begins with the examiner analyzing the known writing to decide whether it was “freely and naturally prepared” rather than a simulation or tracing of other writing. Using magnification, the examiner also establishes “a range of variation” on the known writer’s part, i.e., “deviations [from a person’s] repetitive handwriting characteristics ... expected in natural, normal writing.” Once the known and questioned writings are studied separately, the examiner “compar[es them] to determine if there are similarities present between the two writings, if there are differences present ... and ... if the [range of] variation that was observed in the one writing is also observed in the [other] writing.”
[I]f you would write the word “the” usually in copybook, the T is shorter than the H and it all sits nice and evenly on the baseline. It’s evenly spaced. Straying from that, the T would be taller than the H. The E would slant down below the baseline. The T crossing wouldn’t be centered on the T. It would be heavy to the right or to the left and crossing through the H. These are significant characteristics.
Harrison conceded that there is no “standard for how many individualizing characteristics need to be present” in either the known or questioned writing for a conclusion of authorship to be reached, but she was clear that “[t]he identification of authorship is not based on one single characteristic”: “we have to have sufficient identifying characteristics in common [and] ... no significant differences and variation between the two writings.”
Harrison admitted that the analytic method she described left the decision whether two handwritings were done by the same person ultimately to the trained judgment of individual examiners.
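Purely by way of illustration, the decision logic Harrison described (a combination of shared identifying characteristics, and no significant differences, before an identification may be reached) can be sketched in code. The trait names, the `min_common` threshold, and the `evaluate` function are hypothetical constructs of ours, not the FBI's actual criteria:

```python
# Illustrative sketch only: a toy model of the evaluation logic Harrison
# described. Characteristic names and the threshold are hypothetical,
# not the FBI's actual standards (no such numeric standard exists,
# as Harrison conceded).

def evaluate(known_features, questioned_features, min_common=10):
    """Compare two sets of observed handwriting characteristics.

    Returns 'identification' only when the writings share sufficient
    identifying characteristics and show no significant differences,
    mirroring the testimony that no single characteristic suffices.
    """
    common = known_features["traits"] & questioned_features["traits"]
    # Any significant characteristic present in one writing but not the
    # other (e.g., a clockwise versus counterclockwise lowercase "d")
    # supports elimination rather than identification.
    differences = known_features["traits"] ^ questioned_features["traits"]
    if differences & known_features["significant"]:
        return "elimination"
    if len(common) >= min_common:
        return "identification"
    return "inconclusive"   # insufficient quantity/quality of writing

known = {"traits": {f"t{i}" for i in range(12)}, "significant": {"d_eyelet_ccw"}}
questioned = {"traits": {f"t{i}" for i in range(12)}}
print(evaluate(known, questioned))   # identification
```

The sketch captures only the structure of the reasoning; as Harrison admitted, the actual judgments of sufficiency and significance remain with the trained examiner.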
The first of these was a series of studies done by Dr. Moshe Kam, a professor of electronic and computer engineering at Drexel University. Over a decade starting in 1994, Kam did five studies examining the error rates of professional document examiners as compared to lay persons. In the first study, 86 handwriting samples from twenty different writers were analyzed by seven document examiners and ten laymen; the document examiners made a total of four errors in identifying a document’s author, the laymen 247. Kam thus concluded that “professional document examiners ... are significantly better in performing writer identification than college-educated nonexperts” and that, based on this study, “the probability [is] less than 0.001” that “professionals and nonprofessionals are equally proficient at performing writer identification.”
Kam later compared the accuracy rates of 105 professional document examiners, eight document examiner trainees, and 41 laymen in determining the authorship of a particular writing.
Beyond such performance studies,
The defense’s lone witness at the Frye hearing was Mark Denbeaux, an evidence professor at Seton Hall University Law School, whom the trial judge let testify as “an expert on the general acceptance of forensic document examination in the relevant scientific fields,” while commenting that this was “not dissimilar from qualifying [Denbeaux] as an expert on a legal question.”
In 1989, Professor Denbeaux had coauthored a law review article urging courts to “cease to admit” expert testimony on handwriting comparison, primarily because proficiency tests conducted by the Forensic Science Foundation (FSF) in the 1970s and 1980s showed that document examiners were wrong at least 43 percent of the time.
While admitting that the principle that “[n]o two people write alike” is generally accepted by forensic document examiners, Professor Denbeaux had “never seen a study that proves that,” and he believed there should be empirical testing to quantify handwriting characteristics before conclusions of a handwriting match are allowed in evidence. Dr. Srihari’s studies, in his view, “certainly [had] not accepted” the proposition that handwriting comparison methodology “can be used to reach an absolute conclusion” of authorship, since “there’s at least a five percent error rate his computer can’t explain.” Despite Srihari’s experiments and the consensus of the forensic science community, Denbeaux disputed that handwriting comparison is accepted as a legitimate science, because, among other things, it has no quantifiable measures for defining “significant” handwriting characteristics and distinguishing intra- from inter-writer differences.
C.
Carefully evaluating the testimony and studies presented to him, Judge Kravitz first found “uncontradicted” FBI examiner Harrison’s testimony that the forensic document examiner community accepts two points: that no two people write exactly alike, and that a document examiner can determine if the writer of a known writing also wrote a questioned writing given sufficient samples for comparison. Further, the judge found the FBI’s laboratory method for making comparisons to be “generally accepted in the relevant scientific community” because it “follows the steps recommended by ASTM [International], ... a voluntary standards development organization with [30,000] members which publishes standards for materials, products, systems, and services in many technological, engineering and testing fields including standards of forensic document analysis.”
Further, the judge reasoned, the methodology had been tested and shown to be sound in two ways by scientists outside the field of forensic science. Dr. Kam’s studies (particularly II and V) showed that document examiners using the method “are skilled in such a way that they can identify matches more accurately than lay persons are able to, with a lower rate of false positives.” And Dr. Srihari’s tests using computerized equivalents of features employed by examiners to make comparisons confirmed Srihari’s belief that “the
The judge thus concluded that the FBI’s methodology for handwriting identification “meets a baseline standard of reliability” and general acceptance in the relevant field, even though it “leaves much room for subjective analysis,” “lacks standards and guidelines for determining significance,” and suffers from an inability of the FBI to “mak[e its] error rate known.”
D.
On the basis of the record before the trial judge, we agree that handwriting comparison and identification as practiced by FBI examiners passes the Frye test for admissibility. “[S]cientists significant either in number or experience [must] public[ly] oppose a new technique or method as unreliable” before that “technique or method does not pass muster under Frye.” United States v. Jenkins,
FBI document examiners, as Harrison testified, are trained according to and employ national standards recommended by ASTM International, a body of forensic scientists, academics, and lawyers who vote on the adoption and revision of professional standards for numerous disciplines, including handwriting analysis. The FBI laboratory is accredited and its analysts, like forensic document examiners generally, undergo peer review through organizations including the American Academy of Forensic Sciences, which has a questioned document section. FBI examiners follow the general four-step (ACE-V) process.
Thus, the FBI’s methodology for handwriting comparison is well-established and accepted in the forensic science community generally. Moreover, evidence showed that in recent years the method’s adherence to objective, replicable standards and its capacity to reach accurate conclusions of identification have been tested outside the forensic science community in two ways. First, Dr. Kam’s controlled studies beginning in 1994 have shown that trained document examiners consistently have lower error rates, by a wide margin, than lay persons attempting handwriter identification. Dr. Srihari’s computer experiments, in turn, have shown that multiple features of handwriting regularly used by examiners can be converted to quantitative measurements
All told, then, the government’s evidence at the hearing, rebutted only by the testimony of a non-scientist, Professor Denbeaux,
E.
It remains for us to consider, however, appellant’s and amicus’s argument that the 2009 NRC Report,
The NRC Report was the end-product of a comprehensive, congressionally-commissioned study of the forensic sciences by a Committee of the National Academy of Sciences, which Congress instructed (among other things) to “assess the present and future resource needs of the forensic science community,” “make recommendations for maximizing the use of forensic technologies and techniques to solve crimes,” and “disseminate best practices and guidelines concerning the collection and analysis of forensic evidence to help ensure quality and consistency in [its] use.” NRC Report at 1-2. The Committee was made up of “members of the forensic science community, members of the legal community, and a diverse group of scientists.” Id. at 2. Its Report numbers 286 pages, excluding appendices. Notably, however, of these many pages only five concern “Questioned Document Examination,” and just four paragraphs discuss handwriting comparison en route to the following “Summary Assessment” (minus footnotes):
The scientific basis for handwriting comparisons needs to be strengthened. Recent studies have increased our understanding of the individuality and consistency of handwriting[,] and computer studies ... suggest that there may be a scientific basis for handwriting comparison, at least in the absence of intentional obfuscation or forgery. Although there has been only limited research to quantify the reliability and replicability of the practices used by trained document examiners, the committee agrees that there may be some value in handwriting analysis.
Id. at 166-67 (footnotes omitted). That assessment, while hardly an unqualified endorsement of “a scientific basis for handwriting comparison,” just as clearly does not spell “public[ ] opposi[tion] by the science community,” Jenkins, supra, to the reliability and hence admissibility of expert handwriting identification.
Appellant and amicus do not appear to argue otherwise. Instead, they argue that the Report taken as a whole amounts to a critique, and repudiation, of the supposed science underlying all forensic analysis based on pattern-matching, except for DNA. See Reply Br. for Amicus at 12 (the Report “concluded that none of the pattern-matching disciplines, including handwriting identification, satisfied [the basic requirements of science].”) They rely especially on the Report’s statement in Summary that, “[w]ith the exception of nuclear DNA analysis, ... no forensic method [of ‘matching’] has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a
The Report is much more nuanced than that. It ranges over a wide variety of forensic science disciplines and identifies weaknesses (and some strengths) of varying degrees in each. Thus, while pointing to the “simple reality ... that the interpretation of forensic evidence is not always based on scientific studies to determine its validity,” it finds “important variations [in terms of validity] among the disciplines relying on expert interpretation [of observed patterns].” Id. at 7-8. At one end of the spectrum (almost by itself) is DNA analysis, but “[t]he goal is not to hold other disciplines to DNA’s high standards,” since “it is unlikely that most other current forensic methods will ever produce evidence as discriminating as DNA.” Id. at 101. Closer to the other end (and discussed under the heading “Questionable or Questioned Science”) may be disciplines such as toolmark or bitemark identification, which “have never been exposed to stringent scientific inquiry” and thus “have yet to establish either the validity of their approach or the accuracy of their conclusions.” Id. at 42, 53. Yet, in virtually no instance — and certainly not as to handwriting analysis, which ultimately is all that concerns us here — does the Report imply that evidence of forensic expert identifications should be excluded from judicial proceedings until the particular methodology has been better validated.
Appellant argues, however, that the Report takes direct aim at a key aspect of the methodology the FBI employs in handwriting analysis, as described by Harrison. Discussing friction ridge analysis (exemplified by fingerprint comparison), the Report comments on the ACE-V or four-step process of analysis, comparison, evaluation, and verification that document examiners typically follow:
ACE-V provides a broadly stated framework for conducting friction ridge analyses. However, this framework is not specific enough to qualify as a validated method for this type of analysis. ACE-V does not guard against bias; is too broad to ensure repeatability and transparency; and does not guarantee that two analysts following it will obtain the same results. For these reasons, merely following the steps of ACE-V does not imply that one is proceeding in a scientific manner producing reliable results.
Id. at 142. This criticism is unanswerable, we think, if the methodology in question is no more concrete in practice than a four-step sequence. Even to a lay observer, a technique defined only as saying, “first we analyze, then we compare, etc.,” can scarcely lay claim to scientific reliability— to yielding consistently accurate and confirmable results. But, as the trial judge recognized, the FBI’s method of analyzing handwriting goes beyond those four sequential steps: Harrison’s testimony, supported by the studies of Srihari, showed that at each of the four ACE-V steps document examiners descend to the specific by using multiple standard (and published) handwriting characteristics to reach conclusions of or against identification. Although this still leaves considerable room for the examiner’s subjective judgment as to significance, that is very different from saying that the process employed is no more than a skeletal ACE-V set of steps.
In sum, the NRC Report, while it finds “a tremendous need for the forensic science community to improve” generally and identifies flaws in the methodology of “a number of forensic science disciplines,” expressly “avoid[s] being too prescriptive in its recommendations,”
F.
We therefore uphold Judge Kravitz’s ruling on admissibility. As in all such cases, however, it is important — and is reflected in the preponderance of the evidence standard — that appellant was not denied a second opportunity to challenge FBI examiner Maldonado’s expert opinion, this time before the jury. Rejecting the view of those “overly pessimistic about the capabilities of the jury and of the adversary system generally,” the Supreme Court has reminded us that “[v]igorous cross-examination, presentation of contrary evidence, and careful instruction on the burden of proof are the traditional and appropriate means of attacking shaky but admissible evidence.” Daubert, supra note 5,
I fully expect the defense to conduct a thorough cross-examination that will expose any and all inadequacies and points of unreliability of the ACE-V method as a general matter, as well as the ... inadequacies and points of ... unreliability in the application of that method in this case.
In consultation with its own experts the defense has fully investigated the points of weakness of the FBI laboratory’s approach to handwriting analysis, including its underlying theoretical premises, its general methodology and the methodology as applied in this case. If defense counsel exposes those weaknesses to the jury with the same thoroughness and clarity with which it exposed them at the Frye hearing, then in my view there is no reason for any concern that the jury in this case will give undue weight to the FBI document examiner’s testimony.
The vigorous challenge appellant in fact made to Maldonado’s testimony before the jury bears this out.
III.
Following his arrest a week after the murder, appellant was interviewed by the police and eventually confessed to stealing Martha Byrd’s car, strangling and stabbing her, and writing the murder scene note. On his motion to suppress the statement, and after a lengthy evidentiary hearing, the trial judge found that during the videotaped interview, at a point before appellant had said anything inculpatory, he signaled his desire to remain silent and the police then failed to “scrupulously honor” his assertion of the right. See Michigan v. Mosley,
He nonetheless argues that the statement was coerced, and that its permitted use for impeachment had the effect of keeping him off the stand when he otherwise would have testified. The government counters that he waived the issue of voluntariness by not testifying, relying on Luce v. United States,
IV.
Accordingly, the judgments of conviction are affirmed, except that on remand the trial court must vacate those convictions which the parties agree merged into others.
So ordered.
Notes
. To illustrate, Maldonado explained that for the word "You,” both the note and appellant’s known writings showed (1) that the uppercase letter "Y” was made with three strokes; (2) similar space separated the letters "ou” from the vertical staff of the "Y”; (3) the letters ”o” and "u” were connected at the middle of their formations; and (4) the finishing stroke of the “u” went to the right, forming a "hook” over the baseline. Maldonado showed the jury several other examples of similarities between the known and questioned writings.
. By contrast, he stated, if a questioned writing’s lowercase “d” is comprised of an "eyelet” made counterclockwise, and the known writings contain only a lowercase ”d” made with a clockwise motion, "that is considered a significant difference.”
. See, e.g., Keyser v. Pickrell,
. Neither party asks us to depart from the Frye test (a course necessitating en banc action in any event) in favor of Daubert v. Merrell Dow Pharmaceuticals, Inc.
. ASTM has 30,000 members, and within the membership are several committees, including a committee on forensic sciences and a subcommittee on forensic document examination. ASTM members, who include not just document examiners but also other forensic scientists, academics, and lawyers, vote on the adoption or revision of professional standards.
. Professional journals in which forensic document examiners publish include Forensic Science Communication, the Journal of Forensic Sciences, the Canadian Society of Forensic Science Journal, and the Journal of the American Society of Questioned Document Examiners.
. If the variations observed in one writing are not seen in the other, Harrison said, this "may render an inconclusive ... or less than definitive result.”
. "[A]re two letters in a word sitting right on top of each other or are they spaced evenly apart or is there more space between these two letters in the word than [in] the rest of the letters in the word?”
. ”[F]or example, ... you could have a handwritten N that has one stroke going straight down, then almost a U stroke” — "a two-stroke N” — in contrast to a "three-stroke N” with "strokes ... down, diagonal, and up.”
. "[I]t's the combination of characteristics. You could have a five-page handwritten letter that you wouldn’t be able to identify, if it's all
. "Q.... [s]o it’s totally up to the document examiner to determine whether or not ... the [observed] characteristic is an identifying characteristic and ... whether there’s a sufficient number of those characteristics to call it a match, right?
A. Correct.”
. Moshe Kam et al., Proficiency of Professional Document Examiners in Writer Identification, 39 J. Forensic Sci. 5 (1994) (Kam I).
. Moshe Kam et al., Writer Identification by Professional Document Examiners, 42 J. Forensic Sci. 778 (1997) (Kam II).
. In response to criticism of Kam II, a third study was conducted based on varying the rates of compensation for participating laymen. See Moshe Kam et al., Effects of Monetary Incentives on Performance of Nonprofessionals in Document-Examination Proficiency Tests, 43 J. Forensic Sci. 1000 (1998) (Kam III). Kam III found no statistically different results in the laymen’s error rates when differing rates of compensation were offered.
. Moshe Kam et al., Signature Authentication by Forensic Document Examiners, 46 J. Forensic Sci. 884 (2001) (Kam IV). Finally, in "Kam V," Writer Identification Using Hand-Printed and Non-Hand-Printed Questioned Documents, 48 J. Forensic Sci. 1391 (2003), Kam reevaluated the results from Kam II to focus on error rates for hand-printed (versus cursive) samples and found that, for hand-printed writings, document examiners had a false positive error rate of 9.3 percent compared to a 40.45 percent error rate for laymen.
. The government further cited an Australian study using "a signature comparison task” which found that forensic document examiners had a substantially lower rate of false positive identifications (3.4 percent) than laymen (19.3 percent). J. Sita et al., Forensic Hand-Writing Examiners’ Expertise for Signature Comparison, 47 J. Forensic Sci. 1117 (2002).
. Sargur N. Srihari et al., Individuality of Handwriting, 47 J. Forensic Sci. 1 (2002). In Dr. Srihari's view at the time the study was published, the hypothesis of uniqueness had "not been subjected to rigorous scrutiny with the accompanying experimentation, testing, and peer review.”
. For this Srihari cited Roy Huber and A.M. Headrick, Handwriting Identification: Facts and Fundamentals (1999). Dr. Srihari's study listed these as:
arrangement; class of allograph; connections; design of allographs (alphabets) and their construction; dimensions (vertical and horizontal); slant or slope; spacings, intraword and interword; abbreviations; baseline alignment; initial and terminal strokes; punctuation (presence, style, and location); embellishments; legibility or writing quality; line continuity; line quality; pen control; writing movement (arched, angular, interminable); natural variations or consistency; persistency; lateral expansion; and word proportions.
. The computer was evaluated for accuracy in performing two tasks with the writing samples: (1) identifying a writer from among a possible set of writers, which it did with 98 percent accuracy (the "identification” method); and (2) determining whether two documents were written by the same writer, which the computer did with 96 percent accuracy (the "verification” method). For both models, the more handwriting that was compared, the higher was the rate of resulting accuracy in making a match.
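Purely as an illustration of the "verification" task just described, two writings can be reduced to quantitative feature measurements and called same-writer when the resulting vectors lie close together. The feature values and the distance threshold below are hypothetical stand-ins of ours, not figures from Dr. Srihari's study:

```python
# Toy sketch (ours, not Dr. Srihari's software) of verification:
# measure each writing quantitatively, then threshold the distance
# between the two feature vectors. All numbers are hypothetical.
import math

def same_writer(features_a, features_b, threshold=0.5):
    """Return True when the measured feature vectors are close enough."""
    return math.dist(features_a, features_b) < threshold

# Hypothetical measurements, e.g. slant, word spacing, loop height:
writer1_doc1 = [0.42, 1.10, 0.35]
writer1_doc2 = [0.45, 1.05, 0.33]   # same writer, natural variation
writer2_doc1 = [0.90, 0.60, 0.80]   # different writer
print(same_writer(writer1_doc1, writer1_doc2))   # True
print(same_writer(writer1_doc1, writer2_doc1))   # False
```

The sketch also shows why more handwriting improves accuracy in such a scheme: additional measured features make the vectors more discriminating.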
. Dr. Srihari later conducted another computer study examining handwriting of twins and non-twins. See Sargur N. Srihari et al., On the Discriminability of the Handwriting of Twins, 53 J. Forensic Sci. 1 (2008). In this study, writing samples were obtained from 206 pairs of fraternal or identical twins as well as from non-twins. Using the verification method — in which two writings are compared to determine if they are from the same writer — the computer had an identification accuracy rate of 87 percent when comparing twins' handwriting, and an accuracy rate of 96 percent for non-twins. The computer’s combined error rate was nine percent. Three professional document examiner trainees and nine laymen also participated in the study. The trainee-examiners outperformed the computer, with an average error rate of less than five percent. The laymen's average error rate, 16.5 percent, was higher than that of the computer or the trainee-examiners.
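As an arithmetic check of our own (not the study's), the nine percent combined error rate is roughly the average of the computer's twin and non-twin error rates:

```python
# Our arithmetic check, not the study's: averaging the computer's twin
# and non-twin error rates approximates the reported combined figure.
twin_error = 1 - 0.87       # 13 percent error on twins' handwriting
non_twin_error = 1 - 0.96   # 4 percent error on non-twins' handwriting
combined = (twin_error + non_twin_error) / 2
print(round(combined * 100, 1))   # 8.5
```

The unweighted average is 8.5 percent; the reported nine percent presumably reflects the relative numbers of twin and non-twin comparisons in the study.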
. When the judge asked Denbeaux on voir dire, "Why should I not view you as just some lawyer who knows about all this stuff,” Denbeaux replied: "If people know enough about some stuff they’re experts.... [O]ne of my premises is that you can become an expert by training, skill, experience, knowledge or education.”
. Denbeaux had no formal training (and had taken no classes) in handwriting comparison, had no experience in research methodology, was not a statistician, and had not trained in computer science; and he was not a member of ASTM International, which accepts some attorneys as members and which he described as a "highly respected” organization.
. See D. Michael Risinger, Mark P. Denbeaux & Michael J. Saks, Exorcism of Ignorance as a Proxy for Rational Knowledge: The Lessons of Handwriting Identification “Expertise,” 137 U. Pa. L. Rev. 731 (1989).
. Harrison had testified that there is no known error rate for the methodology because "it’s very difficult to separate the methodology error rate” from the "error rate of the particular examiner.”
. As Judge Kravitz found, even a survey of professional document examiners relied on by Professor Denbeaux at the hearing revealed that they strongly accept the principle that no two people write exactly alike.
. Srihari’s 2002 study defined the programmable "features” as "quantitative measurements that can be obtained from a handwriting sample in order to obtain a meaningful characterization of the writing style.”
. Appellant claims the trial judge erred by excluding Denbeaux, an evidence professor, from the relevant scientific community for Frye purposes. Judge Kravitz recognized that that community "is certainly not limited to forensic scientists," but believed it was “also pretty clear that it doesn’t extend as far as law professors or other people who have simply studied an issue but don’t have the relevant scientific background and training” in the field of expertise. Whether the judge excluded Denbeaux from those able to contribute to the Frye discussion or instead — as we think more likely the case, since he qualified Denbeaux as an expert and heard his testimony at length — gave his opinions greatly reduced weight, is of no consequence to the conclusion we reach after independently reviewing the evidence.
. National Research Council, Committee on Identifying the Needs of the Forensic Science Community, Strengthening Forensic Science in the United States: A Path Forward (2009) (NRC Report).
. Amicus finds it unremarkable that a critique mainly by scientists of the supposed science of pattern-matching expresses no opinion on legal admissibility. But the Report includes a lengthy discussion of the legal standards of admissibility under Frye and especially Daubert, pointedly criticizing some post-Daubert decisions of federal appellate courts for lax treatment of forensic science admissibility. Thus, the Report’s agnosticism, at worst, on the admissibility of handwriting evidence in court is significant, in our view. Cf. United States v. Rose,
. Among those recommendations were congressional funding to create an independent federal entity, the National Institute of Forensic Science, which in turn “should competitively fund peer-reviewed research into” (among other things) "the scientific bases demonstrating the validity of forensic methods.” Id. at 22-23.
. In (Ricardo) Jones, supra, we held that the NRC Report, although "not properly before us,” provided no reason for rejecting the admissibility of firearms identification evidence. See
. The judge found, among other things, that "at no time was there even any hint of physical coercion” nor yelling or "rais[ing] their voices” by the police; they gave appellant sodas and allowed him to smoke; he received "several breaks” to use the bathroom and was allowed to meet with his grandmother; "despite his relative youth [appellant] had extensive experience with the criminal and juvenile justice systems,” and "clearly understood his [initially given] Miranda rights”; and during his "back and forth” discussions with the police he "never appeared to be intimidated by the police” but rather seemed "fully in control of his emotions and ... [to be] making rational decisions.”
