
Forensics: Art or Science?

Is the study of forensics art or science? The National Research Council recently released a report entitled “Strengthening Forensic Science in the United States: A Path Forward” in which it outlined many of the problems and criticisms of modern forensic science. The report attacked the scientific foundations of many forensic disciplines, and criticized the lack of research being done to scientifically validate the reliability of the principles and techniques used by forensic scientists. It is a very comprehensive report and will turn some heads in the scientific community, as well as fuel major debates on the subject in the coming months and years. Here are some of the “highlights” of the report:

  • Fingerprint science “does not guarantee that two analysts following it will obtain the same results”.
  • Shoeprint and tire-print matching methods lack statistical backing, making their accuracy “impossible to assess”.
  • Hair analyses show “no scientific support for the use of hair comparisons for individualization in the absence of (DNA).”
  • Reviews of bullet and tool-mark matching show that the “scientific knowledge base for tool mark and firearms analysis is fairly limited.”
  • Bite-mark matches display “no scientific studies to support (their) assessment, and no large population studies have been conducted.”

The art of fingerprint comparisons

There is no denying that a fingerprint examination is a subjective process. When an examiner conducts a common-source determination, he or she compares points of individuality in the two prints and evaluates whether there is sufficient quantity and quality of detail in agreement between the unknown and the known prints to reach a conclusion. These assessments are largely based on the examiner’s interpretation of the evidence, and there are no specific measurements or standard tests to benchmark against, aside from counting the number of points in agreement. In the U.S., the courts deliberately eliminated a threshold standard of agreement so that the determination could remain a subjective matter, taking into account the examiner’s training and experience as well as both the quantity and quality of comparable details. That is to say, an examiner does not have to observe a minimum number of points in agreement between the unknown and the known print to reach a common-source conclusion, but can instead rely on his or her training and experience to reach the correct conclusion.
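To make that distinction concrete, here is a minimal, purely hypothetical sketch (in Python) of what a rigid point-count standard would look like if one were imposed. The threshold value and the function name are invented for illustration only; no such fixed rule governs U.S. examinations, which is exactly the point of the paragraph above.

```python
# Purely illustrative: a rigid point-count rule of the kind the U.S. courts
# chose NOT to impose. The threshold below is an arbitrary example for this
# sketch, not a real standard used anywhere.

MIN_POINTS_IN_AGREEMENT = 12  # hypothetical cut-off


def rigid_point_count_rule(points_in_agreement: int) -> str:
    """Reach a conclusion purely by counting points, ignoring their quality."""
    if points_in_agreement >= MIN_POINTS_IN_AGREEMENT:
        return "common source"
    return "insufficient agreement"


# A real examination also weighs the quality of each point and draws on the
# examiner's training and experience, which a single number cannot capture.
print(rigid_point_count_rule(14))  # common source
print(rigid_point_count_rule(9))   # insufficient agreement
```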

As you may have gathered, part of a fingerprint comparison depends on the person doing the examination. An examiner needs to be able to see the fingerprint in a specific way, store that image in his or her memory, and then recall it when comparing it to another fingerprint. A person’s ability to recognize shapes, small details, and other spatial relationships is crucial to the job of a fingerprint examiner, and can be considered an art form. It is a learned skill, requiring practice and concentration, and one that is ultimately susceptible to variation.

So, if fingerprint examination is an art, where is the science?

The science of fingerprint comparisons

Fingerprint examinations are deeply rooted in scientific principles. The whole field is based on the recognition that no two people will ever have the same fingerprints, and that an individual’s fingerprints remain unchanged throughout life. Based on these premises, the science of fingerprint comparisons should, in theory, have no error: either a fingerprint found at a crime scene came from the same source as the known print or it did not, with no in-between. The examiner’s methodology is also based on the scientific method. An examiner follows the ACE-V methodology for fingerprint comparisons, an acronym for the stages of the examination (a minimal sketch of this flow follows below):

  • Analysis – the examiner looks at the unknown print and determines whether there is sufficient quantity and quality of friction ridge detail to conduct an examination.
  • Comparison – the examiner looks for class and individual characteristics in the unknown fingerprint and compares them with the known prints.
  • Evaluation – the examiner weighs the similarities and dissimilarities between the prints and reaches a conclusion: exclusion (the prints came from different sources), individualization (the prints came from a common source), or no conclusion (there is an insufficient amount of information to reach one).
  • Verification – a second examiner conducts an independent examination of the same evidence to verify the first conclusion.

Technically speaking, the two examiners should reach the same conclusion, since the science behind the examination should lead them to the same results. In this way, fingerprint comparisons should be reproducible and accurate because they are founded on scientific principles.
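For readers who like to see a workflow laid out explicitly, here is a minimal sketch (in Python) of the ACE-V flow described above. The stage names and the three possible conclusions come from the methodology itself; the function names and the boolean inputs are hypothetical placeholders for what, in practice, is an examiner’s expert judgment rather than a measurement.

```python
from enum import Enum


class Conclusion(Enum):
    EXCLUSION = "the prints came from different sources"
    INDIVIDUALIZATION = "the prints came from a common source"
    NO_CONCLUSION = "insufficient information to reach a conclusion"


def ace(sufficient_detail: bool,
        unexplained_dissimilarities: bool,
        sufficient_agreement: bool) -> Conclusion:
    """One examiner's Analysis, Comparison, and Evaluation.

    The boolean inputs stand in for judgments that, in practice, come from
    the examiner's training and experience.
    """
    # Analysis: is there enough quantity and quality of friction ridge detail?
    if not sufficient_detail:
        return Conclusion.NO_CONCLUSION
    # Comparison and Evaluation: weigh the similarities and dissimilarities.
    if unexplained_dissimilarities:
        return Conclusion.EXCLUSION
    if sufficient_agreement:
        return Conclusion.INDIVIDUALIZATION
    return Conclusion.NO_CONCLUSION


def ace_v(first: Conclusion, second: Conclusion) -> Conclusion:
    """Verification: a second examiner independently repeats the examination."""
    if first != second:
        raise ValueError("conclusions differ – the result is not verified")
    return first


# Both examiners judge there to be sufficient detail and sufficient agreement.
examiner_1 = ace(True, unexplained_dissimilarities=False, sufficient_agreement=True)
examiner_2 = ace(True, unexplained_dissimilarities=False, sufficient_agreement=True)
print(ace_v(examiner_1, examiner_2).value)  # the prints came from a common source
```

Note that the verification step simply checks that two independent passes agree, which is precisely where the report’s “does not guarantee the same results” criticism bites.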

Understanding the argument

So, why does the report state that fingerprint science “does not guarantee that two analysts following it will obtain the same results”? Well, because essentially it does not. The emphasis in that statement is on the word “guarantee”. Whenever humans are involved in anything, there is a chance for error, even though in theory there should be none in fingerprint science. The tone of the statement implies that each time two examiners look at the same evidence they are just as likely to reach completely different conclusions, even though the science dictates that they should reach the same conclusion every time. Should that discredit fingerprint science? Some people believe it should, especially since the error rate for fingerprint examinations has not been established. The scientific community goes to great lengths to try to eliminate human error, instituting measures such as random proficiency testing, continuing education, and board certification to reveal the human error rate and to prevent unqualified individuals from becoming experts in the field. But as we have previously discussed, there is a human element to fingerprint science that even the courts have promoted as crucial to allowing fingerprint examiners to do their job effectively.

There are automated forms of fingerprint comparison, such as AFIS (the Automated Fingerprint Identification System), which is used to scan databases for candidate matches; however, computers are not “authorized” to conduct an examination in lieu of humans, since a computer cannot go to court and testify to the reliability of its examination. The U.S. still mandates that a fingerprint comparison be done by a person, since that is considered the most reliable way of ensuring it is done properly. Computers cannot deal with variation the way humans can, and they cannot explain differences that appear between fingerprints from the same person. These philosophies hold that a system that is too rigid is actually detrimental to fingerprint science, since it restricts the degree to which a person’s training and experience can contribute to a fingerprint comparison.

That leaves us with a Catch-22. On the one hand, humans are susceptible to error, even when measures are taken to eliminate those errors. On the other hand, computers, which do not make errors, are not qualified to do fingerprint comparisons. So we are left with a situation that demands perfection from imperfect beings. While that does not excuse the errors that occur, it should help us understand why they may occur, and it should drive us to pursue perfection in our profession. I agree with the report’s call for more research into the human error rate, proficiency testing, advanced training, and stringent certification of examiners and labs, but let’s put the argument in context and understand how it can be applied effectively.



