National Board of Forensic Evaluators
Determination of Cutoff Score NBFE Written Examination
by Ronald W. Morrison, Ph.D., Research Psychologist
The National Board of Forensic Evaluators (NBFE) examination for certification consists of 100 multiple-choice questions that test a forensic candidate's knowledge across 20 forensic domains.
To determine whether this exam is suitable as a certification tool for forensic experts, rigorous statistical testing was completed for examination validity, item analysis, variability, and cutoff score determination. The statistical measures and procedures used, and their results, are summarized below.
Validity refers to the degree to which an exam truly measures what it is intended to measure. The term is global in that it comprises three validity sub-types: content validity, concurrent validity, and predictive validity.
According to content validity, a test or exam is valid to the extent that it represents all of the content of a particular construct. The current National Board of Forensic Evaluators (NBFE) and Hoffman Institute examination for NBFE certification measures knowledge representative of 20 forensic domains (constructs). The exam was constructed to measure a candidate's knowledge across a broad spectrum of current forensic practice, which increases the exam's content validity and its utility as a certification tool.
According to concurrent validity, a test or exam is valid to the extent that it varies directly with a measure of the same construct, or inversely with a measure of an opposite construct. The current National Board of Forensic Evaluators (NBFE) and Hoffman Institute examination for NBFE certification utilized the input of 16 forensic subject matter experts (SMEs), i.e., forensic practitioners, who served as judges and exam evaluators. These forensic SMEs carefully reviewed the content of the exam, question by question, to determine whether the questions tested the same domains (constructs) as other forensic measures. The SMEs concluded that the items comprising the current version of the examination correlate strongly with other measures of forensic expertise.
According to predictive validity, a valid exam must be strongly correlated with another (valid) measure, such that one can make a valid prediction knowing only the score on one of the measures. A measure of predictive ability for the current National Board of Forensic Evaluators (NBFE) and Hoffman Institute examination was obtained by having the 16 forensic SMEs take the exam. Each judge estimated the score they would receive based on their expertise and their scores on other forensic measures. The statistical results imply that the current NBFE examination is a good predictor of a candidate's knowledge of essential forensic constructs.
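The predictive-validity check described above amounts to correlating each judge's predicted score with the score actually obtained. A minimal sketch, using invented predicted and actual scores (not the SMEs' real data), might look like this:

```python
# Hypothetical illustration of a predictive-validity check: each judge's
# self-predicted exam score is correlated with the score actually earned.
# A strong positive correlation supports predictive validity.
# The scores below are invented for demonstration only.

import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mean_x, mean_y = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    ss_x = sum((x - mean_x) ** 2 for x in xs)
    ss_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (ss_x * ss_y) ** 0.5

# Invented predicted vs. actual exam scores for 8 judges
predicted = [85, 90, 88, 80, 92, 86, 84, 89]
actual    = [87, 91, 86, 82, 93, 85, 85, 90]

r = pearson_r(predicted, actual)
print(f"Pearson r = {r:.3f}")
```

With these invented figures the correlation comes out strongly positive, which is the pattern the report describes for the actual SME data.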
Item analysis is a procedure used to improve the validity and reliability of multiple-choice tests. It describes the statistical analyses that measure the effectiveness of individual test items. A useful, valid, and reliable test covers a specific body of material in such a way that an examinee is unlikely to do well on one part of the exam and poorly on another. Item analysis is an iterative process in which statistics are collected for each exam question, and "poor" questions are replaced with "good" questions. A question is determined to be "poor" when the likelihood of an examinee answering it correctly is not positively correlated with the examinee's overall exam score or knowledge of the material. By replacing "poor" questions with "good" ones, an exam can ultimately discriminate "good" candidates from "poor" candidates. The current National Board of Forensic Evaluators (NBFE) and Hoffman Institute examination for NBFE certification has undergone several stages of item analysis and is statistically sound in this regard.
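The flagging rule above can be sketched by computing, for each item, the correlation between a 0/1 correctness indicator and examinees' total scores (a point-biserial discrimination index). The response matrix and the 0.2 flag threshold below are invented for illustration:

```python
# Sketch of item analysis: for each question, correlate correctness (0/1)
# with total score. Items with low or negative discrimination are flagged
# as "poor" candidates for replacement. Data are invented for illustration.

import statistics

# rows = examinees, columns = items; 1 = correct, 0 = incorrect
responses = [
    [1, 1, 0, 1],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
]

totals = [sum(row) for row in responses]

def discrimination(item_index):
    """Point-biserial correlation of one item with total scores."""
    item = [row[item_index] for row in responses]
    mean_i, mean_t = statistics.mean(item), statistics.mean(totals)
    cov = sum((i - mean_i) * (t - mean_t) for i, t in zip(item, totals))
    ss_i = sum((i - mean_i) ** 2 for i in item)
    ss_t = sum((t - mean_t) ** 2 for t in totals)
    return cov / (ss_i * ss_t) ** 0.5

for j in range(4):
    d = discrimination(j)
    flag = "good" if d > 0.2 else "poor"
    print(f"item {j}: discrimination = {d:+.2f} ({flag})")
```

In this toy data, item 2 correlates negatively with total score, exactly the kind of "poor" item the iterative procedure would replace.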
Variability is a measure of dispersion. It refers to the extent to which the scores in a distribution differ from each other, or the extent to which they differ from their mean. According to the descriptive statistics for the current National Board of Forensic Evaluators (NBFE) examination for NBFE certification, the mean and median scores for the 16 forensic SMEs were almost identical (86.63 and 87.0, respectively). This suggests a normal distribution. Also, the standard deviation of the 16 SMEs' test scores was 4.63, indicating that their scores were tightly clustered (i.e., low variability). This suggests that the current NBFE examination for certification is a valid and reliable measure of forensic domain knowledge.
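The descriptive statistics cited above (mean, median, sample standard deviation) are straightforward to compute. The 16 scores below are invented stand-ins, not the actual SME scores:

```python
# Descriptive statistics of the kind reported in the analysis. The scores
# are invented placeholders, not the real SME data.

import statistics

scores = [80, 82, 83, 84, 85, 85, 86, 87, 87, 88, 88, 89, 90, 91, 92, 93]

mean = statistics.mean(scores)
median = statistics.median(scores)
sd = statistics.stdev(scores)  # sample standard deviation (n - 1 denominator)

print(f"mean = {mean:.2f}, median = {median}, SD = {sd:.2f}")
# A mean close to the median suggests an approximately symmetric (roughly
# normal) distribution; a small SD indicates tightly clustered scores.
```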
In certification testing, it is common practice to use a criterion-referenced approach to establish a cutoff score for an examination. The purpose of a cutoff score is to determine which candidates possess sufficient knowledge (the minimal criterion) for certification. To determine a cutoff score for the current National Board of Forensic Evaluators (NBFE) and Hoffman Institute examination for NBFE certification, a modified Angoff method was used. Under the modified Angoff method, every item on the exam is assessed in terms of how likely a minimally acceptable or competent candidate is to answer that item correctly. Specifically, a group of judges is asked to independently imagine a group of minimally competent candidates, representative of candidates who would receive exactly the cutoff score on the exam. Each judge, working independently, then estimates what proportion of that sample of minimally acceptable candidates would answer each exam item correctly. These estimated proportions are summed for each judge, and the sum represents that judge's cutoff score. The judges' cutoff scores are then averaged, and this average becomes the exam's accepted cutoff score.
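The arithmetic of the modified Angoff procedure described above can be sketched as follows; the ratings (3 judges by 5 items) are invented for illustration:

```python
# Modified Angoff sketch: each judge rates, per item, the proportion of
# minimally competent candidates expected to answer correctly. Summing a
# judge's ratings gives that judge's cutoff; averaging across judges gives
# the exam cutoff. Ratings below are invented for illustration.

import statistics

# judge_ratings[j][i] = judge j's estimated proportion for item i
judge_ratings = [
    [0.80, 0.70, 0.60, 0.90, 0.75],
    [0.85, 0.65, 0.55, 0.95, 0.70],
    [0.75, 0.70, 0.65, 0.85, 0.80],
]

judge_cutoffs = [sum(ratings) for ratings in judge_ratings]
cutoff = statistics.mean(judge_cutoffs)

print("per-judge cutoffs:", [round(c, 2) for c in judge_cutoffs])
print(f"exam cutoff (average): {cutoff:.2f} out of 5 items")
```

On a 100-item exam the per-judge sums fall on a 0-100 scale, so the averaged cutoff can be read directly as a passing score.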
To determine an acceptable cutoff score for the current National Board of Forensic Evaluators (NBFE) examination for NBFE certification, the 16 forensic SMEs, serving as judges, performed the modified Angoff method as described above. According to the results, the judges' average cutoff score for the exam was 72.6. To increase the validity of this estimate, the standard error of the mean (SEM) was computed, and a 95% confidence interval for the cutoff score was constructed. According to this analysis, the standard deviation (SD) of the judges' cutoff estimates was 4.63. A 95% confidence interval was then computed as follows:
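The textbook formulation of this computation, assuming SEM = SD/√n with the reported figures (mean cutoff 72.6, SD 4.63, n = 16 judges) and the usual z value of 1.96, can be sketched as follows; the exact variant used in the original analysis is not reproduced in the text:

```python
# Standard error of the mean and 95% confidence interval, using the
# figures reported above. This is the standard textbook formulation
# (SEM = SD / sqrt(n); CI = mean +/- 1.96 * SEM), shown as a sketch.

import math

mean_cutoff = 72.6   # judges' average cutoff score
sd = 4.63            # SD of the judges' cutoff estimates
n = 16               # number of SME judges

sem = sd / math.sqrt(n)
lower = mean_cutoff - 1.96 * sem
upper = mean_cutoff + 1.96 * sem

print(f"SEM = {sem:.4f}")
print(f"95% CI = ({lower:.2f}, {upper:.2f})")
```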
To determine the appropriate cutoff score from the above analysis, the median score associated with the 95% confidence interval was used; this score was computed to be 78.6. Because the current National Board of Forensic Evaluators (NBFE) and Hoffman Institute examination for NBFE certification contains 100 multiple-choice questions, the computed cutoff score was rounded to the nearest whole number. The final cutoff score is therefore 79.