Forensic reports use various types of conclusions, such as a categorical (CAT) conclusion or a likelihood ratio (LR). To correctly assess the evidence, users of forensic reports need to understand the conclusion and its evidential strength. The aim of this paper is to study how criminal justice professionals interpret the evidential strength of forensic conclusions. In an online questionnaire, 269 professionals assessed 768 reports on fingerprint examination and answered questions that measured self-proclaimed and actual understanding of the reports and conclusions. The reports contained CAT, verbal LR, and numerical LR conclusions with low or high evidential strength and were assessed by crime scene investigators, police detectives, public prosecutors, criminal lawyers, and judges. The results show that about a quarter of all questions measuring actual understanding of the reports were answered incorrectly. Among the weak conclusions, the CAT conclusion was best understood; the three strong conclusions were all assessed similarly. The weak CAT conclusion correctly emphasizes the uncertainty inherent in any conclusion type used. However, most participants underestimated the strength of this weak CAT conclusion compared with the other weak conclusion types. Comparing self-proclaimed with actual understanding, the professionals in general overestimated their understanding of all conclusion types.
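For reference, the likelihood ratio that underlies the verbal and numerical LR conclusions can be written as below; the hypotheses shown are generic illustrations, not the exact propositions used in the reports of this study.

```latex
\[
\mathrm{LR} \;=\; \frac{\Pr(E \mid H_{1})}{\Pr(E \mid H_{2})}
\]
% E:   the forensic findings, e.g. the observed fingerprint correspondence
% H_1: the first proposition, e.g. "the trace was left by the person of interest"
% H_2: the alternative,       e.g. "the trace was left by an unknown person"
% LR > 1 supports H_1 and LR < 1 supports H_2; a verbal LR conclusion maps
% numerical ranges of the LR onto phrases such as "moderate" or "strong" support,
% whereas a CAT conclusion reports a category (e.g. identification or exclusion).
```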
In this study, we assessed to what extent data on transfer, persistence, prevalence, and recovery (TPPR) obtained with an older STR typing kit can be used in an activity-level evaluation for a case profiled with a more modern STR kit. Newer kits generally cover more loci and may show higher sensitivity, especially when reduced reaction volumes are used, which could increase the evidential value at the source level. On the other hand, the increased genotyping information may invoke a higher number of contributors in the weight-of-evidence calculations, which could affect the evidential values as well. An activity scenario well explored in earlier studies [1,2] was repeated using volunteers with known DNA profiles. DNA extracts were analyzed with three different approaches: using the optimal DNA input for (1) an older and (2) a newer STR typing system, and (3) using a standard, volume-based input combined with replicate PCR analysis with only the newer STR kit. The genotyping results were analyzed for aspects such as the percentage of detected alleles and the relative peak height contribution of background DNA and of the contributors known to be involved in the activity. Next, source-level LRs were calculated, and the same trends were observed with regard to inclusionary and exclusionary LRs for persons who had or had not been in direct contact with the sampled areas. We subsequently assessed the impact on the outcome of the activity-level evaluation in an exemplary case by applying the assigned probabilities to a Bayesian network. We infer that data from different STR kits can be combined in activity-level evaluations.
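As a heavily simplified illustration of how TPPR probabilities enter such an evaluation (the study itself applied a full Bayesian network with case-specific propositions), the activity-level likelihood ratio for detecting the person of interest's DNA on the sampled item can be sketched as:

```latex
\[
\mathrm{LR}_{\text{activity}}
  \;=\; \frac{\Pr(E \mid H_{p}, I)}{\Pr(E \mid H_{d}, I)}
  \;\approx\; \frac{t}{b}
\]
% E:   the POI's DNA is detected on the sampled area
% H_p: the POI performed the disputed activity; H_d: someone else did
% t:   combined probability of transfer, persistence and recovery of the POI's
%      DNA given H_p (the kind of probability estimated in TPPR experiments)
% b:   probability that the POI's DNA is present anyway under H_d, e.g. as
%      background or through prevalence in the relevant environment
```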
BACKGROUND: Approximately 5%-10% of elementary school children show delayed development of fine motor skills. To address these problems, detection is required. Current assessment tools are time-consuming, require a trained supervisor, and are not motivating for children. Sensor-augmented toys and machine learning have been presented as possible solutions to address this problem.
OBJECTIVE: This study examines whether sensor-augmented toys can be used to assess children's fine motor skills. The objectives were to (1) predict the outcome of the fine motor skill part of the Movement Assessment Battery for Children Second Edition (fine MABC-2) and (2) study the influence of the classification model, game, type of data, and level of difficulty of the game on the prediction.
METHODS: Children in elementary school (n=95, age 7.8 [SD 0.7] years) performed the fine MABC-2 and played 2 games with a sensor-augmented toy called "Futuro Cube." The game "roadrunner" focused on speed, while the game "maze" focused on precision. Each game had several levels of difficulty. While playing, both sensor and game data were collected. Four supervised machine learning classifiers were trained with these data to predict the fine MABC-2 outcome: k-nearest neighbor (KNN), logistic regression (LR), decision tree (DT), and support vector machine (SVM). First, we compared the performances of the games and classifiers. Subsequently, we compared the levels of difficulty and types of data for the classifier and game that performed best on accuracy and F1 score. For all statistical tests, we used α=.05.
RESULTS: The highest mean accuracy (0.76) was achieved with the DT classifier trained on both sensor and game data obtained from playing the easiest and the hardest level of the roadrunner game. Significant differences in accuracy were found between data obtained from the roadrunner and maze games (DT, P=.03; KNN, P=.01; LR, P=.02; SVM, P=.04). No significant differences in accuracy were found between the best-performing classifier and the other 3 classifiers for either the roadrunner game (DT vs KNN, P=.42; DT vs LR, P=.35; DT vs SVM, P=.08) or the maze game (DT vs KNN, P=.15; DT vs LR, P=.62; DT vs SVM, P=.26). For the DT classifier trained with sensor and game data from the roadrunner game, only the best-performing level of difficulty (the combination of the easiest and hardest levels) was significantly more accurate than the combination of the easiest and middle levels (P=.046).
CONCLUSIONS: The results of our study show that sensor-augmented toys can efficiently predict the fine MABC-2 scores of children in elementary school. Selecting the game type (focusing on speed or precision) and data type (sensor or game data) is more important for the resulting performance than selecting the machine learning classifier or level of difficulty.
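A minimal sketch of the described classifier comparison is given below. The features, labels, and hyperparameters are placeholders; the study's actual sensor and game features, preprocessing, and level-of-difficulty splits are not reproduced here.

```python
# Sketch only: random placeholder data stand in for the sensor and game
# features of the 95 children and their binary fine MABC-2 outcome.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(95, 12))      # placeholder sensor + game features per child
y = rng.integers(0, 2, size=95)    # placeholder fine MABC-2 class labels

# The four supervised classifiers compared in the study.
classifiers = {
    "KNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
}

# Compare classifiers on mean cross-validated accuracy and F1 score.
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    f1 = cross_val_score(clf, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: accuracy={acc:.2f}, F1={f1:.2f}")
```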