Abstract: The Information Axiom in axiomatic design states that minimising information is always desirable. Information in design may be considered a form of chaos and is therefore unwanted: chaos leads to a lack of regularities in the design, and unregulated issues tend to behave stochastically. It is hard to satisfy the functional requirements (FRs) of a design when it behaves stochastically. Following a recently presented, somewhat broader categorization of information, information appears to cause the most complication when it moves from the unrecognised to the recognised. The paper investigates how unrecognised information may be found and, once found, how it can be addressed. Best practices for these investigations are derived from the Cynefin methodology. The Axiomatic Maturity Diagram is applied to address unrecognised information and to investigate how order can be restored. Two cases serve as examples to explain the vexatious behaviour of unrecognised information.
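For background (standard axiomatic design theory, not quoted from this paper): information content is conventionally defined from the probability of satisfying each FR, so minimising information amounts to maximising that probability. A minimal sketch of the usual definition:

```latex
% Standard axiomatic-design definition of information content (Suh).
% Background only, not taken from the paper: p_i is the probability
% that FR_i is satisfied.
I_i = \log_2 \frac{1}{p_i},
\qquad
I_{\mathrm{total}} = \sum_{i=1}^{n} I_i
\quad \text{(for an uncoupled design with independent FRs)}
```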
Promising new tools are currently under development that will enable crime scene investigators to analyze fingerprints or DNA traces at the crime scene. While these technologies could help to find a perpetrator early in the investigation, they may also strengthen confirmation bias when an incorrect scenario directs the investigation at that early stage. In this study, 40 experienced crime scene investigators (CSIs) investigated a mock crime scene to study the influence of rapid identification technologies on the investigation. This initial study shows that receiving identification information during the investigation results in more accurate scenarios. In general, CSIs do not so much reconstruct the event that took place as follow a "whodunit" routine: their focus is on finding perpetrator traces, at the risk of missing important information at the start of the investigation. Furthermore, identification information was mostly integrated into their final scenarios when the results of the analysis matched their expectations. CSIs tend to look for confirmation, and the technology had no influence on this tendency. CSIs should be made aware of the risks of this strategy, as important offender information could be missed or innocent people could be wrongfully accused.
Objective: Acknowledging study limitations in a scientific publication is a crucial element of scientific transparency and progress. However, limitation reporting is often inadequate. Natural language processing (NLP) methods could support automated reporting checks, improving research transparency. In this study, our objective was to develop a dataset and NLP methods to detect and categorize self-acknowledged limitations (e.g., sample size, blinding) reported in randomized controlled trial (RCT) publications.

Methods: We created a data model of limitation types in RCT studies and annotated a corpus of 200 full-text RCT publications using this data model. We fine-tuned BERT-based sentence classification models to recognize limitation sentences and their types. To address the small size of the annotated corpus, we experimented with data augmentation approaches, including Easy Data Augmentation (EDA) and Prompt-Based Data Augmentation (PromDA). We applied the best-performing model to a set of about 12K RCT publications to characterize self-acknowledged limitations at a larger scale.

Results: Our data model consists of 15 categories and 24 sub-categories (e.g., Population and its sub-category DiagnosticCriteria). We annotated 1090 instances of limitation types in 952 sentences (4.8 limitation sentences and 5.5 limitation types per article). A fine-tuned PubMedBERT model for limitation sentence classification improved upon our earlier model by about 1.5 absolute percentage points in F1 score (0.821 vs. 0.8), with statistical significance. Our best-performing limitation type classification model, PubMedBERT fine-tuned with PromDA (Output View), achieved an F1 score of 0.7, improving upon the vanilla PubMedBERT model by 2.7 percentage points, also with statistical significance.

Conclusion: The model could support automated screening tools that journals can use to draw authors' attention to reporting issues. Automatic extraction of limitations from RCT publications could benefit peer review and evidence synthesis, and support advanced methods to search and aggregate evidence from the clinical trial literature.
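To make the general approach concrete, here is a minimal sketch of fine-tuning a PubMedBERT-style encoder for binary limitation-sentence classification with the HuggingFace transformers Trainer. The checkpoint name, label scheme, hyperparameters, and toy sentences are illustrative assumptions; the paper's actual training setup, multi-label type classifier, and PromDA augmentation are not reproduced here.

```python
# Hedged sketch: binary "is this sentence a self-acknowledged limitation?"
# classifier, fine-tuned from a PubMedBERT-style checkpoint.
# Checkpoint name, data, and hyperparameters are assumptions for illustration.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

CHECKPOINT = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"


class SentenceDataset(Dataset):
    """Wraps (sentence, label) pairs; label 1 = limitation sentence."""

    def __init__(self, sentences, labels, tokenizer, max_len=128):
        self.enc = tokenizer(sentences, truncation=True,
                             padding="max_length", max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item


tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(
    CHECKPOINT, num_labels=2)

# Toy examples standing in for the annotated corpus described in the paper.
train_ds = SentenceDataset(
    ["Our sample size was small and from a single center.",
     "Patients received 10 mg of the study drug daily."],
    [1, 0], tokenizer)

args = TrainingArguments(output_dir="limitation-clf",
                         num_train_epochs=3,
                         per_device_train_batch_size=16,
                         learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```

In this setup, a sentence-type classifier for the 15 limitation categories would follow the same pattern with num_labels raised accordingly; augmentation methods such as EDA would simply expand the training sentences before they reach the dataset wrapper.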