Background: Advanced statistical modeling techniques may help predict health outcomes. However, these techniques do not always outperform traditional approaches such as regression. In this study, we externally validated five modeling strategies for predicting the disability of community-dwelling older people in the Netherlands. Methods: We analyzed data from five studies of community-dwelling older people in the Netherlands. To predict the total disability score as measured with the Groningen Activity Restriction Scale (GARS), we used fourteen predictors measured with the Tilburg Frailty Indicator (TFI). Both the TFI and the GARS are self-report questionnaires. Five statistical modeling techniques were evaluated: general linear model (GLM), support vector machine (SVM), neural net (NN), recursive partitioning (RP), and random forest (RF). Each model was developed on one of the five data sets and then applied to each of the four remaining data sets. We assessed model performance with calibration characteristics, the correlation coefficient, and the root mean squared error. Results: The GLM, SVM, RP, and RF models showed satisfactory performance characteristics when validated on the validation data sets. All models performed poorly on the deviating data set, both for development and validation, because its baseline characteristics deviated from those of the other data sets. Conclusion: The performance of four models (GLM, SVM, RP, RF) on the development data sets was satisfactory. This also held for the validation data sets, except when these models were developed on the deviating data set. The NN models performed much worse on the validation data sets than on the development data sets.
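To make the validation scheme concrete, the Python sketch below develops each of the five techniques on one cohort and validates it on the four remaining cohorts, reporting the two error measures named above. It is a minimal sketch using scikit-learn stand-ins for the five techniques; the cohort data structure is an assumption, and the original analysis need not have used these libraries.

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.tree import DecisionTreeRegressor
    from sklearn.ensemble import RandomForestRegressor

    # Stand-ins for the five techniques (GLM, SVM, NN, RP, RF).
    MODELS = {
        "GLM": LinearRegression,
        "SVM": SVR,
        "NN": lambda: MLPRegressor(max_iter=2000),
        "RP": DecisionTreeRegressor,
        "RF": RandomForestRegressor,
    }

    def external_validation(cohorts):
        """Develop each model on one cohort, validate on the four others.

        `cohorts` maps a study id to (X, y): the fourteen TFI predictors
        and the total GARS disability score (a hypothetical layout).
        """
        results = {}
        for name, make_model in MODELS.items():
            for dev_id, (X_dev, y_dev) in cohorts.items():
                model = make_model().fit(X_dev, y_dev)
                for val_id, (X_val, y_val) in cohorts.items():
                    if val_id == dev_id:
                        continue  # keep only external validation pairs
                    pred = model.predict(X_val)
                    results[(name, dev_id, val_id)] = {
                        "rmse": float(np.sqrt(np.mean((pred - y_val) ** 2))),
                        "r": float(np.corrcoef(pred, y_val)[0, 1]),
                    }
        return results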
BACKGROUND: Prediction models and prognostic scores have become increasingly popular in both clinical practice and clinical research, for example to aid risk-based decision making or to control for confounding. In many medical fields a large number of prognostic scores are available, but practitioners may find it difficult to choose between them owing to a lack of external validation and of head-to-head comparisons. METHODS: Borrowing methodology from network meta-analysis, we describe an approach to Multiple Score Comparison meta-analysis (MSC) that permits concurrent external validation and comparison of prognostic scores using individual patient data (IPD) from a large-scale international collaboration. We describe the challenges in adapting network meta-analysis to the MSC setting, for instance the need to explicitly include correlations between scores at the cohort level, and how to handle many multi-score studies. We propose first using the IPD to compute cohort-level aggregate discrimination or calibration measures, comparing all scores to a common comparator. Standard network meta-analysis techniques can then be applied, taking care to account for correlation structures in cohorts with multiple scores. Transitivity, consistency, and heterogeneity are also examined. RESULTS: We provide a clinical application, comparing prognostic scores for 3-year mortality in patients with chronic obstructive pulmonary disease (COPD) using data from a large-scale collaborative initiative. We focus on the discriminative properties of the prognostic scores. Our results show clear differences in performance, with ADO and eBODE discriminating mortality better than the other scores considered. The assumptions of transitivity and of local and global consistency were not violated, and heterogeneity was small. CONCLUSIONS: We applied a network meta-analytic methodology to externally validate and concurrently compare the prognostic properties of clinical scores. Our large-scale external validation indicates that ADO and eBODE have the best discriminative properties for predicting 3-year mortality in patients with COPD.
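As a hedged illustration of the first step described in the Methods, the Python sketch below reduces IPD to cohort-level discrimination contrasts against a common comparator (ADO is used here only because it appears in the Results; the data layout is hypothetical). These contrasts, together with their correlation structure in multi-score cohorts, would then feed standard network meta-analysis machinery.

    from sklearn.metrics import roc_auc_score

    def cohort_level_contrasts(cohorts, comparator="ADO"):
        """Per cohort: AUC of each available score minus the comparator's AUC.

        `cohorts` maps a cohort id to a dict holding the 3-year mortality
        outcome and one array of values per prognostic score present in
        that cohort (an assumed layout, not the collaboration's format).
        """
        contrasts = {}
        for cid, data in cohorts.items():
            if comparator not in data["scores"]:
                continue  # a contrast needs the common comparator in the cohort
            y = data["mortality_3yr"]
            auc_ref = roc_auc_score(y, data["scores"][comparator])
            for score, values in data["scores"].items():
                if score != comparator:
                    contrasts[(cid, score)] = roc_auc_score(y, values) - auc_ref
        return contrasts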
A review was completed for the verification and validation (V&V) of the (Excel) BioGas simulator (EBS) model. The EBS model calculates the environmental impact of biogas production pathways using Material and Energy Flow Analysis, time-dependent dynamics, geographic information, and Life Cycle Analysis. In this article a V&V method is researched, selected, and applied to validate the EBS model. Through the use of this method, mistakes in the model are resolved, the strengths and weaknesses of the model are identified, and the concept of the model is tested and strengthened. The validation process not only improves the model but also helps the modelers widen their focus and scope. This article can therefore also be used in the validation process of similar models. The main result of the V&V process indicates that the EBS model is valid; however, it should be considered an expert model and should only be used by expert users.
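To give one concrete example of the kind of test such a V&V process can include, the Python sketch below checks mass-balance closure for a pathway, a standard verification step for flow-analysis models; the function and its interface are hypothetical and not the EBS model's actual API.

    def mass_balance_closes(inputs_kg, outputs_kg, tolerance=0.01):
        """Verify that a pathway's mass balance closes within `tolerance`.

        A Material and Energy Flow Analysis model should conserve mass:
        the total inputs and total outputs of a pathway may differ only
        by a small numerical margin (1% here, an assumed threshold).
        """
        total_in, total_out = sum(inputs_kg), sum(outputs_kg)
        return abs(total_in - total_out) <= tolerance * total_in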
The bi-directional communication link with the physical system is one of the main distinguishing features of the Digital Twin paradigm. This continuous flow of data and information along its entire life cycle is what makes a Digital Twin a dynamic and evolving entity rather than merely a high-fidelity copy. There is increasing recognition of the importance of a well-functioning digital twin in critical infrastructures, such as water networks. The configuration of water network assets, such as valves, pumps, boosters, and reservoirs, must be carefully managed and water flows rerouted, often manually, which is a slow and costly process. State-of-the-art water management systems assume a relatively static physical model that requires manual corrections. Any change in the network conditions or topology, due to degraded control mechanisms, ongoing maintenance, or changes in the external context such as a heat wave, makes the existing model diverge from reality. Our project proposes a unique approach to real-time monitoring of the water network that handles automated changes to the model, based on the measured discrepancy between the model and the obtained IoT sensor data. We aim for an evolutionary approach that applies detected changes to the model and updates it in real time without the need for additional model validation and calibration. State-of-the-art deep learning algorithms will be applied to create a machine-learning, data-driven simulation of the water network system. Moreover, unlike most research, which focuses on the detection of network problems and sensor faults, we will investigate the possibility of going a step further: continuing to use the degraded network and malfunctioning sensors until maintenance and repairs can take place, which can take a long time. We will create a formal model and analyse the effect of different malfunctions on data readings, in order to construct a mitigating mechanism that is tailor-made for each malfunction type and allows the data to remain in use, albeit in a limited capacity.
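A minimal sketch of the proposed discrepancy-driven loop is given below in Python: it compares the model's predictions with the latest IoT readings and applies an update when they diverge. The sensor and model interfaces and the threshold are assumptions for illustration, not the project's actual design.

    import numpy as np

    def discrepancy(simulated, measured):
        """Root mean squared discrepancy between model output and sensor data."""
        diff = np.asarray(simulated) - np.asarray(measured)
        return float(np.sqrt(np.mean(diff ** 2)))

    def monitor_step(model, sensors, threshold=0.05):
        """One monitoring cycle: update the model when it diverges from reality.

        `model` and `sensors` are hypothetical interfaces: the model can
        predict readings at the sensor locations and ingest corrections,
        and the sensors deliver a snapshot of current measurements.
        """
        measured = sensors.read()
        simulated = model.predict(sensors.locations)
        if discrepancy(simulated, measured) > threshold:
            model.update(measured)  # apply the detected change in real time
        return model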
The Water Framework Directive imposes challenges regarding the environmental risk of plastic pollution. The quantification, qualification, monitoring, and risk assessment of nanoplastics and small microplastics (<20 µm) are crucial. Environmental nano- and microplastics (NMPs) are highly diverse, and accounting for this diversity poses a big challenge in developing a comprehensive understanding of NMP detection, quantification, fate, and risks. Two major issues currently limit progress within this field: (a) validation and broadening of the current analytical tools, and (b) uncertainty with respect to NMP occurrence and behaviour at small scales (<20 µm). Tracking NMPs in environmental systems is currently limited to micron-sized plastics due to the size detection limit of the available analytical techniques; there are currently no methods that can detect nanoplastics in real environmental systems. A major bottleneck is the incompatibility between commercially available NMPs and those generated from the degradation of plastic fragments in the environment. To track nanoplastics in environmental and biological systems, some research groups have synthesized metal-doped nanoplastics, often limited to one polymer type and using high concentrations of surfactants, rendering these synthesized nanoplastics unrepresentative of nanoplastics found in the real environment. NanoManu proposes using Electrohydrodynamic Atomization to generate metal-doped NMPs of different polymer types, sizes, and shapes that are representative of real environmental nanoplastics. The synthesized nanoplastics will be used as model particles in environmental studies, and will be characterized and tested using different analytical methods, e.g., SEM-EDX, TEM, GCpyrMS, FFF, µFTIR, and SP-ICP-MS. NanoManu is a first and critical step towards generating comprehensive, state-of-the-art analytical and environmental knowledge on the environmental fate and risks of nanoplastics. This knowledge impacts current risk assessment tools, efficient interventions to limit emissions, and adequate regulations related to NMPs.