Abstract: Adjustment and testing of a combination of stochastic and nonstochastic observations is applied to the deformation analysis of a time series of 3D coordinates. Nonstochastic observations are constant values that are treated as if they were observations. They are used to formulate constraints on the unknown parameters of the adjustment problem, and thus describe deformation patterns. If deformation is absent, the epochs of the time series are supposed to be related via affine, similarity or congruence transformations. S-basis invariant testing of deformation patterns is treated. The model is experimentally validated by showing the procedure for a point set of 3D coordinates, determined from total station measurements during five epochs. The modelling of two patterns is shown: the movement of just one point over several epochs, and the movement of several points. Full, rank deficient covariance matrices of the 3D coordinates, resulting from free network adjustments of the total station measurements of each epoch, are used in the analysis.
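A minimal sketch of how nonstochastic observations can enter such an adjustment, in generic notation assumed here for illustration rather than taken from the article: the stochastic observations y (the epoch coordinates with their full, possibly rank deficient covariance matrix Q_yy) and the nonstochastic observations c are stacked in one model,

\[
\begin{pmatrix} y \\ c \end{pmatrix}
= \begin{pmatrix} A \\ C \end{pmatrix} x,
\qquad
D\!\left\{ \begin{pmatrix} y \\ c \end{pmatrix} \right\}
= \begin{pmatrix} Q_{yy} & 0 \\ 0 & 0 \end{pmatrix},
\]

so that the zero-variance rows act as hard constraints Cx = c on the unknown transformation and deformation parameters, and thereby encode a deformation pattern.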
AIM: Implementation of a locally developed evidence-based nursing shift handover blueprint with a bedside-safety-check, to determine the effect size on quality of handover. METHODS: A mixed methods design with: (1) an interrupted time series analysis to determine the effect on handover quality in six domains; (2) descriptive statistics to analyze the discrepancies intercepted by the bedside-safety-check; (3) evaluation sessions to gather experiences with the new handover process. RESULTS: We observed a continued trend of improvement in handover quality and a significant improvement in two domains of handover: organization/efficiency and contents. The bedside-safety-check successfully identified discrepancies in drains, intravenous medications, bandages or general condition and was highly appreciated. CONCLUSION: Use of the nursing shift handover blueprint showed promising results on effectiveness as well as on feasibility and acceptability. However, to enable long-term measurement of effectiveness, evaluation with large-scale interrupted time series or statistical process control is needed.
INTRODUCTION: Delirium in critically ill patients is a common multifactorial disorder that is associated with various negative outcomes. It is assumed that sleep disturbances can result in an increased risk of delirium. This study hypothesized that implementing a protocol that reduces overall nocturnal sound levels improves quality of sleep and reduces the incidence of delirium in Intensive Care Unit (ICU) patients. METHODS: This interrupted time series study was performed in an adult mixed medical and surgical 24-bed ICU. A pre-intervention group of 211 patients was compared with a post-intervention group of 210 patients after implementation of a nocturnal sound-reduction protocol. Primary outcome measures were incidence of delirium, measured by the Intensive Care Delirium Screening Checklist (ICDSC), and quality of sleep, measured by the Richards-Campbell Sleep Questionnaire (RCSQ). Secondary outcome measures were use of sleep-inducing medication, delirium treatment medication, and patient-perceived nocturnal noise. RESULTS: A significant difference in slope in the percentage of delirium was observed between the pre- and post-intervention periods (-3.7% per time period, p=0.02). Quality of sleep was unaffected (0.3 per time period, p=0.85). The post-intervention group used significantly less sleep-inducing medication (p<0.001). Nocturnal noise rating improved after intervention (median: 65, IQR: 50-80 versus 70, IQR: 60-80, p=0.02). CONCLUSIONS: The incidence of delirium in ICU patients was significantly reduced after implementation of a nocturnal sound-reduction protocol. However, reported sleep quality did not improve.
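As an illustration of the interrupted time series (segmented regression) approach used in studies such as the two above, a minimal sketch in Python; the data, number of periods and variable names are hypothetical and not taken from either study:

import numpy as np
import statsmodels.api as sm

# Hypothetical delirium percentages for 10 pre- and 10 post-intervention periods.
pct_delirium = np.array([28, 30, 27, 29, 31, 28, 30, 29, 27, 30,
                         29, 27, 26, 24, 23, 21, 20, 18, 17, 15], dtype=float)
n_pre = 10
t = np.arange(len(pct_delirium))              # time period index
post = (t >= n_pre).astype(float)             # 1 after the intervention starts
t_post = np.where(post == 1, t - n_pre, 0.0)  # time elapsed since the intervention

# Segmented regression: estimates the pre-intervention slope, the level change
# at the intervention, and the change in slope after the intervention.
X = sm.add_constant(np.column_stack([t, post, t_post]))
fit = sm.OLS(pct_delirium, X).fit()
print(fit.params)  # intercept, pre-slope, level change, slope change per period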
Production processes can be made ‘smarter’ by exploiting the data streams that are generated by the machines that are used in production. In particular, these data streams can be mined to build a model of the production process as it was really executed, as opposed to how it was envisioned. This model can subsequently be analyzed and stress-tested to explore possible causes of production problems and to analyze what-if scenarios, without disrupting the production process itself. It has been shown that such models can successfully be used to diagnose possible causes of production problems, including scrap products and machine defects. Ideally, they can even be used to model and analyze production processes that have not been implemented yet, based on data from existing production processes and techniques from artificial intelligence that can predict how the new process is likely to behave in practice in terms of the data that its machines generate. This is especially important in mass customization processes, where the process to create each product may be unique, and can only feasibly be tested using model- and data-driven techniques like the one proposed in this project.

Against this background, the goal of this project is to develop a method and toolkit for mining, modelling and analyzing production processes, using the time series data that is generated by machines, to: (i) analyze the performance of an existing production process; (ii) diagnose causes of production problems; and (iii) certify that a new, not yet implemented, production process leads to high-quality products.

The method is developed by researching and combining techniques from the area of Artificial Intelligence with techniques from Operations Research. In particular, it uses: process mining to relate time series data to production processes; queueing networks to determine likely paths through the production processes and detect anomalies that may be the cause of production problems; and generative adversarial networks to generate likely future production scenarios and to sample scenarios of production problems for diagnostic purposes. The techniques will be evaluated and adapted in implementations at the partners from industry, using a design science approach. In particular, implementations of the method are made for: explaining production problems; explaining machine defects; and certifying the correct operation of new production processes.
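A minimal sketch of the process-mining step (relating machine-generated event data to the executed production process via a directly-follows relation); the column names, stations and events are assumptions for illustration, not data from the project:

from collections import Counter
import pandas as pd

# Hypothetical machine event log: one row per processing step of a product.
events = pd.DataFrame({
    "product_id": [1, 1, 1, 2, 2, 2, 3, 3],
    "station": ["cut", "weld", "paint", "cut", "weld", "paint", "cut", "weld"],
    "timestamp": pd.to_datetime([
        "2024-01-01 08:00", "2024-01-01 08:20", "2024-01-01 09:00",
        "2024-01-01 08:05", "2024-01-01 08:25", "2024-01-01 09:05",
        "2024-01-01 08:10", "2024-01-01 08:30"]),
})

# Directly-follows relation: how often one station is immediately followed by
# another within the same product trace; rare or unexpected pairs can point to
# anomalies in the executed process.
dfg = Counter()
for _, trace in events.sort_values("timestamp").groupby("product_id"):
    stations = trace["station"].tolist()
    dfg.update(zip(stations, stations[1:]))
print(dfg)  # Counter({('cut', 'weld'): 3, ('weld', 'paint'): 2})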
The maritime transport industry is facing a series of challenges due to the phasing out of fossil fuels and the demands of decarbonization, and proposing suitable alternatives is not a straightforward process. While the current generation of ship design software delivers results, there is a clear missed potential in new software technologies like machine learning and data science. This leads to the question: how can we use modern computational technologies like data analysis and machine learning to enhance the ship design process, considering the tools available in the wider industry and the industry’s readiness to embrace new technologies and solutions? The objective of this PD project is to bridge the critical gap between the maritime industry's pressing need for innovative solutions for a more agile ship design process and the current limitations of the available software tools and methodologies, by implementing the new generation of computational technologies, such as big data science and machine learning, into ship-design-specific software.
In order to stay competitive and respond to the increasing demand for steady and predictable aircraft turnaround times, process optimization has been identified by Maintenance, Repair and Overhaul (MRO) SMEs in the aviation industry as their key element for innovation. Indeed, MRO SMEs have always been looking for options to organize their work as efficiently as possible, which often resulted in applying lean business organization solutions. However, their aircraft maintenance processes remain characterized by unpredictable process times and material requirements, and lean business methodologies are unable to change this fact. This problem is often compensated for by large buffers in terms of time, personnel and parts, leading to a relatively expensive and inefficient process.

To tackle this problem of unpredictability, MRO SMEs want to explore the possibilities of data mining: the exploration and analysis of large quantities of their own historical maintenance data, with the aim of discovering useful knowledge from seemingly unrelated data. Ideally, it will help predict failures in the maintenance process and thus better anticipate repair times and material requirements. With this, MRO SMEs face two challenges. First, the data they have available is often fragmented and non-transparent, while standardized data availability is a basic requirement for successful data analysis. Second, it is difficult to find meaningful patterns within these data sets because no operative system for data mining exists in the industry.

This RAAK MKB project is initiated by the Aviation Academy of the Amsterdam University of Applied Sciences (Hogeschool van Amsterdam, hereinafter: HvA), in direct cooperation with the industry, to help MRO SMEs improve their maintenance process. Its main aim is to develop new knowledge of, and a method for, data mining. To do so, the current state of data presence within MRO SMEs is explored, mapped, categorized, cleaned and prepared. This will result in readable data sets that have predictive value for key elements of the maintenance process. Secondly, analysis principles are developed to interpret these data. These principles are translated into an easy-to-use data mining (IT) tool, helping MRO SMEs to predict their maintenance requirements in terms of costs and time, allowing them to adapt their maintenance process accordingly. In several case studies these products are tested and further improved.

This is a resubmission of an earlier proposal dated October 2015 (3rd round), entitled ‘Data mining for MRO process optimization’ (number 2015-03-23M). We believe the merits of the proposal are substantial, and sufficient to be awarded a grant. The text of this submission is essentially unchanged from the previous proposal. Where text has been added, for clarification, this has been marked in yellow. Almost all of these new text parts are taken from our rebuttal (hoor en wederhoor), submitted in January 2016.
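A minimal sketch of the kind of prediction such a data mining tool could support (estimating repair hours from historical maintenance records); all feature names and values are hypothetical and only illustrate the principle:

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical, cleaned-up historical maintenance records of an MRO shop.
records = pd.DataFrame({
    "aircraft_age_years": [5, 12, 8, 20, 3, 15, 9, 11],
    "flight_hours": [4000, 15000, 9000, 30000, 2000, 20000, 11000, 14000],
    "check_type": [0, 1, 0, 2, 0, 2, 1, 1],             # encoded A/B/C check
    "repair_hours": [12, 45, 30, 120, 8, 95, 40, 50],   # target to predict
})

X = records.drop(columns="repair_hours")
y = records["repair_hours"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A simple regression model to anticipate repair times, and hence the buffers
# in time, personnel and parts that a maintenance job is likely to need.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(model.predict(X_test))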