BACKGROUND: Endotracheal suctioning causes discomfort, is associated with adverse effects, and is resource-demanding. An artificial secretion removal method, known as an automated cough, has been developed; it applies rapid, automated deflation and inflation of the endotracheal tube cuff during the inspiratory phase of mechanical ventilation. This method has been evaluated in the hands of researchers but not when used by attending nurses. The aim of this study was to explore the efficacy of the method over the course of patient management as part of routine care. METHODS: This prospective, longitudinal, interventional study recruited 28 subjects who were intubated and mechanically ventilated. For a maximum of 7 d and on clinical need for endotracheal suctioning, the automated cough procedure was applied. The subjects were placed in a pressure-regulated ventilation mode with elevated inspiratory pressure, and automated cuff deflation and inflation were performed 3 times, repeated if deemed necessary. Success was defined as resolution of the clinical need for suctioning, as determined by the attending nurse. Adverse effects were recorded. RESULTS: A total of 84 procedures were performed. In 54% of the subjects, the automated cough procedure was successful on > 70% of occasions, with 56% of all procedures considered successful. Ninety percent of all the procedures were performed in subjects who were breathing spontaneously on pressure-support ventilation with peak inspiratory pressures of 20 cm H2O. Rates of adverse events were similar to those seen with endotracheal suctioning. CONCLUSIONS: This study solely evaluated the efficacy of an automated artificial cough procedure and illustrated its potential for reducing the need for endotracheal suctioning when applied by attending nurses in routine care.
Current methods for energy diagnosis in heating, ventilation and air conditioning (HVAC) systems are not consistent with process and instrumentation diagrams (P&IDs) as used by engineers to design and operate these systems, leading to very limited application of energy performance diagnosis in practice. In a previous paper, a generic reference architecture, hereafter referred to as the 4S3F (four symptoms and three faults) framework, was developed. Because it is closely related to the way HVAC experts diagnose problems in HVAC installations, 4S3F largely overcomes the problem of limited application. The present article addresses the fault diagnosis process using automated fault identification (AFI) based on symptoms detected with a diagnostic Bayesian network (DBN). It demonstrates that possible faults can be extracted from P&IDs at different levels and that P&IDs form the basis for setting up effective DBNs. The process was applied to real sensor data for a whole year. In a case study for a thermal energy plant, control faults were successfully isolated using balance, energy performance and operational state symptoms. Correction of the isolated faults led to annual primary energy savings of 25%. An analysis showed that the values of set probabilities in the DBN model are not outcome-sensitive. Link to the formal publication via its DOI: https://doi.org/10.1016/j.enbuild.2020.110289
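The core of the diagnosis step described above is computing posterior fault probabilities from detected symptoms with a Bayesian network. A minimal sketch follows, using exact inference by enumeration over a toy two-fault, three-symptom network; the structure, node names, and all probability values are illustrative assumptions, not values from the paper.

```python
# Toy fault diagnosis from symptoms via a small Bayesian network.
# Faults and symptoms are binary; all numbers below are invented for
# illustration, not taken from the 4S3F case study.
from itertools import product

# Prior probability that each fault is present.
priors = {"control_fault": 0.05, "sensor_fault": 0.02}

def p_symptom(symptom, cf, sf):
    """P(symptom fires | control_fault=cf, sensor_fault=sf)."""
    tables = {
        # (cf, sf) -> probability the symptom is observed
        "balance_symptom":  {(0, 0): 0.01, (0, 1): 0.80, (1, 0): 0.10, (1, 1): 0.85},
        "energy_symptom":   {(0, 0): 0.02, (0, 1): 0.10, (1, 0): 0.90, (1, 1): 0.92},
        "op_state_symptom": {(0, 0): 0.01, (0, 1): 0.05, (1, 0): 0.70, (1, 1): 0.75},
    }
    return tables[symptom][(cf, sf)]

def posterior(evidence):
    """P(control_fault, sensor_fault | observed symptoms), by enumeration."""
    joint = {}
    for cf, sf in product((0, 1), repeat=2):
        p = priors["control_fault"] if cf else 1 - priors["control_fault"]
        p *= priors["sensor_fault"] if sf else 1 - priors["sensor_fault"]
        for s, observed in evidence.items():
            ps = p_symptom(s, cf, sf)
            p *= ps if observed else 1 - ps
        joint[(cf, sf)] = p
    z = sum(joint.values())
    return {k: v / z for k, v in joint.items()}

# All three symptom detectors fired:
post = posterior({"balance_symptom": 1, "energy_symptom": 1, "op_state_symptom": 1})
p_control = sum(v for (cf, _), v in post.items() if cf)
p_sensor = sum(v for (_, sf), v in post.items() if sf)
```

With this evidence the control fault dominates the posterior, which mirrors how the DBN isolates the most probable fault from a symptom pattern; a real model would derive its structure from the P&ID rather than a hand-written table.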
Developing a framework that integrates Advanced Language Models into the qualitative research process. Qualitative research, vital for understanding complex phenomena, is often limited by labour-intensive data collection, transcription, and analysis processes. This hinders scalability, accessibility, and efficiency in both academic and industry contexts. As a result, insights are often delayed or incomplete, impacting decision-making, policy development, and innovation. The lack of tools to enhance accuracy and reduce human error exacerbates these challenges, particularly for projects requiring large datasets or quick iterations. Addressing these inefficiencies through AI-driven solutions like AIDA can empower researchers, enhance outcomes, and make qualitative research more inclusive, impactful, and efficient. The AIDA project enhances qualitative research by integrating AI technologies to streamline transcription, coding, and analysis processes. This innovation enables researchers to analyse larger datasets with greater efficiency and accuracy, providing faster and more comprehensive insights. By reducing manual effort and human error, AIDA empowers organisations to make informed decisions and implement evidence-based policies more effectively. Its scalability supports diverse societal and industry applications, from healthcare to market research, fostering innovation and addressing complex challenges. Ultimately, AIDA contributes to improving research quality, accessibility, and societal relevance, driving advancements across multiple sectors.
The focus of the research is 'Automated Analysis of Human Performance Data'. The three interconnected main components are (i) Human Performance, (ii) Monitoring Human Performance, and (iii) Automated Data Analysis. Human Performance is both the process and the result of a person interacting with a context to engage in tasks, whereas the performance range is determined by the interaction between the person and the context. Cheap and reliable wearable sensors allow for gathering large amounts of data, which is very useful for understanding, and possibly predicting, the performance of the user. Given the amount of data generated by such sensors, manual analysis becomes infeasible; tools should be devised for performing automated analysis, looking for patterns, features, and anomalies. Such tools can help transform wearable sensors into reliable, high-resolution devices and help experts analyse wearable sensor data in the context of human performance, and use it for diagnosis and intervention purposes. Shyr and Spisic describe Automated Data Analysis as follows: automated data analysis provides a systematic process of inspecting, cleaning, transforming, and modelling data with the goal of discovering useful information, suggesting conclusions, and supporting decision making for further analysis. Their philosophy is to do the tedious part of the work automatically and allow experts to focus on performing their research and applying their domain knowledge. However, automated data analysis means that the system has to teach itself to interpret interim results and do iterations. Knuth stated: 'Science is knowledge which we understand so well that we can teach it to a computer; and if we don't fully understand something, it is an art to deal with it.' [Knuth, 1974]. The knowledge on Human Performance and its Monitoring is to be 'taught' to the system.
To be able to construct automated analysis systems, an overview of the essential processes and components of these systems is needed. As Knuth put it: 'Since the notion of an algorithm or a computer program provides us with an extremely useful test for the depth of our knowledge about any given subject, the process of going from an art to a science means that we learn how to automate something.'
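One of the automated-analysis tasks named above, scanning wearable-sensor data for anomalies, can be sketched with a simple rolling z-score detector. This is our own illustrative example, not a tool from the research: the heart-rate stream, window size, and threshold are all invented.

```python
# Flag samples that deviate sharply from the recent past: a stand-in for
# the anomaly-detection step in automated analysis of wearable-sensor data.
from statistics import mean, stdev

def rolling_anomalies(samples, window=10, threshold=3.0):
    """Return indices whose value lies more than `threshold` sample standard
    deviations from the mean of the preceding `window` samples."""
    flagged = []
    for i in range(window, len(samples)):
        ref = samples[i - window:i]
        mu, sigma = mean(ref), stdev(ref)
        if sigma > 0 and abs(samples[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Simulated heart-rate stream (beats per minute) with one spike at index 15.
hr = [72, 71, 73, 72, 74, 73, 72, 71, 73, 72, 74, 73, 72, 71, 73, 140, 72, 73]
spikes = rolling_anomalies(hr)  # -> [15]
```

A production system would of course need sensor-specific models, but the shape is the same: the tedious scan over the stream is automated, and the expert only inspects the flagged indices.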
Background: News publishers are going through hard times. Economic malaise and increased competition in the pluriform media landscape force publishers to cut costs while simultaneously investing in innovation. Further automation of the news desk is a challenge in this respect. Outside the industry, techniques are emerging that publishers could use for this purpose, but these have not yet been 'translated' into user-friendly systems for editorial processes. The project participants have formulated practice-oriented research for this unexplored area. Objective: This research aims to answer the question: how can proven and newly developed techniques from the domain of 'natural language processing' contribute to the automation of a news desk and the journalistic product? 'Natural language processing' (the automatic generation of language) is the subject of the research. In the field this development is known as 'automated journalism' or 'robot journalism'. The research focuses on the one hand on the development of algorithms ('robots') and on the other hand on the impact of these technological developments on the news field. The impact is examined from the perspective of both the journalist and the news consumer. Within this research the project participants are developing two prototypes that together form the automated-journalism system. This system will be used during and after the project by researchers, journalists, lecturers and students. Intended results: The concrete result of the project is a prototype of an automated editorial system. The project also yields insight into how such systems can be embedded within a news desk. The research offers a new perspective on how news consumers appraise the development of 'automated journalism' in the Netherlands.
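The simplest form of the 'robots' mentioned above is template-based generation: structured data rendered into a short news sentence. The sketch below is our own toy illustration, not the project's prototype; the match data and field names are invented.

```python
# Toy template-based news generation, the entry level of 'robot journalism':
# a structured match record is turned into a one-sentence report.
def render_match_report(match):
    home, away = match["home"], match["away"]
    hs, aws = match["home_score"], match["away_score"]
    if hs > aws:
        outcome = f"{home} beat {away} {hs}-{aws}"
    elif hs < aws:
        outcome = f"{away} won {aws}-{hs} away at {home}"
    else:
        outcome = f"{home} and {away} drew {hs}-{aws}"
    return f"{outcome} on {match['date']}."

report = render_match_report(
    {"home": "Ajax", "away": "PSV",
     "home_score": 2, "away_score": 1, "date": "Sunday"})
# report == "Ajax beat PSV 2-1 on Sunday."
```

Research prototypes go well beyond fixed templates (for example, by varying phrasing or learning it from corpora), but even this minimal form shows why structured data sources such as sports results were the first domain of automated journalism.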
The project team shares the research results through presentations for the publishing industry, presentations at scientific conferences, publications in (trade) journals, reflection meetings with fellow degree programmes, and a summarising white paper.