Article published in NVVR/MemoRad: Missing fractures can cause unnecessary problems for patients; various complications and additional costs may follow. What is the value of artificial intelligence (AI) applied to X-ray images for detecting fractures? The use of AI systems in practice can support radiologists. Patricia Dinkgreve's literature review shows that AI can detect fractures with high accuracy.1 AI even performs better than medical specialists working with or without the help of AI.
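The review does not name a specific model, but a minimal sketch of the kind of system it discusses, a convolutional network fine-tuned to classify radiographs as fractured or not, could look like the following. The backbone choice, class labels, and file path are illustrative assumptions, not the method of any reviewed study.

```python
# Minimal sketch (assumption): a pretrained CNN repurposed as a binary
# fracture classifier for radiographs. Paths and labels are placeholders,
# and the new head would still need fine-tuning on labeled X-ray data.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Pretrained backbone with a new two-class head (no fracture / fracture).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def predict_fracture(path: str) -> float:
    """Return the predicted probability that the radiograph shows a fracture."""
    image = Image.open(path).convert("RGB")
    batch = preprocess(image).unsqueeze(0)        # shape: (1, 3, 224, 224)
    model.eval()
    with torch.no_grad():
        logits = model(batch)
        return torch.softmax(logits, dim=1)[0, 1].item()

print(predict_fracture("wrist_xray.png"))         # hypothetical image file
```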
The goal of this study was to develop an automated monitoring system for the detection of pigs' bodies, heads and tails. The aim of the first part of the study was to recognize individual pigs (in lying and standing positions) in groups, as well as their body parts (head/ears and tail), using machine learning algorithms (a feature pyramid network). The goal of the second part was to improve the detection of tail posture (tail straight or curled) during activity (standing/moving around) using neural network analysis (YOLOv4). Our dataset (n = 583 images, 7,579 pig postures) was annotated in Labelbox from 2D video recordings of groups (n = 12–15) of weaned pigs. The model recognized each individual pig's body with a precision of 96% at the chosen intersection-over-union (IoU) threshold, whilst the precision for tails was 77% and for heads 66%, thereby already achieving human-level precision. Precision was highest for detecting pigs in groups and lower for heads and tails. As the first approach was relatively time-consuming, in the second part of the study we performed a YOLOv4 neural network analysis using 30 annotated images from our dataset to detect straight and curled tails. With this model, we were able to recognize tail postures with a high level of precision (90%).
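The abstract reports precision relative to an IoU threshold. As a rough illustration of what that metric involves, the sketch below computes the IoU of axis-aligned boxes and counts a detection as a true positive when it exceeds a threshold; the boxes, the threshold of 0.5, and the greedy matching scheme are simplifying assumptions, not the study's evaluation code.

```python
# Illustrative sketch (not the study's evaluation code): precision of box
# detections at an IoU threshold, with boxes given as (x1, y1, x2, y2).

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def precision_at_iou(detections, ground_truths, threshold=0.5):
    """Fraction of detections matching an unused ground-truth box at >= threshold IoU."""
    matched = set()
    true_positives = 0
    for det in detections:
        for i, gt in enumerate(ground_truths):
            if i not in matched and iou(det, gt) >= threshold:
                matched.add(i)
                true_positives += 1
                break
    return true_positives / len(detections) if detections else 0.0

# Hypothetical boxes: two predicted pig bodies vs. two annotated ones.
preds = [(10, 10, 110, 60), (200, 40, 320, 120)]
truth = [(12, 8, 108, 62), (400, 40, 500, 100)]
print(precision_at_iou(preds, truth))  # 0.5: one of the two detections matches
```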
The number of applications in which industrial robots share their working environment with people is increasing. Robots appropriate for such applications are equipped with safety systems according to ISO/TS 15066:2016 and are often referred to as collaborative robots (cobots). Due to the nature of human-robot collaboration, the working environment of cobots is subject to unforeseeable modifications caused by people. Vision systems are often used to increase the adaptability of cobots, but they usually require knowledge of the objects to be manipulated. The application of machine learning techniques can increase this flexibility by enabling the control system of a cobot to continuously learn and adapt to unexpected changes in the working environment. In this paper we address this issue by investigating the use of Reinforcement Learning (RL) to control a cobot to perform pick-and-place tasks. We present the implementation of a control system that can adapt to changes in position and enables a cobot to grasp objects which were not part of the training. Our proposed system uses deep Q-learning to process color and depth images and generates an ε-greedy policy to define robot actions. The Q-values are estimated using Convolutional Neural Networks (CNNs) based on pre-trained models for feature extraction. To reduce training time, we implement a simulation environment to first train the RL agent; we then apply the resulting system to a real cobot. System performance is compared when using the pre-trained CNN models ResNeXt, DenseNet, MobileNet, and MNASNet. Simulation and experimental results validate the proposed approach and show that our system reaches a grasping success rate of 89.9% when manipulating a never-seen object while operating with the pre-trained CNN model MobileNet.
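As a rough sketch of the ingredients named in the abstract, a pretrained MobileNet used as a frozen feature extractor, a Q-head over discrete robot actions, and an ε-greedy action choice, the following simplified, self-contained approximation may help; the action count, network sizes, ε value, and RGB-only input are assumptions, and this is not the authors' implementation.

```python
# Simplified sketch (assumptions: 6 discrete actions, frozen MobileNetV2
# features, epsilon = 0.1, RGB input only). Not the authors' implementation.
import random
import torch
import torch.nn as nn
from torchvision import models

NUM_ACTIONS = 6          # hypothetical discrete pick-and-place actions
EPSILON = 0.1            # exploration rate for the epsilon-greedy policy

# Pretrained MobileNetV2 backbone, used only for feature extraction.
backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT).features
for param in backbone.parameters():
    param.requires_grad = False

q_head = nn.Sequential(            # maps pooled features to Q-values per action
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(1280, 256),          # MobileNetV2 feature maps have 1280 channels
    nn.ReLU(),
    nn.Linear(256, NUM_ACTIONS),
)

def select_action(rgb_batch: torch.Tensor) -> int:
    """Epsilon-greedy action for a (1, 3, H, W) image tensor."""
    if random.random() < EPSILON:
        return random.randrange(NUM_ACTIONS)      # explore
    with torch.no_grad():
        q_values = q_head(backbone(rgb_batch))    # shape: (1, NUM_ACTIONS)
    return int(q_values.argmax(dim=1).item())     # exploit

action = select_action(torch.rand(1, 3, 224, 224))  # dummy input for illustration
print(action)
```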
In recent years, drones have increasingly supported First Responders (FRs) in monitoring incidents and providing additional information. However, analysing drone footage is time-intensive and cognitively demanding. In this research, we investigate the use of AI models for the detection of humans in drone footage to aid FRs in tasks such as locating victims. Detecting small-scale objects, particularly humans seen from high altitudes, poses a challenge for AI systems. We present the first steps in introducing and evaluating a series of YOLOv8 Convolutional Neural Networks (CNNs) for human detection in drone images. The models were fine-tuned on a drone image dataset created with the Dutch Fire Services and achieved an F1-score of 53.1%, identifying 439 out of 825 humans in the test dataset. These preliminary findings, validated by an incident commander, highlight the promising utility of these models. Ongoing efforts aim to further refine the models and explore additional technologies.
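A minimal sketch of fine-tuning and evaluating a YOLOv8 model with the ultralytics package is shown below; the dataset YAML name, epoch count, and image size are placeholders, as the abstract does not specify the training configuration used for the Dutch Fire Services dataset.

```python
# Minimal sketch using the ultralytics package; the dataset file
# "drone_humans.yaml" and the training settings are placeholders.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                     # pretrained nano checkpoint

# Fine-tune on a custom person-detection dataset defined in a YOLO data YAML.
model.train(data="drone_humans.yaml", epochs=50, imgsz=1280)

# Validate, then derive an F1 score from the reported precision and recall.
metrics = model.val()
precision, recall = metrics.box.mp, metrics.box.mr   # mean precision / recall
f1 = 2 * precision * recall / (precision + recall)
print(f"F1: {f1:.3f}")
```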