© 2025 SURF
Due to a lack of transparency in both algorithm and validation methodology, it is difficult for researchers and clinicians to select the appropriate tracker for their application. The aim of this work is to transparently present an adjustable physical activity classification algorithm that discriminates between dynamic, standing, and sedentary behavior. By means of easily adjustable parameters, the algorithm's performance can be optimized for applications with different target populations and tracker wear locations. For an elderly target population with a tracker worn on the upper leg, the algorithm is optimized and validated under simulated free-living conditions. The fixed activity protocol (FAP) is performed by 20 participants; the simulated free-living protocol (SFP) involves another 20. The data segmentation window size and the physical-activity-level threshold are optimized; the sensor orientation threshold is kept fixed. The algorithm is validated on 10 participants who perform the FAP and on 10 participants who perform the SFP. Percentage error (PE) and absolute percentage error (APE) are used to assess the algorithm's performance. Standing and sedentary behavior are classified within acceptable limits (±10% error) under both fixed and simulated free-living conditions. Dynamic behavior is within acceptable limits under fixed conditions but shows some limitations under simulated free-living conditions. We propose that this approach be adopted by developers of activity trackers to facilitate the tracker selection process for researchers and clinicians. Furthermore, we are convinced that the adjustable algorithm could contribute to the rapid realization of new applications.
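The classification scheme described above, with its adjustable window size and activity threshold and a fixed sensor orientation threshold, can be sketched roughly as follows. Note that the feature choices (acceleration-magnitude variability for activity level, mean thigh inclination for posture) and all numeric values here are illustrative assumptions, not the optimized values from the study:

```python
import numpy as np

def classify_windows(acc, fs=50, win_s=5.0, act_thresh=0.06, incl_thresh=45.0):
    """Classify tri-axial thigh-worn accelerometer data (in g) into
    'dynamic', 'standing', or 'sedentary' per window.
    win_s and act_thresh are the adjustable parameters; incl_thresh
    plays the role of the fixed sensor orientation threshold."""
    n = int(win_s * fs)                        # samples per window
    labels = []
    for start in range(0, len(acc) - n + 1, n):
        w = acc[start:start + n]               # window of shape (n, 3)
        mag = np.linalg.norm(w, axis=1)        # acceleration magnitude
        if mag.std() > act_thresh:             # high variability -> movement
            labels.append("dynamic")
        else:
            # Mean gravity direction gives thigh inclination: a near-vertical
            # thigh (x-axis along the leg, by assumption) implies standing.
            g = w.mean(axis=0)
            incl = np.degrees(np.arccos(abs(g[0]) / np.linalg.norm(g)))
            labels.append("standing" if incl < incl_thresh else "sedentary")
    return labels

def percentage_error(estimated, reference):
    """Signed (PE) and absolute (APE) percentage error of the estimated
    time in a behavior class against the reference time."""
    pe = 100.0 * (estimated - reference) / reference
    return pe, abs(pe)
```

Tuning `win_s` and `act_thresh` per population and wear location, while leaving `incl_thresh` fixed, mirrors the parameter roles stated in the abstract.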
Previous research shows that an automatic tendency to approach alcohol plays a causal role in problematic alcohol use and can be retrained by Approach Bias Modification (ApBM). ApBM has been shown to be effective for patients diagnosed with alcohol use disorder (AUD) in inpatient treatment. This study aimed to investigate the effectiveness of adding an online ApBM to treatment as usual (TAU) in an outpatient setting, compared to receiving TAU with an online placebo training. 139 AUD patients receiving face-to-face or online TAU participated in the study. The patients were randomized to an active or placebo version of 8 sessions of online ApBM over a 5-week period. Weekly alcohol consumption in standard units (the primary outcome) was measured pre- and post-training and at 3- and 6-month follow-up. Approach tendency was measured pre- and post-ApBM training. No additional effect of ApBM was found on alcohol intake, nor on other outcomes such as craving, depression, anxiety, or stress. A significant reduction of the alcohol approach bias was found. This research showed that approach bias retraining in AUD patients in an outpatient treatment setting reduces the tendency to approach alcohol, but this training effect does not translate into a significant difference in alcohol reduction between groups. Possible explanations for the lack of effect of ApBM on alcohol consumption are treatment goal and severity of AUD. Future ApBM research should target outpatients with an abstinence goal and offer alternative, more user-friendly modes of delivering ApBM training.
This article interrogates platform-specific bias in the contemporary algorithmic media landscape through a comparative study of the representation of pregnancy on the Web and social media. Online visual materials such as pregnancy-related social media content are neither free of bias nor particularly diverse. The case study is a cross-platform analysis of social media imagery for the topic of pregnancy, through which distinct visual platform vernaculars emerge. The authors describe two visualization methods that can support comparative analysis of such visual vernaculars: the image grid and the composite image. While platform-specific perspectives range from lists of pregnancy tips on Pinterest to pregnancy information and social support systems on Twitter and pregnancy humour on Reddit, each of the platforms presents a predominantly White, able-bodied and heteronormative perspective on pregnancy.
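The two visualization methods named above can be illustrated with a minimal sketch. The implementation details here (simple tiling for the grid, pixel averaging for the composite, Pillow and NumPy as tooling) are assumptions for illustration, not the authors' exact procedure:

```python
import numpy as np
from PIL import Image

def image_grid(images, cols):
    """Tile equally sized images into one grid (left-to-right, top-down),
    so platform image sets can be eyeballed side by side."""
    w, h = images[0].size
    rows = -(-len(images) // cols)             # ceiling division
    grid = Image.new("RGB", (cols * w, rows * h), "white")
    for i, im in enumerate(images):
        grid.paste(im, ((i % cols) * w, (i // cols) * h))
    return grid

def composite_image(images):
    """Average equally sized images into one composite, so recurring
    visual patterns (poses, layouts, colour schemes) become visible."""
    stack = np.stack([np.asarray(im, dtype=np.float64) for im in images])
    return Image.fromarray(stack.mean(axis=0).astype(np.uint8))
```

A grid preserves individual images for qualitative comparison, while the composite trades detail for an at-a-glance impression of what a platform's imagery has in common.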
Receiving one's first “Rijbewijs” (driving licence) is always an exciting moment for any teenager, but it also comes with considerable risks. In the Netherlands, the fatality rate of young novice drivers is five times higher than that of drivers between the ages of 30 and 59. These risks are mainly due to age-related factors and a lack of experience, which manifest in inadequate higher-order skills required for hazard perception and for successful interventions to react to risks on the road. Although risk assessment and driving attitude are included in driver training and examination, accident statistics show that this has only limited influence on the development of factors such as attitudes, motivations, lifestyles, self-assessment and risk acceptance, which play a significant role in post-licensing driving and thus negatively impact traffic safety. “How could novice drivers receive critical feedback on their driving behaviour and traffic safety?” is, therefore, an important question. Thanks to major advancements in domains such as ICT, sensors, big data, and Artificial Intelligence (AI), in-vehicle data is being used extensively for monitoring driver behaviour, driving style identification and driver modelling. However, the use of such techniques in pre-licence driver training and assessment has not been extensively explored. EIDETIC aims to develop a novel approach by fusing multiple data sources, such as in-vehicle sensors and data (to trace the vehicle trajectory), eye-tracking glasses (to monitor viewing behaviour) and cameras (to monitor the surroundings), to provide quantifiable and understandable feedback to novice drivers. This new knowledge could also support driving instructors and examiners in ensuring safe drivers. The project will also generate knowledge that can serve as a foundation for the transition to training and assessment of drivers of automated vehicles.
Moderating reader comments under news articles is very labour-intensive. With the help of artificial intelligence, moderation becomes feasible at a reasonable cost. Since every application of artificial intelligence must be fair and transparent, it is important to investigate how media can meet these requirements.

Goal: This PhD project focuses on the fairness, accountability and transparency of algorithmic systems for moderating reader comments. It offers a theoretical framework and actionable measures that will support news organizations in complying with recent policy-making on a value-driven implementation of AI. Now that more and more news media are starting to use AI, they must embed fairness, accountability and transparency in their use of algorithms into their working practices.

Results: Although AI moderation is very attractive from an economic point of view, news media need to know how to reduce inaccuracy and bias (fairness), disclose how their AI works (accountability) and enable users to understand how decisions are made by AI (transparency). This dissertation advances knowledge on these topics.

Duration: 1 February 2022 – 1 February 2025

Approach: The central research question of this PhD research is: how can and should news media ensure fairness, accountability and transparency in their use of algorithms for comment moderation? To answer this question, the research is split into four sub-questions. How do news media use algorithms to moderate comments? What can news media do to reduce inaccuracy and bias in AI comment moderation? What should news media disclose about their use of AI moderation? What makes explanations of AI moderation understandable for users with different levels of digital competence?