© 2025 SURF
Scientific research on the quality of face detection systems keeps finding the same result: regardless of how, when, and with which system the testing is done, faces of people with a darker skin tone are detected less reliably than the faces of people with a lighter skin tone.
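A minimal sketch of how such a disparity can be quantified in an evaluation: compute the detection rate per skin-tone group and compare. The group labels and counts below are illustrative assumptions, not figures from the studies referred to above.

```python
# Hypothetical sketch: comparing face-detection rates across groups.
# `detections` maps a group label to (faces_detected, faces_total);
# the numbers are made up for illustration.

def detection_rates(detections):
    """Return the detection rate per group as a fraction."""
    return {group: hit / total for group, (hit, total) in detections.items()}

counts = {
    "lighter skin tone": (960, 1000),  # illustrative counts
    "darker skin tone": (870, 1000),
}

rates = detection_rates(counts)
gap = rates["lighter skin tone"] - rates["darker skin tone"]
print(rates)
print(f"detection-rate gap: {gap:.3f}")
```

A persistent positive gap across test conditions is the pattern the research described above keeps finding.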
The guidance offered here is intended to assist social workers in thinking through the specific ethical challenges that arise whilst practising during a pandemic or other type of crisis. In crisis conditions, people who need social work services, and social workers themselves, face increased and unusual risks. These challenging conditions are further compounded by scarce or reallocated governmental and social resources. While the ethical principles underpinning social work remain unchanged by crises, unique and evolving circumstances may demand that they be prioritised differently. A decision or action that might be regarded as ethically wrong in ‘normal’ times may be judged to be right in a time of crisis. Examples include: prioritising individual and public health considerations by restricting people’s freedom of movement; not consulting people about treatment and services; or avoiding face-to-face meetings.
We investigated the effects of reflex-based self-defence training on police performance in simulated high-pressure arrest situations. Police officers received this training as well as a regular police arrest and self-defence skills training (control training) in a crossover design. Officers' performance was tested on several variables in six reality-based scenarios before and after each training intervention. Results showed improved performance after the reflex-based training, while there was no such effect of the regular police training. Improved performance could be attributed to better communication, situational awareness (scanning area, alertness), assertiveness, resolution, proportionality, control and converting primary responses into tactical movements. As officers trained on complete violent situations (and not just physical skills), they learned to use their actions before physical contact both for de-escalation and for anticipating possible attacks. Furthermore, they learned to respond to attacks with skills based on their primary reflexes. The results of this study seem to suggest that reflex-based self-defence training better prepares officers for performing in high-pressure arrest situations than the current form of police arrest and self-defence skills training. Practitioner Summary: Police officers' performance in high-pressure arrest situations improved after a reflex-based self-defence training, while there was no such effect of a regular police training. As officers learned to anticipate possible attacks and to respond with skills based on their primary reflexes, they were better able to perform effectively.
Since the SARS-CoV-2 pandemic, face masks have become a fixture of everyday street life. The quality and comfort of the fit of medical and non-medical face masks are determined by how well the mask matches the dimensions of the wearer's face. However, there is no good overview of the facial anthropometry of the Dutch population, so the fit of face masks is currently often suboptimal. There is therefore a demand for an accessible and safe way to map facial features, and for better design guidelines for face masks. Three-dimensional (3D) scanning using Light Detection and Ranging (LiDAR) technology, combined with smart algorithms, appears to be a way to capture facial features quickly and accessibly for large groups of people. In addition, 3D scanning of faces makes it possible not only to measure facial dimensions, but also to perform 3D fit visualisations. Although 3D scanning is not a new technology, LiDAR has only been integrated into the iPad and iPhone since 2020, which has made it accessible to consumers. Using a research-through-design approach, we will investigate whether this technology can be used to make reliable and valid recordings of faces, and whether design guidelines can be developed on this basis. This KIEM GoCi project will also be used to build a knowledge base and network for a follow-up application on the use of 3D technologies in the fashion industry.
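A minimal sketch of the kind of measurement such 3D scans enable: deriving basic mask-fit dimensions as distances between facial landmarks. The landmark names and coordinates below are illustrative assumptions (in millimetres), not data from this project.

```python
# Hypothetical sketch: mask-fit dimensions from 3D face landmarks,
# as a LiDAR face scan might provide them. All values are illustrative.
import math

def dist(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

# Example landmarks (x, y, z) in mm, assumed for illustration
landmarks = {
    "nasion":   (0.0, 55.0, 10.0),    # bridge of the nose
    "menton":   (0.0, -60.0, 5.0),    # tip of the chin
    "zygion_l": (-65.0, 20.0, -20.0), # left cheekbone
    "zygion_r": (65.0, 20.0, -20.0),  # right cheekbone
}

face_length = dist(landmarks["nasion"], landmarks["menton"])
face_width = dist(landmarks["zygion_l"], landmarks["zygion_r"])
print(f"face length: {face_length:.1f} mm, face width: {face_width:.1f} mm")
```

Aggregating such measurements over a large scanned population is one way the anthropometric overview and design guidelines mentioned above could be derived.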
Within the film and theater world, special effects make-up is used to adapt the appearance of actors for visual storytelling. Currently, the creation of special effects make-up is a time-consuming process that generates a lot of waste, which does not fit the goals of a sustainable industry. Combined with the ongoing digitization of the movie and theater industry, which requires faster and more iterative workflows, the current way of creating special effects make-up needs to change. Within this project we would like to explore whether the traditional way of working can be converted to a digital production process. Our research consists of three parts. Firstly, we would like to explore whether a mobile face-scanning rig can be used to create digital copies of actors, and thus eliminate the need to create molds. Secondly, we would like to see if digital sculpting can replace the traditional methods of sculpting molds, casts and prosthetics. Here we would like to compare both methods in terms of creativity and time consumption. The third part of our project will be to explore the use of 3D printing for the creation of molds and prosthetics.
In this project, the AGM R&D team developed and refined the use of a facial scanning rig. The rig is a physical device comprising multiple cameras and lighting that are mounted on scaffolding around a 'scanning volume'. This is an area at which objects are placed before being photographed from multiple angles. The object is typically a person's head, but it can be anything of this approximate size. Software compares the photographs to create a digital 3D recreation - this process is called photogrammetry. The 3D model is then processed by further pieces of software and eventually becomes a face that can be animated inside Unreal Engine, which is a popular piece of game development software made by the company Epic. This project was funded by Epic's 'Megagrant' system, and the focus of the work is on streamlining and automating the processing pipeline, and on improving the quality of the resulting output. Additional work has been done on skin shaders (simulating the quality of real skin in a digital form) and the use of AI to re/create lifelike hair styles. The R&D work has produced significant savings in processing time and improvements in the quality of facial scans, has produced a system that has benefitted the educational offering of BUas, and has attracted collaborators from the commercial entertainment and simulation industries. This work complements and extends previous work done on the VIBE project, where the focus was on creating lifelike human avatars for the medical industry.
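A minimal sketch of the kind of pipeline automation described above: each stage wraps one external tool (photogrammetry solver, mesh cleanup, export towards Unreal Engine), and a driver runs them in sequence. The stage names and placeholder functions are assumptions for illustration, not the actual AGM R&D toolchain.

```python
# Hypothetical sketch of a chained processing pipeline; each stage takes
# an input artifact path and returns the path of its output artifact.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Stage:
    name: str
    run: Callable[[str], str]

def run_pipeline(stages: List[Stage], input_path: str) -> Tuple[str, List[str]]:
    """Run each stage in order, feeding each output into the next stage."""
    path = input_path
    log = []
    for stage in stages:
        path = stage.run(path)
        log.append(f"{stage.name} -> {path}")
    return path, log

# Placeholder stages that only rename the artifact to mark progress
stages = [
    Stage("photogrammetry", lambda p: p.replace(".photos", ".mesh")),
    Stage("cleanup",        lambda p: p.replace(".mesh", ".clean")),
    Stage("unreal_export",  lambda p: p.replace(".clean", ".uasset")),
]

result, log = run_pipeline(stages, "actor_head.photos")
print(result)  # actor_head.uasset
```

In a real pipeline each `run` callable would invoke the corresponding external tool; automating this hand-off between stages is where the processing-time savings mentioned above come from.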