Service of SURF
© 2025 SURF
Abstract In this paper, several meaningful audio icons from classic arcade games such as Pong, Donkey Kong, Mario World and Pac-Man are analyzed using the PRAAT software for speech analysis and musical theory. The results of the analysis are used to describe how these examples of best-practice sound design obtain their meaning in the player's perception. Some aspects can be related to the use of tonal hierarchy (e.g. in Donkey Kong and Mario World), a Western-culture-specific aspect of musical meaning. Other aspects are related to universal expressions of meaning, such as the theory of misattribution, prosody, vocalization, and cross-modal perceptions such as brightness and the uncanny valley hypothesis. Recent studies in the field of cognitive neuroscience support the universal and meaningful potential of all these aspects. The relationship between music and language-related prosody, vocalization and phonology appears to be an especially successful design principle for universally meaningful audio icons in game sound design.
EU president Ursula von der Leyen wants Europe to tap into its inner avant-garde. In her inaugural State of the Union speech on September 16, 2020, she pledged to revive the historical Bauhaus, the experimental art school that married artistic form with functional design, founded a century ago in Weimar, Germany. Its objective was to democratize the experience of aesthetics and design through affordable commodity objects for the masses. Today, the European Union sees a chance to create a new common aesthetic, born out of a need to renovate and construct more energy-efficient buildings. “I want NextGenerationEU to kickstart a European renovation wave and make our Union a leader in the circular economy,” von der Leyen said. The new Bauhaus is not just an environmental or economic project; “it needs to be a new cultural project for Europe. Every movement has its own look and feel. And we need to give our systemic change its own distinct aesthetic—to match style with sustainability. This is why we will set up a New European Bauhaus—a co-creation space where architects, artists, students, engineers, designers work together to make that happen. This is shaping the world we want to live in. A world served by an economy that cuts emissions, boosts competitiveness, reduces energy poverty, creates rewarding jobs and improves quality of life. A world where we use digital technologies to build a healthier, greener society.”
LINK
Now that collaborative robots are becoming more widespread in industry, the question arises of how we can make them better co-workers and team members. Team members cooperate and collaborate to attain common goals. Consequently, they provide and receive information, often non-linguistic, necessary to accomplish the work at hand and to coordinate their activities. The cooperative behaviour needed to function as a team also entails that team members have to develop a certain level of trust towards each other. In this paper we argue that for cobots to become trusted, successful co-workers in an industrial setting, we need to develop design principles for cobot behaviour that provide legible, that is understandable, information and that generate trust. Furthermore, we are of the opinion that modelling such non-verbal cobot behaviour after animal co-workers may provide useful opportunities, even though additional communication may be needed for optimal collaboration. Marijke Bergman, Elsbeth de Joode, Janienke Sturm et al. Published in CHIRA 2019, Computer Science.
MULTIFILE
In recent years, technological developments have profoundly changed the nature of service provision (Huang & Rust, 2018). Technology is increasingly deployed to replace or support human service employees (Larivière et al., 2017; Wirtz et al., 2018). This enables service providers to serve more customers with fewer employees, increasing operational efficiency (Beatson et al., 2007). This operational efficiency in turn leads to lower costs and greater competitiveness. The use of technology can also benefit customers, for instance through better accessibility and consistency, time and cost savings, and (the perception of) more control over the service process (Curran & Meuter, 2005). Partly because of these intended benefits, the use of technology in service interactions has grown exponentially over the past two decades. The deployment of so-called conversational agents is one of the most important ways in which service providers can use technology to support or replace human service employees (Gartner, 2021). Conversational agents are automated conversation partners that mimic human communicative behaviour (Laranjo et al., 2018; Schuetzler et al., 2018). Broadly, three types of conversational agents exist: chatbots, avatars, and robots. Chatbots are applications that have no virtual or physical embodiment and communicate primarily through spoken or written verbal communication (Araujo, 2018; Dale, 2016). Avatars have a virtual embodiment, which also allows them to communicate through non-verbal signals, such as smiling and nodding (Cassell, 2000). Robots, finally, have a physical embodiment, which additionally allows them to have physical contact with users (Fink, 2012).
Conversational agents distinguish themselves by their ability to display human behaviour in service interactions, but the question of how human-like they should be has no unequivocal answer yet. Conversational agents as social actors: to be successful as a service provider, high-quality interaction between service employees and customers is crucial (Palmatier et al., 2006). This is because customers derive their perceptions of a service employee (e.g. friendliness, competence) from that employee's appearance and verbal and non-verbal behaviour (Nickson et al., 2005; Specht et al., 2007; Sundaram & Webster, 2000). These customer perceptions influence important aspects of the relationship between customers and service providers, such as trust and commitment, which in turn influence intention to use, word of mouth, loyalty and cooperation (Hennig-Thurau, 2004; Palmatier et al., 2006). There is growing evidence that the appearance characteristics and communicative behaviours (hereafter: human communicative behaviours) that positively influence customer perceptions are also effective when applied by conversational agents (B.R. Duffy, 2003; Holtgraves et al., 2007). The so-called 'Computers Are Social Actors' (CASA) paradigm starts from the assumption that people tend to unconsciously apply social rules and behaviours in interactions with computers, despite knowing that these computers are inanimate (Nass et al., 1994). This can be further explained by the phenomenon of anthropomorphism (Epley et al., 2007; Novak & Hoffman, 2019). Anthropomorphism means that the presence of human-like characteristics or behaviours in non-human agents unconsciously activates cognitive schemas for human interaction (Aggarwal & McGill, 2007; M.K. Lee et al., 2010).
By anthropomorphising computers, people satisfy their own need for social connection and for making sense of their social environment (Epley et al., 2007; Waytz et al., 2010). However, this also means that people apply cognitive schemas for social perception to conversational agents.
Robot tutors provide new opportunities for education. However, they also introduce moral challenges. This study reports a systematic literature review (N = 256) aimed at identifying the moral considerations related to robots in education. While our findings suggest that robot tutors hold great potential for improving education, there are multiple values of both (special needs) children and teachers that are impacted, positively and negatively, by their introduction. Positive values related to robot tutors are psychological welfare and happiness, efficiency, freedom from bias, and usability. However, there are also concerns that robot tutors may negatively impact these same values. Other concerns relate to the values of friendship and attachment, human contact, deception and trust, privacy, security, safety and accountability. All these values relate to children and teachers. The moral values of other stakeholder groups, such as parents, are overlooked in the existing literature. The results suggest that, while there is a potential for applying robot tutors in a morally justified way, there are important stakeholder groups that need to be consulted so that their moral values are also taken into consideration when implementing tutor robots in an educational setting. (from Narcis.nl)
MULTIFILE