Background: It is unclear why some physical activity (PA) mobile health (mHealth) interventions successfully promote PA whereas others do not. One possible explanation is the variety in PA mHealth interventions: not only do interventions differ in their selection of persuasive strategies, but the design and implementation of those strategies can also vary. However, few studies have examined the different designs and technical implementations of strategies or explored whether they actually influence the effectiveness of the intervention. Objective: This scoping review sets out to explore the different technical implementations and design characteristics of the most common and likely most effective persuasive strategies, namely, goal setting, monitoring, reminders, rewards, sharing, and social comparison. Furthermore, this review aims to explore whether previous mHealth studies examined the influence of the different design characteristics and technical operationalizations of common persuasive strategies on the effectiveness of the intervention in persuading the user to engage in PA. Methods: An unsystematic snowball and gray literature search was performed to identify literature that evaluated persuasive strategies in experimental trials (eg, randomized controlled trials, pre-post tests). Studies were included if they targeted adults, were (partly) delivered by a mobile system, reported PA outcomes, used an experimental trial, and specifically compared the effect of different designs or implementations of persuasive strategies. The study methods, the implementations and designs of the persuasive strategies, and the study results were systematically extracted from the literature by the reviewers. Results: A total of 29 experimental trials were identified. We found heterogeneity in how the strategies were implemented and designed. Moreover, the findings indicated that the implementation and design of a strategy influence the effectiveness of the PA intervention. For instance, the effectiveness of rewarding was shown to vary between types of rewards; rewarding goal achievement seems to be more effective than rewarding each step taken. Furthermore, studies comparing different ways of goal setting suggested that assigning a goal to users may be more effective than letting users set their own goals, as may using adaptively tailored goals rather than static generic goals. This study further demonstrates that only a few studies have examined the influence of different technical implementations on PA behavior. Conclusions: The different implementations and designs of persuasive strategies in mHealth interventions should be critically considered when developing such interventions and before drawing conclusions on the effectiveness of a strategy as a whole. Future efforts are needed to examine which implementations and designs are most effective in order to improve the translation of theory-based persuasive strategies into practical delivery forms.
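To make the contrasted design choices concrete, the following minimal Python sketch illustrates one possible way an mHealth app could operationalize two of them: adaptively tailored step goals and rewards tied to goal achievement rather than to each step taken. It is not drawn from any of the reviewed interventions; the function names, thresholds, and point values are illustrative assumptions only.

# Hypothetical sketch (not from the reviewed studies): adaptively tailored
# step goals and goal-achievement rewards, two design choices discussed above.
from statistics import median


def adaptive_daily_goal(recent_step_counts, baseline_goal=7000, increment=1.1):
    """Tailor tomorrow's step goal to the user's recent activity.

    Uses the median of recent daily step counts as a robust estimate of
    current activity and nudges it upward by a small factor; the baseline
    is a fallback when no history is available. All numbers are illustrative.
    """
    if not recent_step_counts:
        return baseline_goal
    return int(median(recent_step_counts) * increment)


def reward_points(steps_today, daily_goal, points_per_goal=100):
    """Award points only when the daily goal is met (goal-achievement reward),
    in contrast to a per-step scheme such as steps_today // 100."""
    return points_per_goal if steps_today >= daily_goal else 0


# Example: a user averaging roughly 6500 steps gets a goal slightly above
# that, and earns points only on days the goal is reached.
history = [6200, 6800, 6500, 7100, 6400]
goal = adaptive_daily_goal(history)       # 7150 steps
print(goal, reward_points(7300, goal))    # goal met -> 100 points
print(goal, reward_points(5000, goal))    # goal missed -> 0 points

A static generic goal would simply replace adaptive_daily_goal with a fixed constant (eg, 10,000 steps), which is the comparison several of the reviewed trials made.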
Background: In recent years, a mobile health (mHealth) app called the Dutch Talking Touch Screen Questionnaire (DTTSQ) was developed in The Netherlands. The aim of its development was to enable Dutch physical therapy patients to autonomously complete a health-related questionnaire regardless of their level of literacy and digital skills. Objective: The aim of this study was to evaluate the usability (defined as the effectiveness, efficiency, and satisfaction) of the prototype of the DTTSQ for Dutch physical therapy patients with diverse levels of experience in using mobile technology. Methods: The qualitative Three-Step Test-Interview method, including both think-aloud and retrospective probing techniques, was used to gain insight into the usability of the DTTSQ. A total of 24 physical therapy patients were included. The interview data were analyzed using a thematic content analysis approach aimed at analyzing the accuracy and completeness with which participants completed the questionnaire (effectiveness), the time it took the participants to complete the questionnaire (efficiency), and the extent to which the participants were satisfied with the ease of use of the questionnaire (satisfaction). The problems encountered by the participants in this study were given a severity rating that was used to provide a rough estimate of the need for additional usability efforts. Results: All participants in this study were very satisfied with the ease of use of the DTTSQ. Overall, 9 participants stated that the usability of the app exceeded their expectations. The group of 4 average- to high-experienced participants encountered only 1 problem in total, whereas the 11 little-experienced participants encountered an average of 2 problems per person and the 9 inexperienced participants an average of 3 problems per person. A total of 13 different kinds of problems were found during this study. Of these problems, 4 need to be addressed before the DTTSQ is released because they have the potential to negatively influence future usage of the tool. The other 9 problems were less likely to substantially influence future usage of the tool. Conclusions: The usability of the DTTSQ needs to be improved before it can be released. No problems were found with satisfaction or efficiency during the usability test. The effectiveness needs to be improved by (1) making it easier to navigate through screens without the possibility of accidentally skipping one, (2) enabling the possibility to insert an answer by tapping on the text underneath a photograph instead of just touching the photograph itself, and (3) making it easier to correct wrong answers. This study shows the importance of including less skilled participants in a usability study when striving for inclusive design and the importance of measuring not just satisfaction but also efficiency and effectiveness during such studies.