Background and purpose: Automated approaches are widely used for dose optimization in radiotherapy treatment planning. This study systematically investigates how to configure automatic planning in order to create the best possible plans. Materials and methods: Automatic plans were generated using protocol-based automatic iterative optimization. Starting from a simple automation protocol consisting of the constraints for targets and organs at risk (OARs), the performance of the automatic approach was evaluated in terms of target coverage, OAR sparing, conformity, beam complexity, and plan quality. More complex protocols were systematically explored to improve the quality of the automatic plans. The protocols could be improved by adding a dose goal on the outer 2 mm of the PTV, by setting goals on strategically chosen subparts of OARs, by adding goals for conformity, and by limiting the leaf motion. For prostate plans, an automated post-optimization procedure had to be developed to achieve precise control over the dose distribution. Automatic and manually optimized plans were compared for 20 head and neck (H&N), 20 prostate, and 20 rectum cancer patients. Results: Based on the simple automation protocols, the automatic optimizer was not always able to generate adequate treatment plans. With the improved final configurations for the three sites, the dose in the automatic plans was lower than in the manual plans for 12 out of 13 considered OARs. In blind tests, the automatic plans were preferred in 80% of cases. Conclusions: With adequate, advanced protocols, the automatic planning approach is able to create high-quality treatment plans.
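The abstract does not specify how such a protocol is represented in the treatment planning system. As a purely illustrative sketch, the refinements it lists (an extra goal on the outer 2 mm ring of the PTV, goals on OAR subvolumes, a conformity goal, and a leaf-motion limit) could be captured in a declarative configuration like the one below; all ROI names, dose levels, priorities, and field names are hypothetical and do not correspond to any specific planning-system API.

```python
# Hypothetical sketch of an automation protocol as plain data: a list of
# optimization goals plus global machine constraints. All numbers and names
# are illustrative placeholders, not values from the study.

PROSTATE_PROTOCOL = {
    "goals": [
        # Basic protocol: only constraints for the target and the OARs.
        {"roi": "PTV",             "type": "min_dose", "dose_gy": 74.1, "priority": 1},
        {"roi": "PTV",             "type": "max_dose", "dose_gy": 80.7, "priority": 1},
        # Refinement 1: extra dose goal on the outer 2 mm ring of the PTV.
        {"roi": "PTV_ring_2mm",    "type": "min_dose", "dose_gy": 72.2, "priority": 2},
        # Refinement 2: goals on strategically chosen subparts of OARs.
        {"roi": "Rectum_near_PTV", "type": "max_dvh",  "dose_gy": 60.0, "volume_pct": 35, "priority": 2},
        {"roi": "Bladder",         "type": "max_dvh",  "dose_gy": 65.0, "volume_pct": 25, "priority": 3},
        # Refinement 3: conformity goal on a normal-tissue ring around the PTV.
        {"roi": "Conformity_ring", "type": "max_dose", "dose_gy": 55.0, "priority": 3},
    ],
    # Refinement 4: limit beam complexity by restricting MLC leaf motion.
    "machine_constraints": {"max_leaf_motion_cm_per_deg": 0.3},
}
```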
The retail industry consists of establishments selling consumer goods (e.g. technology, pharmaceuticals, food and beverages, apparel and accessories, home improvement) and services (e.g. specialty retail and movies) to customers through multiple distribution channels, including both traditional brick-and-mortar and online retailing. Managing the corporate reputation of retail companies is crucial, as it has many advantages; for instance, it has been shown to affect revenues (Wang et al., 2016). But in order to manage corporate reputation, one has to be able to measure it or, better still, listen to the relevant social signals available on the public web. One of the most extensive and widely used frameworks for measuring corporate reputation relies on elaborate surveys of the relevant stakeholders (Fombrun et al., 2015). This approach is valuable but laborious and resource-heavy, and it does not allow for the automatic alerts and quick, live insights that are needed in the internet era. For these purposes, a social-listening approach is needed that uses online data, such as consumer reviews, as the main data source. Online review datasets are a form of electronic word-of-mouth (WOM) that is massively available and that, when a retail-relevant data source is chosen, commonly contains information about customers’ perceptions of products (Pookulangara, 2011). The algorithm built into our application provides retailers with reputation scores for all variables of the Fombrun et al. (2015) model that are deemed relevant to retail. Examples of such variables for products and services are high quality, good value, stands behind, and meets customer needs. We propose a new set of subvariables with which these variables can be operationalized for retail in particular. Scores are calculated as proportions of positive opinion pairs, where opinion pairs such as <fast, delivery> or <rude, staff> have been designed per variable. With these insights extracted, companies can act accordingly and improve their corporate reputation. It is important to emphasize that, once the design is complete and implemented, all processing is performed fully automatically and unsupervised. The application uses a state-of-the-art aspect-based sentiment analysis (ABSA) framework because of ABSA’s ability to generate sentiment scores for all relevant variables and aspects. Since most online data is in open form and we deliberately avoid having human experts label any data, the unsupervised aspectator algorithm was chosen; it employs a sentiment lexicon to calculate sentiment scores and uses syntactic dependency paths to discover candidate aspects (Bancken et al., 2014). We applied our approach to a large number of online review datasets sampled from the National Retail Federation (2020) list of the top 50 global retailers, covering both offline and online operations, and scraped from Trustpilot, a public review website that is well known to retailers. The algorithm was carefully evaluated against a randomly sampled subset of the datasets that was manually annotated by two independent annotators; the kappa score on this subset was 80%.
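The authors' pipeline is not reproduced here; the sketch below only illustrates the scoring step as described in the abstract, assuming an upstream ABSA component (e.g. a lexicon-based, unsupervised extractor) has already produced (opinion word, aspect word, polarity) triples per review. The pair-to-variable mapping and the function name are placeholders, not the actual lists designed by the authors.

```python
from collections import defaultdict

# Illustrative opinion pairs designed per reputation (sub)variable.
PAIR_TO_VARIABLE = {
    ("fast", "delivery"): "meets customer needs",
    ("rude", "staff"):    "high quality",
    ("cheap", "prices"):  "good value",
}

def reputation_scores(triples):
    """Score each variable as the proportion of positive opinion pairs."""
    pos, total = defaultdict(int), defaultdict(int)
    for opinion, aspect, polarity in triples:
        variable = PAIR_TO_VARIABLE.get((opinion, aspect))
        if variable is None:
            continue                      # pair not mapped to any variable
        total[variable] += 1
        if polarity > 0:
            pos[variable] += 1
    return {v: pos[v] / total[v] for v in total}

# Example: three triples extracted from scraped reviews.
print(reputation_scores([("fast", "delivery", 1),
                         ("rude", "staff", -1),
                         ("fast", "delivery", 1)]))
# -> {'meets customer needs': 1.0, 'high quality': 0.0}
```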
The huge number of images shared on the Web makes effective cataloguing methods, with storage and retrieval procedures tailored to end-user needs, a demanding and crucial issue. In this paper, we investigate the applicability of Automatic Image Annotation (AIA) for image tagging, with a focus on the database-expansion needs of a news broadcasting company. First, we assess the feasibility of using AIA in this context, with the aim of avoiding extensive retraining whenever a new tag needs to be added to the tag set. Then, an image annotation tool that integrates a convolutional neural network (AlexNet) for feature extraction with a k-nearest-neighbours classifier for tag assignment is introduced and tested. The obtained performance is promising, indicating that the proposed approach is valuable for tackling image tagging in a broadcasting company, although not yet optimal for integration into the business process.
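As a rough illustration of the described pipeline, the sketch below combines torchvision's pretrained AlexNet as a fixed feature extractor with scikit-learn's k-nearest-neighbours classifier; the file paths, tags, and choice of k are placeholders, and the original tool's exact implementation may differ. Because the CNN is never fine-tuned, incorporating a new tag only requires adding labelled examples to the k-NN index rather than retraining the network.

```python
import torch
from torchvision import models, transforms
from sklearn.neighbors import KNeighborsClassifier
from PIL import Image

# Pretrained AlexNet used only as a fixed feature extractor (no fine-tuning).
alexnet = models.alexnet(weights="DEFAULT").eval()
preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def fc7_features(path):
    """Return the 4096-d activation of AlexNet's penultimate layer."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        x = torch.flatten(alexnet.avgpool(alexnet.features(x)), 1)
        return alexnet.classifier[:-1](x).squeeze(0).numpy()

# Build the tag index once from already-annotated archive images (placeholders) ...
train_paths, train_tags = ["img1.jpg", "img2.jpg"], ["sports", "politics"]
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit([fc7_features(p) for p in train_paths], train_tags)

# ... then assign a tag to a new image without retraining the CNN.
print(knn.predict([fc7_features("new_image.jpg")]))
```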